
vllm.engine.llm_engine

_LOCAL_LOGGING_INTERVAL_SEC module-attribute

_LOCAL_LOGGING_INTERVAL_SEC = 5

_O module-attribute

_O = TypeVar('_O', RequestOutput, PoolingRequestOutput)

_R module-attribute

_R = TypeVar('_R', default=Any)

logger module-attribute

logger = init_logger(__name__)

LLMEngine

An LLM engine that receives requests and generates texts.

This is the main class for the vLLM engine. It receives requests from clients and generates texts from the LLM. It includes a tokenizer, a language model (possibly distributed across multiple GPUs), and GPU memory space allocated for intermediate states (aka KV cache). This class utilizes iteration-level scheduling and efficient memory management to maximize the serving throughput.

The LLM class wraps this class for offline batched inference and the AsyncLLMEngine class wraps this class for online serving.

The config arguments are derived from EngineArgs.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `vllm_config` | `VllmConfig` | The configuration for initializing and running vLLM. | required |
| `executor_class` | `Type[ExecutorBase]` | The model executor class for managing distributed execution. | required |
| `log_stats` | `bool` | Whether to log statistics. | required |
| `usage_context` | `UsageContext` | Specified entry point, used for usage info collection. | `ENGINE_CONTEXT` |
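A minimal driver sketch, mirroring the `add_request` and `step` examples in the source below; the model name is only an illustrative assumption (any model accepted by `EngineArgs` would do):

```python
from vllm import EngineArgs, LLMEngine, SamplingParams

# Build the engine from EngineArgs; the model name here is an assumption for illustration.
engine = LLMEngine.from_engine_args(EngineArgs(model="facebook/opt-125m"))

# Queue a request, then drive the engine manually with step().
engine.add_request(
    request_id="0",
    prompt="Who is the president of the United States?",
    params=SamplingParams(temperature=0.0),
)

while engine.has_unfinished_requests():
    for request_output in engine.step():
        if request_output.finished:
            print(request_output.outputs[0].text)
```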
Source code in vllm/engine/llm_engine.py
class LLMEngine:
    """An LLM engine that receives requests and generates texts.

    This is the main class for the vLLM engine. It receives requests
    from clients and generates texts from the LLM. It includes a tokenizer, a
    language model (possibly distributed across multiple GPUs), and GPU memory
    space allocated for intermediate states (aka KV cache). This class utilizes
    iteration-level scheduling and efficient memory management to maximize the
    serving throughput.

    The [`LLM`][vllm.LLM] class wraps this class for offline batched inference
    and the [`AsyncLLMEngine`][vllm.engine.async_llm_engine.AsyncLLMEngine]
    class wraps this class for online serving.

    The config arguments are derived from [`EngineArgs`][vllm.EngineArgs].

    Args:
        vllm_config: The configuration for initializing and running vLLM.
        executor_class: The model executor class for managing distributed
            execution.
        log_stats: Whether to log statistics.
        usage_context: Specified entry point, used for usage info collection.
    """

    DO_VALIDATE_OUTPUT: ClassVar[bool] = False
    """A flag to toggle whether to validate the type of request output."""

    @classmethod
    @contextmanager
    def enable_output_validation(cls):
        cls.DO_VALIDATE_OUTPUT = True

        yield

        cls.DO_VALIDATE_OUTPUT = False

    @classmethod
    def validate_output(
        cls,
        output: object,
        output_type: Type[_O],
    ) -> _O:
        do_validate = cls.DO_VALIDATE_OUTPUT

        if ((TYPE_CHECKING or do_validate)
                and not isinstance(output, output_type)):
            raise TypeError(f"Expected output of type {output_type}, "
                            f"but found type {type(output)}")

        return cast(_O, output)

    @classmethod
    def validate_outputs(
        cls,
        outputs: GenericSequence[object],
        output_type: Type[_O],
    ) -> List[_O]:
        do_validate = cls.DO_VALIDATE_OUTPUT

        outputs_: List[_O]
        if TYPE_CHECKING or do_validate:
            outputs_ = []
            for output in outputs:
                if not isinstance(output, output_type):
                    raise TypeError(f"Expected output of type {output_type}, "
                                    f"but found type {type(output)}")

                outputs_.append(output)
        else:
            outputs_ = outputs

        return outputs_

    tokenizer: Optional[TokenizerGroup]

    def __init__(
        self,
        vllm_config: VllmConfig,
        executor_class: Type[ExecutorBase],
        log_stats: bool,
        usage_context: UsageContext = UsageContext.ENGINE_CONTEXT,
        stat_loggers: Optional[Dict[str, StatLoggerBase]] = None,
        mm_registry: MultiModalRegistry = MULTIMODAL_REGISTRY,
        use_cached_outputs: bool = False,
    ) -> None:
        if envs.VLLM_USE_V1:
            raise ValueError(
                "Using V0 LLMEngine, but envs.VLLM_USE_V1=True. "
                "This should not happen. As a workaround, try using "
                "LLMEngine.from_vllm_config(...) or explicitly set "
                "VLLM_USE_V1=0 or 1 and report this issue on Github.")

        self.vllm_config = vllm_config
        self.model_config = vllm_config.model_config
        self.cache_config = vllm_config.cache_config
        self.lora_config = vllm_config.lora_config
        self.parallel_config = vllm_config.parallel_config
        self.scheduler_config = vllm_config.scheduler_config
        self.device_config = vllm_config.device_config
        self.speculative_config = vllm_config.speculative_config  # noqa
        self.load_config = vllm_config.load_config
        self.decoding_config = vllm_config.decoding_config or DecodingConfig(  # noqa
        )
        self.prompt_adapter_config = vllm_config.prompt_adapter_config  # noqa
        self.observability_config = vllm_config.observability_config or ObservabilityConfig(  # noqa
        )

        logger.info(
            "Initializing a V0 LLM engine (v%s) with config: %s, "
            "use_cached_outputs=%s, ",
            VLLM_VERSION,
            vllm_config,
            use_cached_outputs,
        )

        self.log_stats = log_stats
        self.use_cached_outputs = use_cached_outputs

        if not self.model_config.skip_tokenizer_init:
            self.tokenizer = self._init_tokenizer()
            self.detokenizer = Detokenizer(self.tokenizer)
            tokenizer_group = self.get_tokenizer_group()
        else:
            self.tokenizer = None
            self.detokenizer = None
            tokenizer_group = None

        # Ensure that the function doesn't contain a reference to self,
        # to avoid engine GC issues
        def get_tokenizer_for_seq(sequence: Sequence) -> AnyTokenizer:
            assert tokenizer_group, ("tokenizer_group cannot be None, "
                                     "make sure skip_tokenizer_init is False")
            return tokenizer_group.get_lora_tokenizer(sequence.lora_request)

        self.seq_counter = Counter()
        self.generation_config_fields = (
            self.model_config.try_get_generation_config())

        self.input_preprocessor = InputPreprocessor(self.model_config,
                                                    self.tokenizer,
                                                    mm_registry)

        self.model_executor = executor_class(vllm_config=vllm_config)

        if self.model_config.runner_type != "pooling":
            self._initialize_kv_caches()

        # If usage stat is enabled, collect relevant info.
        if is_usage_stats_enabled():
            from vllm.model_executor.model_loader import (
                get_architecture_class_name)
            usage_message.report_usage(
                get_architecture_class_name(self.model_config),
                usage_context,
                extra_kvs={
                    # Common configuration
                    "dtype":
                    str(self.model_config.dtype),
                    "tensor_parallel_size":
                    self.parallel_config.tensor_parallel_size,
                    "block_size":
                    self.cache_config.block_size,
                    "gpu_memory_utilization":
                    self.cache_config.gpu_memory_utilization,

                    # Quantization
                    "quantization":
                    self.model_config.quantization,
                    "kv_cache_dtype":
                    str(self.cache_config.cache_dtype),

                    # Feature flags
                    "enable_lora":
                    bool(self.lora_config),
                    "enable_prompt_adapter":
                    bool(self.prompt_adapter_config),
                    "enable_prefix_caching":
                    self.cache_config.enable_prefix_caching,
                    "enforce_eager":
                    self.model_config.enforce_eager,
                    "disable_custom_all_reduce":
                    self.parallel_config.disable_custom_all_reduce,
                })

        self.cached_scheduler_outputs = [
            SchedulerOutputState()
            for _ in range(self.parallel_config.pipeline_parallel_size)
        ]

        self.scheduler_contexts = [
            SchedulerContext(multi_step_stream_outputs=self.scheduler_config.
                             multi_step_stream_outputs)
            for _ in range(self.parallel_config.pipeline_parallel_size)
        ]

        if self.model_config.use_async_output_proc:
            process_model_outputs = weak_bind(self._process_model_outputs)

            self.async_callbacks = [
                partial(process_model_outputs,
                        ctx=self.scheduler_contexts[v_id])
                for v_id in range(self.parallel_config.pipeline_parallel_size)
            ]
        else:
            self.async_callbacks = []

        # Currently used by AsyncLLMEngine to ensure quick append
        # of request outputs to asyncio queues
        self.process_request_outputs_callback: Optional[Callable] = None

        # Create the scheduler.
        # NOTE: the cache_config here has been updated with the numbers of
        # GPU and CPU blocks, which are profiled in the distributed executor.
        if isinstance(self.vllm_config.scheduler_config.scheduler_cls, str):
            Scheduler = resolve_obj_by_qualname(
                self.vllm_config.scheduler_config.scheduler_cls)
        else:
            Scheduler = self.vllm_config.scheduler_config.scheduler_cls
        self.scheduler = [
            Scheduler(
                self.scheduler_config, self.cache_config, self.lora_config,
                self.parallel_config.pipeline_parallel_size,
                self.async_callbacks[v_id]
                if self.model_config.use_async_output_proc else None)
            for v_id in range(self.parallel_config.pipeline_parallel_size)
        ]

        # Metric Logging.
        if self.log_stats:
            if stat_loggers is not None:
                self.stat_loggers = stat_loggers
            else:
                # Lazy import for prometheus multiprocessing.
                # We need to set PROMETHEUS_MULTIPROC_DIR environment variable
                # before prometheus_client is imported.
                # See https://prometheus.github.io/client_python/multiprocess/
                from vllm.engine.metrics import (LoggingStatLogger,
                                                 PrometheusStatLogger)

                self.stat_loggers = {
                    "logging":
                    LoggingStatLogger(
                        local_interval=_LOCAL_LOGGING_INTERVAL_SEC,
                        vllm_config=vllm_config),
                    "prometheus":
                    PrometheusStatLogger(
                        local_interval=_LOCAL_LOGGING_INTERVAL_SEC,
                        labels=dict(
                            model_name=self.model_config.served_model_name),
                        vllm_config=vllm_config),
                }
                self.stat_loggers["prometheus"].info("cache_config",
                                                     self.cache_config)

        self.tracer = None
        if self.observability_config.otlp_traces_endpoint:
            self.tracer = init_tracer(
                "vllm.llm_engine",
                self.observability_config.otlp_traces_endpoint)

        # Create sequence output processor, e.g. for beam search or
        # speculative decoding.
        self.output_processor = (
            SequenceGroupOutputProcessor.create_output_processor(
                self.scheduler_config,
                self.detokenizer,
                self.scheduler,
                self.seq_counter,
                get_tokenizer_for_seq,
                stop_checker=StopChecker(self.scheduler_config.max_model_len,
                                         get_tokenizer_for_seq),
            ))

        self.seq_id_to_seq_group: Dict[str, SequenceGroupBase] = {}

        # Flag to set when an input fails to process and the engine should run
        # the next step without re-scheduling.
        self._skip_scheduling_next_step = False

        # Don't keep the dummy data in memory
        self.reset_mm_cache()

    def _initialize_kv_caches(self) -> None:
        """Initialize the KV cache in the worker(s).

        The workers will determine the number of blocks in both the GPU cache
        and the swap CPU cache.
        """
        start = time.time()
        num_gpu_blocks, num_cpu_blocks = (
            self.model_executor.determine_num_available_blocks())

        if self.cache_config.num_gpu_blocks_override is not None:
            num_gpu_blocks_override = self.cache_config.num_gpu_blocks_override
            logger.info(
                "Overriding num_gpu_blocks=%d with "
                "num_gpu_blocks_override=%d", num_gpu_blocks,
                num_gpu_blocks_override)
            num_gpu_blocks = num_gpu_blocks_override

        self.cache_config.num_gpu_blocks = num_gpu_blocks
        self.cache_config.num_cpu_blocks = num_cpu_blocks

        self.model_executor.initialize_cache(num_gpu_blocks, num_cpu_blocks)
        elapsed = time.time() - start
        logger.info(("init engine (profile, create kv cache, "
                     "warmup model) took %.2f seconds"), elapsed)

    @classmethod
    def _get_executor_cls(cls,
                          engine_config: VllmConfig) -> Type[ExecutorBase]:
        # distributed_executor_backend must be set in VllmConfig.__post_init__
        distributed_executor_backend = (
            engine_config.parallel_config.distributed_executor_backend)
        # Initialize the cluster and specify the executor class.
        if isinstance(distributed_executor_backend, type):
            if not issubclass(distributed_executor_backend, ExecutorBase):
                raise TypeError(
                    "distributed_executor_backend must be a subclass of "
                    f"ExecutorBase. Got {distributed_executor_backend}.")
            executor_class = distributed_executor_backend
        elif distributed_executor_backend == "ray":
            from vllm.executor.ray_distributed_executor import (
                RayDistributedExecutor)
            executor_class = RayDistributedExecutor
        elif distributed_executor_backend == "mp":
            from vllm.executor.mp_distributed_executor import (
                MultiprocessingDistributedExecutor)
            assert not envs.VLLM_USE_RAY_SPMD_WORKER, (
                "multiprocessing distributed executor backend does not "
                "support VLLM_USE_RAY_SPMD_WORKER=1")
            executor_class = MultiprocessingDistributedExecutor
        elif distributed_executor_backend == "uni":
            # JAX-style, single-process, multi-device executor.
            from vllm.executor.uniproc_executor import UniProcExecutor
            executor_class = UniProcExecutor
        elif distributed_executor_backend == "external_launcher":
            # executor with external launcher
            from vllm.executor.uniproc_executor import (  # noqa
                ExecutorWithExternalLauncher)
            executor_class = ExecutorWithExternalLauncher
        else:
            raise ValueError("unrecognized distributed_executor_backend: "
                             f"{distributed_executor_backend}")
        return executor_class

    @classmethod
    def from_vllm_config(
        cls,
        vllm_config: VllmConfig,
        usage_context: UsageContext = UsageContext.ENGINE_CONTEXT,
        stat_loggers: Optional[Dict[str, StatLoggerBase]] = None,
        disable_log_stats: bool = False,
    ) -> "LLMEngine":
        return cls(
            vllm_config=vllm_config,
            executor_class=cls._get_executor_cls(vllm_config),
            log_stats=(not disable_log_stats),
            usage_context=usage_context,
            stat_loggers=stat_loggers,
        )

    @classmethod
    def from_engine_args(
        cls,
        engine_args: EngineArgs,
        usage_context: UsageContext = UsageContext.ENGINE_CONTEXT,
        stat_loggers: Optional[Dict[str, StatLoggerBase]] = None,
    ) -> "LLMEngine":
        """Creates an LLM engine from the engine arguments."""
        # Create the engine configs.
        vllm_config = engine_args.create_engine_config(usage_context)

        engine_cls = cls
        if envs.VLLM_USE_V1:
            from vllm.v1.engine.llm_engine import LLMEngine as V1LLMEngine
            engine_cls = V1LLMEngine

        return engine_cls.from_vllm_config(
            vllm_config=vllm_config,
            usage_context=usage_context,
            stat_loggers=stat_loggers,
            disable_log_stats=engine_args.disable_log_stats,
        )

    def __reduce__(self):
        # This is to ensure that the LLMEngine is not referenced in
        # the closure used to initialize Ray worker actors
        raise RuntimeError("LLMEngine should not be pickled!")

    def __del__(self):
        # Shutdown model executor when engine is garbage collected
        # Use getattr since __init__ can fail before the field is set
        if model_executor := getattr(self, "model_executor", None):
            model_executor.shutdown()

    def get_tokenizer_group(self) -> TokenizerGroup:
        if self.tokenizer is None:
            raise ValueError("Unable to get tokenizer because "
                             "skip_tokenizer_init is True")

        return self.tokenizer

    def get_tokenizer(
        self,
        lora_request: Optional[LoRARequest] = None,
    ) -> AnyTokenizer:
        return self.get_tokenizer_group().get_lora_tokenizer(lora_request)

    def _init_tokenizer(self) -> TokenizerGroup:
        return init_tokenizer_from_configs(
            model_config=self.model_config,
            scheduler_config=self.scheduler_config,
            lora_config=self.lora_config)

    def _verify_args(self) -> None:
        self.model_config.verify_with_parallel_config(self.parallel_config)
        self.cache_config.verify_with_parallel_config(self.parallel_config)
        if self.lora_config:
            self.lora_config.verify_with_model_config(self.model_config)
            self.lora_config.verify_with_scheduler_config(
                self.scheduler_config)
        if self.prompt_adapter_config:
            self.prompt_adapter_config.verify_with_model_config(
                self.model_config)

    def _add_processed_request(
        self,
        request_id: str,
        processed_inputs: ProcessorInputs,
        params: Union[SamplingParams, PoolingParams],
        arrival_time: float,
        lora_request: Optional[LoRARequest],
        prompt_adapter_request: Optional[PromptAdapterRequest],
        trace_headers: Optional[Mapping[str, str]] = None,
        priority: int = 0,
    ) -> Optional[SequenceGroup]:
        """Add a processed request to the engine's request pool.
        Return the created sequence group.
        """
        if isinstance(params, SamplingParams) and params.n > 1:
            ParallelSampleSequenceGroup.add_request(
                request_id,
                self,
                params,
                processed_inputs=processed_inputs,
                arrival_time=arrival_time,
                lora_request=lora_request,
                trace_headers=trace_headers,
                prompt_adapter_request=prompt_adapter_request,
                priority=priority,
            )
            return None

        self._validate_model_inputs(processed_inputs, lora_request)
        # Create the sequences.
        block_size = self.cache_config.block_size
        seq_id = next(self.seq_counter)
        eos_token_id = self.input_preprocessor.get_eos_token_id(lora_request)

        encoder_inputs, decoder_inputs = split_enc_dec_inputs(processed_inputs)

        seq = Sequence(seq_id, decoder_inputs, block_size, eos_token_id,
                       lora_request, prompt_adapter_request)

        encoder_seq = (None if encoder_inputs is None else Sequence(
            seq_id, encoder_inputs, block_size, eos_token_id, lora_request,
            prompt_adapter_request))

        # Create a SequenceGroup based on SamplingParams or PoolingParams
        if isinstance(params, SamplingParams):
            seq_group = self._create_sequence_group_with_sampling(
                request_id,
                seq,
                params,
                arrival_time=arrival_time,
                lora_request=lora_request,
                trace_headers=trace_headers,
                prompt_adapter_request=prompt_adapter_request,
                encoder_seq=encoder_seq,
                priority=priority)
        elif isinstance(params, PoolingParams):
            seq_group = self._create_sequence_group_with_pooling(
                request_id,
                seq,
                params,
                arrival_time=arrival_time,
                lora_request=lora_request,
                prompt_adapter_request=prompt_adapter_request,
                encoder_seq=encoder_seq,
                priority=priority)
        else:
            raise ValueError(
                "Either SamplingParams or PoolingParams must be provided.")

        # Add the sequence group to the scheduler with least unfinished seqs.
        costs = [
            scheduler.get_num_unfinished_seq_groups()
            for scheduler in self.scheduler
        ]
        min_cost_scheduler = self.scheduler[costs.index(min(costs))]
        min_cost_scheduler.add_seq_group(seq_group)

        return seq_group

    def stop_remote_worker_execution_loop(self) -> None:
        self.model_executor.stop_remote_worker_execution_loop()

    def add_request(
        self,
        request_id: str,
        prompt: PromptType,
        params: Union[SamplingParams, PoolingParams],
        arrival_time: Optional[float] = None,
        lora_request: Optional[LoRARequest] = None,
        tokenization_kwargs: Optional[dict[str, Any]] = None,
        trace_headers: Optional[Mapping[str, str]] = None,
        prompt_adapter_request: Optional[PromptAdapterRequest] = None,
        priority: int = 0,
    ) -> None:
        """Add a request to the engine's request pool.

        The request is added to the request pool and will be processed by the
        scheduler as `engine.step()` is called. The exact scheduling policy is
        determined by the scheduler.

        Args:
            request_id: The unique ID of the request.
            prompt: The prompt to the LLM. See
                [PromptType][vllm.inputs.PromptType]
                for more details about the format of each input.
            params: Parameters for sampling or pooling.
                [SamplingParams][vllm.SamplingParams] for text generation.
                [PoolingParams][vllm.PoolingParams] for pooling.
            arrival_time: The arrival time of the request. If None, we use
                the current monotonic time.
            lora_request: The LoRA request to add.
            trace_headers: OpenTelemetry trace headers.
            prompt_adapter_request: The prompt adapter request to add.
            priority: The priority of the request.
                Only applicable with priority scheduling.

        Details:
            - Set arrival_time to the current time if it is None.
            - Set prompt_token_ids to the encoded prompt if it is None.
            - Create `n` number of [Sequence][vllm.Sequence] objects.
            - Create a [SequenceGroup][vllm.SequenceGroup] object
              from the list of [Sequence][vllm.Sequence].
            - Add the [SequenceGroup][vllm.SequenceGroup] object to the
              scheduler.

        Example:
            >>> # initialize engine
            >>> engine = LLMEngine.from_engine_args(engine_args)
            >>> # set request arguments
            >>> example_prompt = "Who is the president of the United States?"
            >>> sampling_params = SamplingParams(temperature=0.0)
            >>> request_id = 0
            >>>
            >>> # add the request to the engine
            >>> engine.add_request(
            >>>    str(request_id),
            >>>    example_prompt,
            >>>    SamplingParams(temperature=0.0))
            >>> # continue the request processing
            >>> ...
        """
        if not isinstance(request_id, str):
            raise TypeError(
                f"request_id must be a string, got {type(request_id)}")

        if lora_request is not None and not self.lora_config:
            raise ValueError(f"Got lora_request {lora_request} but LoRA is "
                             "not enabled!")

        if priority != 0 and not self.scheduler_config.policy == "priority":
            raise ValueError(f"Got priority {priority} but "
                             "Priority scheduling is not enabled.")

        if isinstance(params, SamplingParams) \
            and (params.guided_decoding or params.logits_processors) \
            and self.scheduler_config.num_scheduler_steps > 1:
            raise ValueError(
                "Guided decoding and logits processors are not supported "
                "in multi-step decoding")

        if arrival_time is None:
            arrival_time = time.time()

        if (isinstance(prompt, dict)
                and prompt.get("prompt_embeds", None) is not None
                and not prompt.get("prompt_token_ids", None)):
            seq_len = prompt["prompt_embeds"].shape[0]
            prompt["prompt_token_ids"] = [0] * seq_len

        processed_inputs = self.input_preprocessor.preprocess(
            prompt,
            tokenization_kwargs=tokenization_kwargs,
            lora_request=lora_request,
            prompt_adapter_request=prompt_adapter_request,
        )

        self._add_processed_request(
            request_id=request_id,
            processed_inputs=processed_inputs,
            params=params,
            arrival_time=arrival_time,
            lora_request=lora_request,
            prompt_adapter_request=prompt_adapter_request,
            trace_headers=trace_headers,
            priority=priority,
        )

    def _create_sequence_group_with_sampling(
        self,
        request_id: str,
        seq: Sequence,
        sampling_params: SamplingParams,
        arrival_time: float,
        lora_request: Optional[LoRARequest],
        trace_headers: Optional[Mapping[str, str]] = None,
        prompt_adapter_request: Optional[PromptAdapterRequest] = None,
        encoder_seq: Optional[Sequence] = None,
        priority: int = 0,
    ) -> SequenceGroup:
        """Creates a SequenceGroup with SamplingParams."""
        max_logprobs = self.get_model_config().max_logprobs
        if (sampling_params.logprobs
                and sampling_params.logprobs > max_logprobs) or (
                    sampling_params.prompt_logprobs
                    and sampling_params.prompt_logprobs > max_logprobs):
            raise ValueError(f"Cannot request more than "
                             f"{max_logprobs} logprobs.")

        sampling_params = self._build_logits_processors(
            sampling_params, lora_request)

        # Defensive copy of SamplingParams, which are used by the sampler,
        # this doesn't deep-copy LogitsProcessor objects
        sampling_params = sampling_params.clone()

        sampling_params.update_from_generation_config(
            self.generation_config_fields, seq.eos_token_id)

        # Create the sequence group.
        draft_size = 1
        if self.vllm_config.speculative_config is not None:
            draft_size = \
                self.vllm_config.speculative_config.num_speculative_tokens + 1
        seq_group = SequenceGroup(
            request_id=request_id,
            seqs=[seq],
            arrival_time=arrival_time,
            sampling_params=sampling_params,
            lora_request=lora_request,
            trace_headers=trace_headers,
            prompt_adapter_request=prompt_adapter_request,
            encoder_seq=encoder_seq,
            priority=priority,
            draft_size=draft_size)

        return seq_group

    def _create_sequence_group_with_pooling(
        self,
        request_id: str,
        seq: Sequence,
        pooling_params: PoolingParams,
        arrival_time: float,
        lora_request: Optional[LoRARequest],
        prompt_adapter_request: Optional[PromptAdapterRequest],
        encoder_seq: Optional[Sequence] = None,
        priority: int = 0,
    ) -> SequenceGroup:
        """Creates a SequenceGroup with PoolingParams."""
        # Defensive copy of PoolingParams, which are used by the pooler
        pooling_params = pooling_params.clone()
        # Create the sequence group.
        seq_group = SequenceGroup(
            request_id=request_id,
            seqs=[seq],
            arrival_time=arrival_time,
            lora_request=lora_request,
            pooling_params=pooling_params,
            prompt_adapter_request=prompt_adapter_request,
            encoder_seq=encoder_seq,
            priority=priority)
        return seq_group

    def abort_request(self, request_id: Union[str, Iterable[str]]) -> None:
        """Aborts a request(s) with the given ID.

        Args:
            request_id: The ID(s) of the request to abort.

        Details:
            - Refer to [vllm.core.scheduler.Scheduler.abort_seq_group][].

        Example:
            >>> # initialize engine and add a request with request_id
            >>> request_id = str(0)
            >>> # abort the request
            >>> engine.abort_request(request_id)
        """
        for scheduler in self.scheduler:
            scheduler.abort_seq_group(
                request_id, seq_id_to_seq_group=self.seq_id_to_seq_group)

    def get_vllm_config(self) -> VllmConfig:
        """Gets the vllm configuration."""
        return self.vllm_config

    def get_model_config(self) -> ModelConfig:
        """Gets the model configuration."""
        return self.model_config

    def get_parallel_config(self) -> ParallelConfig:
        """Gets the parallel configuration."""
        return self.parallel_config

    def get_decoding_config(self) -> DecodingConfig:
        """Gets the decoding configuration."""
        return self.decoding_config

    def get_scheduler_config(self) -> SchedulerConfig:
        """Gets the scheduler configuration."""
        return self.scheduler_config

    def get_lora_config(self) -> LoRAConfig:
        """Gets the LoRA configuration."""
        return self.lora_config

    def get_num_unfinished_requests(self) -> int:
        """Gets the number of unfinished requests."""
        return sum(scheduler.get_num_unfinished_seq_groups()
                   for scheduler in self.scheduler)

    def has_unfinished_requests(self) -> bool:
        """Returns True if there are unfinished requests."""
        return any(scheduler.has_unfinished_seqs()
                   for scheduler in self.scheduler)

    def has_unfinished_requests_for_virtual_engine(
            self, virtual_engine: int) -> bool:
        """
        Returns True if there are unfinished requests for the virtual engine.
        """
        return self.scheduler[virtual_engine].has_unfinished_seqs()

    def reset_mm_cache(self) -> bool:
        """Reset the multi-modal cache."""
        return self.input_preprocessor.mm_registry.reset_processor_cache()

    def reset_prefix_cache(self, device: Optional[Device] = None) -> bool:
        """Reset prefix cache for all devices."""

        success = True
        for scheduler in self.scheduler:
            success = success and scheduler.reset_prefix_cache(device)
        return success

    @staticmethod
    def _process_sequence_group_outputs(
        seq_group: SequenceGroup,
        outputs: List[PoolingSequenceGroupOutput],
    ) -> None:
        seq_group.pooled_data = outputs[0].data

        for seq in seq_group.get_seqs():
            seq.status = SequenceStatus.FINISHED_STOPPED

        return

    def _update_num_computed_tokens_for_multi_step_prefill(
            self, seq_group: SequenceGroup,
            seq_group_meta: SequenceGroupMetadata,
            is_first_step_output: Optional[bool]):
        """
        This function updates num_computed_tokens for prompt sequences
        when Multi-Step is enabled.

        seq_group: SequenceGroup to update the num_computed_tokens for.
        seq_group_meta: Metadata of the given SequenceGroup.
        is_first_step_output: Optional[bool] -
            When available, is_first_step_output indicates if the appended
            output token is the output of the first-step in multi-step.
            A value of None indicates that outputs from all steps in
            multi-step are submitted in a single burst.
        """

        assert self.scheduler_config.is_multi_step

        if not seq_group_meta.is_prompt:
            # num_computed_token updates for multi-step decodes happen after
            # the tokens are appended to the sequence.
            return

        do_update: bool = False
        if self.scheduler_config.chunked_prefill_enabled:
            # In multi-step + chunked-prefill case, the prompt sequences
            # that are scheduled are fully processed in the first step.
            do_update = is_first_step_output is None or is_first_step_output
        else:
            # Normal multi-step decoding case. In this case prompt-sequences
            # are actually single-stepped. Always update in this case.
            assert seq_group.state.num_steps == 1
            do_update = True

        if do_update:
            seq_group.update_num_computed_tokens(
                seq_group_meta.token_chunk_size)

    def _process_model_outputs(self,
                               ctx: SchedulerContext,
                               request_id: Optional[str] = None) -> None:
        """Apply the model output to the sequences in the scheduled seq groups
        and return responses.

        ctx: The virtual engine context to work on
        request_id: If provided, then only this request is going to be processed
        """

        now = time.time()

        if len(ctx.output_queue) == 0:
            return None

        # Get pending async postprocessor
        if request_id:
            # When we process only one request, no pop is required
            # (since later we will process all of the rest)
            (outputs, seq_group_metadata_list, scheduler_outputs, is_async,
             is_last_step, is_first_step_output, skip) = ctx.output_queue[0]
        else:
            (outputs, seq_group_metadata_list, scheduler_outputs, is_async,
             is_last_step, is_first_step_output,
             skip) = ctx.output_queue.popleft()

        # Sanity check
        assert len(seq_group_metadata_list) == len(
            scheduler_outputs.scheduled_seq_groups)

        has_multiple_outputs: bool = len(outputs) > 1
        outputs_by_sequence_group: List[List[SequenceGroupOutput]]
        if has_multiple_outputs:
            assert self.scheduler_config.is_multi_step or \
                     self.speculative_config
            # Organize outputs by [step][sequence group] instead of
            # [sequence group][step].
            if self.scheduler_config.is_multi_step:
                outputs_by_sequence_group = create_output_by_sequence_group(
                    outputs, len(seq_group_metadata_list))
            elif self.speculative_config:
                # Decodes are multi-steps while prefills are not, outputting at
                # most 1 token. Separate them so that we can trigger chunk
                # processing without having to pad or copy over prompts K times
                # to match decodes structure (costly with prompt_logprobs).
                num_prefills = sum(sg.is_prompt
                                   for sg in seq_group_metadata_list)
                prefills, decodes = outputs[:num_prefills], outputs[
                    num_prefills:]
                outputs_by_sequence_group = create_output_by_sequence_group(
                    decodes,
                    num_seq_groups=len(seq_group_metadata_list) - num_prefills)
                outputs_by_sequence_group = [p.outputs for p in prefills
                                             ] + outputs_by_sequence_group
            # We have outputs for multiple steps submitted in a single burst,
            # so invalidate is_first_step_output.
            is_first_step_output = None
        else:
            outputs_by_sequence_group = outputs

        # Determine the requests we need to operate on
        if request_id:
            indices = []
            for i, seq_group_meta in enumerate(seq_group_metadata_list):
                if seq_group_meta.request_id == request_id:
                    assert i not in skip  # Cannot be called twice
                    indices.append(i)
                    break

            # If the request_id was not found, then it means that
            # this is a new request that has no pending async
            # postprocessor
            if not indices:
                return
        else:
            indices = range(len(seq_group_metadata_list))  # type: ignore

        finished_before: List[int] = []
        finished_now: List[int] = []
        for i in indices:
            if i in skip:
                continue

            seq_group_meta = seq_group_metadata_list[i]
            scheduled_seq_group = scheduler_outputs.scheduled_seq_groups[i]

            seq_group: SequenceGroup = scheduled_seq_group.seq_group

            if seq_group.is_finished():
                finished_before.append(i)
                continue

            output: List[SequenceGroupOutput]
            if has_multiple_outputs:
                output = outputs_by_sequence_group[i]
            else:
                output = [outputs_by_sequence_group[0][i]]

            if not is_async:
                if self.scheduler_config.is_multi_step:
                    # Updates happen only if the sequence is prefill
                    self._update_num_computed_tokens_for_multi_step_prefill(
                        seq_group, seq_group_meta, is_first_step_output)
                else:
                    seq_group.update_num_computed_tokens(
                        seq_group_meta.token_chunk_size or 0)

            if outputs:
                for o in outputs:
                    if (isinstance(o, SamplerOutput)
                            and seq_group.metrics is not None):
                        if seq_group.metrics.model_forward_time is not None:
                            seq_group.metrics.model_forward_time += (
                                o.model_forward_time or 0)
                        else:
                            seq_group.metrics.model_forward_time = (
                                o.model_forward_time)
                        if seq_group.metrics.model_execute_time is not None:
                            seq_group.metrics.model_execute_time += (
                                o.model_execute_time or 0)
                        else:
                            seq_group.metrics.model_execute_time = (
                                o.model_execute_time)

            if self.model_config.runner_type == "pooling":
                self._process_sequence_group_outputs(seq_group, output)
            else:
                self.output_processor.process_prompt_logprob(seq_group, output)
                if seq_group_meta.do_sample:
                    self.output_processor.process_outputs(
                        seq_group, output, is_async)

            if seq_group.is_finished():
                finished_now.append(i)

        # Generate outputs for the requests that finished this iteration
        for i in finished_now:
            scheduled_seq_group = scheduler_outputs.scheduled_seq_groups[i]

            seq_group = scheduled_seq_group.seq_group
            seq_group.maybe_set_first_token_time(now)
            if not seq_group.is_prefill():
                seq_group.set_last_token_time(now)
            request_output = RequestOutputFactory.create(
                seq_group,
                self.seq_id_to_seq_group,
                use_cache=self.use_cached_outputs)
            if request_output:
                ctx.request_outputs.append(request_output)

        # When we process a single request, we skip it for the next time,
        # and invoke the request output callback (if there was final output)
        if request_id:
            assert len(indices) == 1
            skip.append(indices[0])

            if (finished_now
                    and self.process_request_outputs_callback is not None):
                self.process_request_outputs_callback(ctx.request_outputs)
                ctx.request_outputs.clear()
            return

        # Free currently finished requests
        if finished_now:
            for scheduler in self.scheduler:
                scheduler.free_finished_seq_groups()

        # For multi-step without streaming, don't create outputs each iteration
        if not is_last_step and not ctx.multi_step_stream_outputs:
            # Immediately process request outputs here (if callback is given)
            if (finished_now
                    and self.process_request_outputs_callback is not None):
                self.process_request_outputs_callback(ctx.request_outputs)
                ctx.request_outputs.clear()
            return

        # Create the outputs
        for i in indices:
            if i in skip or i in finished_before or i in finished_now:
                continue  # Avoids double processing

            scheduled_seq_group = scheduler_outputs.scheduled_seq_groups[i]

            seq_group = scheduled_seq_group.seq_group
            seq_group.maybe_set_first_token_time(now)
            if not seq_group.is_prefill():
                seq_group.set_last_token_time(now)
            request_output = RequestOutputFactory.create(
                seq_group,
                self.seq_id_to_seq_group,
                use_cache=self.use_cached_outputs)
            if request_output:
                ctx.request_outputs.append(request_output)

        # For multi-step with streaming, create outputs each iteration
        if not is_last_step and ctx.multi_step_stream_outputs:
            # Immediately process request outputs here (if callback is given)
            if self.process_request_outputs_callback is not None:
                self.process_request_outputs_callback(ctx.request_outputs)
                ctx.request_outputs.clear()
            return

        for seq_group in scheduler_outputs.ignored_seq_groups:
            params = seq_group.sampling_params
            if params is not None and params.output_kind == (
                    RequestOutputKind.DELTA) and not seq_group.is_finished():
                continue

            request_output = RequestOutputFactory.create(
                seq_group,
                self.seq_id_to_seq_group,
                use_cache=self.use_cached_outputs,
            )
            if request_output:
                ctx.request_outputs.append(request_output)

        # Immediately process request outputs here (if callback is given)
        if (ctx.request_outputs
                and self.process_request_outputs_callback is not None):
            self.process_request_outputs_callback(ctx.request_outputs)
            ctx.request_outputs.clear()

        # For async case, we need to record the stats here.
        # For non-async case, the stats are done in the
        # LLMEngine/AsyncLLMEngine directly
        if is_async:
            # Log stats.
            self.do_log_stats(scheduler_outputs, outputs, finished_before,
                              skip)

            # Tracing
            self.do_tracing(scheduler_outputs, finished_before)

        return None

    def _advance_to_next_step(
            self, output: SamplerOutput,
            seq_group_metadata_list: List[SequenceGroupMetadata],
            scheduled_seq_groups: List[ScheduledSequenceGroup]) -> None:
        """Given model output from a single run, append the tokens to the
        sequences. This is normally done inside output processor, but it is
        required if the worker is to perform async forward pass to next step.
        """
        for seq_group_metadata, sequence_group_outputs, scheduled_seq_group in \
            zip(seq_group_metadata_list, output, scheduled_seq_groups):
            seq_group = scheduled_seq_group.seq_group

            if seq_group.is_finished():
                continue

            if self.scheduler_config.is_multi_step:
                # Updates happen only if the sequence is prefill
                self._update_num_computed_tokens_for_multi_step_prefill(
                    seq_group, seq_group_metadata,
                    seq_group.state.num_steps == 1)
            else:
                token_chunk_size = (seq_group_metadata.token_chunk_size
                                    if seq_group_metadata.token_chunk_size
                                    is not None else 0)
                seq_group.update_num_computed_tokens(token_chunk_size)

            if seq_group_metadata.do_sample:
                assert len(sequence_group_outputs.samples) == 1, (
                    "Async output processor expects a single sample"
                    " (i.e sampling_params.n == 1)")
                sample = sequence_group_outputs.samples[0]

                assert len(seq_group.seqs) == 1
                seq = seq_group.seqs[0]

                if self.scheduler_config.is_multi_step:
                    is_prefill_append = seq.data.get_num_uncomputed_tokens(
                    ) == 0
                    seq.append_token_id(sample.output_token, sample.logprobs,
                                        sample.output_embed)
                    if not is_prefill_append:
                        seq_group.update_num_computed_tokens(1)
                else:
                    seq.append_token_id(sample.output_token, sample.logprobs,
                                        sample.output_embed)

    def step(self) -> List[Union[RequestOutput, PoolingRequestOutput]]:
        """Performs one decoding iteration and returns newly generated results.

        <figure markdown="span">
        ![Overview of the step function](https://i.imgur.com/sv2HssD.png)
        <figcaption>Overview of the step function</figcaption>
        </figure>

        Details:
        - Step 1: Schedules the sequences to be executed in the next
            iteration and the token blocks to be swapped in/out/copied.

            - Depending on the scheduling policy,
                sequences may be `preempted/reordered`.
            - A Sequence Group (SG) refers to a group of sequences
                that are generated from the same prompt.

        - Step 2: Calls the distributed executor to execute the model.
        - Step 3: Processes the model output. This mainly includes:

            - Decodes the relevant outputs.
            - Updates the scheduled sequence groups with model outputs
                based on their `sampling parameters` (`use_beam_search` or not).
            - Frees the finished sequence groups.

        - Finally, it creates and returns the newly generated results.

        Example:
        ```
        # Please see the examples/ folder for more detailed examples.

        # initialize engine and request arguments
        engine = LLMEngine.from_engine_args(engine_args)
        example_inputs = [(0, "What is LLM?",
        SamplingParams(temperature=0.0))]

        # Start the engine with an event loop
        while True:
            if example_inputs:
                req_id, prompt, sampling_params = example_inputs.pop(0)
                engine.add_request(str(req_id), prompt, sampling_params)

            # continue the request processing
            request_outputs = engine.step()
            for request_output in request_outputs:
                if request_output.finished:
                    # return or show the request output
                    print(request_output)

            if not (engine.has_unfinished_requests() or example_inputs):
                break
        ```
        """
        if self.parallel_config.pipeline_parallel_size > 1:
            raise NotImplementedError(
                "Pipeline parallelism is only supported through AsyncLLMEngine "
                "as performance will be severely degraded otherwise.")

        # For llm_engine, there is no pipeline parallel support, so the engine
        # used is always 0.
        virtual_engine = 0

        # These are cached outputs from previous iterations. None if on first
        # iteration
        cached_outputs = self.cached_scheduler_outputs[virtual_engine]
        seq_group_metadata_list = cached_outputs.seq_group_metadata_list
        scheduler_outputs = cached_outputs.scheduler_outputs
        allow_async_output_proc = cached_outputs.allow_async_output_proc

        ctx = self.scheduler_contexts[virtual_engine]

        # Clear outputs for each new scheduler iteration
        ctx.request_outputs.clear()

        # Skip the scheduler if there are any remaining steps in the seq groups.
        # This ensures that the scheduler is only called again when the current
        # batch has completed.
        # The scheduler is also skipped if a single request caused the last
        # engine step to fail, and the previous schedule needs to be rerun.
        if not self._has_remaining_steps(
                seq_group_metadata_list
        ) and not self._skip_scheduling_next_step:
            # Schedule iteration
            (seq_group_metadata_list, scheduler_outputs,
             allow_async_output_proc
             ) = self.scheduler[virtual_engine].schedule()

            ctx.seq_group_metadata_list = seq_group_metadata_list
            ctx.scheduler_outputs = scheduler_outputs

            finished_requests_ids = self.scheduler[
                virtual_engine].get_and_reset_finished_requests_ids()
            # When n>1, elements in self.seq_id_to_seq_group should be deleted
            # here; otherwise, memory will leak.
            for finished_request_id in finished_requests_ids:
                if finished_request_id in self.seq_id_to_seq_group:
                    del self.seq_id_to_seq_group[finished_request_id]

            # Maybe switch from async mode to sync mode
            if not allow_async_output_proc and len(ctx.output_queue) > 0:
                self._process_model_outputs(ctx=ctx)

            if (self.scheduler_config.is_multi_step
                    and scheduler_outputs.num_lookahead_slots > 0):
                # cache the scheduler outputs for the next iteration if we have
                # lookahead slots
                self._cache_scheduler_outputs_for_multi_step(
                    virtual_engine, seq_group_metadata_list, scheduler_outputs,
                    allow_async_output_proc)
        else:
            finished_requests_ids = list()

        assert seq_group_metadata_list is not None
        assert scheduler_outputs is not None

        if not scheduler_outputs.is_empty():

            # Check if we have a cached last_output from the previous iteration.
            # For supporting PP this is probably the best way to pass the
            # sampled_token_ids, as a separate broadcast over all the PP stages
            # will cause one virtual engine's microbatch to block the pipeline.
            last_sampled_token_ids = \
                self._get_last_sampled_token_ids(virtual_engine)

            execute_model_req = ExecuteModelRequest(
                seq_group_metadata_list=seq_group_metadata_list,
                blocks_to_swap_in=scheduler_outputs.blocks_to_swap_in,
                blocks_to_swap_out=scheduler_outputs.blocks_to_swap_out,
                blocks_to_copy=scheduler_outputs.blocks_to_copy,
                num_lookahead_slots=scheduler_outputs.num_lookahead_slots,
                running_queue_size=scheduler_outputs.running_queue_size,
                finished_requests_ids=finished_requests_ids,
                # We use ExecuteModelRequest to pass the last sampled_token_ids
                # to each of the non-last PP stages for in-place prepare_input.
                last_sampled_token_ids=last_sampled_token_ids)

            if allow_async_output_proc:
                execute_model_req.async_callback = self.async_callbacks[
                    virtual_engine]

            try:
                outputs = self.model_executor.execute_model(
                    execute_model_req=execute_model_req)
                self._skip_scheduling_next_step = False
            except InputProcessingError as e:
                # The input for this request cannot be processed, so we must
                # abort it. If there are remaining requests in the batch that
                # have been scheduled, they will be retried on the next step.
                invalid_request_id = e.request_id
                self._abort_and_cache_schedule(
                    request_id=invalid_request_id,
                    virtual_engine=virtual_engine,
                    seq_group_metadata_list=seq_group_metadata_list,
                    scheduler_outputs=scheduler_outputs,
                    allow_async_output_proc=allow_async_output_proc)
                # Raise so the caller is notified that this request failed
                raise

            # We need to do this here so that last step's sampled_token_ids can
            # be passed to the next iteration for PP.
            if self.scheduler_config.is_multi_step:
                self._update_cached_scheduler_output(virtual_engine, outputs)
        else:
            # Nothing scheduled => If there is a pending async postprocessor,
            # finish it here.
            if len(ctx.output_queue) > 0:
                self._process_model_outputs(ctx=ctx)
            # No outputs in this case
            outputs = []

        # Finish the current step for all the sequence groups.
        if self.scheduler_config.is_multi_step:
            for seq_group in seq_group_metadata_list:
                seq_group.finish_step()

        if not self._has_remaining_steps(seq_group_metadata_list):
            # clear the cache if we have finished all the steps.
            if self.scheduler_config.is_multi_step:
                self.cached_scheduler_outputs[0] = SchedulerOutputState()

            # is_first_step_output is True only when the num_steps of all
            # the sequences is 1. When the num_steps > 1,
            # multi_step_model_runner does the first-step output append.
            is_first_step_output: bool = False if not seq_group_metadata_list \
                else seq_group_metadata_list[0].state.num_steps == 1

            # Add results to the output_queue
            ctx.append_output(outputs=outputs,
                              seq_group_metadata_list=seq_group_metadata_list,
                              scheduler_outputs=scheduler_outputs,
                              is_async=allow_async_output_proc,
                              is_last_step=True,
                              is_first_step_output=is_first_step_output)

            if outputs and allow_async_output_proc:
                assert len(outputs) == 1, (
                    "Async postprocessor expects only a single output set")

                self._advance_to_next_step(
                    outputs[0], seq_group_metadata_list,
                    scheduler_outputs.scheduled_seq_groups)

            # Check if need to run the usual non-async path
            if not allow_async_output_proc:
                self._process_model_outputs(ctx=ctx)

                # Log stats.
                self.do_log_stats(scheduler_outputs, outputs)

                # Tracing
                self.do_tracing(scheduler_outputs)
        else:
            # Multi-step case
            return ctx.request_outputs

        if not self.has_unfinished_requests():
            # Drain async postprocessor (if exists)
            if len(ctx.output_queue) > 0:
                self._process_model_outputs(ctx=ctx)
            assert len(ctx.output_queue) == 0

            # Stop the execute model loop in parallel workers until there are
            # more requests to process. This avoids waiting indefinitely in
            # torch.distributed ops which may otherwise timeout, and unblocks
            # the RPC thread in the workers so that they can process any other
            # queued control plane messages, such as add/remove lora adapters.
            logger.debug("Stopping remote worker execution loop.")
            self.model_executor.stop_remote_worker_execution_loop()

        return ctx.request_outputs

    def _abort_and_cache_schedule(
            self, request_id: str, virtual_engine: int,
            seq_group_metadata_list: List[SequenceGroupMetadata],
            scheduler_outputs: SchedulerOutputs,
            allow_async_output_proc: bool) -> None:
        """Aborts a single request, and caches the scheduler outputs minus that
        request. This allows the next step to continue processing the remaining
        requests without having to re-run the scheduler."""

        # Abort the request and remove its sequence group from the current
        # schedule
        self.abort_request(request_id)
        for i, metadata in enumerate(seq_group_metadata_list):
            if metadata.request_id == request_id:
                del seq_group_metadata_list[i]
                break
        for i, group in enumerate(scheduler_outputs.scheduled_seq_groups):
            if group.seq_group.request_id == request_id:
                del scheduler_outputs.scheduled_seq_groups[i]
                break

        # If there are still other sequence groups left in the schedule, cache
        # them and flag the engine to reuse the schedule.
        if len(seq_group_metadata_list) > 0:
            self._skip_scheduling_next_step = True
            # Reuse multi-step caching logic
            self._cache_scheduler_outputs_for_multi_step(
                virtual_engine=virtual_engine,
                scheduler_outputs=scheduler_outputs,
                seq_group_metadata_list=seq_group_metadata_list,
                allow_async_output_proc=allow_async_output_proc)

    def _has_remaining_steps(
        self, seq_group_metadata_list: Optional[List[SequenceGroupMetadata]]
    ) -> bool:
        if (not self.scheduler_config.is_multi_step
                or not seq_group_metadata_list):
            return False

        # TODO(will) this is a sanity check for now to make sure that all the
        # seqs are on the same steps. Eventually we will want to do some sort of
        # dynamic scheduling when doing multi-step decoding.
        ref_remaining_steps = seq_group_metadata_list[0].state.remaining_steps
        if any([
                seq_group.state.remaining_steps != ref_remaining_steps
                for seq_group in seq_group_metadata_list[1:]
        ]):
            raise AssertionError("All running sequence groups should "
                                 "have the same remaining steps.")

        return ref_remaining_steps > 0

    def _cache_scheduler_outputs_for_multi_step(
            self, virtual_engine: int,
            seq_group_metadata_list: Optional[List[SequenceGroupMetadata]],
            scheduler_outputs: SchedulerOutputs,
            allow_async_output_proc: bool) -> None:
        co = self.cached_scheduler_outputs[virtual_engine]

        co.seq_group_metadata_list = seq_group_metadata_list
        co.scheduler_outputs = scheduler_outputs
        co.allow_async_output_proc = allow_async_output_proc
        co.last_output = None

    def _update_cached_scheduler_output(
            self, virtual_engine: int,
            output: List[Optional[SamplerOutput]]) -> None:
        if (self.parallel_config.pipeline_parallel_size > 1 and len(output) > 0
                and output[0] is not None):
            last_output = output[-1]
            assert last_output is not None
            assert last_output.sampled_token_ids_cpu is not None
            assert last_output.sampled_token_ids is None
            assert last_output.sampled_token_probs is None
            self.cached_scheduler_outputs[
                virtual_engine].last_output = last_output

    def _get_last_sampled_token_ids(
            self, virtual_engine: int) -> Optional[torch.Tensor]:
        cached_last_output = self.cached_scheduler_outputs[
            virtual_engine].last_output
        if (self.scheduler_config.is_multi_step
                and self.parallel_config.pipeline_parallel_size > 1
                and cached_last_output is not None
                and cached_last_output.sampled_token_ids_cpu is not None):
            return cached_last_output.sampled_token_ids_cpu
        return None

    def add_logger(self, logger_name: str, logger: StatLoggerBase) -> None:
        if not self.log_stats:
            raise RuntimeError(
                "Stat logging is disabled. Set `disable_log_stats=False` "
                "argument to enable.")
        if logger_name in self.stat_loggers:
            raise KeyError(f"Logger with name {logger_name} already exists.")
        self.stat_loggers[logger_name] = logger

    def remove_logger(self, logger_name: str) -> None:
        if not self.log_stats:
            raise RuntimeError(
                "Stat logging is disabled. Set `disable_log_stats=False` "
                "argument to enable.")
        if logger_name not in self.stat_loggers:
            raise KeyError(f"Logger with name {logger_name} does not exist.")
        del self.stat_loggers[logger_name]
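
    # Example (illustrative sketch, not part of the engine implementation):
    # with stat logging enabled, an additional logger can be attached and
    # detached at runtime. `engine` is assumed to be an initialized LLMEngine,
    # and LoggingStatLogger is constructed as in LLMEngine.__init__:
    #
    #     from vllm.engine.metrics import LoggingStatLogger
    #     extra = LoggingStatLogger(local_interval=_LOCAL_LOGGING_INTERVAL_SEC,
    #                               vllm_config=engine.vllm_config)
    #     engine.add_logger("extra_logging", extra)
    #     # every do_log_stats() call now also reaches `extra`
    #     engine.remove_logger("extra_logging")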

    def do_log_stats(self,
                     scheduler_outputs: Optional[SchedulerOutputs] = None,
                     model_output: Optional[List[SamplerOutput]] = None,
                     finished_before: Optional[List[int]] = None,
                     skip: Optional[List[int]] = None) -> None:
        """Forced log when no requests active."""
        if self.log_stats:
            stats = self._get_stats(scheduler_outputs, model_output,
                                    finished_before, skip)
            for logger in self.stat_loggers.values():
                logger.log(stats)

    def _get_stats(self,
                   scheduler_outputs: Optional[SchedulerOutputs],
                   model_output: Optional[List[SamplerOutput]] = None,
                   finished_before: Optional[List[int]] = None,
                   skip: Optional[List[int]] = None) -> Stats:
        """Get Stats to be Logged to Prometheus.

        Args:
            scheduler_outputs: Optional, used to populate metrics related to
                the scheduled batch.
            model_output: Optional, used to emit speculative decoding metrics
                which are created by the workers.
            finished_before: Optional, indices of sequences that were finished
                before. These sequences will be ignored.
            skip: Optional, indices of sequences that were preempted. These
                sequences will be ignored.
        """
        now = time.time()

        # System State
        #   Scheduler State
        num_running_sys = sum(
            len(scheduler.running) for scheduler in self.scheduler)
        num_swapped_sys = sum(
            len(scheduler.swapped) for scheduler in self.scheduler)
        num_waiting_sys = sum(
            len(scheduler.waiting) for scheduler in self.scheduler)

        # KV Cache Usage in %
        num_total_gpu = self.cache_config.num_gpu_blocks
        gpu_cache_usage_sys = 0.
        if num_total_gpu:  # Guard against both None and 0
            num_free_gpu = sum(
                scheduler.block_manager.get_num_free_gpu_blocks()
                for scheduler in self.scheduler)
            gpu_cache_usage_sys = 1.0 - (num_free_gpu / num_total_gpu)
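            # e.g. with num_total_gpu = 1000 blocks and num_free_gpu = 250,
            # gpu_cache_usage_sys = 1.0 - 250 / 1000 = 0.75 (75% in use).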

        num_total_cpu = self.cache_config.num_cpu_blocks
        cpu_cache_usage_sys = 0.
        if num_total_cpu:  # Guard against both None and 0
            num_free_cpu = sum(
                scheduler.block_manager.get_num_free_cpu_blocks()
                for scheduler in self.scheduler)
            cpu_cache_usage_sys = 1.0 - (num_free_cpu / num_total_cpu)

        # Prefix Cache Hit Rate. Note that we always use
        # the cache hit rate of the first virtual engine.
        cpu_prefix_cache_hit_rate = self.scheduler[
            0].get_prefix_cache_hit_rate(Device.CPU)
        gpu_prefix_cache_hit_rate = self.scheduler[
            0].get_prefix_cache_hit_rate(Device.GPU)

        # Exchange the usage and cache hit stats between gpu and cpu when
        # running on cpu because the cpu_worker.py intentionally reports the
        # number of cpu blocks as gpu blocks in favor of cache management.
        if self.device_config.device_type == "cpu":
            num_total_gpu, num_total_cpu = num_total_cpu, num_total_gpu
            gpu_cache_usage_sys, cpu_cache_usage_sys = (
                cpu_cache_usage_sys,
                gpu_cache_usage_sys,
            )
            gpu_prefix_cache_hit_rate, cpu_prefix_cache_hit_rate = (
                cpu_prefix_cache_hit_rate,
                gpu_prefix_cache_hit_rate,
            )

        # Iteration stats
        num_prompt_tokens_iter = 0
        num_generation_tokens_iter = 0
        num_tokens_iter = 0
        time_to_first_tokens_iter: List[float] = []
        time_per_output_tokens_iter: List[float] = []
        num_preemption_iter = (0 if scheduler_outputs is None else
                               scheduler_outputs.preempted)

        # Request stats
        #   Latency
        time_e2e_requests: List[float] = []
        time_queue_requests: List[float] = []
        time_inference_requests: List[float] = []
        time_prefill_requests: List[float] = []
        time_decode_requests: List[float] = []
        #   Metadata
        num_prompt_tokens_requests: List[int] = []
        num_generation_tokens_requests: List[int] = []
        n_requests: List[int] = []
        max_num_generation_tokens_requests: List[int] = []
        max_tokens_requests: List[int] = []
        finished_reason_requests: List[str] = []

        # LoRA requests
        running_lora_adapters = dict(
            collectionsCounter([
                running_request.lora_request.lora_name
                for scheduler in self.scheduler
                for running_request in scheduler.running
                if running_request.lora_request
            ]))
        waiting_lora_adapters = dict(
            collectionsCounter([
                waiting_request.lora_request.lora_name
                for scheduler in self.scheduler
                for waiting_request in scheduler.waiting
                if waiting_request.lora_request
            ]))
        max_lora_stat = "0"
        if self.lora_config:
            max_lora_stat = str(self.lora_config.max_loras)

        # NOTE: This loop assumes prefill seq_groups are before
        # decode seq_groups in scheduled_seq_groups.
        if scheduler_outputs is not None:
            # For the async postprocessor, already finished sequences must
            # not be counted (to avoid double counting).
            actual_num_batched_tokens = scheduler_outputs.num_batched_tokens  # type: ignore

            num_generation_tokens_from_prefill_groups = 0
            # NOTE: if scheduler_outputs.num_prefill_groups > 0 and
            # len(scheduler_outputs.scheduled_seq_groups) differs from
            # scheduler_outputs.num_prefill_groups, chunked prefill is in use.

            for idx, scheduled_seq_group in enumerate(
                    scheduler_outputs.scheduled_seq_groups):
                # Skip double logging when using async output proc
                if finished_before and idx in finished_before:
                    actual_num_batched_tokens -= 1
                    continue

                # Currently, `skip` holds the indices of preempted sequences,
                # so their stats are not logged.
                if skip and idx in skip:
                    continue

                group_was_prefill = idx < scheduler_outputs.num_prefill_groups
                seq_group = scheduled_seq_group.seq_group

                # NOTE: a seq_group that completed all of its prefill tokens
                # in the last iteration will have seq_group.is_prefill() = False
                # with group_was_prefill = True
                if group_was_prefill:
                    # Number of prompt tokens.
                    num_prompt_tokens_iter += (
                        scheduled_seq_group.token_chunk_size)

                    # If the seq_group just finished the prefill state
                    # get TTFT.
                    if not seq_group.is_prefill():
                        latency = seq_group.get_last_token_latency()
                        time_to_first_tokens_iter.append(latency)

                        # One generation token per finished prefill.
                        num_generation_tokens_from_prefill_groups += (
                            seq_group.num_seqs())
                else:
                    # TPOTs.
                    latency = seq_group.get_last_token_latency()
                    time_per_output_tokens_iter.append(latency)
                    if seq_group.state.current_step == 0:
                        # For async_output_proc, the do_log_stats()
                        # is called following init_multi_step(), which
                        # sets the current_step to zero.
                        actual_num_batched_tokens +=\
                            seq_group.state.num_steps - 1
                    else:
                        actual_num_batched_tokens +=\
                            seq_group.state.current_step - 1

                # Because of chunked prefill, we can have a single sequence
                # group that does multiple prompt_runs. To prevent logging
                # the same metadata more than once per request, we standardize
                # on logging request level information for finished requests,
                # which can only happen once.
                if seq_group.is_finished():
                    # Latency timings
                    time_e2e_requests.append(now -
                                             seq_group.metrics.arrival_time)
                    if (seq_group.metrics.first_scheduled_time is not None and
                            seq_group.metrics.first_token_time is not None):
                        time_queue_requests.append(
                            seq_group.metrics.first_scheduled_time -
                            seq_group.metrics.arrival_time)
                        time_prefill_requests.append(
                            seq_group.metrics.first_token_time -
                            seq_group.metrics.first_scheduled_time)
                        time_decode_requests.append(
                            now - seq_group.metrics.first_token_time)
                        time_inference_requests.append(
                            now - seq_group.metrics.first_scheduled_time)
                    # Metadata
                    num_prompt_tokens_requests.append(
                        len(seq_group.prompt_token_ids))
                    num_generation_tokens_requests.extend([
                        seq.get_output_len()
                        for seq in seq_group.get_finished_seqs()
                    ])
                    max_num_generation_tokens_requests.append(
                        max(seq.get_output_len()
                            for seq in seq_group.get_seqs()))
                    if seq_group.sampling_params is not None:
                        n_requests.append(seq_group.sampling_params.n)
                        max_tokens_requests.append(
                            seq_group.sampling_params.max_tokens)
                    finished_reason_requests.extend([
                        SequenceStatus.get_finished_reason(seq.status)
                        for seq in seq_group.get_finished_seqs()
                    ])

            # Number of generation tokens.
            #   num_batched_tokens equals the number of prompt_tokens plus the
            #   number of decode_tokens in a single iteration. So,
            #   num_generation_tokens = num_batched_tokens - num_prompt_tokens
            #   + num_generation_tokens_from_prefill_groups (since we generate
            #   one token on prefills on iters where the prefill finishes).
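            #   Worked example: 512 batched tokens of which 480 are prompt
            #   tokens, plus 2 prefills finishing this iteration =>
            #   num_generation_tokens = 512 - 480 + 2 = 34.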
            num_generation_tokens_iter = (
                actual_num_batched_tokens - num_prompt_tokens_iter +
                num_generation_tokens_from_prefill_groups)
            num_tokens_iter = (num_generation_tokens_iter +
                               num_prompt_tokens_iter)
        # Spec decode, if enabled, emits specialized metrics from the worker in
        # sampler output.
        if model_output and isinstance(model_output[0], SamplerOutput) and (
                model_output[0].spec_decode_worker_metrics is not None):
            spec_decode_metrics = model_output[0].spec_decode_worker_metrics
        else:
            spec_decode_metrics = None

        return Stats(
            now=now,
            # System stats
            #   Scheduler State
            num_running_sys=num_running_sys,
            num_swapped_sys=num_swapped_sys,
            num_waiting_sys=num_waiting_sys,
            #   KV Cache Usage in %
            gpu_cache_usage_sys=gpu_cache_usage_sys,
            cpu_cache_usage_sys=cpu_cache_usage_sys,
            #   Prefix Cache Hit Rate
            cpu_prefix_cache_hit_rate=cpu_prefix_cache_hit_rate,
            gpu_prefix_cache_hit_rate=gpu_prefix_cache_hit_rate,

            # Iteration stats
            num_prompt_tokens_iter=num_prompt_tokens_iter,
            num_generation_tokens_iter=num_generation_tokens_iter,
            num_tokens_iter=num_tokens_iter,
            time_to_first_tokens_iter=time_to_first_tokens_iter,
            time_per_output_tokens_iter=time_per_output_tokens_iter,
            spec_decode_metrics=spec_decode_metrics,
            num_preemption_iter=num_preemption_iter,

            # Request stats
            #   Latency
            time_e2e_requests=time_e2e_requests,
            time_queue_requests=time_queue_requests,
            time_inference_requests=time_inference_requests,
            time_prefill_requests=time_prefill_requests,
            time_decode_requests=time_decode_requests,
            #   Metadata
            num_prompt_tokens_requests=num_prompt_tokens_requests,
            num_generation_tokens_requests=num_generation_tokens_requests,
            max_num_generation_tokens_requests=
            max_num_generation_tokens_requests,
            n_requests=n_requests,
            max_tokens_requests=max_tokens_requests,
            finished_reason_requests=finished_reason_requests,
            max_lora=str(max_lora_stat),
            waiting_lora_adapters=list(waiting_lora_adapters.keys()),
            running_lora_adapters=list(running_lora_adapters.keys()))

    def add_lora(self, lora_request: LoRARequest) -> bool:
        return self.model_executor.add_lora(lora_request)

    def remove_lora(self, lora_id: int) -> bool:
        return self.model_executor.remove_lora(lora_id)

    def list_loras(self) -> Set[int]:
        return self.model_executor.list_loras()

    def pin_lora(self, lora_id: int) -> bool:
        return self.model_executor.pin_lora(lora_id)
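
    # Example (illustrative sketch): runtime LoRA management, assuming `engine`
    # is an initialized LLMEngine with LoRA enabled and `lora_request` is a
    # previously constructed LoRARequest:
    #
    #     engine.add_lora(lora_request)                 # load the adapter
    #     assert lora_request.lora_int_id in engine.list_loras()
    #     engine.pin_lora(lora_request.lora_int_id)     # keep it resident
    #     engine.remove_lora(lora_request.lora_int_id)  # unload it again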

    def add_prompt_adapter(
            self, prompt_adapter_request: PromptAdapterRequest) -> bool:
        return self.model_executor.add_prompt_adapter(prompt_adapter_request)

    def remove_prompt_adapter(self, prompt_adapter_id: int) -> bool:
        return self.model_executor.remove_prompt_adapter(prompt_adapter_id)

    def list_prompt_adapters(self) -> List[int]:
        return self.model_executor.list_prompt_adapters()

    def start_profile(self) -> None:
        self.model_executor.start_profile()

    def stop_profile(self) -> None:
        self.model_executor.stop_profile()

    def sleep(self, level: int = 1) -> None:
        assert self.vllm_config.model_config.enable_sleep_mode, (
            "Sleep mode is not enabled in the model config")
        self.model_executor.sleep(level=level)

    def wake_up(self, tags: Optional[list[str]] = None) -> None:
        assert self.vllm_config.model_config.enable_sleep_mode, (
            "Sleep mode is not enabled in the model config")
        self.model_executor.wake_up(tags)

    def is_sleeping(self) -> bool:
        return self.model_executor.is_sleeping
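
    # Example (illustrative sketch): a sleep/wake-up cycle, assuming the engine
    # was created with enable_sleep_mode=True in the model config:
    #
    #     engine.sleep(level=1)        # release GPU memory while idle
    #     assert engine.is_sleeping()
    #     engine.wake_up()             # restore state; pass `tags` to restore
    #                                  # only a subset of it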

    def check_health(self) -> None:
        self.model_executor.check_health()

    def is_tracing_enabled(self) -> bool:
        return self.tracer is not None

    def do_tracing(self,
                   scheduler_outputs: SchedulerOutputs,
                   finished_before: Optional[List[int]] = None) -> None:
        if self.tracer is None:
            return

        for idx, scheduled_seq_group in enumerate(
                scheduler_outputs.scheduled_seq_groups):
            # Skip double tracing when using async output proc
            if finished_before and idx in finished_before:
                continue

            seq_group = scheduled_seq_group.seq_group
            if seq_group.is_finished():
                self.create_trace_span(seq_group)

    def create_trace_span(self, seq_group: SequenceGroup) -> None:
        if self.tracer is None or seq_group.sampling_params is None:
            return
        arrival_time_nano_seconds = int(seq_group.metrics.arrival_time * 1e9)

        trace_context = extract_trace_context(seq_group.trace_headers)

        with self.tracer.start_as_current_span(
                "llm_request",
                kind=SpanKind.SERVER,
                context=trace_context,
                start_time=arrival_time_nano_seconds) as seq_span:
            metrics = seq_group.metrics
            ttft = metrics.first_token_time - metrics.arrival_time
            e2e_time = metrics.finished_time - metrics.arrival_time
            seq_span.set_attribute(SpanAttributes.GEN_AI_RESPONSE_MODEL,
                                   self.model_config.model)
            seq_span.set_attribute(SpanAttributes.GEN_AI_REQUEST_ID,
                                   seq_group.request_id)
            seq_span.set_attribute(SpanAttributes.GEN_AI_REQUEST_TEMPERATURE,
                                   seq_group.sampling_params.temperature)
            seq_span.set_attribute(SpanAttributes.GEN_AI_REQUEST_TOP_P,
                                   seq_group.sampling_params.top_p)
            seq_span.set_attribute(SpanAttributes.GEN_AI_REQUEST_MAX_TOKENS,
                                   seq_group.sampling_params.max_tokens)
            seq_span.set_attribute(SpanAttributes.GEN_AI_REQUEST_N,
                                   seq_group.sampling_params.n)
            seq_span.set_attribute(SpanAttributes.GEN_AI_USAGE_NUM_SEQUENCES,
                                   seq_group.num_seqs())
            seq_span.set_attribute(SpanAttributes.GEN_AI_USAGE_PROMPT_TOKENS,
                                   len(seq_group.prompt_token_ids))
            seq_span.set_attribute(
                SpanAttributes.GEN_AI_USAGE_COMPLETION_TOKENS,
                sum([
                    seq.get_output_len()
                    for seq in seq_group.get_finished_seqs()
                ]))
            seq_span.set_attribute(SpanAttributes.GEN_AI_LATENCY_TIME_IN_QUEUE,
                                   metrics.time_in_queue)
            seq_span.set_attribute(
                SpanAttributes.GEN_AI_LATENCY_TIME_TO_FIRST_TOKEN, ttft)
            seq_span.set_attribute(SpanAttributes.GEN_AI_LATENCY_E2E, e2e_time)
            if metrics.scheduler_time is not None:
                seq_span.set_attribute(
                    SpanAttributes.GEN_AI_LATENCY_TIME_IN_SCHEDULER,
                    metrics.scheduler_time)
            if metrics.model_forward_time is not None:
                seq_span.set_attribute(
                    SpanAttributes.GEN_AI_LATENCY_TIME_IN_MODEL_FORWARD,
                    metrics.model_forward_time / 1000.0)
            if metrics.model_execute_time is not None:
                seq_span.set_attribute(
                    SpanAttributes.GEN_AI_LATENCY_TIME_IN_MODEL_EXECUTE,
                    metrics.model_execute_time)

    def _validate_model_inputs(self, inputs: ProcessorInputs,
                               lora_request: Optional[LoRARequest]):
        encoder_inputs, decoder_inputs = split_enc_dec_inputs(inputs)

        if encoder_inputs is not None:
            self._validate_model_input(encoder_inputs,
                                       lora_request,
                                       prompt_type="encoder")

        self._validate_model_input(decoder_inputs,
                                   lora_request,
                                   prompt_type="decoder")

    def _validate_model_input(
        self,
        prompt_inputs: SingletonInputs,
        lora_request: Optional[LoRARequest],
        *,
        prompt_type: Literal["encoder", "decoder"],
    ):
        model_config = self.model_config
        tokenizer = (None if self.tokenizer is None else
                     self.tokenizer.get_lora_tokenizer(lora_request))

        prompt_ids = prompt_inputs.get("prompt_token_ids", [])
        if not prompt_ids:
            if prompt_type == "encoder" and model_config.is_multimodal_model:
                pass  # Mllama may have empty encoder inputs for text-only data
            elif prompt_inputs["type"] == "embeds":
                pass
            else:
                raise ValueError(f"The {prompt_type} prompt cannot be empty")

        if tokenizer is not None:
            max_input_id = max(prompt_ids, default=0)
            if max_input_id > tokenizer.max_token_id:
                raise ValueError(
                    f"Token id {max_input_id} is out of vocabulary")

        max_prompt_len = self.model_config.max_model_len
        if len(prompt_ids) > max_prompt_len:
            if prompt_type == "encoder" and model_config.is_multimodal_model:
                mm_registry = self.input_preprocessor.mm_registry
                mm_processor = mm_registry.create_processor(
                    model_config,
                    tokenizer=tokenizer or object(),  # Dummy if no tokenizer
                )
                assert isinstance(mm_processor, EncDecMultiModalProcessor)

                if mm_processor.pad_dummy_encoder_prompt:
                    return  # Skip encoder length check for Whisper

            if model_config.is_multimodal_model:
                suggestion = (
                    "Make sure that `max_model_len` is no smaller than the "
                    "number of text tokens plus multimodal tokens. For image "
                    "inputs, the number of image tokens depends on the number "
                    "of images, and possibly their aspect ratios as well.")
            else:
                suggestion = (
                    "Make sure that `max_model_len` is no smaller than the "
                    "number of text tokens.")

            raise ValueError(
                f"The {prompt_type} prompt (length {len(prompt_ids)}) is "
                f"longer than the maximum model length of {max_prompt_len}. "
                f"{suggestion}")

            # TODO: Find out how many placeholder tokens are there so we can
            # check that chunked prefill does not truncate them
            # max_batch_len = self.scheduler_config.max_num_batched_tokens

    def _build_logits_processors(
            self, sampling_params: SamplingParams,
            lora_request: Optional[LoRARequest]) -> SamplingParams:
        """Constructs logits processors based on the guided_decoding,
        logit_bias, and allowed_token_ids fields in sampling_params. Deletes
        those fields and adds the constructed logits processors to the
        logits_processors field. Returns the modified sampling params."""

        logits_processors = []

        if sampling_params.guided_decoding is not None:
            # Defensively copy sampling params since guided decoding logits
            # processors can have different state for each request
            sampling_params = copy.copy(sampling_params)
            guided_decoding = sampling_params.guided_decoding

            logger.debug(
                "Building guided decoding logits processor in "
                "LLMEngine. Params: %s", guided_decoding)

            tokenizer = self.get_tokenizer(lora_request=lora_request)
            guided_decoding.backend = guided_decoding.backend or \
                self.decoding_config.backend

            if self.decoding_config.reasoning_backend:
                logger.debug("Building with reasoning backend %s",
                             self.decoding_config.reasoning_backend)

            processor = get_local_guided_decoding_logits_processor(
                guided_params=guided_decoding,
                tokenizer=tokenizer,
                model_config=self.model_config,
                reasoning_backend=self.decoding_config.reasoning_backend,
            )
            if processor:
                logits_processors.append(processor)

            # Unset so this doesn't get passed down to the model
            sampling_params.guided_decoding = None

        if (sampling_params.logit_bias or sampling_params.allowed_token_ids):
            tokenizer = self.get_tokenizer(lora_request=lora_request)

            processors = get_openai_logits_processors(
                logit_bias=sampling_params.logit_bias,
                allowed_token_ids=sampling_params.allowed_token_ids,
                tokenizer=tokenizer)
            logits_processors.extend(processors)

            # Unset so these don't get passed down to the model
            sampling_params.logit_bias = None
            sampling_params.allowed_token_ids = None

        if len(sampling_params.bad_words) > 0:
            tokenizer = self.get_tokenizer(lora_request)
            processors = get_bad_words_logits_processors(
                bad_words=sampling_params.bad_words, tokenizer=tokenizer)
            logits_processors.extend(processors)

        if logits_processors:
            if sampling_params.logits_processors is None:
                sampling_params.logits_processors = logits_processors
            else:
                sampling_params.logits_processors.extend(logits_processors)

        return sampling_params
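
    # Example (illustrative sketch of the transformation above): given
    #     params = SamplingParams(temperature=0.0)
    #     params.logit_bias = {42: -100.0}
    # _build_logits_processors returns params with logit_bias cleared and an
    # equivalent OpenAI-style logits processor appended to
    # params.logits_processors, so only logits processors reach the model.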

    def collective_rpc(self,
                       method: Union[str, Callable[..., _R]],
                       timeout: Optional[float] = None,
                       args: tuple = (),
                       kwargs: Optional[dict[str, Any]] = None) -> list[_R]:
        return self.model_executor.collective_rpc(method, timeout, args,
                                                  kwargs)
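
A minimal usage sketch for collective_rpc, assuming `engine` is an initialized LLMEngine. The worker method name below is hypothetical and only illustrates the call shape; the call is forwarded to model_executor.collective_rpc and returns one result per worker.

# "get_device_name" is a hypothetical worker method, used for illustration only.
results = engine.collective_rpc("get_device_name", timeout=10.0, args=())
print(results)  # one entry per worker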

DO_VALIDATE_OUTPUT class-attribute

DO_VALIDATE_OUTPUT: bool = False

A flag to toggle whether to validate the type of request output.

_skip_scheduling_next_step instance-attribute

_skip_scheduling_next_step = False

async_callbacks instance-attribute

async_callbacks = [
    partial(
        process_model_outputs, ctx=scheduler_contexts[v_id]
    )
    for v_id in range(pipeline_parallel_size)
]

cache_config instance-attribute

cache_config = cache_config

cached_scheduler_outputs instance-attribute

cached_scheduler_outputs = [
    SchedulerOutputState()
    for _ in range(pipeline_parallel_size)
]

decoding_config instance-attribute

decoding_config = decoding_config or DecodingConfig()

detokenizer instance-attribute

detokenizer = Detokenizer(tokenizer)

device_config instance-attribute

device_config = device_config

generation_config_fields instance-attribute

generation_config_fields = try_get_generation_config()

input_preprocessor instance-attribute

input_preprocessor = InputPreprocessor(
    model_config, tokenizer, mm_registry
)

load_config instance-attribute

load_config = load_config

log_stats instance-attribute

log_stats = log_stats

lora_config instance-attribute

lora_config = lora_config

model_config instance-attribute

model_config = model_config

model_executor instance-attribute

model_executor = executor_class(vllm_config=vllm_config)

observability_config instance-attribute

observability_config = (
    observability_config or ObservabilityConfig()
)

output_processor instance-attribute

output_processor = create_output_processor(
    scheduler_config,
    detokenizer,
    scheduler,
    seq_counter,
    get_tokenizer_for_seq,
    stop_checker=StopChecker(
        max_model_len, get_tokenizer_for_seq
    ),
)

parallel_config instance-attribute

parallel_config = parallel_config

process_request_outputs_callback instance-attribute

process_request_outputs_callback: Optional[Callable] = None

prompt_adapter_config instance-attribute

prompt_adapter_config = prompt_adapter_config

scheduler instance-attribute

scheduler = [
    Scheduler(
        scheduler_config,
        cache_config,
        lora_config,
        pipeline_parallel_size,
        async_callbacks[v_id]
        if use_async_output_proc
        else None,
    )
    for v_id in range(pipeline_parallel_size)
]

scheduler_config instance-attribute

scheduler_config = scheduler_config

scheduler_contexts instance-attribute

scheduler_contexts = [
    SchedulerContext(
        multi_step_stream_outputs=multi_step_stream_outputs
    )
    for _ in range(pipeline_parallel_size)
]

seq_counter instance-attribute

seq_counter = Counter()

seq_id_to_seq_group instance-attribute

seq_id_to_seq_group: Dict[str, SequenceGroupBase] = {}

speculative_config instance-attribute

speculative_config = speculative_config

stat_loggers instance-attribute

stat_loggers = stat_loggers

tokenizer instance-attribute

tracer instance-attribute

tracer = None

use_cached_outputs instance-attribute

use_cached_outputs = use_cached_outputs

vllm_config instance-attribute

vllm_config = vllm_config

__del__

__del__()
Source code in vllm/engine/llm_engine.py
def __del__(self):
    # Shutdown model executor when engine is garbage collected
    # Use getattr since __init__ can fail before the field is set
    if model_executor := getattr(self, "model_executor", None):
        model_executor.shutdown()

__init__

__init__(
    vllm_config: VllmConfig,
    executor_class: Type[ExecutorBase],
    log_stats: bool,
    usage_context: UsageContext = ENGINE_CONTEXT,
    stat_loggers: Optional[
        Dict[str, StatLoggerBase]
    ] = None,
    mm_registry: MultiModalRegistry = MULTIMODAL_REGISTRY,
    use_cached_outputs: bool = False,
) -> None
Source code in vllm/engine/llm_engine.py
def __init__(
    self,
    vllm_config: VllmConfig,
    executor_class: Type[ExecutorBase],
    log_stats: bool,
    usage_context: UsageContext = UsageContext.ENGINE_CONTEXT,
    stat_loggers: Optional[Dict[str, StatLoggerBase]] = None,
    mm_registry: MultiModalRegistry = MULTIMODAL_REGISTRY,
    use_cached_outputs: bool = False,
) -> None:
    if envs.VLLM_USE_V1:
        raise ValueError(
            "Using V0 LLMEngine, but envs.VLLM_USE_V1=True. "
            "This should not happen. As a workaround, try using "
            "LLMEngine.from_vllm_config(...) or explicitly set "
            "VLLM_USE_V1=0 or 1 and report this issue on Github.")

    self.vllm_config = vllm_config
    self.model_config = vllm_config.model_config
    self.cache_config = vllm_config.cache_config
    self.lora_config = vllm_config.lora_config
    self.parallel_config = vllm_config.parallel_config
    self.scheduler_config = vllm_config.scheduler_config
    self.device_config = vllm_config.device_config
    self.speculative_config = vllm_config.speculative_config  # noqa
    self.load_config = vllm_config.load_config
    self.decoding_config = vllm_config.decoding_config or DecodingConfig(  # noqa
    )
    self.prompt_adapter_config = vllm_config.prompt_adapter_config  # noqa
    self.observability_config = vllm_config.observability_config or ObservabilityConfig(  # noqa
    )

    logger.info(
        "Initializing a V0 LLM engine (v%s) with config: %s, "
        "use_cached_outputs=%s, ",
        VLLM_VERSION,
        vllm_config,
        use_cached_outputs,
    )

    self.log_stats = log_stats
    self.use_cached_outputs = use_cached_outputs

    if not self.model_config.skip_tokenizer_init:
        self.tokenizer = self._init_tokenizer()
        self.detokenizer = Detokenizer(self.tokenizer)
        tokenizer_group = self.get_tokenizer_group()
    else:
        self.tokenizer = None
        self.detokenizer = None
        tokenizer_group = None

    # Ensure that the function doesn't contain a reference to self,
    # to avoid engine GC issues
    def get_tokenizer_for_seq(sequence: Sequence) -> AnyTokenizer:
        assert tokenizer_group, ("tokenizer_group cannot be None, "
                                 "make sure skip_tokenizer_init is False")
        return tokenizer_group.get_lora_tokenizer(sequence.lora_request)

    self.seq_counter = Counter()
    self.generation_config_fields = (
        self.model_config.try_get_generation_config())

    self.input_preprocessor = InputPreprocessor(self.model_config,
                                                self.tokenizer,
                                                mm_registry)

    self.model_executor = executor_class(vllm_config=vllm_config)

    if self.model_config.runner_type != "pooling":
        self._initialize_kv_caches()

    # If usage stat is enabled, collect relevant info.
    if is_usage_stats_enabled():
        from vllm.model_executor.model_loader import (
            get_architecture_class_name)
        usage_message.report_usage(
            get_architecture_class_name(self.model_config),
            usage_context,
            extra_kvs={
                # Common configuration
                "dtype":
                str(self.model_config.dtype),
                "tensor_parallel_size":
                self.parallel_config.tensor_parallel_size,
                "block_size":
                self.cache_config.block_size,
                "gpu_memory_utilization":
                self.cache_config.gpu_memory_utilization,

                # Quantization
                "quantization":
                self.model_config.quantization,
                "kv_cache_dtype":
                str(self.cache_config.cache_dtype),

                # Feature flags
                "enable_lora":
                bool(self.lora_config),
                "enable_prompt_adapter":
                bool(self.prompt_adapter_config),
                "enable_prefix_caching":
                self.cache_config.enable_prefix_caching,
                "enforce_eager":
                self.model_config.enforce_eager,
                "disable_custom_all_reduce":
                self.parallel_config.disable_custom_all_reduce,
            })

    self.cached_scheduler_outputs = [
        SchedulerOutputState()
        for _ in range(self.parallel_config.pipeline_parallel_size)
    ]

    self.scheduler_contexts = [
        SchedulerContext(multi_step_stream_outputs=self.scheduler_config.
                         multi_step_stream_outputs)
        for _ in range(self.parallel_config.pipeline_parallel_size)
    ]

    if self.model_config.use_async_output_proc:
        process_model_outputs = weak_bind(self._process_model_outputs)

        self.async_callbacks = [
            partial(process_model_outputs,
                    ctx=self.scheduler_contexts[v_id])
            for v_id in range(self.parallel_config.pipeline_parallel_size)
        ]
    else:
        self.async_callbacks = []

    # Currently used by AsyncLLMEngine to ensure quick append
    # of request outputs to asyncio queues
    self.process_request_outputs_callback: Optional[Callable] = None

    # Create the scheduler.
    # NOTE: the cache_config here have been updated with the numbers of
    # GPU and CPU blocks, which are profiled in the distributed executor.
    if isinstance(self.vllm_config.scheduler_config.scheduler_cls, str):
        Scheduler = resolve_obj_by_qualname(
            self.vllm_config.scheduler_config.scheduler_cls)
    else:
        Scheduler = self.vllm_config.scheduler_config.scheduler_cls
    self.scheduler = [
        Scheduler(
            self.scheduler_config, self.cache_config, self.lora_config,
            self.parallel_config.pipeline_parallel_size,
            self.async_callbacks[v_id]
            if self.model_config.use_async_output_proc else None)
        for v_id in range(self.parallel_config.pipeline_parallel_size)
    ]

    # Metric Logging.
    if self.log_stats:
        if stat_loggers is not None:
            self.stat_loggers = stat_loggers
        else:
            # Lazy import for prometheus multiprocessing.
            # We need to set PROMETHEUS_MULTIPROC_DIR environment variable
            # before prometheus_client is imported.
            # See https://prometheus.github.io/client_python/multiprocess/
            from vllm.engine.metrics import (LoggingStatLogger,
                                             PrometheusStatLogger)

            self.stat_loggers = {
                "logging":
                LoggingStatLogger(
                    local_interval=_LOCAL_LOGGING_INTERVAL_SEC,
                    vllm_config=vllm_config),
                "prometheus":
                PrometheusStatLogger(
                    local_interval=_LOCAL_LOGGING_INTERVAL_SEC,
                    labels=dict(
                        model_name=self.model_config.served_model_name),
                    vllm_config=vllm_config),
            }
            self.stat_loggers["prometheus"].info("cache_config",
                                                 self.cache_config)

    self.tracer = None
    if self.observability_config.otlp_traces_endpoint:
        self.tracer = init_tracer(
            "vllm.llm_engine",
            self.observability_config.otlp_traces_endpoint)

    # Create sequence output processor, e.g. for beam search or
    # speculative decoding.
    self.output_processor = (
        SequenceGroupOutputProcessor.create_output_processor(
            self.scheduler_config,
            self.detokenizer,
            self.scheduler,
            self.seq_counter,
            get_tokenizer_for_seq,
            stop_checker=StopChecker(self.scheduler_config.max_model_len,
                                     get_tokenizer_for_seq),
        ))

    self.seq_id_to_seq_group: Dict[str, SequenceGroupBase] = {}

    # Flag to set when an input fails to process and the engine should run
    # the next step without re-scheduling.
    self._skip_scheduling_next_step = False

    # Don't keep the dummy data in memory
    self.reset_mm_cache()

__reduce__

__reduce__()
Source code in vllm/engine/llm_engine.py
def __reduce__(self):
    # This is to ensure that the LLMEngine is not referenced in
    # the closure used to initialize Ray worker actors
    raise RuntimeError("LLMEngine should not be pickled!")

_abort_and_cache_schedule

_abort_and_cache_schedule(
    request_id: str,
    virtual_engine: int,
    seq_group_metadata_list: List[SequenceGroupMetadata],
    scheduler_outputs: SchedulerOutputs,
    allow_async_output_proc: bool,
) -> None

Aborts a single request, and caches the scheduler outputs minus that request. This allows the next step to continue processing the remaining requests without having to re-run the scheduler.

Source code in vllm/engine/llm_engine.py
def _abort_and_cache_schedule(
        self, request_id: str, virtual_engine: int,
        seq_group_metadata_list: List[SequenceGroupMetadata],
        scheduler_outputs: SchedulerOutputs,
        allow_async_output_proc: bool) -> None:
    """Aborts a single request, and caches the scheduler outputs minus that
    request. This allows the next step to continue processing the remaining
    requests without having to re-run the scheduler."""

    # Abort the request and remove its sequence group from the current
    # schedule
    self.abort_request(request_id)
    for i, metadata in enumerate(seq_group_metadata_list):
        if metadata.request_id == request_id:
            del seq_group_metadata_list[i]
            break
    for i, group in enumerate(scheduler_outputs.scheduled_seq_groups):
        if group.seq_group.request_id == request_id:
            del scheduler_outputs.scheduled_seq_groups[i]
            break

    # If there are still other sequence groups left in the schedule, cache
    # them and flag the engine to reuse the schedule.
    if len(seq_group_metadata_list) > 0:
        self._skip_scheduling_next_step = True
        # Reuse multi-step caching logic
        self._cache_scheduler_outputs_for_multi_step(
            virtual_engine=virtual_engine,
            scheduler_outputs=scheduler_outputs,
            seq_group_metadata_list=seq_group_metadata_list,
            allow_async_output_proc=allow_async_output_proc)

_add_processed_request

_add_processed_request(
    request_id: str,
    processed_inputs: ProcessorInputs,
    params: Union[SamplingParams, PoolingParams],
    arrival_time: float,
    lora_request: Optional[LoRARequest],
    prompt_adapter_request: Optional[PromptAdapterRequest],
    trace_headers: Optional[Mapping[str, str]] = None,
    priority: int = 0,
) -> Optional[SequenceGroup]

Add a processed request to the engine's request pool. Return the created sequence group.

Source code in vllm/engine/llm_engine.py
def _add_processed_request(
    self,
    request_id: str,
    processed_inputs: ProcessorInputs,
    params: Union[SamplingParams, PoolingParams],
    arrival_time: float,
    lora_request: Optional[LoRARequest],
    prompt_adapter_request: Optional[PromptAdapterRequest],
    trace_headers: Optional[Mapping[str, str]] = None,
    priority: int = 0,
) -> Optional[SequenceGroup]:
    """Add a processed request to the engine's request pool.
    Return the created sequence group.
    """
    if isinstance(params, SamplingParams) and params.n > 1:
        ParallelSampleSequenceGroup.add_request(
            request_id,
            self,
            params,
            processed_inputs=processed_inputs,
            arrival_time=arrival_time,
            lora_request=lora_request,
            trace_headers=trace_headers,
            prompt_adapter_request=prompt_adapter_request,
            priority=priority,
        )
        return None

    self._validate_model_inputs(processed_inputs, lora_request)
    # Create the sequences.
    block_size = self.cache_config.block_size
    seq_id = next(self.seq_counter)
    eos_token_id = self.input_preprocessor.get_eos_token_id(lora_request)

    encoder_inputs, decoder_inputs = split_enc_dec_inputs(processed_inputs)

    seq = Sequence(seq_id, decoder_inputs, block_size, eos_token_id,
                   lora_request, prompt_adapter_request)

    encoder_seq = (None if encoder_inputs is None else Sequence(
        seq_id, encoder_inputs, block_size, eos_token_id, lora_request,
        prompt_adapter_request))

    # Create a SequenceGroup based on SamplingParams or PoolingParams
    if isinstance(params, SamplingParams):
        seq_group = self._create_sequence_group_with_sampling(
            request_id,
            seq,
            params,
            arrival_time=arrival_time,
            lora_request=lora_request,
            trace_headers=trace_headers,
            prompt_adapter_request=prompt_adapter_request,
            encoder_seq=encoder_seq,
            priority=priority)
    elif isinstance(params, PoolingParams):
        seq_group = self._create_sequence_group_with_pooling(
            request_id,
            seq,
            params,
            arrival_time=arrival_time,
            lora_request=lora_request,
            prompt_adapter_request=prompt_adapter_request,
            encoder_seq=encoder_seq,
            priority=priority)
    else:
        raise ValueError(
            "Either SamplingParams or PoolingParams must be provided.")

    # Add the sequence group to the scheduler with least unfinished seqs.
    costs = [
        scheduler.get_num_unfinished_seq_groups()
        for scheduler in self.scheduler
    ]
    min_cost_scheduler = self.scheduler[costs.index(min(costs))]
    min_cost_scheduler.add_seq_group(seq_group)

    return seq_group
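The scheduler selection above is a simple least-loaded policy across the per-virtual-engine schedulers. A minimal standalone sketch of that selection logic, using a hypothetical schedulers list in place of self.scheduler:

# Hedged sketch of the least-loaded scheduler pick used above.
# `schedulers` is a hypothetical stand-in for the engine's per-virtual-engine
# scheduler list; each entry exposes get_num_unfinished_seq_groups().
def pick_least_loaded(schedulers):
    costs = [s.get_num_unfinished_seq_groups() for s in schedulers]
    return schedulers[costs.index(min(costs))]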

_advance_to_next_step

_advance_to_next_step(
    output: SamplerOutput,
    seq_group_metadata_list: List[SequenceGroupMetadata],
    scheduled_seq_groups: List[ScheduledSequenceGroup],
) -> None

Given the model output from a single run, append the tokens to the sequences. This is normally done inside the output processor, but it is required here if the worker performs an async forward pass to the next step.

Source code in vllm/engine/llm_engine.py
def _advance_to_next_step(
        self, output: SamplerOutput,
        seq_group_metadata_list: List[SequenceGroupMetadata],
        scheduled_seq_groups: List[ScheduledSequenceGroup]) -> None:
    """Given model output from a single run, append the tokens to the
    sequences. This is normally done inside output processor, but it is
    required if the worker is to perform async forward pass to next step.
    """
    for seq_group_metadata, sequence_group_outputs, scheduled_seq_group in \
        zip(seq_group_metadata_list, output, scheduled_seq_groups):
        seq_group = scheduled_seq_group.seq_group

        if seq_group.is_finished():
            continue

        if self.scheduler_config.is_multi_step:
            # Updates happen only if the sequence is prefill
            self._update_num_computed_tokens_for_multi_step_prefill(
                seq_group, seq_group_metadata,
                seq_group.state.num_steps == 1)
        else:
            token_chunk_size = (seq_group_metadata.token_chunk_size
                                if seq_group_metadata.token_chunk_size
                                is not None else 0)
            seq_group.update_num_computed_tokens(token_chunk_size)

        if seq_group_metadata.do_sample:
            assert len(sequence_group_outputs.samples) == 1, (
                "Async output processor expects a single sample"
                " (i.e sampling_params.n == 1)")
            sample = sequence_group_outputs.samples[0]

            assert len(seq_group.seqs) == 1
            seq = seq_group.seqs[0]

            if self.scheduler_config.is_multi_step:
                is_prefill_append = seq.data.get_num_uncomputed_tokens(
                ) == 0
                seq.append_token_id(sample.output_token, sample.logprobs,
                                    sample.output_embed)
                if not is_prefill_append:
                    seq_group.update_num_computed_tokens(1)
            else:
                seq.append_token_id(sample.output_token, sample.logprobs,
                                    sample.output_embed)

_build_logits_processors

_build_logits_processors(
    sampling_params: SamplingParams,
    lora_request: Optional[LoRARequest],
) -> SamplingParams

Constructs logits processors based on the guided_decoding, logit_bias, and allowed_token_ids fields in sampling_params. Deletes those fields and adds the constructed logits processors to the logits_processors field. Returns the modified sampling params.

Source code in vllm/engine/llm_engine.py
def _build_logits_processors(
        self, sampling_params: SamplingParams,
        lora_request: Optional[LoRARequest]) -> SamplingParams:
    """Constructs logits processors based on the guided_decoding,
    logits_bias, and allowed_token_ids fields in sampling_params. Deletes
    those fields and adds the constructed logits processors to the
    logits_processors field. Returns the modified sampling params."""

    logits_processors = []

    if sampling_params.guided_decoding is not None:
        # Defensively copy sampling params since guided decoding logits
        # processors can have different state for each request
        sampling_params = copy.copy(sampling_params)
        guided_decoding = sampling_params.guided_decoding

        logger.debug(
            "Building guided decoding logits processor in "
            "LLMEngine. Params: %s", guided_decoding)

        tokenizer = self.get_tokenizer(lora_request=lora_request)
        guided_decoding.backend = guided_decoding.backend or \
            self.decoding_config.backend

        if self.decoding_config.reasoning_backend:
            logger.debug("Building with reasoning backend %s",
                         self.decoding_config.reasoning_backend)

        processor = get_local_guided_decoding_logits_processor(
            guided_params=guided_decoding,
            tokenizer=tokenizer,
            model_config=self.model_config,
            reasoning_backend=self.decoding_config.reasoning_backend,
        )
        if processor:
            logits_processors.append(processor)

        # Unset so this doesn't get passed down to the model
        sampling_params.guided_decoding = None

    if (sampling_params.logit_bias or sampling_params.allowed_token_ids):
        tokenizer = self.get_tokenizer(lora_request=lora_request)

        processors = get_openai_logits_processors(
            logit_bias=sampling_params.logit_bias,
            allowed_token_ids=sampling_params.allowed_token_ids,
            tokenizer=tokenizer)
        logits_processors.extend(processors)

        # Unset so these don't get passed down to the model
        sampling_params.logit_bias = None
        sampling_params.allowed_token_ids = None

    if len(sampling_params.bad_words) > 0:
        tokenizer = self.get_tokenizer(lora_request)
        processors = get_bad_words_logits_processors(
            bad_words=sampling_params.bad_words, tokenizer=tokenizer)
        logits_processors.extend(processors)

    if logits_processors:
        if sampling_params.logits_processors is None:
            sampling_params.logits_processors = logits_processors
        else:
            sampling_params.logits_processors.extend(logits_processors)

    return sampling_params
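For illustration, the snippet below sketches what one of the constructed processors could look like: a simple logit-bias callable in the common (previous_token_ids, logits) -> logits convention accepted by sampling_params.logits_processors. The exact callable signature and the helper name are assumptions; check the SamplingParams documentation for your vLLM version.

import torch

# Hypothetical helper: builds a logit-bias processor similar in spirit to the
# OpenAI-style processors assembled above. The (previous_token_ids, logits)
# signature is an assumed convention, not a guaranteed API.
def make_logit_bias_processor(logit_bias: dict[int, float]):
    def processor(previous_token_ids: list[int],
                  logits: torch.Tensor) -> torch.Tensor:
        for token_id, bias in logit_bias.items():
            logits[token_id] += bias
        return logits
    return processor

# Possible usage (hypothetical):
# sampling_params.logits_processors = [make_logit_bias_processor({50256: -100.0})]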

_cache_scheduler_outputs_for_multi_step

_cache_scheduler_outputs_for_multi_step(
    virtual_engine: int,
    seq_group_metadata_list: Optional[
        List[SequenceGroupMetadata]
    ],
    scheduler_outputs: SchedulerOutputs,
    allow_async_output_proc: bool,
) -> None
Source code in vllm/engine/llm_engine.py
def _cache_scheduler_outputs_for_multi_step(
        self, virtual_engine: int,
        seq_group_metadata_list: Optional[List[SequenceGroupMetadata]],
        scheduler_outputs: SchedulerOutputs,
        allow_async_output_proc: bool) -> None:
    co = self.cached_scheduler_outputs[virtual_engine]

    co.seq_group_metadata_list = seq_group_metadata_list
    co.scheduler_outputs = scheduler_outputs
    co.allow_async_output_proc = allow_async_output_proc
    co.last_output = None

_create_sequence_group_with_pooling

_create_sequence_group_with_pooling(
    request_id: str,
    seq: Sequence,
    pooling_params: PoolingParams,
    arrival_time: float,
    lora_request: Optional[LoRARequest],
    prompt_adapter_request: Optional[PromptAdapterRequest],
    encoder_seq: Optional[Sequence] = None,
    priority: int = 0,
) -> SequenceGroup

Creates a SequenceGroup with PoolingParams.

Source code in vllm/engine/llm_engine.py
def _create_sequence_group_with_pooling(
    self,
    request_id: str,
    seq: Sequence,
    pooling_params: PoolingParams,
    arrival_time: float,
    lora_request: Optional[LoRARequest],
    prompt_adapter_request: Optional[PromptAdapterRequest],
    encoder_seq: Optional[Sequence] = None,
    priority: int = 0,
) -> SequenceGroup:
    """Creates a SequenceGroup with PoolingParams."""
    # Defensive copy of PoolingParams, which are used by the pooler
    pooling_params = pooling_params.clone()
    # Create the sequence group.
    seq_group = SequenceGroup(
        request_id=request_id,
        seqs=[seq],
        arrival_time=arrival_time,
        lora_request=lora_request,
        pooling_params=pooling_params,
        prompt_adapter_request=prompt_adapter_request,
        encoder_seq=encoder_seq,
        priority=priority)
    return seq_group

_create_sequence_group_with_sampling

_create_sequence_group_with_sampling(
    request_id: str,
    seq: Sequence,
    sampling_params: SamplingParams,
    arrival_time: float,
    lora_request: Optional[LoRARequest],
    trace_headers: Optional[Mapping[str, str]] = None,
    prompt_adapter_request: Optional[
        PromptAdapterRequest
    ] = None,
    encoder_seq: Optional[Sequence] = None,
    priority: int = 0,
) -> SequenceGroup

Creates a SequenceGroup with SamplingParams.

Source code in vllm/engine/llm_engine.py
def _create_sequence_group_with_sampling(
    self,
    request_id: str,
    seq: Sequence,
    sampling_params: SamplingParams,
    arrival_time: float,
    lora_request: Optional[LoRARequest],
    trace_headers: Optional[Mapping[str, str]] = None,
    prompt_adapter_request: Optional[PromptAdapterRequest] = None,
    encoder_seq: Optional[Sequence] = None,
    priority: int = 0,
) -> SequenceGroup:
    """Creates a SequenceGroup with SamplingParams."""
    max_logprobs = self.get_model_config().max_logprobs
    if (sampling_params.logprobs
            and sampling_params.logprobs > max_logprobs) or (
                sampling_params.prompt_logprobs
                and sampling_params.prompt_logprobs > max_logprobs):
        raise ValueError(f"Cannot request more than "
                         f"{max_logprobs} logprobs.")

    sampling_params = self._build_logits_processors(
        sampling_params, lora_request)

    # Defensive copy of SamplingParams, which are used by the sampler,
    # this doesn't deep-copy LogitsProcessor objects
    sampling_params = sampling_params.clone()

    sampling_params.update_from_generation_config(
        self.generation_config_fields, seq.eos_token_id)

    # Create the sequence group.
    draft_size = 1
    if self.vllm_config.speculative_config is not None:
        draft_size = \
            self.vllm_config.speculative_config.num_speculative_tokens + 1
    seq_group = SequenceGroup(
        request_id=request_id,
        seqs=[seq],
        arrival_time=arrival_time,
        sampling_params=sampling_params,
        lora_request=lora_request,
        trace_headers=trace_headers,
        prompt_adapter_request=prompt_adapter_request,
        encoder_seq=encoder_seq,
        priority=priority,
        draft_size=draft_size)

    return seq_group
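As a quick illustration of the logprobs guard at the top of this method: if the model's max_logprobs were, say, 20, a request asking for more is rejected before a sequence group is created. The values below are purely illustrative.

from vllm import SamplingParams

# Assumed max_logprobs of 20 for the loaded model (illustrative only).
params = SamplingParams(logprobs=50)
# engine.add_request("req-0", "Hello", params)
# -> ValueError: Cannot request more than 20 logprobs.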

_get_executor_cls classmethod

_get_executor_cls(
    engine_config: VllmConfig,
) -> Type[ExecutorBase]
Source code in vllm/engine/llm_engine.py
@classmethod
def _get_executor_cls(cls,
                      engine_config: VllmConfig) -> Type[ExecutorBase]:
    # distributed_executor_backend must be set in VllmConfig.__post_init__
    distributed_executor_backend = (
        engine_config.parallel_config.distributed_executor_backend)
    # Initialize the cluster and specify the executor class.
    if isinstance(distributed_executor_backend, type):
        if not issubclass(distributed_executor_backend, ExecutorBase):
            raise TypeError(
                "distributed_executor_backend must be a subclass of "
                f"ExecutorBase. Got {distributed_executor_backend}.")
        executor_class = distributed_executor_backend
    elif distributed_executor_backend == "ray":
        from vllm.executor.ray_distributed_executor import (
            RayDistributedExecutor)
        executor_class = RayDistributedExecutor
    elif distributed_executor_backend == "mp":
        from vllm.executor.mp_distributed_executor import (
            MultiprocessingDistributedExecutor)
        assert not envs.VLLM_USE_RAY_SPMD_WORKER, (
            "multiprocessing distributed executor backend does not "
            "support VLLM_USE_RAY_SPMD_WORKER=1")
        executor_class = MultiprocessingDistributedExecutor
    elif distributed_executor_backend == "uni":
        # JAX-style, single-process, multi-device executor.
        from vllm.executor.uniproc_executor import UniProcExecutor
        executor_class = UniProcExecutor
    elif distributed_executor_backend == "external_launcher":
        # executor with external launcher
        from vllm.executor.uniproc_executor import (  # noqa
            ExecutorWithExternalLauncher)
        executor_class = ExecutorWithExternalLauncher
    else:
        raise ValueError("unrecognized distributed_executor_backend: "
                         f"{distributed_executor_backend}")
    return executor_class
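The backend is normally chosen through the engine arguments rather than by calling this method directly. A hedged sketch, assuming the distributed_executor_backend engine argument and an illustrative model name:

from vllm import EngineArgs, LLMEngine

# Select the multiprocessing executor explicitly; other accepted values per
# the branches above are "ray", "uni", "external_launcher", or an
# ExecutorBase subclass. Model name is illustrative.
engine_args = EngineArgs(
    model="facebook/opt-125m",
    distributed_executor_backend="mp",
)
engine = LLMEngine.from_engine_args(engine_args)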

_get_last_sampled_token_ids

_get_last_sampled_token_ids(
    virtual_engine: int,
) -> Optional[Tensor]
Source code in vllm/engine/llm_engine.py
def _get_last_sampled_token_ids(
        self, virtual_engine: int) -> Optional[torch.Tensor]:
    cached_last_output = self.cached_scheduler_outputs[
        virtual_engine].last_output
    if (self.scheduler_config.is_multi_step
            and self.parallel_config.pipeline_parallel_size > 1
            and cached_last_output is not None
            and cached_last_output.sampled_token_ids_cpu is not None):
        return cached_last_output.sampled_token_ids_cpu
    return None

_get_stats

_get_stats(
    scheduler_outputs: Optional[SchedulerOutputs],
    model_output: Optional[List[SamplerOutput]] = None,
    finished_before: Optional[List[int]] = None,
    skip: Optional[List[int]] = None,
) -> Stats

Get Stats to be Logged to Prometheus.

Parameters:

Name Type Description Default
scheduler_outputs Optional[SchedulerOutputs]

Optional, used to populate metrics related to the scheduled batch.

required
model_output Optional[List[SamplerOutput]]

Optional, used to emit speculative decoding metrics which are created by the workers.

None
finished_before Optional[List[int]]

Optional, indices of sequences that were finished before. These sequences will be ignored.

None
skip Optional[List[int]]

Optional, indices of sequences that were preempted. These sequences will be ignored.

None
Source code in vllm/engine/llm_engine.py
def _get_stats(self,
               scheduler_outputs: Optional[SchedulerOutputs],
               model_output: Optional[List[SamplerOutput]] = None,
               finished_before: Optional[List[int]] = None,
               skip: Optional[List[int]] = None) -> Stats:
    """Get Stats to be Logged to Prometheus.

    Args:
        scheduler_outputs: Optional, used to populate metrics related to
            the scheduled batch.
        model_output: Optional, used to emit speculative decoding metrics
            which are created by the workers.
        finished_before: Optional, indices of sequences that were finished
            before. These sequences will be ignored.
        skip: Optional, indices of sequences that were preempted. These
            sequences will be ignored.
    """
    now = time.time()

    # System State
    #   Scheduler State
    num_running_sys = sum(
        len(scheduler.running) for scheduler in self.scheduler)
    num_swapped_sys = sum(
        len(scheduler.swapped) for scheduler in self.scheduler)
    num_waiting_sys = sum(
        len(scheduler.waiting) for scheduler in self.scheduler)

    # KV Cache Usage in %
    num_total_gpu = self.cache_config.num_gpu_blocks
    gpu_cache_usage_sys = 0.
    if num_total_gpu:  # Guard against both None and 0
        num_free_gpu = sum(
            scheduler.block_manager.get_num_free_gpu_blocks()
            for scheduler in self.scheduler)
        gpu_cache_usage_sys = 1.0 - (num_free_gpu / num_total_gpu)

    num_total_cpu = self.cache_config.num_cpu_blocks
    cpu_cache_usage_sys = 0.
    if num_total_cpu:  # Guard against both None and 0
        num_free_cpu = sum(
            scheduler.block_manager.get_num_free_cpu_blocks()
            for scheduler in self.scheduler)
        cpu_cache_usage_sys = 1.0 - (num_free_cpu / num_total_cpu)

    # Prefix Cache Hit Rate. Note that we always use
    # the cache hit rate of the first virtual engine.
    cpu_prefix_cache_hit_rate = self.scheduler[
        0].get_prefix_cache_hit_rate(Device.CPU)
    gpu_prefix_cache_hit_rate = self.scheduler[
        0].get_prefix_cache_hit_rate(Device.GPU)

    # Exchange the usage and cache hit stats between gpu and cpu when
    # running on cpu because the cpu_worker.py intentionally reports the
    # number of cpu blocks as gpu blocks in favor of cache management.
    if self.device_config.device_type == "cpu":
        num_total_gpu, num_total_cpu = num_total_cpu, num_total_gpu
        gpu_cache_usage_sys, cpu_cache_usage_sys = (
            cpu_cache_usage_sys,
            gpu_cache_usage_sys,
        )
        gpu_prefix_cache_hit_rate, cpu_prefix_cache_hit_rate = (
            cpu_prefix_cache_hit_rate,
            gpu_prefix_cache_hit_rate,
        )

    # Iteration stats
    num_prompt_tokens_iter = 0
    num_generation_tokens_iter = 0
    num_tokens_iter = 0
    time_to_first_tokens_iter: List[float] = []
    time_per_output_tokens_iter: List[float] = []
    num_preemption_iter = (0 if scheduler_outputs is None else
                           scheduler_outputs.preempted)

    # Request stats
    #   Latency
    time_e2e_requests: List[float] = []
    time_queue_requests: List[float] = []
    time_inference_requests: List[float] = []
    time_prefill_requests: List[float] = []
    time_decode_requests: List[float] = []
    #   Metadata
    num_prompt_tokens_requests: List[int] = []
    num_generation_tokens_requests: List[int] = []
    n_requests: List[int] = []
    max_num_generation_tokens_requests: List[int] = []
    max_tokens_requests: List[int] = []
    finished_reason_requests: List[str] = []

    # LoRA requests
    running_lora_adapters = dict(
        collectionsCounter([
            running_request.lora_request.lora_name
            for scheduler in self.scheduler
            for running_request in scheduler.running
            if running_request.lora_request
        ]))
    waiting_lora_adapters = dict(
        collectionsCounter([
            waiting_request.lora_request.lora_name
            for scheduler in self.scheduler
            for waiting_request in scheduler.waiting
            if waiting_request.lora_request
        ]))
    max_lora_stat = "0"
    if self.lora_config:
        max_lora_stat = str(self.lora_config.max_loras)

    # NOTE: This loop assumes prefill seq_groups are before
    # decode seq_groups in scheduled_seq_groups.
    if scheduler_outputs is not None:
        # For async postprocessor, already finished sequences need to be
        # not counted (to avoid double counting)
        actual_num_batched_tokens = scheduler_outputs.num_batched_tokens  # type: ignore

        num_generation_tokens_from_prefill_groups = 0
        # NOTE: if scheduler_outputs.num_prefill_groups > 0 and
        # the len of scheduler_outputs.scheduled_seq_groups is !=
        # scheduler_outputs.num_prefill_groups, this means that
        # chunked prefills have been detected.

        for idx, scheduled_seq_group in enumerate(
                scheduler_outputs.scheduled_seq_groups):
            # Skip double logging when using async output proc
            if finished_before and idx in finished_before:
                actual_num_batched_tokens -= 1
                continue

            # Currently, skip == preempted sequences, so we need to skip
            # their log stats
            if skip and idx in skip:
                continue

            group_was_prefill = idx < scheduler_outputs.num_prefill_groups
            seq_group = scheduled_seq_group.seq_group

            # NOTE: a seq_group that completed all of its prefill tokens
            # in the last iteration will have seq_group.is_prefill() = False
            # with group_was_prefill = True
            if group_was_prefill:
                # Number of prompt tokens.
                num_prompt_tokens_iter += (
                    scheduled_seq_group.token_chunk_size)

                # If the seq_group just finished the prefill state
                # get TTFT.
                if not seq_group.is_prefill():
                    latency = seq_group.get_last_token_latency()
                    time_to_first_tokens_iter.append(latency)

                    # One generation token per finished prefill.
                    num_generation_tokens_from_prefill_groups += (
                        seq_group.num_seqs())
            else:
                # TPOTs.
                latency = seq_group.get_last_token_latency()
                time_per_output_tokens_iter.append(latency)
                if seq_group.state.current_step == 0:
                    # For async_output_proc, the do_log_stats()
                    # is called following init_multi_step(), which
                    # sets the current_step to zero.
                    actual_num_batched_tokens +=\
                        seq_group.state.num_steps - 1
                else:
                    actual_num_batched_tokens +=\
                        seq_group.state.current_step - 1

            # Because of chunked prefill, we can have a single sequence
            # group that does multiple prompt_runs. To prevent logging
            # the same metadata more than once per request, we standardize
            # on logging request level information for finished requests,
            # which can only happen once.
            if seq_group.is_finished():
                # Latency timings
                time_e2e_requests.append(now -
                                         seq_group.metrics.arrival_time)
                if (seq_group.metrics.first_scheduled_time is not None and
                        seq_group.metrics.first_token_time is not None):
                    time_queue_requests.append(
                        seq_group.metrics.first_scheduled_time -
                        seq_group.metrics.arrival_time)
                    time_prefill_requests.append(
                        seq_group.metrics.first_token_time -
                        seq_group.metrics.first_scheduled_time)
                    time_decode_requests.append(
                        now - seq_group.metrics.first_token_time)
                    time_inference_requests.append(
                        now - seq_group.metrics.first_scheduled_time)
                # Metadata
                num_prompt_tokens_requests.append(
                    len(seq_group.prompt_token_ids))
                num_generation_tokens_requests.extend([
                    seq.get_output_len()
                    for seq in seq_group.get_finished_seqs()
                ])
                max_num_generation_tokens_requests.append(
                    max(seq.get_output_len()
                        for seq in seq_group.get_seqs()))
                if seq_group.sampling_params is not None:
                    n_requests.append(seq_group.sampling_params.n)
                    max_tokens_requests.append(
                        seq_group.sampling_params.max_tokens)
                finished_reason_requests.extend([
                    SequenceStatus.get_finished_reason(seq.status)
                    for seq in seq_group.get_finished_seqs()
                ])

        # Number of generation tokens.
        #   num_batched_tokens equals the number of prompt_tokens plus the
        #   number of decode_tokens in a single iteration. So,
        #   num_generation_tokens = num_batched_tokens - num_prompt_tokens
        #   + num_generation_tokens_from_prefill_groups (since we generate
        #   one token on prefills on iters where the prefill finishes).
        num_generation_tokens_iter = (
            actual_num_batched_tokens - num_prompt_tokens_iter +
            num_generation_tokens_from_prefill_groups)
        num_tokens_iter = (num_generation_tokens_iter +
                           num_prompt_tokens_iter)
    # Spec decode, if enabled, emits specialized metrics from the worker in
    # sampler output.
    if model_output and isinstance(model_output[0], SamplerOutput) and (
            model_output[0].spec_decode_worker_metrics is not None):
        spec_decode_metrics = model_output[0].spec_decode_worker_metrics
    else:
        spec_decode_metrics = None

    return Stats(
        now=now,
        # System stats
        #   Scheduler State
        num_running_sys=num_running_sys,
        num_swapped_sys=num_swapped_sys,
        num_waiting_sys=num_waiting_sys,
        #   KV Cache Usage in %
        gpu_cache_usage_sys=gpu_cache_usage_sys,
        cpu_cache_usage_sys=cpu_cache_usage_sys,
        #   Prefix Cache Hit Rate
        cpu_prefix_cache_hit_rate=cpu_prefix_cache_hit_rate,
        gpu_prefix_cache_hit_rate=gpu_prefix_cache_hit_rate,

        # Iteration stats
        num_prompt_tokens_iter=num_prompt_tokens_iter,
        num_generation_tokens_iter=num_generation_tokens_iter,
        num_tokens_iter=num_tokens_iter,
        time_to_first_tokens_iter=time_to_first_tokens_iter,
        time_per_output_tokens_iter=time_per_output_tokens_iter,
        spec_decode_metrics=spec_decode_metrics,
        num_preemption_iter=num_preemption_iter,

        # Request stats
        #   Latency
        time_e2e_requests=time_e2e_requests,
        time_queue_requests=time_queue_requests,
        time_inference_requests=time_inference_requests,
        time_prefill_requests=time_prefill_requests,
        time_decode_requests=time_decode_requests,
        #   Metadata
        num_prompt_tokens_requests=num_prompt_tokens_requests,
        num_generation_tokens_requests=num_generation_tokens_requests,
        max_num_generation_tokens_requests=
        max_num_generation_tokens_requests,
        n_requests=n_requests,
        max_tokens_requests=max_tokens_requests,
        finished_reason_requests=finished_reason_requests,
        max_lora=str(max_lora_stat),
        waiting_lora_adapters=list(waiting_lora_adapters.keys()),
        running_lora_adapters=list(running_lora_adapters.keys()))
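The KV cache usage reported here is just one minus the fraction of free blocks, summed across schedulers. A small arithmetic sketch with illustrative numbers:

# Illustrative values only, not real engine output.
num_total_gpu_blocks = 4096
num_free_gpu_blocks = 1024
gpu_cache_usage_sys = 1.0 - num_free_gpu_blocks / num_total_gpu_blocks
print(f"{gpu_cache_usage_sys:.1%}")  # 75.0%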

_has_remaining_steps

_has_remaining_steps(
    seq_group_metadata_list: Optional[
        List[SequenceGroupMetadata]
    ],
) -> bool
Source code in vllm/engine/llm_engine.py
def _has_remaining_steps(
    self, seq_group_metadata_list: Optional[List[SequenceGroupMetadata]]
) -> bool:
    if (not self.scheduler_config.is_multi_step
            or not seq_group_metadata_list):
        return False

    # TODO(will) this is a sanity check for now to make sure that all the
    # seqs are on the same steps. Eventually we will want to do some sort of
    # dynamic scheduling when doing multi-step decoding.
    ref_remaining_steps = seq_group_metadata_list[0].state.remaining_steps
    if any([
            seq_group.state.remaining_steps != ref_remaining_steps
            for seq_group in seq_group_metadata_list[1:]
    ]):
        raise AssertionError("All running sequence groups should "
                             "have the same remaining steps.")

    return ref_remaining_steps > 0

_init_tokenizer

_init_tokenizer() -> TokenizerGroup
Source code in vllm/engine/llm_engine.py
def _init_tokenizer(self) -> TokenizerGroup:
    return init_tokenizer_from_configs(
        model_config=self.model_config,
        scheduler_config=self.scheduler_config,
        lora_config=self.lora_config)

_initialize_kv_caches

_initialize_kv_caches() -> None

Initialize the KV cache in the worker(s).

The workers will determine the number of blocks in both the GPU cache and the swap CPU cache.

Source code in vllm/engine/llm_engine.py
def _initialize_kv_caches(self) -> None:
    """Initialize the KV cache in the worker(s).

    The workers will determine the number of blocks in both the GPU cache
    and the swap CPU cache.
    """
    start = time.time()
    num_gpu_blocks, num_cpu_blocks = (
        self.model_executor.determine_num_available_blocks())

    if self.cache_config.num_gpu_blocks_override is not None:
        num_gpu_blocks_override = self.cache_config.num_gpu_blocks_override
        logger.info(
            "Overriding num_gpu_blocks=%d with "
            "num_gpu_blocks_override=%d", num_gpu_blocks,
            num_gpu_blocks_override)
        num_gpu_blocks = num_gpu_blocks_override

    self.cache_config.num_gpu_blocks = num_gpu_blocks
    self.cache_config.num_cpu_blocks = num_cpu_blocks

    self.model_executor.initialize_cache(num_gpu_blocks, num_cpu_blocks)
    elapsed = time.time() - start
    logger.info(("init engine (profile, create kv cache, "
                 "warmup model) took %.2f seconds"), elapsed)

_process_model_outputs

_process_model_outputs(
    ctx: SchedulerContext, request_id: Optional[str] = None
) -> None

Apply the model output to the sequences in the scheduled seq groups and return responses.

ctx: The virtual engine context to work on.
request_id: If provided, then only this request is going to be processed.

Source code in vllm/engine/llm_engine.py
def _process_model_outputs(self,
                           ctx: SchedulerContext,
                           request_id: Optional[str] = None) -> None:
    """Apply the model output to the sequences in the scheduled seq groups
    and return responses.

    ctx: The virtual engine context to work on
    request_id: If provided, then only this request is going to be processed
    """

    now = time.time()

    if len(ctx.output_queue) == 0:
        return None

    # Get pending async postprocessor
    if request_id:
        # When we process only one request, no pop is required
        # (since later we will process all of the rest)
        (outputs, seq_group_metadata_list, scheduler_outputs, is_async,
         is_last_step, is_first_step_output, skip) = ctx.output_queue[0]
    else:
        (outputs, seq_group_metadata_list, scheduler_outputs, is_async,
         is_last_step, is_first_step_output,
         skip) = ctx.output_queue.popleft()

    # Sanity check
    assert len(seq_group_metadata_list) == len(
        scheduler_outputs.scheduled_seq_groups)

    has_multiple_outputs: bool = len(outputs) > 1
    outputs_by_sequence_group: List[List[SequenceGroupOutput]]
    if has_multiple_outputs:
        assert self.scheduler_config.is_multi_step or \
                 self.speculative_config
        # Organize outputs by [step][sequence group] instead of
        # [sequence group][step].
        if self.scheduler_config.is_multi_step:
            outputs_by_sequence_group = create_output_by_sequence_group(
                outputs, len(seq_group_metadata_list))
        elif self.speculative_config:
            # Decodes are multi-steps while prefills are not, outputting at
            # most 1 token. Separate them so that we can trigger chunk
            # processing without having to pad or copy over prompts K times
            # to match decodes structure (costly with prompt_logprobs).
            num_prefills = sum(sg.is_prompt
                               for sg in seq_group_metadata_list)
            prefills, decodes = outputs[:num_prefills], outputs[
                num_prefills:]
            outputs_by_sequence_group = create_output_by_sequence_group(
                decodes,
                num_seq_groups=len(seq_group_metadata_list) - num_prefills)
            outputs_by_sequence_group = [p.outputs for p in prefills
                                         ] + outputs_by_sequence_group
        # We have outputs for multiple steps submitted in a single burst,
        # so invalidate is_first_step_output.
        is_first_step_output = None
    else:
        outputs_by_sequence_group = outputs

    # Determine the requests we need to operate on
    if request_id:
        indices = []
        for i, seq_group_meta in enumerate(seq_group_metadata_list):
            if seq_group_meta.request_id == request_id:
                assert i not in skip  # Cannot be called twice
                indices.append(i)
                break

        # If the request_id was not found, then it means that
        # this is a new request that has no pending async
        # postprocessor
        if not indices:
            return
    else:
        indices = range(len(seq_group_metadata_list))  # type: ignore

    finished_before: List[int] = []
    finished_now: List[int] = []
    for i in indices:
        if i in skip:
            continue

        seq_group_meta = seq_group_metadata_list[i]
        scheduled_seq_group = scheduler_outputs.scheduled_seq_groups[i]

        seq_group: SequenceGroup = scheduled_seq_group.seq_group

        if seq_group.is_finished():
            finished_before.append(i)
            continue

        output: List[SequenceGroupOutput]
        if has_multiple_outputs:
            output = outputs_by_sequence_group[i]
        else:
            output = [outputs_by_sequence_group[0][i]]

        if not is_async:
            if self.scheduler_config.is_multi_step:
                # Updates happen only if the sequence is prefill
                self._update_num_computed_tokens_for_multi_step_prefill(
                    seq_group, seq_group_meta, is_first_step_output)
            else:
                seq_group.update_num_computed_tokens(
                    seq_group_meta.token_chunk_size or 0)

        if outputs:
            for o in outputs:
                if (isinstance(o, SamplerOutput)
                        and seq_group.metrics is not None):
                    if seq_group.metrics.model_forward_time is not None:
                        seq_group.metrics.model_forward_time += (
                            o.model_forward_time or 0)
                    else:
                        seq_group.metrics.model_forward_time = (
                            o.model_forward_time)
                    if seq_group.metrics.model_execute_time is not None:
                        seq_group.metrics.model_execute_time += (
                            o.model_execute_time or 0)
                    else:
                        seq_group.metrics.model_execute_time = (
                            o.model_execute_time)

        if self.model_config.runner_type == "pooling":
            self._process_sequence_group_outputs(seq_group, output)
        else:
            self.output_processor.process_prompt_logprob(seq_group, output)
            if seq_group_meta.do_sample:
                self.output_processor.process_outputs(
                    seq_group, output, is_async)

        if seq_group.is_finished():
            finished_now.append(i)

    # Generate outputs for the requests that finished this iteration
    for i in finished_now:
        scheduled_seq_group = scheduler_outputs.scheduled_seq_groups[i]

        seq_group = scheduled_seq_group.seq_group
        seq_group.maybe_set_first_token_time(now)
        if not seq_group.is_prefill():
            seq_group.set_last_token_time(now)
        request_output = RequestOutputFactory.create(
            seq_group,
            self.seq_id_to_seq_group,
            use_cache=self.use_cached_outputs)
        if request_output:
            ctx.request_outputs.append(request_output)

    # When we process a single request, we skip it for the next time,
    # and invoke the request output callback (if there was final output)
    if request_id:
        assert len(indices) == 1
        skip.append(indices[0])

        if (finished_now
                and self.process_request_outputs_callback is not None):
            self.process_request_outputs_callback(ctx.request_outputs)
            ctx.request_outputs.clear()
        return

    # Free currently finished requests
    if finished_now:
        for scheduler in self.scheduler:
            scheduler.free_finished_seq_groups()

    # For multi-step without streaming, don't create outputs each iteration
    if not is_last_step and not ctx.multi_step_stream_outputs:
        # Immediately process request outputs here (if callback is given)
        if (finished_now
                and self.process_request_outputs_callback is not None):
            self.process_request_outputs_callback(ctx.request_outputs)
            ctx.request_outputs.clear()
        return

    # Create the outputs
    for i in indices:
        if i in skip or i in finished_before or i in finished_now:
            continue  # Avoids double processing

        scheduled_seq_group = scheduler_outputs.scheduled_seq_groups[i]

        seq_group = scheduled_seq_group.seq_group
        seq_group.maybe_set_first_token_time(now)
        if not seq_group.is_prefill():
            seq_group.set_last_token_time(now)
        request_output = RequestOutputFactory.create(
            seq_group,
            self.seq_id_to_seq_group,
            use_cache=self.use_cached_outputs)
        if request_output:
            ctx.request_outputs.append(request_output)

    # For multi-step with streaming, create outputs each iteration
    if not is_last_step and ctx.multi_step_stream_outputs:
        # Immediately process request outputs here (if callback is given)
        if self.process_request_outputs_callback is not None:
            self.process_request_outputs_callback(ctx.request_outputs)
            ctx.request_outputs.clear()
        return

    for seq_group in scheduler_outputs.ignored_seq_groups:
        params = seq_group.sampling_params
        if params is not None and params.output_kind == (
                RequestOutputKind.DELTA) and not seq_group.is_finished():
            continue

        request_output = RequestOutputFactory.create(
            seq_group,
            self.seq_id_to_seq_group,
            use_cache=self.use_cached_outputs,
        )
        if request_output:
            ctx.request_outputs.append(request_output)

    # Immediately process request outputs here (if callback is given)
    if (ctx.request_outputs
            and self.process_request_outputs_callback is not None):
        self.process_request_outputs_callback(ctx.request_outputs)
        ctx.request_outputs.clear()

    # For async case, we need to record the stats here.
    # For non-async case, the stats are done in the
    # LLMEngine/AsyncLLMEngine directly
    if is_async:
        # Log stats.
        self.do_log_stats(scheduler_outputs, outputs, finished_before,
                          skip)

        # Tracing
        self.do_tracing(scheduler_outputs, finished_before)

    return None
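The multi-step reorganization above turns outputs indexed as [step][sequence group] into [sequence group][step]. A pure-Python sketch of that transposition with toy data (the real code operates on SamplerOutput objects via create_output_by_sequence_group):

# Toy data: 2 steps, 2 sequence groups.
outputs_by_step = [["g0_s0", "g1_s0"], ["g0_s1", "g1_s1"]]
outputs_by_group = [list(per_group) for per_group in zip(*outputs_by_step)]
assert outputs_by_group == [["g0_s0", "g0_s1"], ["g1_s0", "g1_s1"]]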

_process_sequence_group_outputs staticmethod

_process_sequence_group_outputs(
    seq_group: SequenceGroup,
    outputs: List[PoolingSequenceGroupOutput],
) -> None
Source code in vllm/engine/llm_engine.py
@staticmethod
def _process_sequence_group_outputs(
    seq_group: SequenceGroup,
    outputs: List[PoolingSequenceGroupOutput],
) -> None:
    seq_group.pooled_data = outputs[0].data

    for seq in seq_group.get_seqs():
        seq.status = SequenceStatus.FINISHED_STOPPED

    return

_update_cached_scheduler_output

_update_cached_scheduler_output(
    virtual_engine: int,
    output: List[Optional[SamplerOutput]],
) -> None
Source code in vllm/engine/llm_engine.py
def _update_cached_scheduler_output(
        self, virtual_engine: int,
        output: List[Optional[SamplerOutput]]) -> None:
    if (self.parallel_config.pipeline_parallel_size > 1 and len(output) > 0
            and output[0] is not None):
        last_output = output[-1]
        assert last_output is not None
        assert last_output.sampled_token_ids_cpu is not None
        assert last_output.sampled_token_ids is None
        assert last_output.sampled_token_probs is None
        self.cached_scheduler_outputs[
            virtual_engine].last_output = last_output

_update_num_computed_tokens_for_multi_step_prefill

_update_num_computed_tokens_for_multi_step_prefill(
    seq_group: SequenceGroup,
    seq_group_meta: SequenceGroupMetadata,
    is_first_step_output: Optional[bool],
)

This function updates num_computed_tokens for prompt sequences when Multi-Step is enabled.

seq_group: SequenceGroup to update the num_computed_tokens for.
seq_group_meta: Metadata of the given SequenceGroup.
is_first_step_output: Optional[bool] - When available, indicates whether the appended output token is the output of the first step in multi-step. A value of None indicates that outputs from all steps in multi-step are submitted in a single burst.

Source code in vllm/engine/llm_engine.py
def _update_num_computed_tokens_for_multi_step_prefill(
        self, seq_group: SequenceGroup,
        seq_group_meta: SequenceGroupMetadata,
        is_first_step_output: Optional[bool]):
    """
    This function updates num_computed_tokens for prompt sequences
    when Multi-Step is enabled.

    seq_group: SequenceGroup to update the num_computed_tokens for.
    seq_group_meta: Metadata of the given SequenceGroup.
    is_first_step_output: Optional[bool] -
        When available, is_first_step_output indicates if the appended
        output token is the output of the first-step in multi-step.
        A value of None indicates that outputs from all steps
        in multi-step are submitted in a single burst.
    """

    assert self.scheduler_config.is_multi_step

    if not seq_group_meta.is_prompt:
        # num_computed_token updates for multi-step decodes happen after
        # the tokens are appended to the sequence.
        return

    do_update: bool = False
    if self.scheduler_config.chunked_prefill_enabled:
        # In multi-step + chunked-prefill case, the prompt sequences
        # that are scheduled are fully processed in the first step.
        do_update = is_first_step_output is None or is_first_step_output
    else:
        # Normal multi-step decoding case. In this case prompt-sequences
        # are actually single-stepped. Always update in this case.
        assert seq_group.state.num_steps == 1
        do_update = True

    if do_update:
        seq_group.update_num_computed_tokens(
            seq_group_meta.token_chunk_size)

_validate_model_input

_validate_model_input(
    prompt_inputs: SingletonInputs,
    lora_request: Optional[LoRARequest],
    *,
    prompt_type: Literal["encoder", "decoder"],
)
Source code in vllm/engine/llm_engine.py
def _validate_model_input(
    self,
    prompt_inputs: SingletonInputs,
    lora_request: Optional[LoRARequest],
    *,
    prompt_type: Literal["encoder", "decoder"],
):
    model_config = self.model_config
    tokenizer = (None if self.tokenizer is None else
                 self.tokenizer.get_lora_tokenizer(lora_request))

    prompt_ids = prompt_inputs.get("prompt_token_ids", [])
    if not prompt_ids:
        if prompt_type == "encoder" and model_config.is_multimodal_model:
            pass  # Mllama may have empty encoder inputs for text-only data
        elif prompt_inputs["type"] == "embeds":
            pass
        else:
            raise ValueError(f"The {prompt_type} prompt cannot be empty")

    if tokenizer is not None:
        max_input_id = max(prompt_ids, default=0)
        if max_input_id > tokenizer.max_token_id:
            raise ValueError(
                f"Token id {max_input_id} is out of vocabulary")

    max_prompt_len = self.model_config.max_model_len
    if len(prompt_ids) > max_prompt_len:
        if prompt_type == "encoder" and model_config.is_multimodal_model:
            mm_registry = self.input_preprocessor.mm_registry
            mm_processor = mm_registry.create_processor(
                model_config,
                tokenizer=tokenizer or object(),  # Dummy if no tokenizer
            )
            assert isinstance(mm_processor, EncDecMultiModalProcessor)

            if mm_processor.pad_dummy_encoder_prompt:
                return  # Skip encoder length check for Whisper

        if model_config.is_multimodal_model:
            suggestion = (
                "Make sure that `max_model_len` is no smaller than the "
                "number of text tokens plus multimodal tokens. For image "
                "inputs, the number of image tokens depends on the number "
                "of images, and possibly their aspect ratios as well.")
        else:
            suggestion = (
                "Make sure that `max_model_len` is no smaller than the "
                "number of text tokens.")

        raise ValueError(
            f"The {prompt_type} prompt (length {len(prompt_ids)}) is "
            f"longer than the maximum model length of {max_prompt_len}. "
            f"{suggestion}")

_validate_model_inputs

_validate_model_inputs(
    inputs: ProcessorInputs,
    lora_request: Optional[LoRARequest],
)
Source code in vllm/engine/llm_engine.py
def _validate_model_inputs(self, inputs: ProcessorInputs,
                           lora_request: Optional[LoRARequest]):
    encoder_inputs, decoder_inputs = split_enc_dec_inputs(inputs)

    if encoder_inputs is not None:
        self._validate_model_input(encoder_inputs,
                                   lora_request,
                                   prompt_type="encoder")

    self._validate_model_input(decoder_inputs,
                               lora_request,
                               prompt_type="decoder")

_verify_args

_verify_args() -> None
Source code in vllm/engine/llm_engine.py
def _verify_args(self) -> None:
    self.model_config.verify_with_parallel_config(self.parallel_config)
    self.cache_config.verify_with_parallel_config(self.parallel_config)
    if self.lora_config:
        self.lora_config.verify_with_model_config(self.model_config)
        self.lora_config.verify_with_scheduler_config(
            self.scheduler_config)
    if self.prompt_adapter_config:
        self.prompt_adapter_config.verify_with_model_config(
            self.model_config)

abort_request

abort_request(
    request_id: Union[str, Iterable[str]],
) -> None

Aborts a request(s) with the given ID.

Parameters:

Name Type Description Default
request_id Union[str, Iterable[str]]

The ID(s) of the request to abort.

required
Details
  • Refer to vllm.core.scheduler.Scheduler.abort_seq_group.
Example
  # initialize engine and add a request with request_id
  request_id = str(0)
  # abort the request
  engine.abort_request(request_id)

Source code in vllm/engine/llm_engine.py
def abort_request(self, request_id: Union[str, Iterable[str]]) -> None:
    """Aborts a request(s) with the given ID.

    Args:
        request_id: The ID(s) of the request to abort.

    Details:
        - Refer to [vllm.core.scheduler.Scheduler.abort_seq_group][].

    Example:
        >>> # initialize engine and add a request with request_id
        >>> request_id = str(0)
        >>> # abort the request
        >>> engine.abort_request(request_id)
    """
    for scheduler in self.scheduler:
        scheduler.abort_seq_group(
            request_id, seq_id_to_seq_group=self.seq_id_to_seq_group)

add_logger

add_logger(
    logger_name: str, logger: StatLoggerBase
) -> None
Source code in vllm/engine/llm_engine.py
def add_logger(self, logger_name: str, logger: StatLoggerBase) -> None:
    if not self.log_stats:
        raise RuntimeError(
            "Stat logging is disabled. Set `disable_log_stats=False` "
            "argument to enable.")
    if logger_name in self.stat_loggers:
        raise KeyError(f"Logger with name {logger_name} already exists.")
    self.stat_loggers[logger_name] = logger
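A hedged sketch of registering a custom stat logger. The import path and the log/info hooks are assumed from the v0 metrics module and may differ between vLLM versions, and the constructor arguments shown are hypothetical:

from vllm.engine.metrics_types import StatLoggerBase, Stats

class PrintingStatLogger(StatLoggerBase):
    # Assumed interface: log() receives a Stats snapshot each iteration.
    def log(self, stats: Stats) -> None:
        print(f"running={stats.num_running_sys} waiting={stats.num_waiting_sys}")

    # Assumed hook for one-off metadata records; no-op here.
    def info(self, type: str, obj) -> None:
        pass

# Hypothetical registration (constructor arguments depend on the version):
# engine.add_logger("printer", PrintingStatLogger(local_interval=5))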

add_lora

add_lora(lora_request: LoRARequest) -> bool
Source code in vllm/engine/llm_engine.py
def add_lora(self, lora_request: LoRARequest) -> bool:
    return self.model_executor.add_lora(lora_request)
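A hedged usage sketch, assuming the engine was started with LoRA enabled and using an illustrative adapter name and path:

from vllm.lora.request import LoRARequest

lora = LoRARequest(lora_name="my-adapter", lora_int_id=1,
                   lora_path="/path/to/adapter")  # path is illustrative
# engine.add_lora(lora)  # expected to return True on success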

add_prompt_adapter

add_prompt_adapter(
    prompt_adapter_request: PromptAdapterRequest,
) -> bool
Source code in vllm/engine/llm_engine.py
def add_prompt_adapter(
        self, prompt_adapter_request: PromptAdapterRequest) -> bool:
    return self.model_executor.add_prompt_adapter(prompt_adapter_request)

add_request

add_request(
    request_id: str,
    prompt: PromptType,
    params: Union[SamplingParams, PoolingParams],
    arrival_time: Optional[float] = None,
    lora_request: Optional[LoRARequest] = None,
    tokenization_kwargs: Optional[dict[str, Any]] = None,
    trace_headers: Optional[Mapping[str, str]] = None,
    prompt_adapter_request: Optional[
        PromptAdapterRequest
    ] = None,
    priority: int = 0,
) -> None

Add a request to the engine's request pool.

The request is added to the request pool and will be processed by the scheduler as engine.step() is called. The exact scheduling policy is determined by the scheduler.

Parameters:

Name Type Description Default
request_id str

The unique ID of the request.

required
prompt PromptType

The prompt to the LLM. See PromptType for more details about the format of each input.

required
params Union[SamplingParams, PoolingParams]

Parameters for sampling or pooling. SamplingParams for text generation. PoolingParams for pooling.

required
arrival_time Optional[float]

The arrival time of the request. If None, we use the current monotonic time.

None
lora_request Optional[LoRARequest]

The LoRA request to add.

None
trace_headers Optional[Mapping[str, str]]

OpenTelemetry trace headers.

None
prompt_adapter_request Optional[PromptAdapterRequest]

The prompt adapter request to add.

None
priority int

The priority of the request. Only applicable with priority scheduling.

0
Details
  • Set arrival_time to the current time if it is None.
  • Set prompt_token_ids to the encoded prompt if it is None.
  • Create n number of [Sequence][vllm.Sequence] objects.
  • Create a [SequenceGroup][vllm.SequenceGroup] object from the list of [Sequence][vllm.Sequence].
  • Add the [SequenceGroup][vllm.SequenceGroup] object to the scheduler.
Example
  # initialize engine
  engine = LLMEngine.from_engine_args(engine_args)
  # set request arguments
  example_prompt = "Who is the president of the United States?"
  sampling_params = SamplingParams(temperature=0.0)
  request_id = 0
  # add the request to the engine
  engine.add_request(
      str(request_id),
      example_prompt,
      SamplingParams(temperature=0.0))
  # continue the request processing
  ...

Source code in vllm/engine/llm_engine.py
def add_request(
    self,
    request_id: str,
    prompt: PromptType,
    params: Union[SamplingParams, PoolingParams],
    arrival_time: Optional[float] = None,
    lora_request: Optional[LoRARequest] = None,
    tokenization_kwargs: Optional[dict[str, Any]] = None,
    trace_headers: Optional[Mapping[str, str]] = None,
    prompt_adapter_request: Optional[PromptAdapterRequest] = None,
    priority: int = 0,
) -> None:
    """Add a request to the engine's request pool.

    The request is added to the request pool and will be processed by the
    scheduler as `engine.step()` is called. The exact scheduling policy is
    determined by the scheduler.

    Args:
        request_id: The unique ID of the request.
        prompt: The prompt to the LLM. See
            [PromptType][vllm.inputs.PromptType]
            for more details about the format of each input.
        params: Parameters for sampling or pooling.
            [SamplingParams][vllm.SamplingParams] for text generation.
            [PoolingParams][vllm.PoolingParams] for pooling.
        arrival_time: The arrival time of the request. If None, we use
            the current monotonic time.
        lora_request: The LoRA request to add.
        trace_headers: OpenTelemetry trace headers.
        prompt_adapter_request: The prompt adapter request to add.
        priority: The priority of the request.
            Only applicable with priority scheduling.

    Details:
        - Set arrival_time to the current time if it is None.
        - Set prompt_token_ids to the encoded prompt if it is None.
        - Create `n` number of [Sequence][vllm.Sequence] objects.
        - Create a [SequenceGroup][vllm.SequenceGroup] object
          from the list of [Sequence][vllm.Sequence].
        - Add the [SequenceGroup][vllm.SequenceGroup] object to the
          scheduler.

    Example:
        >>> # initialize engine
        >>> engine = LLMEngine.from_engine_args(engine_args)
        >>> # set request arguments
        >>> example_prompt = "Who is the president of the United States?"
        >>> sampling_params = SamplingParams(temperature=0.0)
        >>> request_id = 0
        >>>
        >>> # add the request to the engine
        >>> engine.add_request(
        >>>    str(request_id),
        >>>    example_prompt,
        >>>    SamplingParams(temperature=0.0))
        >>> # continue the request processing
        >>> ...
    """
    if not isinstance(request_id, str):
        raise TypeError(
            f"request_id must be a string, got {type(request_id)}")

    if lora_request is not None and not self.lora_config:
        raise ValueError(f"Got lora_request {lora_request} but LoRA is "
                         "not enabled!")

    if priority != 0 and not self.scheduler_config.policy == "priority":
        raise ValueError(f"Got priority {priority} but "
                         "Priority scheduling is not enabled.")

    if isinstance(params, SamplingParams) \
        and (params.guided_decoding or params.logits_processors) \
        and self.scheduler_config.num_scheduler_steps > 1:
        raise ValueError(
            "Guided decoding and logits processors are not supported "
            "in multi-step decoding")

    if arrival_time is None:
        arrival_time = time.time()

    if (isinstance(prompt, dict)
            and prompt.get("prompt_embeds", None) is not None
            and not prompt.get("prompt_token_ids", None)):
        seq_len = prompt["prompt_embeds"].shape[0]
        prompt["prompt_token_ids"] = [0] * seq_len

    processed_inputs = self.input_preprocessor.preprocess(
        prompt,
        tokenization_kwargs=tokenization_kwargs,
        lora_request=lora_request,
        prompt_adapter_request=prompt_adapter_request,
    )

    self._add_processed_request(
        request_id=request_id,
        processed_inputs=processed_inputs,
        params=params,
        arrival_time=arrival_time,
        lora_request=lora_request,
        prompt_adapter_request=prompt_adapter_request,
        trace_headers=trace_headers,
        priority=priority,
    )
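A hedged end-to-end sketch of adding a request and then driving the engine with step() until no unfinished requests remain. The model name is illustrative:

from vllm import EngineArgs, LLMEngine, SamplingParams

engine = LLMEngine.from_engine_args(EngineArgs(model="facebook/opt-125m"))
engine.add_request("req-0",
                   "Who is the president of the United States?",
                   SamplingParams(temperature=0.0, max_tokens=32))

# Each step() call schedules one iteration and returns updated request outputs.
while engine.has_unfinished_requests():
    for request_output in engine.step():
        if request_output.finished:
            print(request_output.outputs[0].text)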

check_health

check_health() -> None
Source code in vllm/engine/llm_engine.py
def check_health(self) -> None:
    self.model_executor.check_health()

collective_rpc

collective_rpc(
    method: Union[str, Callable[..., _R]],
    timeout: Optional[float] = None,
    args: tuple = (),
    kwargs: Optional[dict[str, Any]] = None,
) -> list[_R]
Source code in vllm/engine/llm_engine.py
def collective_rpc(self,
                   method: Union[str, Callable[..., _R]],
                   timeout: Optional[float] = None,
                   args: tuple = (),
                   kwargs: Optional[dict[str, Any]] = None) -> list[_R]:
    return self.model_executor.collective_rpc(method, timeout, args,
                                              kwargs)
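A hedged sketch of collective_rpc with a callable. In recent vLLM versions the callable is assumed to be shipped to every worker and invoked with the worker object as its first argument; treat both that convention and the attribute accessed below as assumptions for your version:

# Hypothetical per-worker function; `worker` is whatever object the executor
# passes in, and "device" may or may not exist on it.
def report_device(worker):
    return str(getattr(worker, "device", "unknown"))

# per_worker_devices = engine.collective_rpc(report_device)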

create_trace_span

create_trace_span(seq_group: SequenceGroup) -> None
Source code in vllm/engine/llm_engine.py
def create_trace_span(self, seq_group: SequenceGroup) -> None:
    if self.tracer is None or seq_group.sampling_params is None:
        return
    arrival_time_nano_seconds = int(seq_group.metrics.arrival_time * 1e9)

    trace_context = extract_trace_context(seq_group.trace_headers)

    with self.tracer.start_as_current_span(
            "llm_request",
            kind=SpanKind.SERVER,
            context=trace_context,
            start_time=arrival_time_nano_seconds) as seq_span:
        metrics = seq_group.metrics
        ttft = metrics.first_token_time - metrics.arrival_time
        e2e_time = metrics.finished_time - metrics.arrival_time
        seq_span.set_attribute(SpanAttributes.GEN_AI_RESPONSE_MODEL,
                               self.model_config.model)
        seq_span.set_attribute(SpanAttributes.GEN_AI_REQUEST_ID,
                               seq_group.request_id)
        seq_span.set_attribute(SpanAttributes.GEN_AI_REQUEST_TEMPERATURE,
                               seq_group.sampling_params.temperature)
        seq_span.set_attribute(SpanAttributes.GEN_AI_REQUEST_TOP_P,
                               seq_group.sampling_params.top_p)
        seq_span.set_attribute(SpanAttributes.GEN_AI_REQUEST_MAX_TOKENS,
                               seq_group.sampling_params.max_tokens)
        seq_span.set_attribute(SpanAttributes.GEN_AI_REQUEST_N,
                               seq_group.sampling_params.n)
        seq_span.set_attribute(SpanAttributes.GEN_AI_USAGE_NUM_SEQUENCES,
                               seq_group.num_seqs())
        seq_span.set_attribute(SpanAttributes.GEN_AI_USAGE_PROMPT_TOKENS,
                               len(seq_group.prompt_token_ids))
        seq_span.set_attribute(
            SpanAttributes.GEN_AI_USAGE_COMPLETION_TOKENS,
            sum([
                seq.get_output_len()
                for seq in seq_group.get_finished_seqs()
            ]))
        seq_span.set_attribute(SpanAttributes.GEN_AI_LATENCY_TIME_IN_QUEUE,
                               metrics.time_in_queue)
        seq_span.set_attribute(
            SpanAttributes.GEN_AI_LATENCY_TIME_TO_FIRST_TOKEN, ttft)
        seq_span.set_attribute(SpanAttributes.GEN_AI_LATENCY_E2E, e2e_time)
        if metrics.scheduler_time is not None:
            seq_span.set_attribute(
                SpanAttributes.GEN_AI_LATENCY_TIME_IN_SCHEDULER,
                metrics.scheduler_time)
        if metrics.model_forward_time is not None:
            seq_span.set_attribute(
                SpanAttributes.GEN_AI_LATENCY_TIME_IN_MODEL_FORWARD,
                metrics.model_forward_time / 1000.0)
        if metrics.model_execute_time is not None:
            seq_span.set_attribute(
                SpanAttributes.GEN_AI_LATENCY_TIME_IN_MODEL_EXECUTE,
                metrics.model_execute_time)

do_log_stats

do_log_stats(
    scheduler_outputs: Optional[SchedulerOutputs] = None,
    model_output: Optional[List[SamplerOutput]] = None,
    finished_before: Optional[List[int]] = None,
    skip: Optional[List[int]] = None,
) -> None

Forces a stats log even when no requests are active.

Source code in vllm/engine/llm_engine.py
def do_log_stats(self,
                 scheduler_outputs: Optional[SchedulerOutputs] = None,
                 model_output: Optional[List[SamplerOutput]] = None,
                 finished_before: Optional[List[int]] = None,
                 skip: Optional[List[int]] = None) -> None:
    """Forced log when no requests active."""
    if self.log_stats:
        stats = self._get_stats(scheduler_outputs, model_output,
                                finished_before, skip)
        for logger in self.stat_loggers.values():
            logger.log(stats)
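
A minimal sketch of forcing a final stats flush after all requests finish, assuming the engine was created with disable_log_stats=False so that stat loggers are attached.

while engine.has_unfinished_requests():
    engine.step()

engine.do_log_stats()  # forced log; no scheduler outputs are required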

do_tracing

do_tracing(
    scheduler_outputs: SchedulerOutputs,
    finished_before: Optional[List[int]] = None,
) -> None
Source code in vllm/engine/llm_engine.py
def do_tracing(self,
               scheduler_outputs: SchedulerOutputs,
               finished_before: Optional[List[int]] = None) -> None:
    if self.tracer is None:
        return

    for idx, scheduled_seq_group in enumerate(
            scheduler_outputs.scheduled_seq_groups):
        # Skip double tracing when using async output proc
        if finished_before and idx in finished_before:
            continue

        seq_group = scheduled_seq_group.seq_group
        if seq_group.is_finished():
            self.create_trace_span(seq_group)
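
Spans are only emitted when a tracer is configured. The sketch below assumes your vLLM version exposes the otlp_traces_endpoint engine argument for building the OpenTelemetry tracer; the model name and endpoint are examples only.

from vllm import EngineArgs, LLMEngine

engine_args = EngineArgs(
    model="facebook/opt-125m",
    otlp_traces_endpoint="http://localhost:4317",  # assumed argument name
)
engine = LLMEngine.from_engine_args(engine_args)
print(engine.is_tracing_enabled())  # True once the tracer is created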

enable_output_validation classmethod

enable_output_validation()
Source code in vllm/engine/llm_engine.py
@classmethod
@contextmanager
def enable_output_validation(cls):
    cls.DO_VALIDATE_OUTPUT = True

    yield

    cls.DO_VALIDATE_OUTPUT = False

from_engine_args classmethod

from_engine_args(
    engine_args: EngineArgs,
    usage_context: UsageContext = ENGINE_CONTEXT,
    stat_loggers: Optional[
        Dict[str, StatLoggerBase]
    ] = None,
) -> LLMEngine

Creates an LLM engine from the engine arguments.

Source code in vllm/engine/llm_engine.py
@classmethod
def from_engine_args(
    cls,
    engine_args: EngineArgs,
    usage_context: UsageContext = UsageContext.ENGINE_CONTEXT,
    stat_loggers: Optional[Dict[str, StatLoggerBase]] = None,
) -> "LLMEngine":
    """Creates an LLM engine from the engine arguments."""
    # Create the engine configs.
    vllm_config = engine_args.create_engine_config(usage_context)

    engine_cls = cls
    if envs.VLLM_USE_V1:
        from vllm.v1.engine.llm_engine import LLMEngine as V1LLMEngine
        engine_cls = V1LLMEngine

    return engine_cls.from_vllm_config(
        vllm_config=vllm_config,
        usage_context=usage_context,
        stat_loggers=stat_loggers,
        disable_log_stats=engine_args.disable_log_stats,
    )
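
A typical construction path, as a sketch; the model name is only an example.

from vllm import EngineArgs, LLMEngine, SamplingParams

engine_args = EngineArgs(model="facebook/opt-125m")
engine = LLMEngine.from_engine_args(engine_args)

engine.add_request("0", "Hello, my name is", SamplingParams(max_tokens=16))
while engine.has_unfinished_requests():
    for output in engine.step():
        if output.finished:
            print(output.outputs[0].text)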

from_vllm_config classmethod

from_vllm_config(
    vllm_config: VllmConfig,
    usage_context: UsageContext = ENGINE_CONTEXT,
    stat_loggers: Optional[
        Dict[str, StatLoggerBase]
    ] = None,
    disable_log_stats: bool = False,
) -> LLMEngine
Source code in vllm/engine/llm_engine.py
@classmethod
def from_vllm_config(
    cls,
    vllm_config: VllmConfig,
    usage_context: UsageContext = UsageContext.ENGINE_CONTEXT,
    stat_loggers: Optional[Dict[str, StatLoggerBase]] = None,
    disable_log_stats: bool = False,
) -> "LLMEngine":
    return cls(
        vllm_config=vllm_config,
        executor_class=cls._get_executor_cls(vllm_config),
        log_stats=(not disable_log_stats),
        usage_context=usage_context,
        stat_loggers=stat_loggers,
    )

get_decoding_config

get_decoding_config() -> DecodingConfig

Gets the decoding configuration.

Source code in vllm/engine/llm_engine.py
def get_decoding_config(self) -> DecodingConfig:
    """Gets the decoding configuration."""
    return self.decoding_config

get_lora_config

get_lora_config() -> LoRAConfig

Gets the LoRA configuration.

Source code in vllm/engine/llm_engine.py
def get_lora_config(self) -> LoRAConfig:
    """Gets the LoRA configuration."""
    return self.lora_config

get_model_config

get_model_config() -> ModelConfig

Gets the model configuration.

Source code in vllm/engine/llm_engine.py
def get_model_config(self) -> ModelConfig:
    """Gets the model configuration."""
    return self.model_config

get_num_unfinished_requests

get_num_unfinished_requests() -> int

Gets the number of unfinished requests.

Source code in vllm/engine/llm_engine.py
def get_num_unfinished_requests(self) -> int:
    """Gets the number of unfinished requests."""
    return sum(scheduler.get_num_unfinished_seq_groups()
               for scheduler in self.scheduler)

get_parallel_config

get_parallel_config() -> ParallelConfig

Gets the parallel configuration.

Source code in vllm/engine/llm_engine.py
def get_parallel_config(self) -> ParallelConfig:
    """Gets the parallel configuration."""
    return self.parallel_config

get_scheduler_config

get_scheduler_config() -> SchedulerConfig

Gets the scheduler configuration.

Source code in vllm/engine/llm_engine.py
def get_scheduler_config(self) -> SchedulerConfig:
    """Gets the scheduler configuration."""
    return self.scheduler_config

get_tokenizer

get_tokenizer(
    lora_request: Optional[LoRARequest] = None,
) -> AnyTokenizer
Source code in vllm/engine/llm_engine.py
def get_tokenizer(
    self,
    lora_request: Optional[LoRARequest] = None,
) -> AnyTokenizer:
    return self.get_tokenizer_group().get_lora_tokenizer(lora_request)

get_tokenizer_group

get_tokenizer_group() -> TokenizerGroup
Source code in vllm/engine/llm_engine.py
def get_tokenizer_group(self) -> TokenizerGroup:
    if self.tokenizer is None:
        raise ValueError("Unable to get tokenizer because "
                         "skip_tokenizer_init is True")

    return self.tokenizer
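
A short sketch of fetching the base tokenizer (no LoRA) and counting prompt tokens, assuming the engine was not started with skip_tokenizer_init=True.

tokenizer = engine.get_tokenizer()
token_ids = tokenizer.encode("What is LLM?")
print(len(token_ids), "prompt tokens")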

get_vllm_config

get_vllm_config() -> VllmConfig

Gets the vllm configuration.

Source code in vllm/engine/llm_engine.py
def get_vllm_config(self) -> VllmConfig:
    """Gets the vllm configuration."""
    return self.vllm_config

has_unfinished_requests

has_unfinished_requests() -> bool

Returns True if there are unfinished requests.

Source code in vllm/engine/llm_engine.py
def has_unfinished_requests(self) -> bool:
    """Returns True if there are unfinished requests."""
    return any(scheduler.has_unfinished_seqs()
               for scheduler in self.scheduler)

has_unfinished_requests_for_virtual_engine

has_unfinished_requests_for_virtual_engine(
    virtual_engine: int,
) -> bool

Returns True if there are unfinished requests for the virtual engine.

Source code in vllm/engine/llm_engine.py
def has_unfinished_requests_for_virtual_engine(
        self, virtual_engine: int) -> bool:
    """
    Returns True if there are unfinished requests for the virtual engine.
    """
    return self.scheduler[virtual_engine].has_unfinished_seqs()

is_sleeping

is_sleeping() -> bool
Source code in vllm/engine/llm_engine.py
def is_sleeping(self) -> bool:
    return self.model_executor.is_sleeping

is_tracing_enabled

is_tracing_enabled() -> bool
Source code in vllm/engine/llm_engine.py
def is_tracing_enabled(self) -> bool:
    return self.tracer is not None

list_loras

list_loras() -> Set[int]
Source code in vllm/engine/llm_engine.py
def list_loras(self) -> Set[int]:
    return self.model_executor.list_loras()

list_prompt_adapters

list_prompt_adapters() -> List[int]
Source code in vllm/engine/llm_engine.py
def list_prompt_adapters(self) -> List[int]:
    return self.model_executor.list_prompt_adapters()

pin_lora

pin_lora(lora_id: int) -> bool
Source code in vllm/engine/llm_engine.py
def pin_lora(self, lora_id: int) -> bool:
    return self.model_executor.pin_lora(lora_id)

remove_logger

remove_logger(logger_name: str) -> None
Source code in vllm/engine/llm_engine.py
def remove_logger(self, logger_name: str) -> None:
    if not self.log_stats:
        raise RuntimeError(
            "Stat logging is disabled. Set `disable_log_stats=False` "
            "argument to enable.")
    if logger_name not in self.stat_loggers:
        raise KeyError(f"Logger with name {logger_name} does not exist.")
    del self.stat_loggers[logger_name]

remove_lora

remove_lora(lora_id: int) -> bool
Source code in vllm/engine/llm_engine.py
def remove_lora(self, lora_id: int) -> bool:
    return self.model_executor.remove_lora(lora_id)

remove_prompt_adapter

remove_prompt_adapter(prompt_adapter_id: int) -> bool
Source code in vllm/engine/llm_engine.py
def remove_prompt_adapter(self, prompt_adapter_id: int) -> bool:
    return self.model_executor.remove_prompt_adapter(prompt_adapter_id)

reset_mm_cache

reset_mm_cache() -> bool

Reset the multi-modal cache.

Source code in vllm/engine/llm_engine.py
def reset_mm_cache(self) -> bool:
    """Reset the multi-modal cache."""
    return self.input_preprocessor.mm_registry.reset_processor_cache()

reset_prefix_cache

reset_prefix_cache(device: Optional[Device] = None) -> bool

Reset prefix cache for all devices.

Source code in vllm/engine/llm_engine.py
def reset_prefix_cache(self, device: Optional[Device] = None) -> bool:
    """Reset prefix cache for all devices."""

    success = True
    for scheduler in self.scheduler:
        success = success and scheduler.reset_prefix_cache(device)
    return success

sleep

sleep(level: int = 1) -> None
Source code in vllm/engine/llm_engine.py
def sleep(self, level: int = 1) -> None:
    assert self.vllm_config.model_config.enable_sleep_mode, (
        "Sleep mode is not enabled in the model config")
    self.model_executor.sleep(level=level)

start_profile

start_profile() -> None
Source code in vllm/engine/llm_engine.py
def start_profile(self) -> None:
    self.model_executor.start_profile()

step

step() -> List[Union[RequestOutput, PoolingRequestOutput]]

Performs one decoding iteration and returns newly generated results.

Figure: Overview of the step function

Details:

- Step 1: Schedules the sequences to be executed in the next iteration
  and the token blocks to be swapped in/out/copied.
    - Depending on the scheduling policy, sequences may be
      preempted/reordered.
    - A Sequence Group (SG) refers to a group of sequences that are
      generated from the same prompt.
- Step 2: Calls the distributed executor to execute the model.
- Step 3: Processes the model output. This mainly includes:
    - Decoding the relevant outputs.
    - Updating the scheduled sequence groups with model outputs based on
      their sampling parameters (use_beam_search or not).
    - Freeing the finished sequence groups.
- Finally, it creates and returns the newly generated results.

Example:

# Please see the examples/ folder for more detailed examples.

# Initialize the engine and the request arguments.
engine = LLMEngine.from_engine_args(engine_args)
example_inputs = [(0, "What is LLM?",
                   SamplingParams(temperature=0.0))]

# Run the engine with an event loop.
while True:
    if example_inputs:
        req_id, prompt, sampling_params = example_inputs.pop(0)
        engine.add_request(str(req_id), prompt, sampling_params)

    # Continue the request processing.
    request_outputs = engine.step()
    for request_output in request_outputs:
        if request_output.finished:
            # Return or show the finished request output.
            print(request_output)

    if not (engine.has_unfinished_requests() or example_inputs):
        break

Source code in vllm/engine/llm_engine.py
def step(self) -> List[Union[RequestOutput, PoolingRequestOutput]]:
    """Performs one decoding iteration and returns newly generated results.

    <figure markdown="span">
    ![Overview of the step function](https://i.imgur.com/sv2HssD.png)
    <figcaption>Overview of the step function</figcaption>
    </figure>

    Details:
    - Step 1: Schedules the sequences to be executed in the next
        iteration and the token blocks to be swapped in/out/copy.

        - Depending on the scheduling policy,
            sequences may be `preempted/reordered`.
        - A Sequence Group (SG) refer to a group of sequences
            that are generated from the same prompt.

    - Step 2: Calls the distributed executor to execute the model.
    - Step 3: Processes the model output. This mainly includes:

        - Decodes the relevant outputs.
        - Updates the scheduled sequence groups with model outputs
            based on its `sampling parameters` (`use_beam_search` or not).
        - Frees the finished sequence groups.

    - Finally, it creates and returns the newly generated results.

    Example:
    ```
    # Please see the example/ folder for more detailed examples.

    # initialize engine and request arguments
    engine = LLMEngine.from_engine_args(engine_args)
    example_inputs = [(0, "What is LLM?",
    SamplingParams(temperature=0.0))]

    # Start the engine with an event loop
    while True:
        if example_inputs:
            req_id, prompt, sampling_params = example_inputs.pop(0)
            engine.add_request(str(req_id),prompt,sampling_params)

        # continue the request processing
        request_outputs = engine.step()
        for request_output in request_outputs:
            if request_output.finished:
                # return or show the request output

        if not (engine.has_unfinished_requests() or example_inputs):
            break
    ```
    """
    if self.parallel_config.pipeline_parallel_size > 1:
        raise NotImplementedError(
            "Pipeline parallelism is only supported through AsyncLLMEngine "
            "as performance will be severely degraded otherwise.")

    # For llm_engine, there is no pipeline parallel support, so the engine
    # used is always 0.
    virtual_engine = 0

    # These are cached outputs from previous iterations. None if on first
    # iteration
    cached_outputs = self.cached_scheduler_outputs[virtual_engine]
    seq_group_metadata_list = cached_outputs.seq_group_metadata_list
    scheduler_outputs = cached_outputs.scheduler_outputs
    allow_async_output_proc = cached_outputs.allow_async_output_proc

    ctx = self.scheduler_contexts[virtual_engine]

    # Clear outputs for each new scheduler iteration
    ctx.request_outputs.clear()

    # Skip the scheduler if there are any remaining steps in the seq groups.
    # This ensures that the scheduler is only called again when the current
    # batch has completed.
    # The scheduler is also skipped if a single request caused the last
    # engine step to fail, and the previous schedule needs to be rerun.
    if not self._has_remaining_steps(
            seq_group_metadata_list
    ) and not self._skip_scheduling_next_step:
        # Schedule iteration
        (seq_group_metadata_list, scheduler_outputs,
         allow_async_output_proc
         ) = self.scheduler[virtual_engine].schedule()

        ctx.seq_group_metadata_list = seq_group_metadata_list
        ctx.scheduler_outputs = scheduler_outputs

        finished_requests_ids = self.scheduler[
            virtual_engine].get_and_reset_finished_requests_ids()
        # When n>1, elements in self.seq_id_to_seq_group should be deleted
        # here, otherwise memory leaks.
        for finished_request_id in finished_requests_ids:
            if finished_request_id in self.seq_id_to_seq_group:
                del self.seq_id_to_seq_group[finished_request_id]

        # Maybe switch from async mode to sync mode
        if not allow_async_output_proc and len(ctx.output_queue) > 0:
            self._process_model_outputs(ctx=ctx)

        if (self.scheduler_config.is_multi_step
                and scheduler_outputs.num_lookahead_slots > 0):
            # cache the scheduler outputs for the next iteration if we have
            # lookahead slots
            self._cache_scheduler_outputs_for_multi_step(
                virtual_engine, seq_group_metadata_list, scheduler_outputs,
                allow_async_output_proc)
    else:
        finished_requests_ids = list()

    assert seq_group_metadata_list is not None
    assert scheduler_outputs is not None

    if not scheduler_outputs.is_empty():

        # Check if we have a cached last_output from the previous iteration.
        # For supporting PP this is probably the best way to pass the
        # sampled_token_ids, as a separate broadcast over all the PP stages
        # will cause one virtual engine's microbatch to block the pipeline.
        last_sampled_token_ids = \
            self._get_last_sampled_token_ids(virtual_engine)

        execute_model_req = ExecuteModelRequest(
            seq_group_metadata_list=seq_group_metadata_list,
            blocks_to_swap_in=scheduler_outputs.blocks_to_swap_in,
            blocks_to_swap_out=scheduler_outputs.blocks_to_swap_out,
            blocks_to_copy=scheduler_outputs.blocks_to_copy,
            num_lookahead_slots=scheduler_outputs.num_lookahead_slots,
            running_queue_size=scheduler_outputs.running_queue_size,
            finished_requests_ids=finished_requests_ids,
            # We use ExecuteModelRequest to pass the last sampled_token_ids
            # to each of the non-last PP stages for in-place prepare_input.
            last_sampled_token_ids=last_sampled_token_ids)

        if allow_async_output_proc:
            execute_model_req.async_callback = self.async_callbacks[
                virtual_engine]

        try:
            outputs = self.model_executor.execute_model(
                execute_model_req=execute_model_req)
            self._skip_scheduling_next_step = False
        except InputProcessingError as e:
            # The input for this request cannot be processed, so we must
            # abort it. If there are remaining requests in the batch that
            # have been scheduled, they will be retried on the next step.
            invalid_request_id = e.request_id
            self._abort_and_cache_schedule(
                request_id=invalid_request_id,
                virtual_engine=virtual_engine,
                seq_group_metadata_list=seq_group_metadata_list,
                scheduler_outputs=scheduler_outputs,
                allow_async_output_proc=allow_async_output_proc)
            # Raise so the caller is notified that this request failed
            raise

        # We need to do this here so that last step's sampled_token_ids can
        # be passed to the next iteration for PP.
        if self.scheduler_config.is_multi_step:
            self._update_cached_scheduler_output(virtual_engine, outputs)
    else:
        # Nothing scheduled => If there is pending async postprocessor,
        # then finish it here.
        if len(ctx.output_queue) > 0:
            self._process_model_outputs(ctx=ctx)
        # No outputs in this case
        outputs = []

    # Finish the current step for all the sequence groups.
    if self.scheduler_config.is_multi_step:
        for seq_group in seq_group_metadata_list:
            seq_group.finish_step()

    if not self._has_remaining_steps(seq_group_metadata_list):
        # clear the cache if we have finished all the steps.
        if self.scheduler_config.is_multi_step:
            self.cached_scheduler_outputs[0] = SchedulerOutputState()

        # is_first_step_output is True only when the num_steps of all
        # the sequences are 1. When the num_steps > 1,
        # multi_step_model_runner does the first-step output append.
        is_first_step_output: bool = False if not seq_group_metadata_list \
            else seq_group_metadata_list[0].state.num_steps == 1

        # Add results to the output_queue
        ctx.append_output(outputs=outputs,
                          seq_group_metadata_list=seq_group_metadata_list,
                          scheduler_outputs=scheduler_outputs,
                          is_async=allow_async_output_proc,
                          is_last_step=True,
                          is_first_step_output=is_first_step_output)

        if outputs and allow_async_output_proc:
            assert len(outputs) == 1, (
                "Async postprocessor expects only a single output set")

            self._advance_to_next_step(
                outputs[0], seq_group_metadata_list,
                scheduler_outputs.scheduled_seq_groups)

        # Check if need to run the usual non-async path
        if not allow_async_output_proc:
            self._process_model_outputs(ctx=ctx)

            # Log stats.
            self.do_log_stats(scheduler_outputs, outputs)

            # Tracing
            self.do_tracing(scheduler_outputs)
    else:
        # Multi-step case
        return ctx.request_outputs

    if not self.has_unfinished_requests():
        # Drain async postprocessor (if exists)
        if len(ctx.output_queue) > 0:
            self._process_model_outputs(ctx=ctx)
        assert len(ctx.output_queue) == 0

        # Stop the execute model loop in parallel workers until there are
        # more requests to process. This avoids waiting indefinitely in
        # torch.distributed ops which may otherwise timeout, and unblocks
        # the RPC thread in the workers so that they can process any other
        # queued control plane messages, such as add/remove lora adapters.
        logger.debug("Stopping remote worker execution loop.")
        self.model_executor.stop_remote_worker_execution_loop()

    return ctx.request_outputs

stop_profile

stop_profile() -> None
Source code in vllm/engine/llm_engine.py
def stop_profile(self) -> None:
    self.model_executor.stop_profile()

stop_remote_worker_execution_loop

stop_remote_worker_execution_loop() -> None
Source code in vllm/engine/llm_engine.py
def stop_remote_worker_execution_loop(self) -> None:
    self.model_executor.stop_remote_worker_execution_loop()

validate_output classmethod

validate_output(
    output: object, output_type: Type[_O]
) -> _O
Source code in vllm/engine/llm_engine.py
@classmethod
def validate_output(
    cls,
    output: object,
    output_type: Type[_O],
) -> _O:
    do_validate = cls.DO_VALIDATE_OUTPUT

    if ((TYPE_CHECKING or do_validate)
            and not isinstance(output, output_type)):
        raise TypeError(f"Expected output of type {output_type}, "
                        f"but found type {type(output)}")

    return cast(_O, output)

validate_outputs classmethod

validate_outputs(
    outputs: Sequence[object], output_type: Type[_O]
) -> List[_O]
Source code in vllm/engine/llm_engine.py
@classmethod
def validate_outputs(
    cls,
    outputs: GenericSequence[object],
    output_type: Type[_O],
) -> List[_O]:
    do_validate = cls.DO_VALIDATE_OUTPUT

    outputs_: List[_O]
    if TYPE_CHECKING or do_validate:
        outputs_ = []
        for output in outputs:
            if not isinstance(output, output_type):
                raise TypeError(f"Expected output of type {output_type}, "
                                f"but found type {type(output)}")

            outputs_.append(output)
    else:
        outputs_ = outputs

    return outputs_
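
A sketch of turning the runtime type checks on around a call: inside enable_output_validation() the isinstance checks run, outside they are skipped (except under TYPE_CHECKING). It assumes `engine` is an existing LLMEngine handling text generation requests.

from vllm.outputs import RequestOutput

with LLMEngine.enable_output_validation():
    outputs = LLMEngine.validate_outputs(engine.step(), RequestOutput)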

wake_up

wake_up(tags: Optional[list[str]] = None) -> None
Source code in vllm/engine/llm_engine.py
def wake_up(self, tags: Optional[list[str]] = None) -> None:
    assert self.vllm_config.model_config.enable_sleep_mode, (
        "Sleep mode is not enabled in the model config")
    self.model_executor.wake_up(tags)
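
A sketch of parking the engine between traffic bursts. It assumes sleep mode was enabled when the engine was built (e.g. enable_sleep_mode=True in the engine arguments, if your version exposes it).

engine.sleep(level=1)   # release GPU memory until the next burst
assert engine.is_sleeping()
engine.wake_up()        # restore all sleeping components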

OutputData

Bases: NamedTuple

Source code in vllm/engine/llm_engine.py
class OutputData(NamedTuple):
    outputs: List[SamplerOutput]
    seq_group_metadata_list: List[SequenceGroupMetadata]
    scheduler_outputs: SchedulerOutputs
    is_async: bool
    is_last_step: bool
    # Indicates if this output is from the first step of the
    # multi-step. When multi-step is disabled, this is always
    # set to True.
    # is_first_step_output is invalid when `outputs` has
    # outputs from multiple steps.
    is_first_step_output: Optional[bool]
    skip: List[int]

is_async instance-attribute

is_async: bool

is_first_step_output instance-attribute

is_first_step_output: Optional[bool]

is_last_step instance-attribute

is_last_step: bool

outputs instance-attribute

outputs: List[SamplerOutput]

scheduler_outputs instance-attribute

scheduler_outputs: SchedulerOutputs

seq_group_metadata_list instance-attribute

seq_group_metadata_list: List[SequenceGroupMetadata]

skip instance-attribute

skip: List[int]

SchedulerContext

Source code in vllm/engine/llm_engine.py
class SchedulerContext:

    def __init__(self, multi_step_stream_outputs: bool = False):
        self.output_queue: Deque[OutputData] = deque()
        self.request_outputs: List[Union[RequestOutput,
                                         PoolingRequestOutput]] = []
        self.seq_group_metadata_list: Optional[
            List[SequenceGroupMetadata]] = None
        self.scheduler_outputs: Optional[SchedulerOutputs] = None

        self.multi_step_stream_outputs: bool = multi_step_stream_outputs

    def append_output(self, outputs: List[SamplerOutput],
                      seq_group_metadata_list: List[SequenceGroupMetadata],
                      scheduler_outputs: SchedulerOutputs, is_async: bool,
                      is_last_step: bool,
                      is_first_step_output: Optional[bool]):
        self.output_queue.append(
            OutputData(outputs=outputs,
                       seq_group_metadata_list=seq_group_metadata_list,
                       scheduler_outputs=scheduler_outputs,
                       is_async=is_async,
                       is_last_step=is_last_step,
                       is_first_step_output=is_first_step_output,
                       skip=[]))

multi_step_stream_outputs instance-attribute

multi_step_stream_outputs: bool = multi_step_stream_outputs

output_queue instance-attribute

output_queue: Deque[OutputData] = deque()

request_outputs instance-attribute

request_outputs: List[
    Union[RequestOutput, PoolingRequestOutput]
] = []

scheduler_outputs instance-attribute

scheduler_outputs: Optional[SchedulerOutputs] = None

seq_group_metadata_list instance-attribute

seq_group_metadata_list: Optional[
    List[SequenceGroupMetadata]
] = None

__init__

__init__(multi_step_stream_outputs: bool = False)
Source code in vllm/engine/llm_engine.py
def __init__(self, multi_step_stream_outputs: bool = False):
    self.output_queue: Deque[OutputData] = deque()
    self.request_outputs: List[Union[RequestOutput,
                                     PoolingRequestOutput]] = []
    self.seq_group_metadata_list: Optional[
        List[SequenceGroupMetadata]] = None
    self.scheduler_outputs: Optional[SchedulerOutputs] = None

    self.multi_step_stream_outputs: bool = multi_step_stream_outputs

append_output

append_output(
    outputs: List[SamplerOutput],
    seq_group_metadata_list: List[SequenceGroupMetadata],
    scheduler_outputs: SchedulerOutputs,
    is_async: bool,
    is_last_step: bool,
    is_first_step_output: Optional[bool],
)
Source code in vllm/engine/llm_engine.py
def append_output(self, outputs: List[SamplerOutput],
                  seq_group_metadata_list: List[SequenceGroupMetadata],
                  scheduler_outputs: SchedulerOutputs, is_async: bool,
                  is_last_step: bool,
                  is_first_step_output: Optional[bool]):
    self.output_queue.append(
        OutputData(outputs=outputs,
                   seq_group_metadata_list=seq_group_metadata_list,
                   scheduler_outputs=scheduler_outputs,
                   is_async=is_async,
                   is_last_step=is_last_step,
                   is_first_step_output=is_first_step_output,
                   skip=[]))
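
SchedulerContext is an internal buffer between scheduling and (possibly asynchronous) output processing. The sketch below only illustrates the queue mechanics; None and empty lists stand in for real SamplerOutput/SequenceGroupMetadata/SchedulerOutputs objects, since the type hints are not enforced at runtime.

ctx = SchedulerContext(multi_step_stream_outputs=False)
ctx.append_output(
    outputs=[],
    seq_group_metadata_list=[],
    scheduler_outputs=None,
    is_async=False,
    is_last_step=True,
    is_first_step_output=True,
)
data = ctx.output_queue.popleft()
print(data.is_last_step, data.skip)  # True []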

SchedulerOutputState dataclass

Caches the scheduler outputs for a virtual engine. Used for multi-step scheduling.

Source code in vllm/engine/llm_engine.py
@dataclass
class SchedulerOutputState:
    """Caches the scheduler outputs for a virtual engine. Used for Multi-Step"""
    seq_group_metadata_list: Optional[List[SequenceGroupMetadata]] = None
    scheduler_outputs: Optional[SchedulerOutputs] = None
    allow_async_output_proc: bool = False
    last_output: Optional[SamplerOutput] = None

allow_async_output_proc class-attribute instance-attribute

allow_async_output_proc: bool = False

last_output class-attribute instance-attribute

last_output: Optional[SamplerOutput] = None

scheduler_outputs class-attribute instance-attribute

scheduler_outputs: Optional[SchedulerOutputs] = None

seq_group_metadata_list class-attribute instance-attribute

seq_group_metadata_list: Optional[
    List[SequenceGroupMetadata]
] = None

__init__

__init__(
    seq_group_metadata_list: Optional[
        List[SequenceGroupMetadata]
    ] = None,
    scheduler_outputs: Optional[SchedulerOutputs] = None,
    allow_async_output_proc: bool = False,
    last_output: Optional[SamplerOutput] = None,
) -> None