CVE-ID

CVE-2025-46560

Description
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_|>, <|image_|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs. This issue has been patched in version 0.8.5.
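The quadratic behavior arises from rebuilding a Python list with the + operator once per token or placeholder, which copies everything accumulated so far at every step. The sketch below is illustrative only (hypothetical names and structure, not vLLM's actual source) and contrasts that pattern with the amortized-linear append/extend approach that a fix for this class of issue typically uses:

# Minimal sketch of the pattern described in this record; names are
# hypothetical and do not reflect vLLM's implementation.
from typing import List

PLACEHOLDER = -1  # stand-in id for a multimodal placeholder token

def expand_quadratic(tokens: List[int], repeat_len: int) -> List[int]:
    """Vulnerable pattern: each `+` allocates a brand-new list and copies
    the entire accumulated prefix, so an n-token output costs O(n^2)."""
    out: List[int] = []
    for tok in tokens:
        if tok == PLACEHOLDER:
            out = out + [0] * repeat_len   # full copy of `out` each time
        else:
            out = out + [tok]              # full copy of `out` each time
    return out

def expand_linear(tokens: List[int], repeat_len: int) -> List[int]:
    """Fixed pattern: in-place append/extend is amortized O(1) per
    element, so the whole expansion is O(n)."""
    out: List[int] = []
    for tok in tokens:
        if tok == PLACEHOLDER:
            out.extend([0] * repeat_len)
        else:
            out.append(tok)
    return out

With a crafted prompt containing many placeholders and large precomputed replacement lengths, the copied prefix grows at every step, so total work scales quadratically and a single request can consume disproportionate CPU time.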
References
Note: References are provided for the convenience of the reader to help distinguish between vulnerabilities. The list is not intended to be complete.
Assigning CNA
GitHub (maintainer security advisories)
Date Record Created
20250424
Disclaimer: The record creation date may reflect when the CVE ID was allocated or reserved, and does not necessarily indicate when this vulnerability was discovered, shared with the affected vendor, publicly disclosed, or updated in CVE.
Phase (Legacy)
Assigned (20250424)
Votes (Legacy)
Comments (Legacy)
Proposed (Legacy)
N/A
This is a record on the CVE List, which provides common identifiers for publicly known cybersecurity vulnerabilities.