CVE-ID
CVE-2025-52566
Description
llama.cpp is an inference engine for several LLM models, written in C/C++. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) leads to an incorrect size comparison when copying tokens. This allows the llama.cpp inference engine's heap to be overflowed via carefully crafted text input during the tokenization process. This issue has been patched in version b5721.
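The flaw described above is an instance of a signed/unsigned comparison bug in a bounds check that guards a copy. This record does not reproduce the vulnerable code, so the following C++ sketch only illustrates the general pattern; the copy_tokens helper, its parameters, and the buffer sizes are hypothetical and are not taken from llama_vocab::tokenize.

```cpp
// Minimal sketch of the bug class (hypothetical helper, NOT the actual
// llama_vocab::tokenize code): a signed capacity is compared against an
// unsigned element count, so a negative capacity passes the bounds check
// and the copy overflows the destination buffer on the heap.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// Hypothetical token-copy helper; n_tokens_max is the caller's buffer capacity.
static int32_t copy_tokens(const std::vector<uint32_t> & tokens,
                           uint32_t * out, int32_t n_tokens_max) {
    // tokens.size() is size_t (unsigned). If n_tokens_max is negative, the
    // conversion turns it into a huge unsigned value, so this "does it fit?"
    // check passes when it should not.
    if (tokens.size() <= (size_t) n_tokens_max) {
        // With the check defeated, this memcpy writes past the end of `out`.
        std::memcpy(out, tokens.data(), tokens.size() * sizeof(uint32_t));
        return (int32_t) tokens.size();
    }
    return -1; // caller's buffer is too small
}

int main() {
    std::vector<uint32_t> tokens(1024, 42); // attacker-influenced token count
    std::vector<uint32_t> out(16);          // small destination buffer

    // Correct capacity: the check rejects the undersized buffer (returns -1).
    std::printf("guarded copy: %d\n",
                copy_tokens(tokens, out.data(), (int32_t) out.size()));

    // A negative capacity would sail through the unsigned comparison and
    // corrupt the heap; left commented out deliberately.
    // copy_tokens(tokens, out.data(), -1);
    return 0;
}
```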
References
Note: References are provided for the convenience of the reader to help distinguish between vulnerabilities. The list is not intended to be complete.
Assigning CNA
GitHub (maintainer security advisories)
Date Record Created
20250618
Disclaimer: The record creation date may reflect when the CVE ID was allocated or reserved, and does not necessarily indicate when this vulnerability was discovered, shared with the affected vendor, publicly disclosed, or updated in CVE.
Phase (Legacy)
Assigned (20250618)
Votes (Legacy)
Comments (Legacy)
Proposed (Legacy)
N/A
This is a record on the CVE List, which provides common identifiers for publicly known cybersecurity vulnerabilities.