CVE Candidate: Integer Underflow in crypto_gcm_decrypt Leading to Heap OOB Read/Write
File: crypto/gcm.c
Severity: Critical
Type: CWE-191 (Integer Underflow) → CWE-125/787 (Out-of-Bounds Read/Write)
Vulnerability Description
In crypto_gcm_decrypt() (lines 495–513), the ciphertext length is computed by subtracting authsize from req->cryptlen using unsigned integer arithmetic, with no prior bounds check:
```c
// crypto/gcm.c:495
static int crypto_gcm_decrypt(struct aead_request *req)
{
	struct crypto_aead *aead = crypto_aead_reqtfm(req);
	...
	unsigned int authsize = crypto_aead_authsize(aead);
	unsigned int cryptlen = req->cryptlen;	// line 501
	...
	cryptlen -= authsize;			// line 504 — UNDERFLOW HERE
	...
	gctx->cryptlen = cryptlen;		// line 509 — stored for later use
	...
	return gcm_hash(req, flags);
}
```
If a caller supplies req->cryptlen < authsize (e.g., req->cryptlen = 0, authsize = 16), the subtraction wraps around to 0xFFFFFFF0 (~4 GiB). This underflowed value is then propagated through the entire decrypt pipeline.
Exploit Chain
Step 1 — Underflow stored as gctx->cryptlen
```c
gctx->cryptlen = cryptlen; // now ~0xFFFFFFF0
```
Step 2 — Out-of-bounds scatter-gather walk during GHASH
```c
gcm_hash() → gcm_hash_init_continue() → gcm_hash_assoc_remain_continue() →
    gcm_hash_update(req, gcm_hash_crypt_done, gctx->src, gctx->cryptlen, flags)

// gcm_hash_update (line 198):
ahash_request_set_crypt(ahreq, src, NULL, len); // len ≈ 4 GiB
return crypto_ahash_update(ahreq);
```
The GHASH update walks ~4 GiB through scatter-gather lists that hold only a few bytes of actual ciphertext. The scatterwalk iterates into unallocated or unrelated kernel memory, yielding a kernel heap out-of-bounds read. Unless KASAN or a similar sanitizer catches the walk (SMAP does not apply here, since every access stays in kernel memory), this can leak cryptographic material, kernel pointers, or other sensitive data from adjacent slab objects.
Step 3 — Integer wraparound neutralizes (but doesn’t fully save) the skcipher length
After hashing, gcm_dec_hash_continue() calls:
```c
crypto_gcm_init_crypt(req, gctx->cryptlen);
```
Inside crypto_gcm_init_crypt (line 185):
```c
skcipher_request_set_crypt(skreq, pctx->src, dst,
			   cryptlen + sizeof(pctx->auth_tag), // 0xFFFFFFF0 + 16 = 0x00000000
			   pctx->iv);
```
The skcipher receives length 0, so the decryption stage processes nothing — but the OOB scatterwalk in Step 2 has already occurred.
Step 4 — Second underflow in crypto_gcm_verify
```c
// line 466
unsigned int cryptlen = req->cryptlen - authsize; // same underflow
```
scatterwalk_map_and_copy() is then called with req->assoclen + cryptlen as an offset — a huge, attacker-influenced offset into the destination scatter list — potentially enabling an out-of-bounds write of the computed auth tag into adjacent kernel memory, depending on the scatter-gather layout.
Impact
| Impact | Detail |
|---|---|
| Kernel heap OOB read | ~4 GiB scatter-walk leaks adjacent slab contents (keys, pointers, credentials) |
| Kernel heap OOB write | scatterwalk_map_and_copy in crypto_gcm_verify writes 16-byte auth tag at attacker-controlled offset |
| Privilege escalation | OOB write to controlled offset may overwrite kernel structures (e.g., cred, function pointers) |
| Information disclosure | Auth tag computed over out-of-bounds kernel memory; its value or the comparison result can leak contents via error codes or timing |
Affected Call Sites
Any subsystem that calls crypto_aead_decrypt() on a GCM/RFC 4106/RFC 4543 transform with an unsanitized cryptlen is affected:
- IPsec (`net/xfrm/`, `net/ipv4/esp.c`, `net/ipv6/esp6.c`) — passes `skb` data lengths that can be attacker-influenced via crafted ESP packets
- TLS (`net/tls/`) — `tls_do_decryption()` sets `aead_request` lengths from network data
- `AF_ALG` socket interface — userspace can directly supply arbitrary `cryptlen` via `sendmsg`/`recvmsg`
The AF_ALG path is the most directly reachable from unprivileged userspace (CAP_NET_ADMIN is not required to open an AF_ALG socket for existing algorithms).
Proof-of-Concept (Trigger, No Exploit)
```c
// Trigger via AF_ALG from userspace (no privileges needed)
#include <string.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

int sock = socket(AF_ALG, SOCK_SEQPACKET, 0);
struct sockaddr_alg sa = {
	.salg_family = AF_ALG,
	.salg_type   = "aead",
	.salg_name   = "gcm(aes)",
};
unsigned char key[16] = { 0 };  // any 16-byte AES key works

bind(sock, (struct sockaddr *)&sa, sizeof(sa));
setsockopt(sock, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
setsockopt(sock, SOL_ALG, ALG_SET_AEAD_AUTHSIZE, NULL, 16);
int fd = accept(sock, NULL, 0);

// Send decrypt request with cryptlen=0 (< authsize=16):
// assoclen=0, cryptlen=0 → triggers underflow in crypto_gcm_decrypt
struct msghdr msg = { ... };    // decrypt op selected via cmsg; cryptlen field = 0
recvmsg(fd, &msg, 0);           // kernel underflows, OOB walk begins
```
Fix
Add an explicit length validation at the top of crypto_gcm_decrypt before the subtraction:
```diff
 static int crypto_gcm_decrypt(struct aead_request *req)
 {
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	unsigned int authsize = crypto_aead_authsize(aead);
 	unsigned int cryptlen = req->cryptlen;
+	if (cryptlen < authsize)
+		return -EINVAL;
 	cryptlen -= authsize;
 	...
 }
```
The same guard is needed in crypto_gcm_verify() (line 466) and crypto_rfc4543_crypt() (line 931) where analogous unsigned subtractions occur without prior validation.
References
- `crypto/gcm.c` lines 495–513 (`crypto_gcm_decrypt`)
- `crypto/gcm.c` lines 459–472 (`crypto_gcm_verify`)
- `crypto/gcm.c` lines 920–948 (`crypto_rfc4543_crypt`)
- CWE-191: Integer Underflow
- CWE-125: Out-of-bounds Read
- CWE-787: Out-of-bounds Write