I identified and responsibly disclosed a Moderate-severity vulnerability in python-ecdsa, a widely used Python cryptography library with 47.8M downloads in the last month.
I found this issue while reviewing python-ecdsa with a very specific question in mind:
What happens if malformed DER lies about its length and the parser trusts it too far?
In this case, that question led to a real bug.
The DER parsing helpers in ecdsa.der accepted truncated data in cases where the encoded length claimed more bytes than were actually present. That malformed input should have been rejected immediately. Instead, it could pass deeper into parsing logic and eventually trigger an internal IndexError during key parsing.
That issue became CVE-2026-33936.
Project: python-ecdsa on GitHub
Package: ecdsa (pip)
CVE: CVE-2026-33936
attacker-controlled malformed DER → truncated length accepted as valid → parser continues past trust boundary → SigningKey.from_der() reaches internal exception path → unexpected IndexError / application-level DoS risk
python-ecdsa is a widely used Python library for elliptic-curve cryptography.
Among other things, it handles parsing and serializing key material in DER form.
That means its parsing code sits directly on a security boundary.
Whenever a library accepts externally supplied key material or structured binary input, correctness is not just a quality issue. It is a security property.
If malformed input is accepted when it should be rejected, downstream code starts making assumptions on top of invalid state. That is where bugs stop being “just parsing mistakes” and start becoming vulnerabilities.
DER parsing is one of those areas where small validation mistakes can have outsized effects.
The bug class is straightforward: a parser trusts a declared length field more than the bytes actually present in the buffer.
That is exactly the kind of boundary failure worth checking in security review.
I was not looking for weird cryptographic behavior here. I was looking for trust failures in structured-input handling.
That was the right place to look.
The root issue was improper validation of DER length fields when parsing malformed or truncated input.
Specifically, ecdsa.der.remove_octet_string() accepted input where the declared DER length exceeded the number of bytes actually available in the buffer.
So instead of rejecting malformed DER (for example, input declaring a length of 40963 bytes while far fewer were actually present), the helper accepted it and returned truncated content as if it were valid.
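To make the failure mode concrete, here is a minimal, hypothetical sketch of a length-trusting DER reader. This is illustrative code, not the library's actual implementation, but it exhibits the same class of bug:

```python
# Illustrative sketch, not ecdsa's real code: a naive DER OCTET STRING
# reader that trusts the declared length without checking the buffer.
def naive_remove_octet_string(data: bytes) -> bytes:
    if not data or data[0] != 0x04:   # 0x04 = OCTET STRING tag
        raise ValueError("expected OCTET STRING")
    length = data[1]                  # short-form length only, for brevity
    return data[2:2 + length]         # silently truncates if data is short!

# Declared length 8, but only 3 content bytes actually present:
malformed = bytes([0x04, 0x08, 0xAA, 0xBB, 0xCC])
body = naive_remove_octet_string(malformed)
print(len(body))  # 3, not 8 — truncated content returned as if valid
```

Python's slicing clamps silently at the end of the buffer, which is exactly why a missing bounds check produces truncated output instead of an error.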
That is already a bug.
But the stronger impact showed up downstream.
Because malformed input was accepted instead of rejected at the boundary, SigningKey.from_der() could later reach an internal exception path and raise:
That matters because this is not the kind of failure a caller expects from malformed input.
The correct behavior is a clean parse rejection such as UnexpectedDER or ValueError.
So the vulnerability was not “IndexError exists” in isolation.
The real vulnerability was this: malformed DER was accepted at the boundary, then propagated until it triggered an internal IndexError during key parsing.
That is one bug chain, not two unrelated issues.
A parser rejecting malformed input is not a cosmetic improvement. It is part of the security model.
The important distinction here is not whether the input was invalid. Of course it was invalid.
The important distinction is how the library behaved in the face of invalid input.
There is a real difference between rejecting invalid input cleanly, with a documented parse error, and letting it propagate until an internal exception surfaces.
The first is robust behavior.
The second creates application-level risk if software parses untrusted DER and assumes library failures stay within expected exception types.
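A short, hypothetical sketch of the caller's side shows why the exception type matters. The function names here are illustrative, not from the library:

```python
def fragile_parse(der: bytes) -> bytes:
    """Pre-fix style parser: indexes past the end of a short buffer."""
    length = der[1]
    # Indexing (unlike slicing) raises IndexError on a short buffer:
    return bytes(der[2 + i] for i in range(length))

def load_untrusted_key(der: bytes):
    try:
        return fragile_parse(der)
    except ValueError:  # caller guards only the documented error type
        return None

try:
    load_untrusted_key(bytes([0x04, 0x08, 0xAA]))  # claims 8 bytes, has 1
except IndexError:
    print("IndexError escaped into the application")
```

The caller's `except ValueError` handler never fires; the IndexError sails past it, which is the DoS-style risk the advisory describes.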
That is why this was properly classified as a vulnerability rather than merely a parser-quality bug.
I used two PoCs because they demonstrated two different parts of the same bug chain.
The first PoC showed that remove_octet_string() accepted truncated DER whose declared length exceeded the available buffer.
That established the core validation failure: the declared length was trusted without being checked against the available buffer.
The second PoC showed the more important downstream effect:
malformed DER supplied to SigningKey.from_der() deterministically triggered an internal IndexError before the fix.
That established the security-relevant impact: a deterministic, unexpected crash path reachable from attacker-supplied input.
That is a much stronger result than “parser accepted weird bytes.”
It shows boundary failure plus real operational consequence.
The first PoC proves root cause.
The second PoC proves impact.
That split matters.
A lot of reports stop at:
“this parser accepts malformed data.”
That is useful, but not always enough to show why the bug matters.
In this case, the stronger report was: "this parser accepts malformed data, and that acceptance propagates into an internal exception path during key parsing."
That makes the security story much clearer.
The fix was minimal and correct.
The patch added the same missing safety rule already used in remove_sequence():
the declared length must fit within the available buffer
That check was applied to remove_constructed(), remove_implicit(), and remove_octet_string().
Once those bounds checks were added, malformed/truncated DER was rejected immediately with a clean UnexpectedDER parse error.
And the PoC that previously triggered IndexError no longer reached the internal exception path.
It failed cleanly during parsing, which is exactly what should have happened from the start.
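The shape of the fix can be sketched like this. This is a simplified stand-in, not the actual patch, and UnexpectedDER here is a local stand-in for the library's exception:

```python
class UnexpectedDER(Exception):
    """Stand-in for ecdsa's parse-error exception type."""

def strict_remove_octet_string(data: bytes) -> bytes:
    if not data or data[0] != 0x04:
        raise UnexpectedDER("expected OCTET STRING tag")
    length = data[1]                 # short-form length only, for brevity
    if length > len(data) - 2:       # the previously missing bounds check
        raise UnexpectedDER("declared length exceeds available bytes")
    return data[2:2 + length]

# Truncated input now fails cleanly during parsing:
try:
    strict_remove_octet_string(bytes([0x04, 0x08, 0xAA, 0xBB, 0xCC]))
except UnexpectedDER as exc:
    print("rejected:", exc)
```

One comparison against the buffer length, placed before any slicing, is the entire fix.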
This is the kind of fix you want to see in a parser vulnerability:
No redesign. No ambiguity. Just correct validation where it was missing.
I also added focused regression tests to make sure this exact class of malformed DER stays rejected.
The new tests cover truncated-length rejection for remove_octet_string, remove_constructed, and remove_implicit.
That was important because the bug was not about one weird runtime path. It was about a validation rule that needed to hold consistently across related DER helpers.
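A hypothetical regression-test sketch, using toy stand-in helpers rather than the library's real functions, shows the shape of such tests: every helper must reject an overrunning declared length the same way.

```python
# Toy regression sketch: each DER helper must reject a declared length
# that overruns the buffer with a clean parse error.
class UnexpectedDER(Exception):
    """Stand-in for the library's parse-error exception."""

def make_strict_reader(tag: int):
    """Build a toy DER helper that bounds-checks before slicing."""
    def reader(data: bytes) -> bytes:
        if not data or data[0] != tag:
            raise UnexpectedDER("wrong tag")
        length = data[1]  # short-form length only, for brevity
        if length > len(data) - 2:
            raise UnexpectedDER("declared length exceeds available bytes")
        return data[2:2 + length]
    return reader

# One reader per tag, mirroring the trio of patched helpers:
for tag in (0x04, 0x30, 0xA0):  # OCTET STRING, SEQUENCE, [0] implicit
    reader = make_strict_reader(tag)
    truncated = bytes([tag, 0x7F, 0x00])  # claims 127 bytes, has 1
    try:
        reader(truncated)
        raise AssertionError("truncated DER was accepted")
    except UnexpectedDER:
        pass  # expected: clean rejection
print("all toy helpers reject truncated lengths")
```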
After the fix and tests were added, the full test suite passed locally.
That matters in real disclosure work.
A fix is much stronger when it comes with tests that lock the boundary down.
This issue was reasonably classified as Moderate.
The key impact here is availability / robustness, not confidentiality or integrity.
The advisory classification was Moderate, scoped to availability.
That makes sense.
The claim is not that malformed DER lets an attacker execute code. The claim is that malformed DER could trigger unexpected internal exceptions in software that parses untrusted DER material using this library.
That is a real and defensible DoS-style parsing bug.
This issue was reported privately through GitHub Security Advisories.
The report included both proof-of-concept scripts, including the IndexError reproducer.
The maintainer validated the issue, requested that the fix land with unit tests, and the coordinated fix proceeded through the GHSA temporary private fork workflow.
During CVE handling, GitHub initially refused assignment because the advisory text looked like it might describe more than one vulnerability. The clarification was simple:
this was a single vulnerability with a single root cause — improper DER length validation — and the SigningKey.from_der() IndexError was a downstream consequence of that same malformed-input acceptance, not a separate independently fixable issue.
That clarification was enough, and the issue was assigned:
CVE-2026-33936
The lesson here is not “DER is tricky.”
Everybody already knows DER is tricky.
The real lesson is this:
malformed structured input must be rejected at the exact point where the parser knows it is invalid.
If you miss that boundary, later code ends up operating on assumptions that are no longer trustworthy.
That is how low-level parsing mistakes become security issues.
This bug also reinforces something important about writeups and triage:
“Malformed input accepted” was the start of the story.
“Malformed input accepted, then propagated into an internal exception path during key parsing” was the complete story.
That distinction helped make the case clearly and correctly.
This vulnerability was not about exotic crypto.
It was about a parser trusting malformed input longer than it should have.
A truncated DER length field crossed the boundary, survived validation when it should have been rejected, and eventually caused a crash deep inside key parsing.
That is why this became CVE-2026-33936.
Fixed by enforcing proper DER length bounds checks in the affected helper parsers.