#Mythos finds a #curl vulnerability
yes, as in singular one.
https://daniel.haxx.se/blog/2026/05/11/mythos-finds-a-curl-vulnerability/
@bagder one? wow, that really was worth burning the planet's resources. 😆
@bagder In line with what this blog post stated shortly after it was announced: the model is nothing special and much cheaper models can find the same bugs. Marketing BS turned to 11. https://www.flyingpenguin.com/the-boy-that-cried-mythos-verification-is-collapsing-trust-in-anthropic/
I love it:
"The AI reviews are used in addition to the human reviews. They help us, they don’t replace us."
“AI powered code analyzers are significantly better at finding security flaws and mistakes in source code than any traditional code analyzers did in the past”
I’m not sure this follows from what you’ve said in the rest of the post. Static analysers and fuzzers also made it very easy for people to find vulnerabilities and typically found a lot when they were deployed for the first time. And both were a lot cheaper to run than something like Mythos.
They aren’t finding as many vulnerabilities now because projects that are critical for security are integrating them into their CI flows.
And this is what always happens with some new technique: valgrind, Coverity, sanitisers, fuzzers, and so on: they’re released, they find a load of bugs that existing techniques failed to find, people fix them, they get integrated into regular CI runs, and the kinds of bugs that those tools find never make it into the tree.
syzkaller, for example, has found a lot more bugs in the Linux kernel than any Anthropic tool has. And that’s just one fuzzing tool.
@david_chisnall I think it makes sense for everyone to run the “easy” and cheap tools first, and once they find no more problems, then you bring out the bigger cannons like AI analyzers. So yeah, which is “best”? It probably depends.
@bagder @david_chisnall I'm not going to advocate actually doing this because it's expensive and I'm not a fan of the environmental impacts, but I am curious what it would find if you pointed it at the codebase from a time before the other precursor tools like fuzzers were in use. How many bugs can it find that you know with hindsight are there to be found?
The original Coverity paper claimed, as I recall, 300 CVEs. I'm not sure what the severity distribution was, but that seems a lot more than Mythos, and they probably used less compute than a single Mythos query.
The problem with any static analyser, whether it's based on formal reasoning or pattern recognition, is that it will be unsound (i.e. it will have false positives, in contrast with dynamic analyses that are incomplete and have false negatives). The LLM-based tools are no different in this respect. From a Claude 'comprehensive code review' of one of my projects, the only serious bug in the top ten that it found was one that already had an open PR to fix, and two were not only not bugs, they were intentional design choices and doing it the other way would have caused serious performance regressions (and not fixed bugs).
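A contrived example of the kind of false positive I mean (mine, not one of Claude’s findings): the dereference below is safe because the two branches are correlated, but any analyser that doesn’t track the correlation between the two `flag` checks will report a possible NULL dereference:

```c
#include <stddef.h>

int correlated(int flag) {
  int value = 42;
  int *p = NULL;
  if (flag)
    p = &value;   /* p is non-NULL exactly when flag is set */
  /* ... other work ... */
  if (flag)
    return *p;    /* safe: guarded by the same flag, but a tool that
                     doesn't correlate the branches flags a NULL deref */
  return 0;
}
```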
The thing that does make Mythos different is that it tries to build a PoC exploit. This will reduce the false positive rate, at the expense of creating false negatives (if it can't produce a PoC, you ignore it).
When I've used Coverity on a large project, it's found tens of thousands of bugs, and most of them are false positives, so it requires a lot of effort to find the ones that are actually important bugs. Something that produces PoCs automatically would help this a lot.
The baseline data point I'd really like to see is something that integrates the clang analyser with libFuzzer. For each report the analyser finds, insert profiling points at the branches on the control flow chain that it recommends, then automatically drive the fuzzer to try to trigger the code paths that the analyser reported as potential issues.
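Purely as a sketch of the shape, with everything hypothetical: `checksum()` stands in for whatever function the analyser flagged (here, with a claimed integer underflow), and I’m leaning on libFuzzer’s built-in edge coverage in place of custom profiling points:

```c
/* fuzz_sketch.c - hypothetical harness for an analyser report. */
#include <stddef.h>
#include <stdint.h>

/* Stand-in for the flagged function: the analyser claims the
   subtraction can underflow when data[0] > size. */
static uint8_t checksum(const uint8_t *data, size_t size) {
  size_t body_len = size - data[0];   /* reported underflow site */
  uint8_t sum = 0;
  for (size_t i = 0; i < body_len; i++)
    sum ^= data[i];                   /* OOB read once body_len wraps */
  return sum;
}

/* libFuzzer entry point. Its coverage instrumentation records the
   edges on the path to checksum(), so inputs that progress along the
   analyser-reported control flow chain stay in the corpus, and ASan
   turns the eventual OOB read into a reproducible crash: a PoC. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  if (size < 1)
    return 0;
  (void)checksum(data, size);
  return 0;
}
/* Build: clang -g -O1 -fsanitize=fuzzer,address fuzz_sketch.c */
```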
The default settings for the clang analyser are compilation-unit-at-a-time and with reduced bounds on loop iteration counts to avoid using enormous amounts of memory. If you're willing to spend as much money as it costs to operate the LLM-based tools, you can use the cross-compilation-unit approaches and bump the state up a lot. Running it configured to use a comparable amount of RAM to the GPUs that the Anthropic models run on would let you do a lot of symbolic execution.
@bagder not trying to buy into Anthropic's hype machine, but I wonder if curl is just a nonrepresentative code base. The average closed source / internal code base is probably orders of magnitude worse when it comes to static checks, engineering principles, you name it.
I suspect Mythos will be useful in making poor software a bit more secure. That could have been done without AI of course.
@eskett I do emphasize that it is good at finding flaws. And so are many other models. So yes, they will certainly find many flaws in source code going forward. Mythos and the others.
@bagder the power of rigorous software engineering :D
@bagder I suspect the question is: will it still be a worthwhile tool once the actual price to use it, not subsidized by anyone's war chest or VC money, is revealed?
The report concluded that it found five “Confirmed security vulnerabilities”. I find the term confirmed a little amusing when it is the AI itself that confidently says so. Yes, the AI thinks they are confirmed, but the curl security team has a slightly different take.
"Zero memory-safety vulnerabilities found." 💚
@bagder b-b-b-but curl is not in Rust!
@bagder Would it be a good idea to take an older version, where you already know you (as humans) found (and fixed) a certain number of vulnerabilities, and see if the AI can spot those correctly?
The idea being to have a real quality test? (“For Science” 😉)
Or are they all trained on your latest version already, and would that invalidate the test?
@johnnythan I agree that would be an interesting challenge for someone with time and tokens to burn
@bagder
At least it works. It would have been quite a disaster if it found zero.
@bagder hah! i was right!
@bagder spectacular result! Huge congratulations to the entire team! Made my day :)
@bagder thanks for this. It was really helpful for understanding the hype around Mythos, and for seeing that high code quality matters a lot, especially when human driven
My personal conclusion can, however, only be that the big hype around this model so far was primarily marketing. I see no evidence that this setup finds issues to any notably higher or more advanced degree than the tools that came before Mythos did. Maybe this model is a little bit better, but even if it is, it is not better to a degree that makes a significant dent in code analysis.
@bagder How do you explain that Mythos found 271 bugs in Firefox, and counting, and only one in curl? Is the Firefox code base 271 times larger?
@bagder
In terms of evidence to the contrary:
Check out
https://social.security.plumbing/@freddy/116549451049357174 / the blog post:
https://hacks.mozilla.org/2026/05/behind-the-scenes-hardening-firefox/
More than 270 vulnerabilities found by Mythos were fixed in a single Firefox release.
That's just one data point, but interestingly far off from yours.
@bagder From my talks with people who had been given access to Mythos in their org: they say it does find things that current tools miss, but also overlooks cases that current tools catch. So, yeah, to me it is “mostly marketing” combined with general FUD