is it possible to convincingly prove or disprove that daniel j. bernstein actually came up with curve25519 vs uhhh "had a vision from god"
Discussion
because literally every single thing he has ever claimed to have created (qhasm, eSTREAM) has never been in a runnable state
i assumed he just died in 2010 but he keeps pretending to publish papers with himself as first author despite being the fucking PI
there are so many other things wrong with the world atm but i'm still shocked about that because authorship is not a subjective editorial decision—it's a highly regulated and adjudicated process taught to every single graduate student. it is one of the many little legal systems that arise to achieve a nash equilibrium over an iterated prisoner's dilemma. it is a matter of academic integrity
"profs who steal students' work" is a very well known failure mode and it's up there right next to sexually harrassing students for reasons which should be obvious. usually they avoid leaving a meticulous written record, but then again usually it's the journal's job to perform due diligence on the author listing (because otherwise they can be sued in Real Court i'm p sure)
honestly this is sufficient evidence https://cr.yp.to/papers.html every single paper is first author. this is the evidence they find on law and order at the 50-minute mark before finding the door locked from the outside
i'm gonna email the journal that gave him best paper award about their authorlist policies. until my demands are met
i really want someone to tell me authorship works differently in crypto. that's exactly what i want to see. but i expect stonewalling
he's so fucking butthurt about kyber all the time. that's the one isis lovecruft specifically says is good. but he's not a lattice hater (quite the opposite), just kyber
this 30-page paper takes uhhhh quite a while to load https://kyberslash.cr.yp.to/kyberslash-20250115.pdf (yes he only provides pdf documents)
[someone who develops security vulnerabilities cannot be allowed to run code for longer than 5-10 seconds on your phone. if it makes your entire phone slower i recommend exiting the entire browser app immediately. this is what side-channel attacks can look like.]
i absolutely believe cloudflare rowhammered me for a captcha on an ACM article once given that they performed three very slow reloads before my fans went off and my 64gb of RAM was swapping. i thought it was game over dude. maybe it was
clownflare really believes someone would go afk waiting for their censorship regime to decide whether to censor you. let's get one thing straight: i am timing you here.
CAP theorem (Censorship, Accountability, Plausibility): choose two
- censorship: blocking access to a resource
- accountability: the outward appearance of unfettered access
- plausibility: how likely it is for the user to assume good faith
clownflare captchas are great for surveillance alone. that's why i assumed they would just fingerprint me asynchronously, linking my frontend browser id to the backend id of my request as it propagates through cloudflare IXPs.
that's a great gig! a new spin on the old classic!
but the ACM "open access" flow is much slower than standard clownflare!
(this probably should have been a clue because tracking alone, afaiu, is almost always done async. maybe you could open a websocket and trace a few more pings with a lengthier time budget, but that looks weird)
without interpreting it yet, it's important to recognize that average time to successful entry is a signal. reliably longer wait times for Y than for X through customsflare indicate that Y is assigned a higher time budget.
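a minimal sketch of measuring that gap. the sample timings here are made up for illustration (i did not actually log mine), and `time_budget_gap` is just a name i'm inventing:

```python
import statistics

# hypothetical wait-to-success samples in seconds, purely illustrative
standard = [1.1, 0.9, 1.3, 1.0]     # ordinary interstitial
open_access = [4.8, 5.2, 6.1, 5.5]  # the slower "open access" flow

def time_budget_gap(samples_x, samples_y):
    """mean extra wait on path Y vs path X. a gap that stays
    reliably positive across many samples suggests Y is assigned
    a higher time budget than X."""
    return statistics.mean(samples_y) - statistics.mean(samples_x)

print(time_budget_gap(standard, open_access))  # ≈ 4.33 seconds
```

with real data you'd want many more samples and a significance test, but the point stands: the gap itself is observable from the outside.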
clownflare (and google, and twitter, and...) frequently claims that each additional second of waiting causes users to "give up" in some unspecified way. this is used to convince management types to e.g. use google AMP and kill their own website, so any relationship of that claim with the truth is entirely incidental.
in addition, this problem description quite skillfully assumes the user is on google search or other decontextualized list of results, and the user is trying to evaluate likelihood/interest.
but that's not how customsflare interstitial spinners work. the user almost always knows the document they need (e.g. looking up a DOI from ACM "open access"). the spinner blocks them. so our concept of "time budget" is not about attracting users to read our page over others. for this, we need the evil 🧢 theorem proposed above
censorship, then, is the decision to make a document contextually or globally inaccessible.
we define it in two major forms:
- explicit: a direct request for a document via globally unique identifier fails to complete successfully within the expected time span.
- implicit: the document is removed from the results list for any or all queries.
(if a uuid is a "query", could these be said to overlap?)
fun facts about explicit censorship:
- any URI is just such a universally unique identifier
- any non-success result is censorship, particularly "security" errors
- a timeout policy produces yes/no/timeout and always completes under the timeout, which may often be more applicable than indefinite waiting
- however, understand that a static timeout length may be inferred by the server, in order to resolve only after your inferred timeout. (this discourages timeouts. why?)
the "valid" reason to discourage client-enforced timeouts:
- ok i can't think of any
(a minimum wait time is very reasonable though. if you are actually just a queueing system like customsflare, this is a tool to modulate flow rate across multiple stochastic processes.)
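the legitimate queueing use above can be sketched as a minimum-interval gate. `MinimumWaitGate` is an illustrative name of mine, not any real clownflare mechanism:

```python
import time

class MinimumWaitGate:
    """enforce a minimum spacing between admissions: the honest use
    of an interstitial delay, smoothing bursty stochastic arrivals
    into a bounded admission rate of 1/min_interval."""
    def __init__(self, min_interval):
        self.min_interval = min_interval
        self.next_slot = time.monotonic()

    def admit(self):
        """block until this caller's slot. returns seconds waited."""
        now = time.monotonic()
        wait = max(0.0, self.next_slot - now)
        # each admission pushes the next slot out by min_interval,
        # capping sustained throughput regardless of arrival rate
        self.next_slot = max(now, self.next_slot) + self.min_interval
        time.sleep(wait)
        return wait

gate = MinimumWaitGate(min_interval=0.05)
waits = [gate.admit() for _ in range(4)]  # a burst of 4 arrivals
# first arrival passes immediately; the rest get spaced ~50ms apart
```

the key property: the wait is a function of load, not of who you are. the evil variants below break exactly that property.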
the evil reasons to discourage client-enforced timeouts:
- you can do extra evil tracking stuff
- you can mine crypto
- you can contribute to a botnet ddos
- you can escape the browser sandbox via hardware fault attacks like rowhammer, or leak memory across protection boundaries via cpu speculation bugs like meltdown/spectre
- you can use browser-specific side-channel attacks to extract user credentials
- you may be able to trigger a separate RCE after port scanning the requesting client
those are all i can think of right now. please suggest anything i've missed!