Here's the full chat, including the post-match analysis :-)
https://chatgpt.com/share/69130153-88cc-8011-b8a0-f15837e4568b
As a side quest I tried this with Deepseek: it always wins if I go first (sample size limited by my boredom threshold), and it did identify knowing my pick as the deciding advantage. (It also said that it was learning my pattern and that helped too 🤔).
@Tallish_Tom GPT-5 typically wins the first 3-4 rounds and then starts losing badly.
This is, I suspect, about context entropy. As the sequence of tokens grows, the probability that the accumulated context resembles anything seen in training rapidly deteriorates, forcing the model out of distribution.
This happens on *every* class of problem, but it's most visible with simple, deterministic rules (e.g., chess).
What's *really* interesting is that when each round is played in a fresh context, it gets a perfect score.
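For anyone who wants to reproduce the comparison, here's a minimal sketch of the two setups, assuming a chat-completions-style API (the OpenAI Python SDK here) and a placeholder game prompt; the model name, system prompt, and round count are all illustrative, not what was actually used in the chat above.

```python
# Sketch: accumulated context (one long conversation) vs. a fresh context per round.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-5"    # placeholder: whichever model you're testing

SYSTEM = "You are playing a simple turn-based game. Reply with your move only."

def play_accumulated(rounds: int) -> list[str]:
    """One long conversation: every previous round stays in context."""
    messages = [{"role": "system", "content": SYSTEM}]
    replies = []
    for i in range(rounds):
        messages.append({"role": "user", "content": f"Round {i + 1}: make your move."})
        resp = client.chat.completions.create(model=MODEL, messages=messages)
        move = resp.choices[0].message.content
        messages.append({"role": "assistant", "content": move})
        replies.append(move)
    return replies

def play_fresh(rounds: int) -> list[str]:
    """Fresh context per round: each round sees only the system prompt."""
    replies = []
    for i in range(rounds):
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": f"Round {i + 1}: make your move."},
            ],
        )
        replies.append(resp.choices[0].message.content)
    return replies
```

Comparing the two runs over the same number of rounds is what surfaces the effect: performance holds up in `play_fresh` but degrades as the conversation in `play_accumulated` grows.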