I said @BenjDicken studying database papers was very inspirational, but I never did anything about it last year.
So let me get started. Today’s paper is “Solving A Million-Step LLM Task With Zero Errors”
Takeaways:
* Break the task into many, many (deterministic) micro-tasks
* Run each micro-task 2+ times to implement voting, i.e. ambiguity detection
* N ≥ 2 queries per micro-task is unavoidable; ambiguity detection without redundancy is not possible
* The paper explicitly accepts higher token cost as the price of certainty
* This is a reliability architecture, not an efficiency one
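The redundancy-plus-voting loop in the takeaways can be sketched as follows. `run_micro_task` is a hypothetical stand-in for one deterministic LLM micro-task call (its error pattern is contrived for illustration), and the escalate-on-disagreement policy is my assumption, not the paper’s exact protocol:

```python
from collections import Counter

def run_micro_task(task: int, seed: int) -> int:
    """Hypothetical stand-in for a deterministic LLM micro-task call.
    The "correct" answer for this toy task is task * 2; a contrived
    rule makes specific (task, seed) pairs return a wrong answer."""
    if (task + seed) % 7 == 0:
        return task + 1      # simulated model error
    return task * 2          # correct result

def vote(task: int, n: int = 2, escalate_to: int = 5) -> int:
    """Run the micro-task n times; unanimous agreement is accepted.
    Any disagreement is the ambiguity signal, and it escalates to
    more redundant samples plus a majority vote."""
    samples = [run_micro_task(task, seed) for seed in range(n)]
    if len(set(samples)) == 1:
        return samples[0]    # all copies agree: accept
    # disagreement detected -> escalate with extra redundant runs
    samples += [run_micro_task(task, seed) for seed in range(n, escalate_to)]
    return Counter(samples).most_common(1)[0][0]
```

For example, `vote(7)` first sees its two samples disagree (8 vs 14), escalates to five samples, and the majority recovers the correct 14 — paying extra tokens for certainty, exactly the trade-off the paper accepts.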
What’s interesting and relevant to 1MM:
* redundancy ≈ column consensus
* disagreement ≈ instability signal
* repeated sampling ≈ temporal pooling
* escalation ≈ attention / gain control
In other words, this paper accidentally reinvented cortical voting in LLM form.
References: