This is in response to some previous discussion about this topic in the Free Madeira Telegram group.
The Bitcoin value overflow incident in 2010 was a rollback of the chain, and I don't see how that is a convoluted way of thinking about it... but let me try to elaborate.
I don't see any difference; the outcome is exactly the same. It was just more convenient and helped to make the recovery smoother, since nodes that still had the bug eventually followed the longest valid chain again. To me this is really just a nuance of the bug, because the new rule to reject the overflow was stricter, not looser, so nodes running the buggy code reorged once the patched chain surpassed the old one in total difficulty.
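To make that last point concrete, here is a rough sketch in Python (my own illustration with made-up numbers, not Bitcoin Core code): because blocks produced under the stricter rules are also valid under the old, looser rules, an unpatched node simply follows whichever branch has more accumulated work, and it reorgs as soon as the patched branch overtakes the overflow branch.

```python
# Hypothetical sketch of fork choice by accumulated proof-of-work.
# Both branches look valid under the old (buggy) rules, so an unpatched
# node just follows whichever has more total work; once the patched
# branch overtakes the overflow branch, the unpatched node reorgs.

def total_work(branch: list[int]) -> int:
    # branch = per-block work values (made-up numbers)
    return sum(branch)

overflow_branch = [100] * 53  # built on top of the invalid block 74638
patched_branch = [100] * 60   # rebuilt from block 74637 after the fix

best = "patched" if total_work(patched_branch) > total_work(overflow_branch) else "overflow"
print(best)  # -> "patched": the unpatched node reorgs onto this branch
```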
I was thinking about this, but if that were the case, I am not sure what I would consider a rollback. I can't even come up with a theoretical example; you could always say "it's just a deep reorg".
It should be pretty undisputed that this required patching the software, you can find the code change here: fix for block 74638 overflow output transaction. So this was an irregular state transition: the block was valid under the previous (buggy) consensus rules as implemented by the software, and manual intervention was required to invalidate / reject it, i.e. fixing the bug and updating nodes.
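To make the nature of the bug and the stricter rule concrete, here is a simplified Python sketch (my own illustration, not the actual C++ patch): the two outputs of the block 74638 transaction were each just under 2^63 satoshis, so each one on its own passed a per-output "not negative" check, while their sum wrapped around in signed 64-bit arithmetic; the fix adds range checks on every output and on the running total.

```python
# Simplified sketch of the missing check (not the actual Bitcoin Core code).
# Output values are satoshis held in a signed 64-bit field, so two huge
# positive outputs can wrap around to a negative sum.

MAX_MONEY = 21_000_000 * 100_000_000  # 21M BTC expressed in satoshis

def wrap_i64(x: int) -> int:
    """Emulate signed 64-bit wraparound, as in C++ int64 arithmetic."""
    return (x + 2**63) % 2**64 - 2**63

def buggy_check(outputs: list[int]) -> bool:
    # Pre-fix rule (simplified): only individually negative outputs are rejected.
    return all(v >= 0 for v in outputs)

def patched_check(outputs: list[int]) -> bool:
    # Post-fix rule (simplified): every output and the running total must
    # stay within [0, MAX_MONEY], which makes wraparound impossible.
    total = 0
    for v in outputs:
        if not (0 <= v <= MAX_MONEY):
            return False
        total += v
        if total > MAX_MONEY:
            return False
    return True

# Two outputs just under 2^63 satoshis, like the ones in the overflow transaction:
outputs = [9_223_372_036_854_277_039, 9_223_372_036_854_277_039]

print(buggy_check(outputs))    # True  -> slipped through the old rules
print(wrap_i64(sum(outputs)))  # negative: the 64-bit sum overflowed
print(patched_check(outputs))  # False -> rejected by the new, stricter rules
```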
The second, in my opinion stronger, argument for considering this not "just a reorg" is that there was a period of ~5-6 hours during which blocks were only built on the invalid chain. This means the chain effectively lost liveness during that period, until the bug was fixed and Bitcoin Core v0.3.10 was released, and only after operators updated their nodes with the patch were there two competing chains. This is an important detail, because in my opinion, if you "just" had a chain split, it would be harder to make the case for a rollback. It is also one of the important reasons why Ethereum embraces client diversity (more below): a bug in the code should not result in such a disaster. The worst case must be a temporary chain split, resulting in degraded block production and reduced user experience, but after the faulty client(s) fix the bug, the chain split resolves by itself and, most importantly, the chain never loses liveness.
So it was a rollback only socially and operationally, but for the Bitcoin Core software, this was just a deep reorg?
So far, the arguments still leave some room for interpretation, as one could argue it was a rollback by human coordination, implemented as a deep reorg by protocol rules, so technically not a rollback?
This is not the case either, because of the way Bitcoin nodes find the chain with the most total difficulty: they do so by just checking the block headers. This is a technical detail; headers are cheap to check and are sufficient to determine the longest chain with the most accumulated proof-of-work. But this is bad here, because in this scenario the nodes would always end up on the invalid chain: as per the consensus rules it was the longest chain, so after verifying that via the headers, even patched nodes would try to follow it. Eventually, once full validation starts, i.e. blocks are downloaded and validated (transactions are executed), the overflow bug would be caught, but the nodes would then stall, as they cannot build blocks on that chain since it is invalid per their rules.
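Here is a rough Python sketch of that failure mode (hypothetical structures and numbers, not Bitcoin Core code): header-based selection only sums work and sees no transactions, so it still picks the overflow branch, and only full validation catches the bad block, leaving the patched node with a best header chain it refuses to extend.

```python
# Hypothetical sketch of the problem described above: headers pick the
# overflow branch, full validation rejects it, and the node is left with
# no valid tip to extend beyond 74637, i.e. it stalls.

from dataclasses import dataclass

@dataclass
class Block:
    height: int
    work: int
    has_overflow_tx: bool = False

# The overflow branch: hours of blocks built on top of the bad block 74638.
overflow_branch = (
    [Block(74638, 100, has_overflow_tx=True)]
    + [Block(h, 100) for h in range(74639, 74670)]
)

def header_total_work(branch: list[Block]) -> int:
    # Header check: cheap, knows only accumulated work, not transactions.
    return sum(b.work for b in branch)

def first_invalid_block(branch: list[Block]):
    # Full validation: download blocks and execute transactions.
    for b in branch:
        if b.has_overflow_tx:
            return b.height
    return None

print(header_total_work(overflow_branch))   # most work -> selected via headers
print(first_invalid_block(overflow_branch)) # 74638 -> rejected on full validation
# Without further intervention the patched node cannot build on this branch
# and has no rule telling it to mine on top of 74637 instead: it stalls.
```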
So what was the solution to that? Besides the overflow bug fix, there was another code change: scanback check to prevent adding to the 74638 overflow chain. This would force the updated nodes to "scan back", i.e. check whether a block is a descendant of the invalid / overflow block. It was a temporary emergency patch to make sure updated nodes reject the overflow chain outright, or more technically, reject the whole branch of the block tree that is built on top of the invalid block, and force the node to build blocks on top of block 74637, which was the last valid block before the invalid block 74638.
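To show the shape of such a check, here is a hedged Python sketch (hypothetical structures and a placeholder hash, not the actual C++ code change): before building on a candidate block, walk its ancestry back to height 74638 and reject the whole branch if it descends from the known overflow block.

```python
# Hypothetical sketch of a "scan back" check: reject any branch that
# descends from the invalid block at height 74638, so the node keeps
# building on block 74637, the last valid block.

from dataclasses import dataclass
from typing import Optional

OVERFLOW_BLOCK_HASH = "hash_of_invalid_74638"  # placeholder, not the real hash

@dataclass
class BlockIndex:
    height: int
    block_hash: str
    prev: Optional["BlockIndex"] = None

def descends_from_overflow_block(block: BlockIndex) -> bool:
    # Scan back through the ancestors down to height 74638 and compare hashes.
    index = block
    while index is not None and index.height > 74638:
        index = index.prev
    return (index is not None
            and index.height == 74638
            and index.block_hash == OVERFLOW_BLOCK_HASH)

def can_build_on(block: BlockIndex) -> bool:
    # Reject the entire branch built on top of the invalid block.
    return not descends_from_overflow_block(block)

# Last valid block and the two competing branches on top of it
# (intermediate blocks omitted for brevity):
b74637 = BlockIndex(74637, "hash_74637")
bad_tip = BlockIndex(74660, "tip_of_overflow_branch",
                     prev=BlockIndex(74638, OVERFLOW_BLOCK_HASH, prev=b74637))
good_tip = BlockIndex(74638, "patched_74638", prev=b74637)

print(can_build_on(bad_tip))   # False -> whole overflow branch rejected
print(can_build_on(good_tip))  # True  -> keep building from 74637's branch
```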
So even from the perspective of the Bitcoin Core software, it was necessary to roll back to the previous block in order to continue building blocks on a chain without the overflow bug.
This is very easy for me to answer, because the Ethereum community has thought about this a lot: the clear winner is client diversity.
Coincidentally, a similar issue happened just recently on an Ethereum testnet called Holesky. There was a bug in multiple client implementations which caused them to produce invalid blocks and required an emergency bug fix for those clients, but the chain was never rolled back. It was "just" a chain split with degraded chain stability due to non-finality and a lot of missed blocks, but the chain did not lose liveness, because there were still other (minority) clients that produced valid blocks, as they didn't have the bug, and eventually the valid chain was able to finalize again and proceed as usual.
For anyone who's interested in learning more about this, we have published a blog post here: Lodestar Holesky Rescue Retrospective.