My favourite trivia about "On Computable Numbers" is that Alan Turing got the definition of computable reals wrong! The way he defines them - namely that a computable real is a Turing machine that generates the sequence of digits of the real - has severe issues, because addition and other basic operations are incomputable due to subtle diagonalisation arguments (see https://jdh.hamkins.org/alan-turing-on-computable-numbers/ for instance).
It's interesting that Turing didn't realise this, as the whole paper revolves around diagonalisation arguments in a few places.
The "modern" take is to define a computable real as a program that takes an epsilon as input and returns a rational number within epsilon of the real being represented. Or something similar - with this definition it is very simple to define addition, for instance, since you have control over the error bounds.
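To make this concrete, here's a minimal sketch in Python (the encoding and all names are my own invention, not any standard library): a real is a function from a rational epsilon to a rational approximation within epsilon, and addition just splits the error budget between the two summands.

```python
from fractions import Fraction

def third(eps: Fraction) -> Fraction:
    # An approximation-real for 1/3 (trivially exact here, but any
    # scheme that converges within eps would do).
    return Fraction(1, 3)

def sqrt2(eps: Fraction) -> Fraction:
    # An approximation-real for sqrt(2): bisect [1, 2] until the
    # interval containing sqrt(2) is narrower than eps.
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo >= eps:
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo

def add(x, y):
    # Addition is easy: ask each summand for an eps/2 approximation,
    # so the two errors sum to less than eps.
    return lambda eps: x(eps / 2) + y(eps / 2)

total = add(third, sqrt2)(Fraction(1, 10**6))  # within 10^-6 of 1/3 + sqrt(2)
```

The point is exactly the one above: because the caller controls the error bound, each operand can be queried as precisely as the operation requires.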
One more fact - equality is incomputable for both of these representations! There's no program that can uniformly decide whether two computable reals are the same, no matter how you cook up the definition. This one maybe isn't too surprising - in some sense no matter how many digits or bounds you check between two reals you can never be sure there isn't a smaller gap between them you haven't gotten to yet.
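What you *can* do is semi-decide inequality: keep requesting finer approximations, and if the reals genuinely differ their approximation intervals eventually separate. A sketch under the approximation-real encoding (a real as a function from epsilon to a rational within epsilon of it; names are hypothetical):

```python
from fractions import Fraction
from itertools import count

def apart(x, y):
    # Semi-decide x != y for approximation-reals: if |x(eps) - y(eps)|
    # exceeds 2*eps, the underlying reals must differ, since each
    # approximation is within eps of its real. If the reals are equal,
    # this loops forever -- which is exactly the incomputability of
    # equality described above.
    for n in count(1):
        eps = Fraction(1, 2**n)
        if abs(x(eps) - y(eps)) > 2 * eps:
            return True

zero = lambda eps: Fraction(0)
tenth = lambda eps: Fraction(1, 10)
found = apart(zero, tenth)  # terminates; apart(zero, zero) would hang
```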
Turing's definition of computable reals can't be wrong, because listing decimal digits is equivalent to giving rational approximations. The problem here is a confusion between real numbers and Turing machines that compute them.
They are logically equivalent definitions (they determine the same set of computable reals) but they are not equivalent from the perspective of computability. As I said, with the first definition you cannot compute addition, whereas with the second you can. I highly recommend reading the blog post I linked which proves this - the issues involve self-reference and are quite subtle (like much of computability theory!).
This has nothing to do with such a "confusion", by the way, which is obvious to anyone writing about this stuff (including me). Rather it's the difference between something being logically definable and computable, which Turing himself writes about in his paper.
Wrong is a judgement call, of course, but given that we're talking about computability here I think it's fair to call Turing's definition flawed. There's a reason nobody uses it in computable analysis.
I skimmed through the page that you linked and I am familiar with computability. When defining "computable reals", Turing defines a subset of real numbers, the only flaw is that he doesn't prove that this subset is closed under addition and multiplication. A proof appears in [1] where Rice indeed gives more convenient definitions and also cites Turing's paper for an "intuitive definition". Saying that Turing was wrong is a big stretch based on a deliberately introduced confusion.
You are still misunderstanding me. The set of computable reals given by Turing's definition is exactly the same as the set of computable reals given by the modern definition.
However, under Turing's definition there is no algorithm that can uniformly compute addition or multiplication! This has nothing to do with whether the set of computable reals is closed under those operations (although it is, of course - this is an easy exercise, no need to cite a paper).
I'll be more formal. Let us say a Turing machine A is a "Turing-real for x" if A outputs the successive digits of x when run. Then consider the following problem: given two Turing-reals A for a and B for b as input, output a Turing-real for a+b.
It turns out this problem is incomputable!
On the other hand the same problem but for the modern "approximation-reals" is computable, a marked improvement.
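The obstruction for digit streams is the classic carry problem. A toy illustration in Python (my own sketch, not taken from the blog post): an adder for Turing-reals must commit to an output digit after reading only finitely many input digits, but for 1/3 + 2/3 every finite prefix of the inputs is consistent with sums both below and at 1:

```python
from itertools import islice

def constant_stream(d):
    # A Turing-real as an infinite stream of digits after the decimal point.
    while True:
        yield d

thirds = constant_stream(3)      # 0.3333... = 1/3
two_thirds = constant_stream(6)  # 0.6666... = 2/3

# Digitwise, every finite prefix of the sum looks like 9, 9, 9, ...
prefix = [a + b for a, b in zip(islice(thirds, 10), islice(two_thirds, 10))]

# The true sum is 1.000..., but after any finite prefix an adversarial
# stream could still lower a later digit (forcing the sum below 1, so the
# output must start "0.9...") or raise one (forcing it to 1 or above, so
# it must start "1.0..."). No first output digit is ever safe to emit.
```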
The lack of computable addition and multiplication is the flaw in Turing's definition. This is well known stuff - there's even a discussion of it on the Wikipedia page for computable reals. The fact that you keep bringing up orthogonal issues, like closure of the computable reals and the distinction between a Turing machine coding for a real and the real itself, makes me think that you might be missing the crux of the matter here.
I have understood you perfectly well from the beginning, and I still think that the blog post you mention is nitpicking a minor detail. I mentioned the paper by Rice (which is indeed elementary) as an example of someone citing Turing without making a big deal of the difference in the definitions. Let's agree that Turing's definition is "inconvenient", but not really "wrong".
Okay, I see. I guess from my point of view, it's not a minor detail. It is simply an unworkable definition if you want to study computable reals in any meaningful way (i.e. beyond the absolute basics like defining the set of computable reals).
By the way I read through that paper you linked and I think it's possible Rice wasn't aware of the issue here either. He references Turing's paper and notes that Turing's definition is equivalent to his, but what I've been getting at here is that this equivalence itself is not computable. There's no effective procedure that converts a Rice-real into a Turing-real (the other direction, from digits to approximations, is easy), even though you can prove that the two definitions carve out exactly the same subset of R.
Turing himself proves this in "On Computable Numbers, with an Application to the Entscheidungsproblem: A Correction", so it seems he came to realise his error.
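To be concrete about which direction of the translation works: going from digits to approximations is effective, since the first n digits pin the real down to within 10^-n. A quick sketch (hypothetical names, digit streams assumed to represent a number in [0, 1)):

```python
from fractions import Fraction

def digits_to_approx(make_digit_stream):
    # Turn a Turing-real (a factory producing its decimal digit stream)
    # into an approximation-real.
    def approx(eps: Fraction) -> Fraction:
        digits = make_digit_stream()
        n, value = 0, Fraction(0)
        while Fraction(1, 10**n) >= eps:
            n += 1
            value += Fraction(next(digits), 10**n)
        return value  # truncation error is < 10^-n < eps
    return approx

def third_digits():
    while True:
        yield 3  # 0.3333... = 1/3

approx_third = digits_to_approx(third_digits)
```

The incomputable direction is the converse: extracting digits from approximations runs into the same carry-style obstruction at digit boundaries like 0.4999... versus 0.5000....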