A Friendly Chat About Whether 0.999... = 1

Eric V: If 1/3 is really a “process” more than a number, then are all fractions processes? For example, what about 1/5, expressible in decimal as 0.2, but in binary as 0.00110011…?

Caper_26:

The proof was not a proof at all since it is invalid… unless you assume 9 = 8.999…, which is completely circular; that is, assuming S − rS = a, which is the basis of the entire “proof” of 0.(9) = 1.

There’s no reason to label that a circular assumption. For one thing, there’s no reason to approach it with the assumption that they’re different numbers either. For another, the equality of 1 and .9… doesn’t rely on 8.999… = 9 as a part of the proof. What you’re saying is somewhat like saying “if 3/6 = 1/2, then 10/5 = 2, and to say that 10/5 = 2 is an assumption based on the principles that are assumed true so that 3/6 can equal 1/2”.

Tanmay Bhore: You can’t say 9.999… is “completely divisible by three” in any meaningful way, because there’s a remainder of .333… (which you called the “quotient”). So to conclude that 10 is divisible by 3 with remainder 0 is obviously incorrect.

I already disproved the algebraic “proof”.
It is based on incorrect mathematics of infinite series. (Comment 133).
If x = 0.555…, then 10x − x is
10 Σ 5/10ⁿ − Σ 5/10ⁿ = Σ 50/10ⁿ − Σ 5/10ⁿ = Σ 45/10ⁿ

Next is the question: how can we ACCURATELY describe Σ 45/10ⁿ as a “number”?
4.999… ?
But what about the ‘5’ that is generated at every iteration?
Do we take a leap of faith and say it doesn’t exist?
s1 = 4.5
s2 = 4.95
s3 = 4.995
For any ‘n’, there exists a 5, and the 9’s are inserted between the 4 and the 5.
But “looking” at the 4, there are only 9’s after and we “never reach the 5”?
What if we looked at the 5? Would there be only 9’s in FRONT of it with no 4 at the beginning?
An endless decimal is not something that we can fathom. It has no “value”, since the finite value that we think we can assign to it is not that value, as there is always another digit after that.
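
For reference, those partial sums can be written as sn = 5 − 5/10ⁿ, so the trailing ‘5’ has place value 5/10ⁿ at step n. A quick Python sketch of those numbers (the helper name is mine, purely for illustration):

```python
def s(n):
    return 5 * (1 - 10 ** (-n))      # s_1 = 4.5, s_2 = 4.95, s_3 = 4.995, ...

for n in (1, 2, 3, 10, 15):
    trailing_five = 5 * 10 ** (-n)   # place value of the final '5' at step n
    print(n, s(n), trailing_five)    # that contribution shrinks toward 0
```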

The whole article is founded on a flawed principle. If someone asks whether 0.999… = 1 without any qualifiers, then I wouldn’t jump to the conclusion they were asking about the equation in a hyperreal number system any more than I would jump to the conclusion they were asking it in a hexadecimal number system. Clearly the equation is false if we are using hexadecimal numbers. But to argue the equation is false on those grounds is ludicrous. The real, decimal number system is the OBVIOUS default for any general discussion.

The author would have been better served to make it clear that THEY are the ones deviating from the norm and into alternate number systems, instead of trying to claim that it was the layman asking the question who must have been asking about alternate number systems.

This stuff can easily be proven. It stems from why rational numbers can be expressed as fractions (and potentially integers).

0.3333333333333333333… equals 1/3 as we already know
multiply by three
0.99999999999999999999… equals 3/3 or 1

i.e… if you want an algebraic proof
x = 0.5555555555…

Step 5:

Your two equations are:

10x = 5.555555555…

x = 0.5555555555…

10x − x = 5.555555555… − 0.555555555…

9x = 5

Divide both sides by 9

x = 5/9
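
If you want to sanity-check that algebra with exact arithmetic, here is a rough sketch in Python using the standard fractions module (the partial-sum expression is just my way of writing 0.5, 0.55, 0.555, …):

```python
from fractions import Fraction

x = Fraction(5, 9)          # the value the algebra assigns to 0.555...
print(10 * x - x == 5)      # True: 9x = 5

# the truncated decimals 0.5, 0.55, 0.555, ... close in on 5/9
for n in (1, 4, 8):
    partial = Fraction(5 * (10**n - 1), 9 * 10**n)
    print(n, float(x - partial))    # the gap shrinks toward 0
```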

“When writing, I like to envision a super-pedant, concerned more with satisfying (and demonstrating) his rigor than educating the reader. This mythical(?) nemesis inspires me to focus on intuition. I really should give Mr. Rigor a name.”

Call him Mortimus Rigor. Or Morty.

Perhaps the best thing to do is never use decimals/floats, and only keep things as fractions. If irrational numbers must be represented, you can use a very large denominator and be forced to be explicit about your resolution tolerance. For example, when you need to use Pi, use a crazy fraction (e.g. from the Gregory-Leibniz series) whose length/complexity depends on the resolution tolerance. Another way to see it is that moving from fractions to decimals moves you to a different dimension: one in which resolution is hidden/missing/continuous (fractions) and one in which resolution exists (decimals) and should be explicit but normally isn’t.
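
Here is a rough sketch of what that might look like in Python, using exact fractions and the Gregory-Leibniz series (the function name and the particular stopping rule are my own choices, just to make the tolerance explicit):

```python
from fractions import Fraction

def pi_as_fraction(tolerance):
    """Return a Fraction within `tolerance` of pi, via pi/4 = 1 - 1/3 + 1/5 - ..."""
    quarter_pi = Fraction(0)
    k = 0
    # for an alternating series, the error is bounded by the first omitted term
    while Fraction(1, 2 * k + 1) > Fraction(tolerance) / 4:
        quarter_pi += Fraction((-1) ** k, 2 * k + 1)
        k += 1
    return 4 * quarter_pi

approx = pi_as_fraction(Fraction(1, 100))   # pi, explicitly to within 1/100
print(approx, float(approx))                # a "crazy fraction", roughly 3.14
```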

Nice analogy. It’s almost like we have to rasterize a vector image when going from a fraction to a decimal. You get a fixed level of accuracy vs. the “infinite precision” fraction.


Every repeating decimal represents a rational number. There is a way to convert repeating decimals to fractions.

x = 0.48123123…

Multiply x by 10³ = 1000, because the repetend of the decimal consists of 3 digits.

1000x = 481.23123123…

1000x − x = 480.75
999x = 48075/100
x = 48075/99900

Simplify.

48075/99900 = 641/1332

0.48123123… = 641/1332
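
For anyone who wants to double-check that arithmetic, here is a quick sketch with Python’s exact fractions (just a verification, nothing more):

```python
from fractions import Fraction

x = Fraction(48075, 99900)
print(x)                                      # 641/1332, the simplified form
print(1000 * x - x == Fraction(48075, 100))   # True: 999x = 480.75
print(float(x))                               # 0.481231231231... (the repetend shows up)
```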

Consider the set {0.9, 0.99, 0.999, 0.9999, …}

Those decimals terminate, so their repetend is ‘0’. Implicitly, there are infinitely many ‘0’ digits after their last ‘9’ digit.

It is important to understand that 0.999… is not an element of the set mentioned above, just as ∞ is not an element of {1, 2, 3, 4, …}, because the repetend of 0.999… is ‘9’. There are just infinitely many ‘9’ digits to the right of the decimal point.


0.999… is a numeral, just as 1 is, that represents the same number in a different light, revealing a profound truth about how place-value notation expresses a certain characteristic of numbers regarding continuity.
When a number does not grow exponentially, all the digits to the left of the point of a decimal that represents it are ‘0’ (e.g. 0.50² = 0.25 < 0.50). Conversely, when a number does not decay exponentially, not all the digits to the left of the point of a decimal that represents it are ‘0’ (e.g. 3.20² = 10.24 > 3.20).
The number 1 neither grows nor decays exponentially.  0.999… expresses its non-growing characteristic, while 1.0 expresses its non-decaying characteristic.
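
A tiny Python check of that squaring criterion, using the examples above plus a couple more (purely illustrative):

```python
for x in (0.50, 0.999, 1.0, 1.001, 3.20):
    print(x, x * x, "shrinks under squaring" if x * x < x else "does not shrink")
```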

I think that a simple argument against 0.999… = 1 is that wherever you can use infinity, you can use infinitesimals (as you can get infinitesimals from ((((((1/x)/x)/x)/x)/x)/x)…), and you have to allow infinity to have the concept of infinite decimals like 0.999… There are definitely better arguments, but it really comes down to whether you believe in infinitesimals or not; so as long as you can prove infinitesimals exist, 0.999… doesn’t equal 1.

I think Tall & Gray’s concept of the ‘procept’ is useful here. According to this notion, there is an inherent ambiguity in much mathematical symbolism. For example, ‘10 + 3’ refers to both the formal process of addition (as we learn it in school) and to the number ‘13’, a concept. Hence, ‘10+3’ is a procept.

This might seem all very trivial, but in cases like ‘0.9999… = 1’, we can see the disconnect between the process of ‘tending to a limit’, represented by 0.999…, and the concept of the ‘value of the limit’, represented by 1.

As Tall himself states:

the limit ‘nought point nine repeating’ has mathematical limit equal to one, but cognitively there is a tendency to view the concept as getting closer and closer to one, *without actually ever reaching it*. Over the years the reason behind this distinction has become clearer. The primitive brain notices movement. Hence the mental notion of a sequence of points tending to a limit is more likely to focus on the moving points than on the limit point. The limit concept is conceived first as a process, then as a concept. It is therefore amenable to an analysis in terms of the theory of procepts. In the case of the limit, the process of tending to a limit is a potential process that may never reach its limit (it may not even have an explicit finite procedure to carry out the limit process). This gives rise to cognitive conflict in terms of cognitive images that conflict with the formal definition.

Or here:

Research shows most beginning university students conceive of a limit as a dynamic process rather than the static limit concept and end up with all sorts of confusions (Schwarzenberger & Tall, 1977). They fail to understand the deeply embedded cultural way that mathematicians use ambiguity of notation to bridge the difference between process and concept. It seems, too, that many mathematicians are unaware of this explicit ambiguity in their thinking processes.

So, to my mind at least, most of the confusion on this topic is the result of conflating the dynamic process of 0.999…, and the concept of the end result of this process, i.e. 1. Of course, this conflation of process and concept is the result of the ambiguous nature of the mathematical symbolism.

Another problem is that limits of sequences are often stated to be values that a sequence can get as close to as possible. The problem with “close” is that it usually means “near, but not equal to”. So 0.999… can get as close as you want to 1, without actually reaching it. A better way to state the limit, l, of a sequence s1, s2, …, sn is as follows:

By sn tending to the limit l, sn → l, we mean:
given any accuracy ε > 0, there exists N such that if n > N, then
sn and l are ε-indistinguishable.
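
To make that definition concrete for the sequence 0.9, 0.99, 0.999, …, here is a small Python sketch (the helper names are mine); it computes an N that works for a given ε and checks the gap:

```python
import math

def s(n):
    return 1 - 10 ** (-n)              # s_1 = 0.9, s_2 = 0.99, s_3 = 0.999, ...

def N_for(eps):
    # 1 - s_n = 10**-n, so any N >= log10(1/eps) works
    return math.ceil(math.log10(1 / eps))

for eps in (0.1, 1e-6, 1e-12):
    n = N_for(eps) + 1                 # any n > N
    print(eps, N_for(eps), 1 - s(n) < eps)   # True every time
```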

In our case, 0.999… and 1 are infinitely indistinguishable, which is just another word for ‘equal’ in my book, since at no level of discrimination ε can a difference between them be discerned. So, even given an infinitesimal microscope that could distinguish ‘1’ and ‘1 + h’, as in your article, the infinite sequence 0.999… and the real number 1 would appear identical.

P.S. I think there is a fundamental clash between the human intuition that perceives 0.999… to be a dynamic, endless process that is continually under construction and the mathematical definition of the static limit of this sequence as 1. When we define ‘0.999… = 1’ we are describing an actually completed infinity, that is, the final state of an endless process, which is (apparently) contradictory to our intuition. I think a better way to retain the intuitive understanding of 0.999… as a dynamic process that can equal the static limit 1 is to imagine subtracting 0.999… from 1.

If we take 3 terms of the sequence and 3 decimal places of accuracy, then 1 − 0.999 = 0.000|1, with the 0.0001 being invisible to us, since it is at the fourth decimal place (the ‘|’ represents decimals that we cannot detect). We can continue this process indefinitely, so that no matter what ‘9’ we stopped at in the sequence 0.999…, there would be a level of accuracy at which ‘0.999…9’ (n nines) appears identical to 1, since their visible difference would be zero. If the difference between two numbers (or a limit and a sequence in this case) is always and forever 0, then I think it is fair to say that they are equal, in the typical sense of the word, not “infinitely close” or “as close as you want” etc.
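
Here is a small Python sketch of that subtraction idea (the function is my own illustration): if we can only record k decimal places, then once there are more than k nines, the visible difference from 1 is exactly 0.

```python
def visible_difference(n_nines, k_places):
    s_n = 1 - 10 ** (-n_nines)        # 0.999...9 with n nines
    return round(1 - s_n, k_places)   # what we can actually see at k places

print(visible_difference(3, 3))    # 0.001 -> still visible
print(visible_difference(4, 3))    # 0.0   -> invisible at 3 places
print(visible_difference(100, 3))  # 0.0
```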

Great insights, thanks for sharing. A few thoughts:

  1. I think the distinction between a concept and a process is important. Oftentimes in discussions of 0.999… = 1 we (or the math community) want to browbeat people into accepting the equivalence without really digging into the issue. Namely, we think of “building up” to 1, and if we started with .9 as the first decimal, there’s nothing we can add in the subsequent ones (.9xxxx) to ever make the total 1. (Or so it seems – the resolution, if we accept it, is that 0.999… and 1.000… can refer to the same concept, just like 1 and 2/2 do.)

  2. The concept of “equivalence” is important as well. We seldom acknowledge that equivalence really means “we cannot detect a difference”. In math, if we say x and y are equivalent, we mean that we cannot detect a difference (using subtraction or any other technique). In the physical world, we know our instruments have limitations, but do we acknowledge it in the conceptual mathematical world? We can see that Roman numerals have limitations on how well they can do arithmetic (which may not have been immediately obvious to the Romans); perhaps our number system has limits on how well it can do comparisons (which is not immediately obvious to us).

Appreciate the thoughts.

Kalid,

Happy to have supplied some insights, your website has been a fantastic resource down through the years!

Regarding point 2 above:

I have been conscious of a certain avoidance of the issue of uncertainty throughout my mathematical education, whether deliberate or not. Consider Laplace’s Demon and its attempts to know the precise location and momentum of every atom in the universe. Now, I know QM prohibits the existence of such an entity, but consider a purely classical, mechanical universe - could such an entity exist, at least in theory?

If the Demon wants to completely characterise a particular particle, it would need to determine its

1 - Position, which requires one finite time point
2 - Velocity, which requires two finite time points
3 - Acceleration, which requires three finite time points
-Here is where this infinite regress was mysteriously halted in high school physics-
4 - Jerk, which requires four finite time points



∞ - "Some silly name for the ∞th derivative, which requires ∞ time

The Demon cannot guarantee that terminating the characterisation of a single particle at the nth derivative will not result in unknown errors in the predicted higher-order derivatives cascading down through the lower-order derivatives and causing utterly failed predictions in due course. So, to my mind, since bursting onto the scene in 1814, Laplace’s Demon is still sitting on its backside trying to fully characterise the first atom it encountered. And I don’t expect any profound predictions to emanate from any of its demonic orifices any time soon…
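
To make the ‘k-th derivative needs k + 1 time points’ bookkeeping concrete, here is a rough Python sketch using forward differences (the function and the sampled trajectory are my own illustration):

```python
def kth_derivative_estimate(samples, dt, k):
    """Estimate the k-th derivative at the first sample via k-fold forward
    differences; this consumes k + 1 equally spaced time points."""
    if len(samples) < k + 1:
        raise ValueError("need at least k + 1 time points")
    diffs = list(samples)
    for _ in range(k):
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
    return diffs[0] / dt ** k

dt = 0.01
xs = [(i * dt) ** 3 for i in range(5)]     # position x(t) = t^3, five samples

print(kth_derivative_estimate(xs, dt, 1))  # velocity estimate (uses 2 points)
print(kth_derivative_estimate(xs, dt, 2))  # acceleration estimate (uses 3 points)
print(kth_derivative_estimate(xs, dt, 3))  # jerk ~ 6, exact for a cubic up to float rounding (4 points)
```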

This inability to fully characterise any given number is why I like the hyperreals and (most recently) the superreals - they make this reality quite clear and emphasize it with every calculation. We can never say that two hyperreal numbers a and b, with equal standard parts, are actually equal at each and every level of magnification, since at some level of magnification they may differ profoundly, e.g.
a = 10 + 10ε^1 + 10ε^2 + 10000ε^3 + …
b = 10 + 10ε^1 + 10ε^2 + ε^3 + …

Here a and b differ when we zoom in on third-order infinitesimals, ε^3.
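
As a toy model of ‘equal standard parts, but different at some magnification’, here is a short Python sketch (the list-of-coefficients representation and the function names are mine):

```python
# a truncated hyperreal as coefficients of eps^0, eps^1, eps^2, eps^3, ...
def standard_part(x):
    return x[0]

def first_disagreement(x, y):
    """Order of the first eps-power at which x and y differ, or None if none."""
    for order, (cx, cy) in enumerate(zip(x, y)):
        if cx != cy:
            return order
    return None

a = [10, 10, 10, 10000]   # 10 + 10*eps + 10*eps^2 + 10000*eps^3 + ...
b = [10, 10, 10, 1]       # 10 + 10*eps + 10*eps^2 +     1*eps^3 + ...

print(standard_part(a) == standard_part(b))  # True: identical to any real measurement
print(first_disagreement(a, b))              # 3: they differ at the eps^3 magnification
```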

So, to my mind, limitations to our knowledge are inherent in these ideal, formal systems which I find both surprising and refreshing. There’s enough certainty in Euclidean geometry to comfort all those afraid of the unknowable. :slight_smile:

Regarding point 1, I have two thoughts.

Firstly, I agree that this browbeating of students until they accept that 0.999… = 1 by definition alone, which requires them to suppress their intuition, is unhelpful and is symptomatic of a problem with the method of teaching, not with the student’s way of thinking, per se.

I think in this situation we need to find a better way to teach limits, one that matches the intuitive notions of the typical student. Personally, I think this whole “the sequence approaches but never reaches the limit” approach is wrong and inevitably leads to the cognitive difficulties displayed in the comments of this article.

David Tall mentions drawing graphs, with pen and paper, exactly as you have done in your article. I’d suggest a sort of simple algorithm for drawing this curve on graph paper with a y-axis a meter high, for the sake of argument. With my pencil, I can specify a point to an accuracy of a millimeter, so I have an error of 0.001 m.

Starting my drawing, I draw a continuous curve, with the y-coordinate increasing from 0.9 m, to 0.99 m, and then to 0.999 m. Now, this last step, where the difference between the limit (1 m) and my position equals my minimum step size (0.001 m), is the last time I can increment my y-coordinate.
=> 0.999 m + 0.001 m = 1.000 m

So I physically hit the limit, the line ‘1’, and I can continue on along the line y = 1. If I could be accurate to a level of a tenth of a millimeter with my pencil, I could have gone to 0.9999 m, but with my imperfect tools (pencil and eyeballs) I can only increment by 0.001 m and cannot discern a difference between 1 m and 0.9999 m. => By my standards of precision, I have reached the limit l of the sequence (0.9, 0.99, 0.999, …).

We could demonstrate to students that this process can continue forever, with the same pattern repeating, i.e. no matter how small we make our error, once we take n > N steps we can reach the desired limit in the sense that we cannot distinguish our sequence from the limit itself. They are ‘equivalent’, as you suggest. We know that people are comfortable with this notion of infinitely repeated behaviour from their intuitions of what 0.999… represents. When we take a sequence to infinity we are saying that even the infinitely pedantic are satisfied that the sequence sn and the limit l are equal, as they are *infinitely indistinguishable*. Hence the infinitely repeating decimal 0.999… equals the integer 1.

Secondly, in response to your first point, I don’t think either of us has been quite clear in our decimal representation of hyperreal numbers so far. Since my first post, I have read the article from Katz & Katz that you linked in the appendix, as well as Lightstone’s article on the topic. I think it would be helpful to update the article with this less ambiguous depiction of hyperreal decimals (although there are difficulties with the decimal representation of hyperreals, as per Lightstone at least).

So, for example:

0.999… = 1 – h [there is an infinitely small difference]

May not be correct. If we have the sequence x = 0.999… mapped to the hyperreals, then we have a number with the following decimal expansion:

x = "0.999…; …999…

where the first half of the expansion, before the semi-colon, is the “real” part, while the second part, after the semi-colon, is the infinitesimal part that recurs an infinite number of times. Now, the obvious problem is that if we have an infinite hyperreal, H, then the infinitesimal sequence “0.999…; …999” will terminate at decimal index 1/10^H, indicated by the ‘|’ in the following: “0.999…; …999|000…”. This sort of “truncated infinity” is allowable within the hyperreals, indeed is almost mandatory.

Thing is, there are an infinite number of infinities in the hyperreals, e.g. 2H, H^4 and H^H. Therefore, we can construct decimal expansions that terminate at any arbitrary infinitesimal point, all of which result in hyperreals that are infinitesimally smaller than ‘1’. However, when we state ‘0.999…’ in the real number system, we are referring to that decimal number that never terminates, not at index H, 2H, H^H^H, never! As per the Katz & Katz article, the convention is to denote this unique “infinitely infinite” number as ‘1’.
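
(My own restatement of the gap being described: truncating the expansion at an infinite index H leaves 1 − 0.999…;…999|000… = 1/10^H, a nonzero infinitesimal, precisely because H is infinite; only the untruncated ‘0.999…’ of the real number system closes the gap entirely.)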

I think this more rigorous depiction of the decimal expansions of the hyperreals makes my argument that ‘0.999…’ and ‘1’ are indistinguishable at any level of magnification clearer. It also suggests, to me at least, that the following part of your article may be incorrect:

When we switch back to our world, it’s called taking the “standard part” of a number. It essentially means we throw away all the h’s, and convert them to zeroes. So,

0.999… = 1 – h [there is an infinitely small difference]
St(0.999…) = St(1 – h) = St(1) – St(h) = 1 – 0 = 1 [And to us, 0.999… = 1]
The happy compromise is this: in a more accurate dimension, 0.999… and 1 are different. But, when we, with our finite accuracy, try to describe the difference, we cannot: 0.999… and 1 look identical.

That is, there is no difference between 0.999… and 1, infinitely small or otherwise.

John