
Talk:Intermediate value theorem


Error in proof?


I'm not a mathematician, but there's a bit of this proof that seems wrong. It goes

"Suppose first that f (c) > u."

I don't see how we can suppose this since by definition

    c = sup {x in [a, b] : f(x) ≤ u}. 

which I understand as meaning that c is the largest number of those real numbers x such that f(x) is less than or equal to u. If so, how could f(c) be greater than u?

Wrong. sup does not mean the largest member of a set, but rather the smallest upper bound of the set, i.e., the smallest number that no member of the set exceeds. If the set has a largest member, then that is the set's sup; the set of all numbers strictly less than 10 has no largest member, but it still has a sup, which is 10. Michael Hardy 01:32, 4 Jan 2004 (UTC)

My mistake, which I realised on Friday evening when I looked up the definition of supremum. Wondering why I didn't get this from Wikipedia, I looked at the definition there (supremum) and saw that a non-mathematical definition precedes it, which could cause confusion. DB.

There is indeed an error in the proof, which I copied below, as I shall explain.

Let S be the set of all x in [a, b] such that f(x) ≤ u. Then S is non-empty since a is an element of S, and S is bounded above by b. Hence, by the completeness property of the real numbers, the supremum c = sup S exists. That is, c is the lowest number that is greater than or equal to every member of S. We claim that f(c) = u.
  • Suppose first that f(c) > u, then f(c) − u > 0. Since f is continuous, there is a δ > 0 such that | f(x) − f(c) | < ε whenever | x − c | < δ. Pick ε = f(c) − u, then | f(x) − f(c) | < f(c) − u. But then, f(x) > f(c) − (f(c) − u) = u whenever | x − c | < δ (that is, f(x) > u for x in (c − δ, c + δ)). This requires that c − δ be an upper bound for S (since no point in the interval (c − δ, c], for which f > u, can be contained in S, and c was defined as the least upper bound for S), an upper bound less than c. The contradiction negates this paragraph's opening assumption.
  • Suppose instead that f(c) < u. Again, by continuity, there is a δ > 0 such that | f(x) − f(c) | < u − f(c) whenever | x − c | < δ. Then f(x) < f(c) + (u − f(c)) = u for x in (c − δ, c + δ). This requires that c + δ/2, contained in (c − δ, c + δ) for which f < u, must itself be contained in S, though it exceeds its least upper bound, c. The contradiction negates this paragraph's opening assumption, as well.
We deduce that f(c) = u as stated.Toolnut (talk) 20:59, 16 October 2011 (UTC)[reply]

The most flagrant error is found in the step "Pick ε = f(c) − u": "ε", in the ε-δ definition of continuity, is any positive number, however small; if we start out with the assumption that f(c)≠u, we may not also assume that f(c) − u can be made as small as we like.
Another one is really an exercise in futility: proving that f(c) > u is contradictory: since c=sup(S)∈S, it immediately follows from the definition of S that f(c)≤u.Toolnut (talk) 21:24, 15 October 2011 (UTC)[reply]
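
(A side note for anyone trying to follow the construction in the proof quoted above: here is a small numerical sketch of the set S and its supremum. The function f(x) = x³ − x on [1, 2] and the value u = 0.25 are arbitrary choices of mine, not anything from the article.)

    # Numerical sketch of the construction in the quoted proof:
    # c = sup S, where S = {x in [a, b] : f(x) <= u}.
    # f, a, b, u are arbitrary example choices with f continuous and f(a) <= u <= f(b).
    def f(x):
        return x**3 - x

    a, b, u = 1.0, 2.0, 0.25   # f(1) = 0 and f(2) = 6, so u lies between them

    # Approximate sup S by taking the largest point of a fine grid that lies in S.
    n = 1_000_000
    c = max(a + (b - a) * k / n for k in range(n + 1)
            if f(a + (b - a) * k / n) <= u)

    print(c, f(c))   # f(c) comes out (approximately) equal to u

This only illustrates the statement, of course; the proof itself needs the completeness property to know that the supremum exists at all.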

c is not necessarily a member of S. Consider the (discontinuous) function f(x) = 0 if x < 0; 1 if x ≥ 0, and take u = 1/2. Then S is the set of negative numbers, and c, which is sup(S), is 0, but 0 is not a member of S, and indeed f(c) > u. To conclude that c is a member of S and thus f(c) ≤ u, you need to use the property of continuity, which is what the correct proof does. As you seem not to have grasped this point, and your rewritten "proof" assumes as given that c is a member of S (or D in your notation), I have reverted it. Gandalf61 (talk) 13:48, 16 October 2011 (UTC)[reply]
You're right about my misunderstanding of supremum: it occurred to me to look deeper into it, overnight. I should have taken care to notice what an earlier post here had argued about: let me rethink this and get back to you.Toolnut (talk) 21:09, 16 October 2011 (UTC)[reply]
I needed to see a case to justify the "=" sign in the range f(x)≤u for which the domain is the set which has c for supremum, without c actually belonging to it, and I have found one. f(x) can take on the value u in (a,c) (not at c), or have a maximum in (a,c) which equals u, and f(c-) can then be <u. But then, the intermediate value will have already been reached, so why include the "=" sign in the definition of the range corresponding to the domain S?Toolnut (talk) 21:49, 16 October 2011 (UTC)[reply]
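
(To put Gandalf61's counterexample above in one line, taking [a, b] = [−1, 1] for concreteness:)

    f(x) = \begin{cases} 0, & x < 0 \\ 1, & x \ge 0 \end{cases}, \qquad u = \tfrac12, \qquad
    S = \{x \in [-1, 1] : f(x) \le \tfrac12\} = [-1, 0), \qquad
    c = \sup S = 0 \notin S, \qquad f(c) = 1 > u.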

I have considerably revised it before reposting, taking Gandalf61's concerns into account. Does anyone have any rebuttal to my first observation of the original treatment, the "flagrant error"?Toolnut (talk) 02:34, 17 October 2011 (UTC)[reply]

I have posted a note at Wikipedia talk:WikiProject Mathematics to see if we can get a wider review of your rewritten proof. Gandalf61 (talk) 09:50, 17 October 2011 (UTC)[reply]
I have a simple rebuttal. It's true for any positive ε, so it is true for this ε, although the sentence should be clarified to indicate that we choose the δ corresponding to this ε. This is called instantiation of a universal quantifier, and is a valid logical inference and standard technique in epsilon-delta proofs. Sławomir Biały (talk) 10:47, 17 October 2011 (UTC)[reply]
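
(Spelling out Sławomir Biały's point in symbols, for anyone reading later — this is only a restatement of his comment, with ε₀ denoting the particular value being instantiated:)

    \text{Continuity at } c:\quad \forall \varepsilon > 0\ \exists \delta > 0\ \forall x\ \bigl(|x - c| < \delta \Rightarrow |f(x) - f(c)| < \varepsilon\bigr).
    \text{Instantiate with } \varepsilon_0 = f(c) - u > 0:\quad \exists \delta_0 > 0\ \forall x\ \bigl(|x - c| < \delta_0 \Rightarrow |f(x) - f(c)| < f(c) - u\bigr),
    \text{hence } f(x) > u \text{ on } (c - \delta_0,\ c + \delta_0).

The quantifier over ε is universal, so applying it to the single value ε₀ is a valid inference; nothing is being "made small".
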
Now it's beginning to make perfect sense: you put it very well. I'm sorry it took me so long to see that. Thanks.Toolnut (talk) 18:33, 18 October 2011 (UTC)[reply]
I take that back. The reason why this proof appears to work is that, just as for ε, however small the difference |f(c) - u| is made, this would still be a violation of the intuitive fact that this difference should be zero; hence the contradiction. I, on the other hand, was able to prove this theorem w/out making the assumption that this difference and ε should be the same. All I had to do was visualize it geometrically first, then put it in δ-ε terms.Toolnut (talk) 21:56, 18 October 2011 (UTC)[reply]
If you're uncomfortable setting ε to a particular value, how do you show, using the definition of a limit, that lim_{x→0} x/|x| does not exist? Thenub314 (talk) 22:58, 18 October 2011 (UTC)[reply]
Ok, I was about to revise my last posting when I found an edit conflict. So I'll just paste my latest epiphany, here.
The reason why this proof appears to work is that, just as for ε, however small the difference Δ=|f(c) - u| is made, this would still be a violation of the intuitive fact that this difference should be zero; hence the desired contradiction. I, on the other hand, in my proof have avoided making the assumption that Δ=ε, simply by noting that, however small we choose to make Δ, we can find an ε<Δ. This boils down to a simple solution to the current proof: instead of saying "pick an ε=Δ," we say "pick an ε<Δ," and problem solved! Our two proofs would be nearly identical, except for (a) the range of the function (f(x)≤u vs. f(x)<u) corresponding to the domain S, (b) a confirmation that Δ=0 produces a valid solution, and (c) a statement about the domain of other possible solutions of f(x)=u.Toolnut (talk) 23:14, 18 October 2011 (UTC)[reply]
To answer your question, I would evaluate the left- and right-handed limits separately and show that the two are not equal. Though I'm puzzled about what that has to do with this.Toolnut (talk) 23:24, 18 October 2011 (UTC)[reply]
In order for your proof to work, you first need to prove that the existence of the limit is equivalent to the existence and equality of the left- and right-hand limits. Without using this theorem, how would you prove it directly from the definition? (The reason I ask this here is that the usual correct answer does the same thing as this proof.) Thenub314 (talk) 00:15, 19 October 2011 (UTC)[reply]
There's a jump discontinuity at x=0, from -1 to +1. Therefore, for any δ>0, the |f(δ)-f(-δ)|=2=ε, a constant, which cannot be made vanishingly small, as required by the limit. Therefore, the limit does not exist. Is that it? So what are we supposed to conclude from that, relevant to the above? You tell me.Toolnut (talk) 01:01, 19 October 2011 (UTC)[reply]
There is still a flaw in your logic. I would suggest trying to write a formal proof, using only valid logical inferences. Such a proof would begin by writing the definition of a limit. It would probably not involve the words "vanishingly small", since these do not appear in the definition of limit. You might find the triangle inequality helpful. Sławomir Biały (talk) 01:14, 19 October 2011 (UTC)[reply]
OK. For the limit to exist, we must have: for any ε>0 there should exist a δ>0 such that |f(δ)-f(-δ)|<ε. This is not true for ε<2, as shown earlier. Are we playing a game, or will you be making a point soon?Toolnut (talk) 01:31, 19 October 2011 (UTC)[reply]

Why don't you write what you are trying to prove? Write the negation of the definition of a limit. That is, write this:

∀L ∃ϵ>0 ∀δ>0 ∃x ( 0<|x|<δ ⋀ |f(x) − L| ≥ ϵ )

Convince me that this holds for the function f(x) = x/|x|. Note that this is an existential statement: you prove it by exhibiting one example of ε. Sławomir Biały (talk) 02:09, 19 October 2011 (UTC)[reply]

This is the first time I've ever had to negate a "for all...exists" implication logic statement: you're probably better at this than I am. So I had to think about it some. It appears to me that we need to assume that the ε-δ map is one-to-one, in order to switch a "for all" into "there exists", and vice versa. All right then,
¬( ∃L ∀ϵ>0 ∃δ>0 ∀x (0<|x|<δ ⇒ 0<|f(x)-L|<ϵ) )
∀L ∃ϵ>0 ∀δ>0 ∃x ( 0<|x|<δ ⋀ |f(x)-L|≥ϵ )
∀L ∃ϵ>0 ∀δ>0 ∃x ( 0<|x|<δ ⋀ (| x/|x| - L | ≥ ϵ) )
∴ ϵ ≤ | x/|x| − L | ≤ | x/|x| | + |L| = 1 + |L|, ∀L. ∴ 0 < ϵ ≤ a, where a ≥ 1. ∴ ϵ exists ∀L.
I think I get it: if we can show one ε case that contradicts, that is all we need to disprove a "for all ε...exists" statement. Thanks for the lesson: I'm the idiot now.Toolnut (talk) 06:03, 19 October 2011 (UTC)[reply]
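
(For later readers, the existential statement can be discharged with the single witness ε = 1; this is just the standard argument written out:)

    \text{Take } \varepsilon = 1. \text{ For any } L \text{ and any } \delta > 0, \text{ choose } x = \tfrac{\delta}{2} \text{ if } L \le 0 \text{ and } x = -\tfrac{\delta}{2} \text{ if } L > 0.
    \text{Then } 0 < |x| < \delta \text{ and } \bigl|\tfrac{x}{|x|} - L\bigr| = |\pm 1 - L| \ge 1 = \varepsilon,
    \text{so no } L \text{ can serve as } \lim_{x \to 0} \tfrac{x}{|x|}.
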
(Edit conflict) @Toolnut, I like the second version better; the problem is with the "|f(δ)-f(-δ)|<ε". Applying this definition to any even function will be problematic. Though starting with L=0 might clarify things a bit, which you sort of did above. The real point I wanted to make is the point that Sławomir Biały has made: it really comes down to choosing an ε. I know you and he got off on the wrong foot; if it helps at all, when he and I first interacted on wiki he corrected me... so you can count yourself in the company of a professional mathematician, and he knows analysis really quite well. Thenub314 (talk) 06:15, 19 October 2011 (UTC)[reply]
I have bigger concerns with the new proof. First, it's very hard to read. On Wikipedia, we prefer complete English sentences to symbolic expressions. Brevity, clarity, and efficiency are also favored. Second, it appears to be original research: one shouldn't have to think overnight about the correctness of a contribution like this. That said, the original proof also lacked sources. I can provide one if necessary. Sławomir Biały (talk) 10:47, 17 October 2011 (UTC)[reply]
I'd also prefer to go with a standard textbook proof, as this provides better assessment and trust by non-mathematicians. It also might help to avoid potential arguments on details, correctness or appropriateness of a "new" proof, and it addresses the sourcing concerns as well.--Kmhkmh (talk) 14:21, 17 October 2011 (UTC)[reply]
My feelings are the same as those appearing in Kmhkmh's and Sławomir's comments. Rschwieb (talk) 14:43, 17 October 2011 (UTC)[reply]
I knew you would have concerns: my only concern is that it is convincing, regardless of its source, based on the founding rules of the art, and the original treatment left something to be desired, including readability. Again, I don't see how you can assume u-f(c)=ε, when you are trying to disprove that the difference between u and f(c) may be some value other than zero.Toolnut (talk) 16:28, 17 October 2011 (UTC)[reply]
It's a proof by contradiction. Let ε = f(c) − u. If ε > 0, then by the definition of continuity, there exists a δ > 0 such that |f(x) − f(c)| < ε whenever |x − c| < δ. However, this implies that c − δ is also an upper bound of the set in question. This contradicts c being the least upper bound. Sławomir Biały (talk) 18:24, 17 October 2011 (UTC)[reply]
Well to be honest I have never been a fan of having calculus proofs in wikipedia pages (generally speaking). That being said, I would like to echo the comments of others here and say that the original proof was far more clear. Also, I would like to comment that all facts (including proofs) on wikipedia are supposed to be verifiable and we should point to the appropriate sources. It is simply outside the scope of wikipedia to provide new proofs of known results and as far as I can tell this proof is a product of our collective work. Thenub314 (talk) 21:22, 17 October 2011 (UTC)[reply]
That's just it: there is no verifiable, authoritative source cited for the proof (now restored by RDBury), and no one has even tried to convince me that the assumption that u-f(c)=ε does not conflict with the fact that ε needs to vanish independently of u-f(c), in order to show that the latter should be zero, by contradiction. The idea, as I see it, behind the δ-ε proof of something as intuitively obvious as the IVT is more to prove the soundness of the δ-ε reasoning and strengthen it as a tool to tackle harder problems.Toolnut (talk) 02:26, 18 October 2011 (UTC)[reply]
Fine, let me try. Forget about anything actually vanishing; it is a convenient mental picture but not precisely what is going on mathematically. To say that lim_{x→c} f(x) = f(c) means by definition that an entire family of inequalities holds, one inequality for each positive real number ε. Specifically, given a particular positive real number ε, we know we can find a positive real number δ(ε) (written as a function to emphasize the dependence on epsilon) so that: if |x − c| < δ(ε) then |f(x) − f(c)| < ε.
So if I want to show that it is not possible that f(c) > u (recall in the setting of this theorem we know f(a) < u < f(b)), I consider one particular inequality above. Namely "if |x − c| < δ(f(c) − u) then |f(x) − f(c)| < f(c) − u", and as discussed in the proof then f(x) > u whenever |x − c| < δ(f(c) − u), and c − δ(f(c) − u) is an upper bound smaller than our least upper bound, contradiction. Notice I am not saying all the other inequalities do not hold. I am not saying that we first set ε to one thing and later to another. The idea is to simply look at the consequences of one particular inequality (perhaps implication would be more correct to say) out of the infinite family we know to hold true. Thenub314 (talk) 04:27, 18 October 2011 (UTC)[reply]
To get back to references for the proof, it's very standard and I found one essentially the same in the second real analysis book I looked in. I'll add the ref, there are probably better ones that could be used but I happen to own this one.--RDBury (talk) 05:46, 18 October 2011 (UTC)[reply]
I also replied to the substance of your objection, Toolnut. Twice in fact. But I guess it was too much for me to expect that you would understand, and I shouldn't have used big words like "instantiation" and "quantifier". Thenub seems to bring it down to the correct level for you. Sławomir Biały (talk) 07:56, 18 October 2011 (UTC)[reply]
Well having just come here actually I think RDBury's sticking in a citation might be the best level :) Dmcq (talk) 10:21, 19 October 2011 (UTC)[reply]

Calculus?


The IVT for integration is certainly a result of calculus, but the original IVT involves neither differentiation nor integration but only continuity, so shouldn't it be referred to in the first sentence as a result of analysis? --131.111.249.207 17:31, 2 Jun 2005 (UTC)


I agree, and I changed this

Grokmoo 13:29, 21 October 2005 (UTC)[reply]

Hmmm...if it only involves continuity, shouldn't it REALLY be called a result of TOPOLOGY???

Closed interval


Uncommon though the formulation may be, I think the theorem can say f(x) = c for x in [a, b], not just (a, b). [article] says so too. -- Taku 01:07, 13 October 2005 (UTC)[reply]

Note that the statement with (a, b) is actually a STRONGER statement. In any case, there's no point in including a and b as endpoints, since it's impossible for them to work (plug them in and try).


It can if you like, but this case is trivial, so I wouldn't worry about it. The standard statement is for (a, b).

Proof that f(x) = x for some x


The proof right below the intermediate value theorem that there exists some x such that f(x) = x is wrong. This statement is in fact not generally true, even for f(x) continuous. Additional requirements are needed; for example, the statement does hold if the domain of f is all the reals and the function is bounded (above and below).
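
(A sketch, for the record, of why boundedness suffices — assuming |f| ≤ M on all of ℝ, apply the IVT to g(x) = f(x) − x:)

    g(M+1) = f(M+1) - (M+1) \le M - (M+1) < 0, \qquad
    g(-M-1) = f(-M-1) + (M+1) \ge -M + (M+1) > 0,
    \text{so } g(x_0) = 0 \text{ for some } x_0 \in (-M-1,\ M+1), \text{ i.e. } f(x_0) = x_0.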


I reorganized this article and fixed the proof.

Grokmoo 13:29, 21 October 2005 (UTC)[reply]


Would it be useful to add an intuitively understandable example to this page? Something like: If person A is climbing a mountain from 6 to 7 am, and person B is coming down the mountain during the same time interval, then there has to be some time t in that time interval when they are both at exactly the same altitude?
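
(That example does reduce directly to the theorem; a sketch of how, writing A(t) and B(t) for the climbers' altitudes, both assumed continuous:)

    A(6) = B(7) = \text{base}, \quad A(7) = B(6) = \text{summit}, \quad h(t) = A(t) - B(t)
    \ \Rightarrow\ h(6) < 0 < h(7)
    \ \Rightarrow\ \exists\, t_0 \in (6, 7):\ h(t_0) = 0, \text{ i.e. } A(t_0) = B(t_0).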


The example of the "without lifting the pencil" is quite intuitive and very nice.


I agree. However, opening the page and being faced with a set of formulae isn't very nice. It would be better if someone put the following paragraph at the beginning, or in the introduction:

""This captures an intuitive property of continuous functions: given f continuous on [1, 2], if f (1) = 3 and f (2) = 5 then f must be equal to 4 somewhere between 1 and 2. It represents the idea that the graph of a continuous function can be drawn without lifting your pencil from the paper.""

Khedron 00:41, 23 December 2006 (UTC)[reply]


Too bad that this idea of "not lifting the pen from the paper" is quite wrong. For example, consider the Vitali Cantor Function. It is a continuous (even Hölder continuous) function, but it is quite impossible to draw (or even to think of). I think that this kind of "generally true" statement should be avoided when writing mathematics. Francesco di Plinio (francesco.diplinio@libero.it) —Preceding unsigned comment added by 151.57.122.59 (talk) 16:53, 27 September 2007 (UTC)[reply]

left and right neighbourhood


"Suppose first that f (c) > u ... whenever | x - c | < δ" I think it should be c - δ < x < c (left neighbourhood) and c < x < c + δ (right neighbourhood) for f(c) < u so we can omit absolut function. In first whenever c - δ < x < c --> f(x) - f(c) < 0 so |f(x) - f(c)| = -f(x) + f(c) —The preceding unsigned comment was added by 149.156.124.14 (talk) 15:48, 23 January 2007 (UTC).[reply]

Intermediate value theorem of integration


The explanation reads a little funny in this section. It states that one can multiply (b − a) by "some function value f(c)" and get the area under the curve. However, the c refers back to the c found using the mean value theorem, not just any c between a and b chosen on a whim. Although the section refers to the mean value theorem, it does not explicitly state the connection, and I think it hardly explains the derivation fully enough for a newer calculus student to comprehend the equation in its totality. Super-c-sharp (talk) 20:21, 9 September 2008 (UTC)[reply]

This has now been done. Xantharius (talk) 22:50, 9 September 2008 (UTC)[reply]

It reads a little better, but I still think that it is not explicit enough to give a thorough understanding as to how the two are related. Super-c-sharp (talk) 19:00, 10 September 2008 (UTC)[reply]
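
(For later readers, the connection this thread is asking about is the standard mean value theorem for integrals, in which the IVT supplies the point c — this is the usual textbook statement, as I understand the section's intent:)

    m \le \frac{1}{b-a}\int_a^b f(x)\,dx \le M \quad\text{for } m = \min_{[a,b]} f,\ M = \max_{[a,b]} f\ \text{(f continuous)},
    \text{so by the IVT there is } c \in [a,b] \text{ with } f(c) = \frac{1}{b-a}\int_a^b f(x)\,dx,
    \text{i.e. } \int_a^b f(x)\,dx = f(c)\,(b-a).

So the c is not chosen on a whim; it is whatever point the IVT produces for the average value of f.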

History


Funny fact: in paragraph 35 of Euler's work Introduction to Analysis of the Infinite (I have John D. Blanton's translation in my hand): If the polynomial function Z takes the value A when z = a and takes the value B when z = b, then there is a value of z between a and b for which the function Z takes any value between A and B.

So there already was a notion of the intermediate value theorem before Bolzano: Euler lived 1707–1783. —Preceding unsigned comment added by 132.229.215.173 (talk) 14:28, 27 January 2009 (UTC)[reply]

Analytic proof: an appeal for help


...from those who have a much better grasp of the foundations of analysis than I. I have just put in some material about how Bolzano's analytic proof of the IVT introduced the idea of analytic proof that went on, through Bolzano's writings on logic, to become of fundamental importance in proof theory. I have, no doubt, made several errors, and what I have not commented on is why Bolzano thought the idea of analytic proof to be so important.

I have the idea that Bolzano's concern was that analysis is more fundamental than geometry, and so should not depend upon it, but this is too much like guesswork for me to write up. I'd be grateful if anyone can help me out with this! — Charles Stewart (talk) 14:15, 3 March 2009 (UTC)[reply]

Suggest adding another non-standard IVT reference


In addition to the alternative proof link to non-standard calculus, possibly add a reference to the Constructivist analysis article. There is a long section on the IVT on that page. The Constructivist analysis IVT section forward-references this page. A link to that discussion might be useful in this article.

However, I believe that the discussion of the 'classical analysis IVT' on that page is not in close agreement with this page.

--138.162.0.45 (talk) 17:36, 21 May 2009 (UTC)[reply]


Capital letter for the theorem name


Just wanted to know why some people apply the capital letter for a theorem name, while some don't. Is there any standard which applies for this case ? —Preceding unsigned comment added by 129.240.64.201 (talk) 14:51, 17 November 2009 (UTC)[reply]

Franklin.vp's edit


Franklin.vp reverted a change I made on the page concerning Bolzano's proof of the IVT. The current text states that Bolzano's proof was unrigorous, and provides as reference the famous article by Grabiner. However, Grabiner's position is that the proof is rigorous. We should either make a note of this on the page, or provide alternative sources for any claim that Bolzano's and Cauchy's proofs were unrigorous. Tkuvho (talk) 15:36, 31 January 2010 (UTC)[reply]

  • Well, the current version of the article claims it to be neither rigorous nor unrigorous.  franklin  15:39, 31 January 2010 (UTC)[reply]
  • Beyond rigorous or not rigorous, the issue with Bolzano's and Cauchy's proofs is that Bolzano was ignored for a long time, while chronologically he seems to have been the first to prove it. Cauchy, on the other hand, gave a proof inside the bigger work of formalization of analysis in his epsilon-delta language. Rigorous or not rigorous is a very questionable matter and it depends on the framework. All the discussion about the rigorousness of Bolzano's proof comes from the intention to vindicate his work, for his being ignored and for Cauchy getting all the merit. I think that for doing that it is enough to say that he got the proof, and that's it.  franklin  16:04, 31 January 2010 (UTC)[reply]
Your description of my motivation is spurious. Tkuvho (talk) 17:34, 31 January 2010 (UTC)[reply]
  • It is probably because I am not describing your motivation. How can I, if I don't even know you? What I am saying is that the question of qualifying the proof as rigorous is only important for deciding whether or not to consider Bolzano the first to give a proof, which is what really matters (especially because poor Bolzano worked lonely and forgotten and probably was not the SOB that Cauchy was). Cauchy is also mentioned for giving not only a proof but also a formalization of the whole framework in which the theorem is stated: definition of limits, functions, ...  franklin  17:49, 31 January 2010 (UTC)[reply]
My original edit was motivated by the fact that the page attributes to Grabiner a view she did not state, namely that Bolzano's proof was unrigorous. This error has still not been corrected. As far as the rigor of Cauchy's framework, Grattan-Guinness has shown that Bolzano's framework was, in fact, superior (albeit less influential). Tkuvho (talk) 11:07, 1 February 2010 (UTC)[reply]
  • I think there is no place in the article right now saying that Grabiner stated that Bolzano's proof is unrigorous. Where is it that you find such a statement? Grattan-Guinness also said that Cauchy plagiarized Bolzano, a statement that is now disregarded. I don't have Grattan-Guinness' work; can you bring/expose the reasons why Grattan-Guinness says that? I doubt very much that a comparison of the two proofs according to rigorousness can be established. I was reading this work by Grabiner and I don't find it taking the position that Bolzano's is more rigorous. Actually it says that Cauchy's proof was "crystal clear" and Bolzano's was involved. For example, Bolzano first proves that if f(a)<g(a) and f(b)>g(b) then f(x)=g(x) for some x, after proving that bounded monotonic sequences have a limit, to then derive the IVT as a consequence. Cauchy directly approaches f(a)f(b)<0 implies f(x)=0 for some x, using a nested interval argument. The one given in the references of the WP article doesn't have such a perspective either (or I didn't see it). Again, comparison about rigorousness seems to be nonsense; the important thing is that both proofs work, that Bolzano did it first, and that Cauchy's is clear and simple and is done within a bigger formalization of the concepts of analysis.  franklin  13:08, 1 February 2010 (UTC)[reply]
I am sorry, I misspoke. I meant to say that Freudenthal said that Bolzano's framework was superior to Cauchy. The consensus among scholars is that Bolzano's proof is no less rigorous than Cauchy's. The current version of the article implies, contrary to what you have clearly outlined above, that Bolzano did not have a proof. He certainly did, as everyone knows, including yourself, to judge by your comment. Tkuvho (talk) 13:41, 1 February 2010 (UTC)[reply]
  • Oh my. Crazy me. I noticed just now that it says "stated" instead of "proved". Actually the theorem was stated much earlier and used. That section also needs to be populated with more info. The story is richer than just a line.  franklin  14:05, 1 February 2010 (UTC)[reply]
I see that there was a bit of a misunderstanding. Thanks for your cooperation! Tkuvho (talk) 14:19, 1 February 2010 (UTC)[reply]

The graph of the function has 3 places where f(x) = u. The choice of c in the graph is the middle one. There is a region to the right of this where f(x) ≤ u. In other words, the location of c in the graph ought to be the rightmost of the three, since c is the lowest number that is greater than or equal to every member of S. Right? - Brian Minchau —Preceding unsigned comment added by Brianjamesminchau (talkcontribs) 04:52, 2 December 2010 (UTC)[reply]

The last I touched Mathematics was around 30 years ago, while earning my Electrical Engineering degree. So my claim that I can remember something fuzzy here is well placed. Currently I work in education sector and I am responsible for IT service delivery.

My suggestion to the community of the wiki is that we should make this more interesting for our current generation of students. This is all purely mathematical, and while it will interest many, it won't draw in the student who needs that little bit extra to make things clear for him; the pure theory will not be of much interest to a person who cannot see the practical application of these great findings.

How can we make all this interactive and interesting? Let us start by providing some real life working examples, let us show how this theorem is applied in real world. —Preceding unsigned comment added by 60.241.168.218 (talk) 21:05, 6 April 2011 (UTC)[reply]

The main "Real world implication" is just more mathematics.


Perhaps the section "implications of theorem in real world" should be removed, or rewritten to concentrate on the "wobbly table" example. The point about there being, on any great circle and any physical quantity, a pair of antipodal points for which that quantity takes the same value on each point of the pair, is a cute mathematical consequence but while it is a mathematical abstraction of a fact about the real world, I don't see how it has much relevance; it seems like an essentially *mathematical* property of (an abstract description of) the real world, not really the kind of implication that will convince anybody of the theorem's relevance. It's unlikely anyone motivated by anything practical would ever bother to actually *find* this pair of antipodal points, whereas the wobbly table application describes something that someone might actually do for reasons other than illustrating a mathematical fact. I suppose the section title could be changed and the example kept roughly the same...certainly the point about antipodal pairs is an interesting mathematical consequence, so one wouldn't want it removed from the article entirely. MorphismOfDoom (talk) 03:18, 12 September 2012 (UTC)[reply]

I think the formulation of the section is nonsense. It is a purely mathematical theorem; it does not have any implications in the “real world”. It has implications for some physical models that assume the world to be Euclidean space and temperature a continuous function. --22:52, 24 September 2012 (UTC) — Preceding unsigned comment added by Chricho (talkcontribs)
Actually, the proof in this section is incorrect. Here is my refutation: the angle between the two points of equal value is not fixed at 180 degrees along the great circle, which would be a non-physical geometric constraint. The angle is less than or equal to 180 degrees. So this could be restated as: along any great circle, there will be two points within 180 degrees of each other with the same value (of course within measurement errors). And the maximum and minimum values along the great circle are each degenerate cases with zero degrees of separation. So the use of "antipodal" is incorrect -- the two points are *at most* antipodal, which is kind of a gee whiz answer anyway. Maybe you could expand that to the table -- you would have to rotate it *at most* ±180 degrees to find a stable orientation. 96.88.65.241 (talk) 18:05, 11 August 2016 (UTC)[reply]
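
(For reference, the usual reduction behind the antipodal claim: parametrize the great circle by θ and set g(θ) = T(θ) − T(θ + π); then g(0) = −g(π), so g changes sign, and the IVT gives a θ₀ with T(θ₀) = T(θ₀ + π) — a genuinely antipodal pair. Below is a small numerical sketch of that argument; the temperature profile T is an arbitrary continuous function of angle that I made up, not data from anywhere.)

    import math

    def T(theta):
        # arbitrary continuous "temperature" as a function of angle (radians)
        return 10 + 3 * math.sin(theta) + 2 * math.cos(3 * theta)

    def g(theta):
        return T(theta) - T(theta + math.pi)

    # g(0) = -g(pi), so g changes sign on [0, pi]; bisect to locate a zero crossing.
    lo, hi = 0.0, math.pi
    for _ in range(60):
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid

    theta0 = (lo + hi) / 2
    print(T(theta0), T(theta0 + math.pi))   # (approximately) equal temperatures at antipodes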

Graph might be made clearer


It appears that the line y = u in the graph could lead to the false conclusion that the red line is the distance from a to b.

I see that the graph is trying to indicate that there are many points on f(x) which would satisfy the condition of y=u.

Just to make it clear, why not put a red dot "u" on the y axis?

116.55.65.151 (talk) 12:34, 7 January 2013 (UTC)[reply]

Change all formulas to <math></math>


Is there a particular reason for the fact that this article uses plain Wiki-markup for formulas (including inline single variables)? That makes the article hard to read because the formulas don't stand out from the rest of the text, and the Wiki-markup is also harder to understand and modify than LaTeX code. Mario Castelán Castro (talk) 20:37, 7 February 2015 (UTC).[reply]

"rigorous footing" Clarification Request


AVM2019 wrote, "The placement and phrasing of this remark may suggest that the classical proof is somehow "intuitive" and not rigorous, which is not the case.", referring to the remark, after the proof, about the relation to completeness.

My own feeling is that the remark is not confusing, but I'm not a beginner, so maybe we should ask an undergrad.Irchans (talk) 19:39, 18 January 2023 (UTC)[reply]

MacTutor reference


Reference #10 to MacTutor has the wrong title. The page it links to [1] is about Arbogast and is not titled "Intermediate Value Theorem" (MacTutor does not appear to have a page with that title). The page title is being generated automatically, so I am not sure how to fix this. JLeander (talk) 14:05, 6 November 2024 (UTC)[reply]