Is it legitimate to specify a statement as a Theorem if it is proved using numerical methods (since it can't be proved analytically)?


























I have a statement which is "proved" by numerical methods (specifically, using simulations), since it isn't possible to prove it analytically. Is it legitimate to state it as a Theorem if it isn't proved analytically? If not, what should I call it?



P.S. More details regarding the statement.



I have a data-generating process and need to prove that the generated data, as a function of some variable, has certain properties. For example, I have generated data $x^1,\dots,x^n$ (with $n=100{,}000$) and need to show that the function $x(\gamma)$ is positive over the domain. Therefore, I apply an interpolation technique to the generated data in order to construct the function, and then I check the property.
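To make the setting concrete, here is a minimal sketch of the kind of check described above. The real $x(\gamma)$ is unknown, so `x_of_gamma` below is a purely hypothetical stand-in for the actual data-generating process:

```python
import random

# Hypothetical stand-in for the actual data-generating process;
# the real x(gamma) from the question is unknown here.
def x_of_gamma(gamma):
    return gamma * gamma + 0.5  # positive by construction

def looks_positive(f, lo, hi, n):
    """Check f > 0 at n sampled points of [lo, hi].

    This is evidence, not a proof: f could still dip below zero
    between (or beyond) the sampled points.
    """
    random.seed(0)
    samples = [lo + (hi - lo) * random.random() for _ in range(n)]
    return all(f(g) > 0 for g in samples)

print(looks_positive(x_of_gamma, -10.0, 10.0, 100_000))  # True here - but still not a proof
```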






























  • How is it proved by numerical methods? There are computer proofs that use reliable numerical methods based on interval arithmetic. See scicomp.stackexchange.com/a/1028/640
    – lhf
    yesterday

  • Not really, no. You could call it a conjecture, and cite the computation as evidence towards it being true.
    – Theo Bendit
    yesterday

  • @sane This might be something that I'd have to see the specifics to comment on properly, but in general, if you want to call it a theorem, you'll need to have a proof attached. You might have sampled 100 000 points, but why that number? Could your opinion change if you sampled 1 000 000 000 points? Why didn't you sample, say, 10 instead? What is it about this property that makes you absolutely certain that 100 000 points will tell you definitively whether a function has it or not, but sampling 10 won't? A proof should leave no room for doubt, when verified.
    – Theo Bendit
    yesterday

  • @sane It won't change things. It'll provide more evidence, but no (finite) number of points will constitute a proof in itself. You have no way of knowing whether the function ducks below zero at other points, maybe at a value much larger than the values tested, or maybe at one tricky point in between the points you've tested. Most numbers cannot be expressed by notation, let alone computers! That is, most numbers have absolutely no chance of being tested numerically. How do you know the function doesn't become negative there?
    – Theo Bendit
    yesterday

  • @sane How do you know that it cannot be proved?
    – Noah Schweber
    yesterday
















proof-writing numerical-methods






edited yesterday by sane

asked yesterday by sane










5 Answers
































In general, numerical methods don't constitute proofs. If all we have is an unknown blackbox function $f:\mathbb{R} \to \mathbb{R}$ that we know nothing about, and all we can do is compute its value at (finitely) many points, then we simply can't prove that $f$ is positive.



However, in specific cases, we could have arguments based on numerical methods that are valid. Typically, we'd need to make a numerical approximation, and then prove, using non-numerical methods, that our numerical approximation is accurate enough for the theorem to follow. As such, how numerical methods can aid us in proving a statement is very statement-specific.



Take, for example, the following problem: Prove that $f(x) = x^2 + 1 > 0 \;\forall x \in [-1, 1]$.



Invalid proof: We computed $f(x)$ at $10^{10^{10000}}$ random points and used linear interpolation between them. Here's a plot. We can see that $f(x)$ is always positive.



Valid proof 1: We computed $f(x)$ at three points: $f(-1) = 2$, $f(0) = 1$, and $f(1)=2$. Let $g(x)$ be the linear interpolation of the points $(-1, 2)$, $(0, 1)$, and $(1, 2)$; $g$ attains its minimum at $g(0) = 1$. Since $f'' = 2$, we can compute an error bound on our interpolation (see https://ccrma.stanford.edu/~jos/resample/Linear_Interpolation_Error_Bound.html): $|f(x) - g(x)| \leq \frac{2}{8}$. Therefore, we can conclude that $f(x) \geq \frac{3}{4} > 0$.



Valid proof 2: We know that $f(x)$ is convex. We use a numerical method to compute its minimum and find that $\min f(x) \approx 1.0000000075$. We also have a (true, non-numerical) error bound on our approximation: $|1.0000000075 - \min f(x)| < 0.001$. Therefore, $f(x) > 1.0000000075 - 0.001 > 0$.
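Valid proof 1 can be mirrored in a few lines of code. Note that the only genuinely numerical step is evaluating $f$ at the nodes; the error bound $\frac{h^2}{8}\max|f''|$ comes from analysis, not computation. A sketch, with that bound hard-coded from the linked result:

```python
def f(x):
    return x * x + 1.0

# Interpolation nodes from "Valid proof 1".
nodes = [-1.0, 0.0, 1.0]
values = [f(x) for x in nodes]    # [2.0, 1.0, 2.0]

h = 1.0        # node spacing
max_f2 = 2.0   # max |f''| on [-1, 1]; known analytically (f'' = 2), not numerically

# Standard linear-interpolation error bound: |f(x) - g(x)| <= h^2/8 * max|f''|.
err = h * h / 8.0 * max_f2        # 0.25

g_min = min(values)               # piecewise-linear g attains its minimum at a node
lower_bound = g_min - err

assert lower_bound > 0            # hence f(x) > 0 on [-1, 1]
print(lower_bound)                # 0.75
```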



Whether or not other proofs exist has no bearing on the validity of a proof using numerical methods.

























  • Thank you for your answer. I have a question regarding "valid proof 1". You have computed the error bound of the interpolation, but this computation is feasible when you have the functional form of $f(x)$ (in fact, you have assumed it). In my case I don't have the functional form, so how can I replicate Valid proof 1 for my statement?
    – sane
    yesterday

  • "Valid proof 1" is perverse (it belongs in the well known book "Mathematics made difficult" IMO) - if you know enough calculus to find the second derivative and to understand and use the result quoted, you know enough to show that the minimum value is at $x = 0$ without using the "numerical method" at all. Numerical method 2 is not a proof unless you have a non-numerical proof that the error bound of your algorithm is guaranteed to be correct when using approximate arithmetic (e.g. working to a fixed number of decimal places, or using IEEE floating point arithmetic on a computer).
    – alephzero
    yesterday

  • @alephzero The point of the first proof was to explain how one would argue something "numerically" using a simple example. Of course one can show that $x^2+1$ is non-negative in a billion easier ways, but they would be totally irrelevant to the question.
    – Mike Miller
    yesterday

  • @sane A guess. Because that's what it is. An educated guess with a reasonable amount of evidence behind it.
    – user3482749
    20 hours ago

  • @dmtri no, we don't like it because they're not actual proofs. Without the proper regularity arguments we can end up "proving" numerically things that are actually wrong.
    – RcnSc
    7 hours ago

































Well, is it proven or is it not proven? The way you've phrased your question is "if I've proven something with numerical methods, have I proven it?". Well, yes - you just said you had.



Say you want to prove that $f(n)$ is a prime number for $n<10^7$. Well then if you loop through all the numbers less than $10^7$ and check that $f(n)$ is prime for all of them, you have a proof. It isn't somehow less of a proof just because it doesn't involve a bunch of algebraic symbols and words like "therefore" and "by contradiction".
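A concrete sketch of such a finite check (Euler's prime-generating polynomial $n^2+n+41$ is chosen purely for illustration; it is not the $f$ anyone here is actually studying). The finite loop is a genuine proof of the bounded claim, while the unbounded claim fails at the very next value:

```python
def is_prime(m):
    """Trial division; fine for small m."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# Exhaustive check of a *bounded* claim: "n^2 + n + 41 is prime for all n < 40".
# Because the quantifier is finite, the loop itself constitutes a proof.
assert all(is_prime(n * n + n + 41) for n in range(40))

# The unbounded claim "prime for all n" is false, and no finite check
# could have proven it: the very next value already fails.
assert not is_prime(40 * 40 + 40 + 41)   # 1681 = 41 * 41
```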



However, if you want to prove that $f(n)$ is a prime number for all numbers, and you check for $n\leq 10^7$, that simply isn't a proof. There's no sophisticated philosophical reason why it isn't; it just clearly isn't, is it? I mean, $f(10^7+1)$ might not be prime, for all you know.



So what you really should be asking is whether or not you actually have proven your statement.























  • Thank you for your answer. Actually, I pick (finitely many) points, so according to your answer I can't consider it a proof. Then what should I call it?
    – sane
    yesterday

  • @sane It's computations which suggest a result might be true.
    – Jack M
    yesterday

  • @sane You don't call it anything. It's just data.
    – The Great Duck
    12 hours ago

  • @sane: In short, your claim must be as accurate as your proof. If you've run your numerical simulation on a set of numbers (which is obviously finite), then you can only claim that it is proven for that set of numbers. You cannot make any claims about untested numbers. What you should do is amend your claim. Instead of saying "f(n) is a prime", you should say "f(n) is a prime for 0 < n < 10^7" (for example).
    – Flater
    2 hours ago

































And it turns out that, with further analysis of your actual problem, you could have found an exact proof using first-semester calculus. There is no need for speculative numerical explorations.



For your actual problem, to find solutions $x(\gamma,y)>0$ for the equation
$$
(\gamma^{n-1}y)^n+(\gamma^{n-2}y)^{n-1}+\dots+(\gamma y)^2+y=A(\gamma,y)=x^n+x^{n-1}+\dots+x,
$$

it is rather easy to see that there is exactly one positive root $x$ for fixed positive $y$ and $\gamma$.



The right side at $x=0$ is zero and grows monotonically and convexly towards infinity, so there is a positive root of the equation by the intermediate value theorem. Further, there is a lower bound
$$
x>\frac{A}{1+A}\ge\frac{y}{1+y}
$$

for the positive root, as either $x\ge 1$ or, for $0<x<1$,
$$
A(1-x)=x(1-x^n)<x\implies x>\frac{A}{1+A}
$$





In general, there are methods to determine the number of positive roots, like Descartes' rule of signs or the Hurwitz stability criteria. Descartes' rule counts the sign changes in the coefficient sequence. Here $x^n+\dots+x^2+x-A=0$ has exactly one sign change, which proves the existence of exactly one positive root.
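Counting the sign changes used by Descartes' rule is a one-liner. A sketch, with the illustrative values $n=5$ and $A=3$ chosen purely for demonstration:

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence (zeros skipped),
    as used by Descartes' rule of signs."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# Coefficients of x^n + ... + x^2 + x - A (highest degree first), n = 5, A = 3:
coeffs = [1, 1, 1, 1, 1, -3]
print(sign_changes(coeffs))  # 1, so exactly one positive root
```

One sign change means the number of positive roots is exactly one (Descartes' rule gives the count up to an even difference, and it cannot drop below zero from one).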



The polynomial equation $a_nx^n+\dots+a_1x+a_0=0$ has lower root bounds
$$
|x|\ge\frac{|a_0|}{|a_0|+\max_{k=1..n}|a_k|}
\quad\text{or}\quad
|x|\ge\frac{|a_0|}{\max(|a_0|,\,|a_1|+\dots+|a_n|)}
$$

which are lower bounds for the single positive root.





























  • Sorry, I had a typo; this isn't the problem that I consider.
    – sane
    yesterday

  • @sane : Then please add the correct problem statement somewhere, preferably to the body of the question text. It is better to add relevant, essential information there than in comments. There are many elementary and not so elementary results on the location and bounds of real roots, starting with Descartes' rule.
    – LutzL
    yesterday

  • We have the following polynomial: $\frac{a}{1+y}+\frac{a}{(1+\gamma y)^2}+...+\frac{a+1}{(1+\gamma^{n-1}y)^n}=\frac{a}{1+x}+\frac{a}{(1+x)^2}+...+\frac{a+1}{(1+x)^n}$, and we need to construct the function $x(a,y,n,\gamma)$. On the other hand, we have another polynomial: $\frac{a+1/n}{1+y}+\frac{a(1-1/n)+1/n}{(1+\gamma y)^2}+...+\frac{a(1-(n-1)/n)+1/n}{(1+\gamma^{n-1}y)^n}=\frac{a+1/n}{1+z}+\frac{a(1-1/n)+1/n}{(1+z)^2}+...$, and we construct the function $z(a,y,n,\gamma)$. Finally, we need to construct $h(a,y,n,\gamma)=x(\cdot)-z(\cdot)$ and check some properties.
    – sane
    22 hours ago

  • @sane: This should be in your Question, not hidden in comments. In your first equation, where are the transitions between $a$ and $a+1$ in the numerators on each side? In your second equation, does the sum on the right terminate?
    – Eric Towers
    22 hours ago

  • @sane : It belongs in your Question, because your question is an instance of an XY problem.
    – Eric Towers
    21 hours ago

































I'm going to take your first paragraph, and your saying "it cannot be proven analytically", to mean "it was proven that a proof or disproof does not exist". Well, I hate to break it to you then: you're in one of two scenarios.




  1. The statement you are working with can be true or false without contradiction (think the 5th postulate of Euclidean Geometry of which many people attempted to prove).


  2. An actual truth value exists for this statement, but the answer is not Turing-Computable. If you don't know what it means, basically it means that given nothing but finite pencil and paper and an arbitrary finite amount of time you will never get an answer, ever. You would require either infinite paper being manipulated at once or infinite steps somehow done in a finite period of time.



What this means is that the answer is quite simple here. It doesn't matter what your method was. It doesn't matter if it was a sophisticated AI-powered real-analysis proof-solving system capable of writing a proof for pretty much anything you ask of it. It doesn't matter, because you've already proven that a proof doesn't exist. Therefore anything you obtain that claims to be a proof cannot be a proof: that would be a contradiction, and you obviously can't have that.



Your only exception would be if you have a computer that is computationally stronger than a Turing machine. No, that does not mean more processing power; I mean it can solve the Halting Problem. However, if you have a computer that can do that, then you probably shouldn't post about it on here. The current count of hypercomputers (as such machines have been called) built by the entire human race is... zero. If you did build one, then refer to the other answers for more on the validity of various methods.



So which is it? Is it impossible to prove the statement or was that reasoning flawed and the numerical method proof (possibly) valid depending on the exact methods used? It cannot be both. This would imply that mathematics as a whole is broken, and if you were to discover or prove that, the entire notion of truth would go completely out the door. It would also likely result in the death of this site. Please don't do that sir. (Actually more likely to happen is that the axioms currently being used for whatever you are studying will get a correction and all will be well.)

















































    As other answers have said, a statement is either proven or unproven. If it is proven, we call it a theorem, regardless of the methods used. I'm answering separately to call attention to some of the pitfalls of trying to "prove" things with numerical methods.



    The vast majority of real-world numerical computing (outside of the financial sector) uses IEEE 754 floating point to represent numbers which may not be integers. In short, the floating point numbers are a finite (but large) subset of the dyadic rationals, which become sparser (at a roughly logarithmic rate) as we move away from zero. There are also a few "special" values which are used to represent the results of invalid operations or arithmetic overflow.



    Why does this matter? It's important to understand the limitations of floating point arithmetic. In particular:




    • It does not support irrational numbers whatsoever. The Dirichlet function and its cousins cannot be usefully analyzed at all.

    • Numbers which are farther from zero have less precision, so functions with essential singularities will be difficult to work with in the neighborhoods of their singularities.

    • Limits are potentially an issue. Normally, a limit only exists if it exists under every possible choice of epsilon, but in floating point you can only test finitely many epsilons, all of which are dyadic rationals. So if you take a limit of a function which behaves "specially" over the dyadic rationals (e.g. certain combinations of trigonometric functions), you may derive an incorrect value or incorrectly conclude whether the limit exists at all.

    • The floating point numbers are non-associative under every operation that matters, because of intermediate rounding. No useful algebraic manipulations will preserve equality. So equality is basically useless and you have to approximate it by looking at whether numbers are "close enough" instead. For some applications, this is a problem (e.g. proving that a value is never exactly zero is impossible if it comes arbitrarily close to zero).
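The non-associativity point is easy to demonstrate with standard IEEE 754 doubles:

```python
import math

# Rounding makes floating-point addition non-associative:
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)   # False: a is 0.6000000000000001, b is 0.6

# Exact equality is therefore fragile; compare with a tolerance instead:
print(math.isclose(a, b, rel_tol=1e-9))  # True
```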



























    • And even for integers, a real world computer is probably really working with the integers modulo some power of 2. In many cases, they silently overflow so considerable care is needed even if you expect that a solution exists within the available range.
      – badjohn
      2 hours ago













    5 Answers
    5






    active

    oldest

    votes








    5 Answers
    5






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    46














    In general, numerical methods don't constitute proofs. If all we have is an unknown blackbox function $f:mathbb{R} to mathbb{R}$ that we know nothing about, and all we can do is compute its value at (finitely) many points, then we simply can't prove that $f$ is positive.



    However, in specific cases, we could have arguments based on numerical methods that are valid. Typically, we'd need to make a numerical approximation, and then prove, using non-numerical methods, that our numerical approximation is accurate enough for the theorem to follow. As such, how numerical methods can aid us in proving a statement is very statement-specific.



    Take, for example the following problem: Prove that $f(x) = x^2 + 1 > 0 forall x in [-1, 1]$.



    46














    In general, numerical methods don't constitute proofs. If all we have is an unknown black-box function $f:\mathbb{R} \to \mathbb{R}$ that we know nothing about, and all we can do is compute its value at (finitely) many points, then we simply can't prove that $f$ is positive.



    However, in specific cases, we could have arguments based on numerical methods that are valid. Typically, we'd need to make a numerical approximation, and then prove, using non-numerical methods, that our numerical approximation is accurate enough for the theorem to follow. As such, how numerical methods can aid us in proving a statement is very statement-specific.



    Take, for example, the following problem: prove that $f(x) = x^2 + 1 > 0$ for all $x \in [-1, 1]$.



    Invalid proof: We computed $f(x)$ at $10^{10^{10000}}$ random points and used linear interpolation between them. Here's a plot. We can see that $f(x)$ is always positive.



    Valid proof 1: We computed $f(x)$ at three points: $f(-1) = 2$, $f(0) = 1$, and $f(1) = 2$. Let $g(x)$ be the linear interpolation of the points $(-1, 2)$, $(0, 1)$, and $(1, 2)$. $g$ attains its minimum at $g(0) = 1$. Since $f'' = 2$, we can compute an error bound on our interpolation (see https://ccrma.stanford.edu/~jos/resample/Linear_Interpolation_Error_Bound.html): $|f(x) - g(x)| \leq \frac{2}{8}$. Therefore, we can conclude that $f(x) \geq \frac{3}{4} > 0$.
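    To make the interpolation argument concrete, here is a minimal sketch in Python. The only analytic input is the bound $|f''| \le 2$ from the example; the function `f` and the three-node grid are just the example's setup, not a general recipe.

    ```python
    # Sketch of "Valid proof 1" for f(x) = x^2 + 1 on [-1, 1].

    def f(x):
        return x * x + 1

    # Sample at the three interpolation nodes.
    nodes = [-1.0, 0.0, 1.0]
    values = [f(x) for x in nodes]        # [2.0, 1.0, 2.0]

    # The piecewise-linear interpolant g attains its minimum at a node.
    g_min = min(values)                   # 1.0

    # Linear-interpolation error bound: |f - g| <= M2 * h^2 / 8,
    # with M2 = max|f''| = 2 (analytic input) and node spacing h = 1.
    M2, h = 2.0, 1.0
    err = M2 * h * h / 8                  # 0.25

    # Certified lower bound for f on [-1, 1]:
    lower_bound = g_min - err             # 0.75
    assert lower_bound > 0
    ```

    The point of the sketch is that every numeric step is backed by the non-numerical bound on $f''$; without that bound, `g_min - err` would certify nothing.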



    Valid proof 2: We know that $f(x)$ is convex. We use a numerical method to compute its minimum and find that $\min f(x) \approx 1.0000000075$. We also have a (true, non-numerical) error bound on our approximation: $|1.0000000075 - \min f(x)| < 0.001$. Therefore, $f(x) > 1.0000000075 - 0.001 > 0$.
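    A hedged sketch of how such a certified minimization could look. Ternary search is one standard method for convex functions, not necessarily the one the answer has in mind, and the $0.001$ error bound is taken from the example.

    ```python
    # Minimize the convex f(x) = x^2 + 1 on [-1, 1] by ternary search.
    # Convexity guarantees the bracket [lo, hi] always contains the minimizer.

    def f(x):
        return x * x + 1

    lo, hi = -1.0, 1.0
    for _ in range(200):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) <= f(m2):
            hi = m2          # the minimizer cannot lie in (m2, hi]
        else:
            lo = m1          # the minimizer cannot lie in [lo, m1)

    approx_min = f((lo + hi) / 2)
    # As in the answer, a separately *proved* error bound (here 0.001) turns
    # the numerical value into a certificate of positivity:
    assert approx_min - 0.001 > 0
    ```

    The error bound must come from a non-numerical argument; the search itself only produces the approximation.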



    Whether or not other proofs exist has no bearing on the validity of a proof using numerical methods.






    edited 6 hours ago

























    answered yesterday









    Todor Markov






    • 2




      Thank you for your answer. I have a question regarding "valid proof 1". You computed the error bound of the interpolation, but that computation is only possible when you have the functional form of $f(x)$ (which you have assumed). In my case I don't have the functional form, so how can I adapt Valid proof 1 to my statement?
      – sane
      yesterday






    • 3




      "Valid proof 1" is perverse (it belongs in the well-known book "Mathematics Made Difficult", IMO) - if you know enough calculus to find the second derivative and understand how to use the result quoted, you know enough to show that the minimum value is at $x = 0$ without using the "numerical method" at all. Numerical method 2 is not a proof unless you have a non-numerical proof that the error bound of your algorithm is guaranteed to be correct when using approximate arithmetic (e.g. working to a fixed number of decimal places, or using IEEE floating-point arithmetic on a computer).
      – alephzero
      yesterday








    • 20




      @alephzero The point of the first proof was to explain how one would argue something "numerically" using a simple example. Of course one can show that $x^2+1$ is non-negative in a billion easier ways, but they would be totally irrelevant to the question.
      – Mike Miller
      yesterday






    • 6




      @sane A guess. Because that's what it is. An educated guess with a reasonable amount of evidence behind it.
      – user3482749
      20 hours ago






    • 1




      @dmtri no, we don't like it because they're not actual proofs. Without the proper regularity arguments we can end up "proving" numerically things that are actually wrong.
      – RcnSc
      7 hours ago

























    11














    Well, is it proven or is it not proven? The way you've phrased your question is "if I've proven something with numerical methods, have I proven it?". Well, yes - you just said you had.



    Say you want to prove that $f(n)$ is a prime number for $n<10^7$. Well then if you loop through all the numbers less than $10^7$ and check that $f(n)$ is prime for all of them, you have a proof. It isn't somehow less of a proof just because it doesn't involve a bunch of algebraic symbols and words like "therefore" and "by contradiction".



    However, if you want to prove that $f(n)$ is a prime number for all numbers, and you check for $n \leq 10^7$, that simply isn't a proof. There's no sophisticated philosophical reason why it isn't; it just clearly isn't, is it? I mean, $f(10^7+1)$ might not be prime, for all you know.
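    The distinction can be illustrated with a concrete $f$: Euler's polynomial $f(n) = n^2 + n + 41$ is prime for all $n < 40$ but composite at $n = 40$, so an exhaustive check proves exactly the finite claim and nothing beyond it. (A sketch with a small bound for speed; the answer's $10^7$ works the same way.)

    ```python
    # Exhaustive verification proves a *finite* claim, nothing more.

    def is_prime(m):
        """Trial division, sufficient for small m."""
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False
            d += 1
        return True

    def f(n):
        return n * n + n + 41   # Euler's polynomial

    N = 40
    # A genuine proof of "f(n) is prime for all n < 40":
    finite_claim = all(is_prime(f(n)) for n in range(N))
    assert finite_claim
    # ...while the universal claim is simply false:
    assert not is_prime(f(40))  # f(40) = 1681 = 41^2
    ```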



    So what you really should be asking is whether or not you actually have proven your statement.






    answered yesterday









    Jack M








    • 1




      Thank you for your answer. Actually, I pick finitely many points, so according to your answer I can't consider it a proof. Then what should I call it?
      – sane
      yesterday






    • 2




      @sane It's computations which suggest a result might be true.
      – Jack M
      yesterday






    • 2




      @sane You don't call it anything. It's just data.
      – The Great Duck
      12 hours ago










    • @sane: In short, your claim must be as accurate as your proof. If you've run your numerical simulation on a set of numbers (which is necessarily finite), then you can only claim that it is proven for that set of numbers. You cannot make any claims about untested numbers. What you should do is amend your claim: instead of saying "f(n) is a prime", you should say "f(n) is a prime for 0 < n < 10^7" (for example).
      – Flater
      2 hours ago














    5














    It turns out that, with further analysis of your actual problem, you could have found an exact proof using first-semester calculus. There is no need for speculative numerical explorations.



    For your actual problem, to find solutions $x(\gamma, y) > 0$ of the equation
    $$
    (\gamma^{n-1}y)^n + (\gamma^{n-2}y)^{n-1} + \dots + (\gamma y)^2 + y = A(\gamma, y) = x^n + x^{n-1} + \dots + x,
    $$

    it is rather easy to show that there is exactly one positive root $x$ for fixed positive $y$ and $\gamma$.



    The right side at $x=0$ is zero and grows monotonically and convexly towards infinity, so there is a positive root of the equation by the intermediate value theorem. Further, there is a lower bound
    $$
    x > \frac{A}{1+A} \ge \frac{y}{1+y}
    $$

    for the positive root, since either $x \ge 1$ or, for $0 < x < 1$,
    $$
    A(1-x) = x(1-x^n) < x \implies x > \frac{A}{1+A}.
    $$
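    The intermediate value argument can also be made computational: the polynomial is negative at $0$ and grows to $+\infty$, so bisection brackets the unique positive root. A sketch for a hypothetical instance (the values $n = 5$ and $A = 3$ are made up for illustration):

    ```python
    # Bisection for the unique positive root of x^5 + x^4 + x^3 + x^2 + x = A.

    def p(x):
        return x**5 + x**4 + x**3 + x**2 + x - 3

    lo, hi = 0.0, 2.0                 # p(0) = -3 < 0 and p(2) = 59 > 0
    for _ in range(60):
        mid = (lo + hi) / 2
        if p(mid) <= 0:
            lo = mid
        else:
            hi = mid

    root = (lo + hi) / 2
    assert p(lo) <= 0 <= p(hi)        # the bracket certifies the root's location
    assert root > 3 / (1 + 3)         # consistent with the bound A/(1+A)
    ```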





    In general, there are methods to determine the number of positive roots, such as Descartes' rule of signs or the Hurwitz stability criteria. Descartes' rule counts the sign changes in the coefficient sequence. Here $x^n + \dots + x^2 + x - A = 0$ has exactly one sign change, which proves the existence of exactly one positive root.
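    A small sketch of the sign-change count that Descartes' rule uses (the degree $n = 5$ and the value $A = 3$ are hypothetical):

    ```python
    # Descartes' rule of signs: the number of positive roots equals the number
    # of sign changes in the coefficient sequence, or is less by an even number.
    # Exactly one sign change therefore means exactly one positive root.

    def sign_changes(coeffs):
        """Count sign changes in a coefficient list, ignoring zero entries."""
        signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
        return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

    # x^5 + x^4 + x^3 + x^2 + x - 3, coefficients from highest degree down:
    assert sign_changes([1, 1, 1, 1, 1, -3]) == 1
    ```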



    The polynomial equation $a_n x^n + \dots + a_1 x + a_0 = 0$ has the lower root bounds
    $$
    |x| \ge \frac{|a_0|}{|a_0| + \max_{k=1..n} |a_k|}
    \quad \text{or} \quad
    |x| \ge \frac{|a_0|}{\max(|a_0|,\, |a_1| + \dots + |a_n|)},
    $$

    which are lower bounds for a single positive root.
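    Both bounds can be evaluated directly. A sketch for the same hypothetical instance ($n = 5$, $A = 3$), with coefficients listed from the constant term upward:

    ```python
    # Lower bounds on |x| for any root of a_n x^n + ... + a_1 x + a_0 = 0,
    # as quoted above. Coefficients are given from a_0 up to a_n.

    def lower_root_bounds(coeffs):
        a0 = abs(coeffs[0])
        rest = [abs(a) for a in coeffs[1:]]
        bound1 = a0 / (a0 + max(rest))
        bound2 = a0 / max(a0, sum(rest))
        return bound1, bound2

    # x^5 + x^4 + x^3 + x^2 + x - 3 = 0:
    b1, b2 = lower_root_bounds([-3, 1, 1, 1, 1, 1])
    assert (b1, b2) == (0.75, 0.6)   # every root satisfies |x| >= max(b1, b2)
    ```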





























    • Sorry, I had a typo, this isn't the problem that I consider.
      – sane
      yesterday






    • 2




      @sane : Then please add the correct problem statement somewhere, preferably to the body of the question text. It is better to add relevant, essential information there than in comments. There are many elementary and not so elementary results on the location and bounds of real roots, starting with Descartes' rule.
      – LutzL
      yesterday












    • We have the following equation: $\frac{a}{1+y}+\frac{a}{(1+\gamma y)^2}+\dots+\frac{a+1}{(1+\gamma^{n-1}y)^n}=\frac{a}{1+x}+\frac{a}{(1+x)^2}+\dots+\frac{a+1}{(1+x)^n}$; we need to construct the function $x(a,y,n,\gamma)$. On the other hand, we have another equation: $\frac{a+1/n}{1+y}+\frac{a(1-1/n)+1/n}{(1+\gamma y)^2}+\dots+\frac{a(1-(n-1)/n)+1/n}{(1+\gamma^{n-1}y)^n}=\frac{a+1/n}{1+z}+\frac{a(1-1/n)+1/n}{(1+z)^2}+\dots$, and we construct the function $z(a,y,n,\gamma)$. Finally, we need to construct $h(a,y,n,\gamma)=x(\cdot)-z(\cdot)$ and check some properties.
      – sane
      22 hours ago








    • 4




      @sane: This should be in your Question, not hidden in comments. In your first equation, where are the transitions between $a$ and $a+1$ in the numerators on each side? In your second equation, does the sum on the right terminate?
      – Eric Towers
      22 hours ago






    • 1




      @sane : It belongs in your Question, because your question is an instance of an XY problem.
      – Eric Towers
      21 hours ago
















    5














    And it turns out that with further analysis of your actual problem you could have found an exact proof with first semester calculus means. There is no need for speculative numerical explorations.



    For your actual problem, to find solutions $x(γ,y)>0$ for the equation
    $$
    (γ^{n−1}y)^n+(γ^{n−2}y)^{n−1}+...+(γy)^2+y=A(γ,y)=x^n+x^{n−1}+...+x,
    $$

    it is rather easy to answer that there is exactly one positive root $x$ for fixed positive $y$ and $γ$.



    The right side at $x=0$ is zero and grows monotonically and convexly towards infinity, thus there is a positive root of the equation by the intermediate value theorem. Further, there is a lower bound
    $$
    x>frac{A}{1+A}gefrac{y}{1+y}
    $$

    for the positive root, as either $xge 1$ or, for $0<x<1$,
    $$
    A(1-x)=x(1-x^n)<ximplies x>frac{A}{1+A}
    $$





    In general, there are methods to determine the number of positive roots like Descartes rule or the Hurwitz criteria of stability. Descartes rule counts the sign changes in the coefficient sequence. Here $x^n+...+x^2+x-A=0$ has exactly one sign change which proves the existence of exactly one positive root.



    The polynomial equation has lower root bounds $a_nx^n+...+a_1x+a_0=0$
    $$
    |x|gefrac{|a_0|}{|a_0|+max_{k=1..n}|a_k|}
    ~~ text{ or } ~~
    |x|gefrac{|a_0|}{max(|a_0|,,|a_1|+...+|a_n|)}
    $$

    which are lower bound for a single positive root.






    share|cite|improve this answer























    • Sorry, I had a typo, this isn't the problem that I consider.
      – sane
      yesterday






    • 2




      @sane : Then please add the correct problem statement somewhere, preferably to the body of the question text. It is better to add relevant, essential information there than in comments. There are many elementary and not so elementary results on the location and bounds of real roots, starting with Descartes' rule.
      – LutzL
      yesterday












    • We have the following polynomial: $frac{a}{1+y}+frac{a}{(1+gamma y)^2}+...+frac{a+1}{(1+gamma^{n-1}y)^n}=frac{a}{1+x}+frac{a}{(1+x)^2}+...+frac{a+1}{(1+x)^n}$, we need to construct the function $x(a,y,n,gamma)$. From another hand we have another polynomial: $frac{a+1/n}{1+y}+frac{a(1-1/n)+1/n)}{(1+gamma y)^2}+...+frac{a(1-(n-1)/n)+1/n)}{(1+gamma ^{n-1}y)^n}=frac{a+1/n}{1+z}+frac{a(1-1/n)+1/n)}{(1+z)^2}+...$, and we construct the function $z(a,y,n,gamma)$. Finally, we need to construct $h(a,y,n,gamma)=x(cdot)-z(cdot)$,check some properties.
      – sane
      22 hours ago








    • 4




      @sane: This should be in your Question, not hidden in comments. In your first equation, where are the transitions between $a$ and $a+1$ in the numerators on each side? In your second equation, does the sum on the right terminate?
      – Eric Towers
      22 hours ago






    • 1




      @sane : It belongs in your Question, because your question is an instance of an XY problem.
      – Eric Towers
      21 hours ago














    5












    5








    5






    And it turns out that with further analysis of your actual problem you could have found an exact proof with first semester calculus means. There is no need for speculative numerical explorations.



    For your actual problem, to find solutions $x(γ,y)>0$ for the equation
    $$
    (γ^{n−1}y)^n+(γ^{n−2}y)^{n−1}+...+(γy)^2+y=A(γ,y)=x^n+x^{n−1}+...+x,
    $$

    it is rather easy to answer that there is exactly one positive root $x$ for fixed positive $y$ and $γ$.



    The right-hand side is zero at $x=0$ and grows monotonically and convexly towards infinity, so by the intermediate value theorem the equation has a positive root, and by monotonicity it is unique. Further, there is a lower bound
    $$
    x>\frac{A}{1+A}\ge\frac{y}{1+y}
    $$

    for the positive root: either $x\ge 1$, or, for $0<x<1$,
    $$
    A(1-x)=x(1-x^n)<x \implies x>\frac{A}{1+A}.
    $$
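    As a sanity check (not a substitute for the argument above), the unique positive root and the lower bound $x>\frac{A}{1+A}\ge\frac{y}{1+y}$ can be illustrated numerically. The following is a minimal Python sketch; the parameter values $y,\gamma,n$ are made up for illustration.

    ```python
    def A(y, g, n):
        """Left-hand side of the equation: sum_{k=1}^n (g^(k-1) * y)^k."""
        return sum((g ** (k - 1) * y) ** k for k in range(1, n + 1))

    def rhs(x, n):
        """Right-hand side: x + x^2 + ... + x^n (increasing for x > 0)."""
        return sum(x ** k for k in range(1, n + 1))

    def positive_root(y, g, n, tol=1e-12):
        """Bisect for the unique positive root of rhs(x, n) = A(y, g, n)."""
        target = A(y, g, n)
        lo, hi = 0.0, 1.0
        while rhs(hi, n) < target:  # rhs grows without bound, so this terminates
            hi *= 2
        while hi - lo > tol:
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if rhs(mid, n) < target else (lo, mid)
        return (lo + hi) / 2

    # Sample (made-up) parameters: the computed root respects the proven bound.
    y, g, n = 0.5, 0.9, 5
    a_val = A(y, g, n)
    x = positive_root(y, g, n)
    assert x > a_val / (1 + a_val) >= y / (1 + y)
    ```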





    In general, there are methods to determine the number of positive roots, such as Descartes' rule of signs or the Hurwitz stability criterion. Descartes' rule counts the sign changes in the coefficient sequence. Here $x^n+\dots+x^2+x-A=0$ has exactly one sign change, which proves that there is exactly one positive root.
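    The sign-change count in Descartes' rule is easy to script. A small Python sketch, using the coefficients of $x^5+x^4+x^3+x^2+x-A$ with a made-up value $A=0.79$:

    ```python
    def sign_changes(coeffs):
        """Count sign changes in a coefficient sequence, zeros skipped,
        as in Descartes' rule of signs."""
        signs = [c > 0 for c in coeffs if c != 0]
        return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

    # Coefficients of x^5 + x^4 + x^3 + x^2 + x - A, highest degree first:
    # exactly one sign change, hence exactly one positive root.
    assert sign_changes([1, 1, 1, 1, 1, -0.79]) == 1
    ```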



    The polynomial equation $a_nx^n+\dots+a_1x+a_0=0$ also has the lower root bounds
    $$
    |x|\ge\frac{|a_0|}{|a_0|+\max_{k=1..n}|a_k|}
    \quad\text{or}\quad
    |x|\ge\frac{|a_0|}{\max(|a_0|,\,|a_1|+\dots+|a_n|)},
    $$

    which are lower bounds for the single positive root.






    edited yesterday

    answered yesterday

    LutzL
    56.5k 4 20 54
    • Sorry, I had a typo, this isn't the problem that I consider.
      – sane
      yesterday






    • 2




      @sane : Then please add the correct problem statement somewhere, preferably to the body of the question text. It is better to add relevant, essential information there than in comments. There are many elementary and not so elementary results on the location and bounds of real roots, starting with Descartes' rule.
      – LutzL
      yesterday












    • We have the following polynomial: $\frac{a}{1+y}+\frac{a}{(1+\gamma y)^2}+\dots+\frac{a+1}{(1+\gamma^{n-1}y)^n}=\frac{a}{1+x}+\frac{a}{(1+x)^2}+\dots+\frac{a+1}{(1+x)^n}$; we need to construct the function $x(a,y,n,\gamma)$. On the other hand, we have another polynomial: $\frac{a+1/n}{1+y}+\frac{a(1-1/n)+1/n}{(1+\gamma y)^2}+\dots+\frac{a(1-(n-1)/n)+1/n}{(1+\gamma^{n-1}y)^n}=\frac{a+1/n}{1+z}+\frac{a(1-1/n)+1/n}{(1+z)^2}+\dots$, and we construct the function $z(a,y,n,\gamma)$. Finally, we need to construct $h(a,y,n,\gamma)=x(\cdot)-z(\cdot)$ and check some properties.
      – sane
      22 hours ago








    • 4




      @sane: This should be in your Question, not hidden in comments. In your first equation, where are the transitions between $a$ and $a+1$ in the numerators on each side? In your second equation, does the sum on the right terminate?
      – Eric Towers
      22 hours ago






    • 1




      @sane : It belongs in your Question, because your question is an instance of an XY problem.
      – Eric Towers
      21 hours ago





























    4














    I'm going to take the first paragraph, and your saying "it cannot be proven analytically", to mean "it was proven that a proof or disproof does not exist". In that case, I hate to break it to you: you are in one of two scenarios.

    1. The statement you are working with can be true or false without contradiction (think of the fifth postulate of Euclidean geometry, which many people attempted to prove).

    2. An actual truth value exists for this statement, but the answer is not Turing-computable. If you don't know what that means: given nothing but finite pencil and paper and an arbitrary finite amount of time, you will never get an answer, ever. You would require either infinite paper being manipulated at once, or infinitely many steps somehow completed in a finite period of time.

    What this means is that the answer is quite simple here. It doesn't matter what your method was. It doesn't matter if it was a sophisticated AI-powered real-analysis proof system capable of writing a proof for pretty much anything you ask of it. It doesn't matter, because you have already proven that a proof doesn't exist. Therefore anything you obtain that claims to be a proof cannot be a proof; that would be a contradiction, and you obviously can't have that.

    Your only exception would be if you have a computer that is computationally stronger than a Turing machine. No, that does not mean processing power; I mean that it can solve the halting problem. However, if you have a computer that can do that, then you probably shouldn't post about it on here. The current count of such "hypercomputers", as they have been referred to, built by the entire human race is... zero. If you did build one, then refer to the other answers for more on the validity of various methods.

    So which is it? Is it impossible to prove the statement, or was that reasoning flawed, and the numerical-method proof (possibly) valid depending on the exact methods used? It cannot be both. Both at once would imply that mathematics as a whole is broken, and if you were to discover or prove that, the entire notion of truth would go completely out the door. It would also likely result in the death of this site. Please don't do that. (More likely, the axioms currently being used for whatever you are studying would get a correction, and all would be well.)






        answered 12 hours ago

        The Great Duck
        179 3 20 47























            2














            As other answers have said, a statement is either proven or unproven. If it is proven, we call it a theorem, regardless of the methods used. I'm answering separately to call attention to some of the pitfalls of trying to "prove" things with numerical methods.



            The vast majority of real-world numerical computing (outside of the financial sector) uses IEEE 754 floating point to represent numbers which may not be integers. In short, the floating point numbers are a finite (but large) subset of the dyadic rationals, which become sparser (at a roughly logarithmic rate) as we move away from zero. There are also a few "special" values which are used to represent the results of invalid operations or arithmetic overflow.
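            The growing spacing between floating point numbers is easy to observe directly. A small Python illustration, using `math.ulp` (available since Python 3.9), which returns the gap from a float to the next representable value:

            ```python
            import math

            # The gap between adjacent doubles ("ulp") grows with magnitude.
            assert math.ulp(1.0) == 2 ** -52       # spacing near 1
            assert math.ulp(2.0 ** 53) == 2.0      # spacing near 2^53 is already 2

            # Consequently, above 2^53 not even every integer is representable:
            assert float(2 ** 53) + 1.0 == float(2 ** 53)
            ```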



            Why does this matter? It's important to understand the limitations of floating point arithmetic. In particular:




            • It does not support irrational numbers whatsoever. The Dirichlet function and its cousins cannot be usefully analyzed at all.

            • Numbers which are farther from zero have less precision, so functions with essential singularities will be difficult to work with in the neighborhoods of their singularities.

            • Limits are potentially an issue. Normally, a limit only exists if it exists under any possible choice of epsilon, but the floating point numbers can only provide finitely many epsilons, all of which are dyadic rationals. So if you take a limit of a function which behaves "specially" over the dyadic rationals (e.g. certain combinations of trigonometric functions), you may derive an incorrect value, or incorrectly conclude that the limit exists when it does not.

            • The floating point numbers are non-associative under every operation that matters, because of intermediate rounding. No useful algebraic manipulations will preserve equality. So equality is basically useless and you have to approximate it by looking at whether numbers are "close enough" instead. For some applications, this is a problem (e.g. proving that a value is never exactly zero is impossible if it comes arbitrarily close to zero).
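            Both the non-associativity and the need for tolerance-based comparison can be seen in a few lines of Python (a minimal illustration, not tied to the question's specific computation):

            ```python
            import math

            # Intermediate rounding makes floating-point addition non-associative.
            a, b, c = 1e16, -1e16, 1.0
            assert (a + b) + c == 1.0
            assert a + (b + c) == 0.0  # the 1.0 is absorbed by -1e16 before a cancels it

            # Exact equality is therefore fragile; compare with a tolerance instead.
            x = 0.1 + 0.2
            assert x != 0.3
            assert math.isclose(x, 0.3, rel_tol=1e-9)
            ```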



























            • And even for integers, a real world computer is probably really working with the integers modulo some power of 2. In many cases, they silently overflow so considerable care is needed even if you expect that a solution exists within the available range.
              – badjohn
              2 hours ago
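            The silent wraparound badjohn describes can be simulated explicitly in Python, whose native integers are arbitrary-precision; `add_i32` here is a hypothetical helper written only to illustrate 32-bit two's-complement behavior:

            ```python
            def add_i32(a, b):
                """Simulate 32-bit two's-complement addition with silent wraparound."""
                s = (a + b) & 0xFFFFFFFF
                return s - 0x100000000 if s >= 0x80000000 else s

            # INT_MAX + 1 silently wraps to INT_MIN, as in many C-like languages.
            assert add_i32(2**31 - 1, 1) == -(2**31)
            ```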


























            answered yesterday

            Kevin
            1,625 7 2 2

















