Confidence interval question on the number of experiments one should do.
Before posing this question, the lecture notes I am reading discuss games, probability, the binomial distribution and the central limit theorem; they usually assume some form of game when asking something. This question has been confusing me a bit:
How many experiments do we need to perform to estimate with $90$% confidence a winning probability within an accuracy of $1$%? And if we want an accuracy of $0.1$%?
We want to estimate $p$; call the estimator $\hat{p}$. The $90$% confidence interval for a binomial (win/lose) proportion is normally given by
$$\left(\hat{p}-\frac{1.645\,\sqrt{\hat{p}(1-\hat{p})}}{\sqrt{N}},\ \hat{p}+\frac{1.645\,\sqrt{\hat{p}(1-\hat{p})}}{\sqrt{N}}\right).$$
The term added and subtracted is the half-width, i.e. the uncertainty. We want it to equal $1$%, so $0.01$:
$$0.01=\frac{1.645\,\sqrt{\hat{p}(1-\hat{p})}}{\sqrt{N}}.$$
Squaring and solving for $N$ (note that $0.01^{-2}=10^4$), we should pick at least
$$N \geq 1.645^2\,\hat{p}(1-\hat{p})\cdot 10^4.$$
Is it true that I cannot compute $\hat{p}$ directly from how the question is phrased, and that this is therefore the best answer I can give?
The answer to the second question (accuracy $0.1$%) will then be:
$$N \geq 1.645^2\,\hat{p}(1-\hat{p})\cdot 10^6.$$
Since $f(x)=x(1-x)$ is a downward parabola with zeros at $x=0$ and $x=1$, it is maximised at the midpoint $x=\frac{1}{2}$, so this estimate for $N$ is largest when $\hat{p}=\frac{1}{2}$. This corresponds to the worst case scenario for the estimator.
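Spelling this step out (my own addition, not from the notes), completing the square gives
$$\hat{p}(1-\hat{p})=\tfrac{1}{4}-\left(\hat{p}-\tfrac{1}{2}\right)^{2}\leq\tfrac{1}{4},$$
with equality exactly when $\hat{p}=\tfrac{1}{2}$. Substituting $\tfrac{1}{4}$ into the bounds above yields: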
$$N_1 \geq 1.645^2\cdot\tfrac{1}{4}\cdot 10^4 \approx 6765,$$
and similarly
$$N_2 \geq 1.645^2\cdot\tfrac{1}{4}\cdot 10^6 \approx 676506.$$
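For a quick numerical cross-check (just a sketch, not from the lecture notes; `required_n` is an illustrative helper name, and the exact $90$% quantile $\approx 1.6449$ differs slightly from the rounded $1.645$ used above):

```python
from math import ceil
from statistics import NormalDist

def required_n(confidence: float, accuracy: float, p_hat: float = 0.5) -> int:
    """Smallest N for which the half-width z*sqrt(p_hat*(1-p_hat)/N)
    is at most `accuracy`; p_hat = 0.5 is the worst case."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.645 for 90% confidence
    return ceil((z / accuracy) ** 2 * p_hat * (1 - p_hat))

print(required_n(0.90, 0.01))   # 6764 with the exact quantile; 1.645 gives ~6765
print(required_n(0.90, 0.001))  # 676386 with the exact quantile; 1.645 gives ~676506
```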
statistics proof-verification binomial-distribution confidence-interval
You may want to use the fact that $x(1-x)$ is maximised when $x=\frac{1}{2}$. – Henry, Jan 3 at 22:37
1
Consider a Bernoulli game and suppose $N$ games are played and the outcomes recorded as the variates $X_1=x_1,dotsc, X_N=x_N$. Then an estimate of $p$ is given by $$hat{p}=frac{X_1+dotsc +X_N}{N},$$ so that actually $hat{p}$ is a function of the sample size $N$. If we want the standard error of a $90%$ confidence interval to be equal to $0.01$, then, indeed, we must have $N$ at least $geq 2.576^2 hat{p}(1-hat{p})10^4$—but the RHS still depends on $N$ here! So you must use Henry's suggestion...
– LoveTooNap29
Jan 3 at 23:15
1 Answer
By symmetry of the zeros of $f(x)=x(1-x)$ at $x=0$ and $x=1$, the function is maximised at $\hat{p}=\frac{1}{2}$, where it equals $\frac{1}{4}$; this is the worst case, and it gives the largest required $N$. Hence
$$N_1 \geq 1.645^2\cdot\tfrac{1}{4}\cdot 10^4 \approx 6765,$$
and similarly
$$N_2 \geq 1.645^2\cdot\tfrac{1}{4}\cdot 10^6 \approx 676506.$$
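As an empirical sanity check (my own sketch, not part of the original answer; `coverage` is a hypothetical helper), one can simulate batches of $N$ Bernoulli trials and count how often $\hat{p}$ lands within $0.01$ of the true $p$; by the normal approximation used above this should happen roughly $90$% of the time.

```python
import random

def coverage(p: float = 0.5, n: int = 6766, accuracy: float = 0.01,
             reps: int = 1000, seed: int = 0) -> float:
    """Fraction of repetitions in which |p_hat - p| <= accuracy,
    where p_hat is the win frequency over n Bernoulli(p) trials.
    n = 6766 is 1.645^2/4 * 10^4 rounded up."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        wins = sum(rng.random() < p for _ in range(n))
        p_hat = wins / n
        hits += abs(p_hat - p) <= accuracy
    return hits / reps

print(coverage())  # expected to come out close to 0.90
```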