Distribution of repeated binomial processes where success probability and number of trials change each time

I have an experiment where each "run" of the experiment has a binomial distribution. In a given run I have a number of trials $N_i$ and probability of success $p_i$. The result is the number of successes $S_i$, which is a sample from that binomial distribution. For this single run of the experiment, I know the variance is $N_i p_i(1-p_i)$.

In a different run, the probability of success and the number of trials change; call these $N_j$ and $p_j$.

The number of trials and success probabilities are in turn drawn from their own distributions, so each $N_j$ and $p_j$ is a sample from its own distribution.

If I know the distribution of the success probabilities and the distribution of the number of trials, then what is the distribution of the entire set of runs? I'm most interested in the mean and the variance of the set of runs.

In essence, I have a set of samples all drawn from different (but related) binomial distributions. I want to know the mean and variance of this set.
I think this can be thought of as a compound distribution:
https://en.wikipedia.org/wiki/Compound_probability_distribution

For the purpose of this question, let's say that the distribution of the success probabilities $p_i$ is Beta with some mean and variance, $p\sim(\mu_p,\sigma^2_p)$, and the distribution of the number of trials is Gaussian: $N\sim \mathcal{N}(\mu_N,\sigma^2_N)$.

I was initially thinking to solve this as a special case of the Poisson binomial distribution, where I sum over the total number of trials and I get something like
$\sigma^2 = \sum_{i=1}^{M_{trials}} N_i p_i(1-p_i)$ for the variance and $\mu = \sum_{i=1}^{M_{trials}} N_i p_i$ for the mean. But this isn't really useful since I have lots of different "runs" and I do know the distributions of the number of trials and the success probabilities. It seems like I should be able to get something more compact. Ideally, I would have an expression for the variance of the set of runs in terms of the means and variances of $N$ and $p$.

For a set of runs, each with variance $N_i p_i(1-p_i)$, should I calculate the variance of the quantity $N_i p_i(1-p_i)$ instead of taking the sum? That would be the variance of the variance, and it doesn't really seem like the correct thing to do. I'm stuck on how to express the sum $\sigma^2 = \sum_{i=1}^{N_{total}} N_i p_i(1-p_i)$ as something more compact when I know the distributions of $N$ and $p$.

One thing that I have been stumbling on is that my variance, $\sigma^2 = \sum_{i=1}^{N_{total}} N_i p_i(1-p_i)$, appears to be expressed as a sum of random variables $N,p$. In reality, though, it is expressed as a sum of samples of random variables.
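For concreteness, here is a minimal simulation of the kind of data I mean (the parameter values are arbitrary placeholders; the Gaussian $N$ is rounded to a positive integer so it can be used as a binomial trial count):

```python
import numpy as np

rng = np.random.default_rng(0)

M = 100_000                  # number of runs (placeholder)
mu_N, sigma_N = 50.0, 5.0    # Gaussian distribution of trials per run (placeholder)
a, b = 7, 3                  # Beta parameters for the success probability (placeholder)

# Each run draws its own N_i and p_i, then a binomial number of successes S_i.
N = np.maximum(rng.normal(mu_N, sigma_N, M).round().astype(int), 1)
p = rng.beta(a, b, M)
S = rng.binomial(N, p)

# The quantities I am after: mean and variance of the set of runs.
print(S.mean(), S.var())
```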

  • Indeed, it's not a Poisson Binomial distribution. In all generality, what you have cannot be better described than a "sum of Binomials" (since the parameters differ). What are you interested in? The full distribution of the successes, or merely expectation, maybe variance, etc?
    – Clement C.
    Dec 17 '18 at 22:15

  • I'm primarily interested in the expectation and variance of the sum. Can you expand on why it's not a Poisson Binomial?
    – SabrinaChoice
    Dec 17 '18 at 22:19

  • Technically, it is a PBD, but that's not really going to help, since phrasing it that way you'll lose a lot of structure. It is a PBD, as the sum of $X_1,\dots, X_{n}$, where $n=\sum_j N_j$; and the first $N_1$ $X_i$'s are Bernoulli with parameter $p_1$, the next $N_2$ are Bernoulli with parameter $p_2$, etc.
    – Clement C.
    Dec 17 '18 at 22:21

  • However, for expectation and variance, you can just use the fact that everything is independent: the expected total number of successes is just $\sum_j \mathbb{E}[S_j] = \sum_j p_j N_j$ (if $p_j, N_j$ are themselves random variables, this is the conditional expectation over those) (no need for independence for the expectation, by the way).
    – Clement C.
    Dec 17 '18 at 22:23

  • Ah gotcha, thanks so much. So you think I should just consider it a sum of Binomial trials. If I know the distribution of the number of trials in a given run and the distribution of the 'success probability' in the runs, is there no way to simplify the result from a giant sum?
    – SabrinaChoice
    Dec 17 '18 at 22:23

Tags: probability, probability-theory, binomial-distribution

asked Dec 17 '18 at 22:09 by SabrinaChoice (edited Jan 5 at 5:53)

2 Answers

In general, if $p_1, p_2, \ldots, p_m \in (0,1)$ are IID realizations from some probability distribution with mean $\mu_p$ and standard deviation $\sigma_p$, and $n_1, n_2, \ldots, n_m \in \mathbb{Z}^+$ are IID realizations from another probability distribution with mean $\mu_n$ and standard deviation $\sigma_n$, and for each $i = 1, 2, \ldots, m$ we have random variables $$X_i \sim \operatorname{Binomial}(n_i, p_i),$$ and we are interested in the distribution of $S = \sum_{i=1}^m X_i$, then we have by linearity of expectation $$\operatorname{E}[S] = \sum_{i=1}^m \operatorname{E}[X_i].$$ In turn, for each $X_i$, we have by the law of total expectation $$\operatorname{E}[X_i] = \operatorname{E}[\operatorname{E}[X_i \mid (n_i \cap p_i)]] = \operatorname{E}[n_i p_i] = \operatorname{E}[n_i]\operatorname{E}[p_i] = \mu_n \mu_p;$$ thus $$\operatorname{E}[S] = m\mu_n \mu_p.$$ This assumes that $n_i$ and $p_i$ are independent for each $i$ (from which it follows that each $X_i$ is independent). The variance calculation is done in a similar fashion; $$\operatorname{Var}[S] \overset{\text{ind}}{=} \sum_{i=1}^m \operatorname{Var}[X_i],$$ whence by the law of total variance
$$\begin{align*}
\operatorname{Var}[X_i]
&= \operatorname{Var}[\operatorname{E}[X_i \mid (n_i \cap p_i)]] + \operatorname{E}[\operatorname{Var}[X_i \mid (n_i \cap p_i)]] \\
&= \operatorname{Var}[n_i p_i] + \operatorname{E}[n_i p_i (1-p_i)] \\
&= (\sigma_n^2 \sigma_p^2 + \sigma_n^2 \mu_p^2 + \sigma_p^2 \mu_n^2) + \mu_n \operatorname{E}[p_i(1-p_i)] \\
&= (\sigma_n^2 \sigma_p^2 + \sigma_n^2 \mu_p^2 + \sigma_p^2 \mu_n^2) + \mu_n (\mu_p - (\sigma_p^2 + \mu_p^2)).
\end{align*}$$

To understand the variance of $n_i p_i$, note that for two independent random variables $A$, $B$, with means and standard deviations $\mu_A, \sigma_A, \mu_B, \sigma_B$, respectively,
$$\begin{align*}\operatorname{Var}[AB]
&= \operatorname{E}[(AB)^2] - \operatorname{E}[AB]^2 \\
&= \operatorname{E}[A^2 B^2] - \operatorname{E}[A]^2 \operatorname{E}[B]^2 \\
&= \operatorname{E}[A^2]\operatorname{E}[B^2] - \mu_A^2 \mu_B^2 \\
&= (\operatorname{Var}[A] + \operatorname{E}[A]^2)(\operatorname{Var}[B] + \operatorname{E}[B]^2) - \mu_A^2 \mu_B^2 \\
&= (\sigma_A^2 + \mu_A^2)(\sigma_B^2 + \mu_B^2) - \mu_A^2 \mu_B^2 \\
&= \sigma_A^2 \sigma_B^2 + \sigma_A^2 \mu_B^2 + \sigma_B^2 \mu_A^2. \end{align*}$$

Note that my computation of the variance differs from yours. I have substantiated my results by simulating $m = 10^6$ observations from $X_i$ where $n_i \sim \operatorname{Poisson}(\lambda)$ and $p_i \sim \operatorname{Beta}(a,b)$, for $\lambda = 11$ and $(a,b) = (7,3)$. This should result in $\operatorname{Var}[X_i] = 1001/100$; your results do not match. I should also point out that the reason that your computation does not work is because the total variance of each $X_i$ is not merely due to the expectation of the conditional variance of $X_i$ given $n_i$ and $p_i$; the other term in the law of total variance must also be included, which captures the variability of the conditional expectation of $X_i$. In other words, there is variation in $X_i$ coming from the binomial variance even when $n_i$ and $p_i$ are fixed, but there is also additional variation in the location of $X_i$ arising from the fact that $n_i$ and $p_i$ are not fixed.
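A minimal version of that simulation (the original script isn't shown, so this Python/NumPy sketch just reproduces the stated setup):

```python
import numpy as np

rng = np.random.default_rng(42)

m = 10**6
lam, a, b = 11, 7, 3   # n_i ~ Poisson(11), p_i ~ Beta(7, 3)

n = rng.poisson(lam, m)
p = rng.beta(a, b, m)
x = rng.binomial(n, p)

mu_n, var_n = lam, lam                        # Poisson mean and variance
mu_p = a / (a + b)                            # = 0.7
var_p = a * b / ((a + b)**2 * (a + b + 1))    # = 21/1100

# Formulas derived above.
mean_formula = mu_n * mu_p
var_formula = (var_n * var_p + var_n * mu_p**2 + var_p * mu_n**2
               + mu_n * (mu_p - (var_p + mu_p**2)))

print(x.mean(), mean_formula)   # both ~ 7.7
print(x.var(), var_formula)     # both ~ 10.01 = 1001/100
```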

answered Jan 4 at 21:29 by heropup (edited Jan 4 at 21:40)
  • Thank you very much. I'm unclear on a key point: am I really interested in the distribution of the sum of binomial random variables? Each run of the experiment is a measurement. I am interested in the distribution of the set of measurements. I'm not sure this is the same as the distribution of the sum of the set of measurements. I honestly don't know. Does this comment make sense?
    – SabrinaChoice
    Jan 4 at 22:06

  • I think it is worth pointing out that the choices of distributions for the binomial number of trials and the success probabilities in this answer are the appropriate ones. The OP uses a normal distribution for both.
    – Just_to_Answer
    Jan 4 at 22:06

  • I agree. Thanks. I put in Normal RV's as I thought that might ease the discussion, though I see now that is not the case.
    – SabrinaChoice
    Jan 4 at 22:08

  • @SabrinaChoice yes your comment does make sense; however, because you were not clear about this in your original statement of the question, and the computation in your answer suggested that you were attempting to compute the variance of a sum, this is how I addressed the question. The individual means and variances of each $X_i$ remain valid even if we are not looking at $S$. That said, the joint distribution of the $X_i$ is high-dimensional. If your goal is to make inferences about $n$ and $p$ based on the sample, that is an entirely different question.
    – heropup
    Jan 4 at 22:22

  • Thank you very much. I have been having trouble formulating my question properly as I am confused by it! I'll edit my original statement of the question to try to make this more clear.
    – SabrinaChoice
    Jan 4 at 22:52

The best I've been able to do so far is compute the expected value of the variance. I'm not sure this is the correct approach.

We have 2 Gaussian random variables,
$N\sim\mathcal{N}(\mu_N,\sigma^2_N)$ and $p\sim\mathcal{N}(\mu_p,\sigma^2_p)$. They are independent of one another.
We have an expression for the variance of an experiment in terms of samples of these random variables:
$$\sigma^2 = \sum_i^{M_{\text{runs}}} N_i p_i(1-p_i)$$
where $M$ is the number of "runs" of the experiment. $M$ is very large.

So the expected value of this variance is

\begin{align}
\mathbb{E}[\sigma^2] &= \sum_i^{M_{\text{runs}}}\mathbb{E}[N_i p_i(1-p_i)] \\
&= \sum_i^{M_{\text{runs}}}\mathbb{E}[N_i]\,\mathbb{E}[p_i(1-p_i)] \\
&= \sum_i^{M_{\text{runs}}}\mathbb{E}[N_i](\mathbb{E}[p_i]-\mathbb{E}[p_i^2]) \\
&= \sum_i^{M_{\text{runs}}}\mathbb{E}[N_i](\mathbb{E}[p_i]-[\sigma^2_{p_i}+\mathbb{E}[p_i]^2]) \\
&= \sum_i^{M_{\text{runs}}}\mu_N(\mu_p-[\sigma^2_{p}+\mu_p^2]) \\
&= M\mu_N(\mu_p-\mu_p^2-\sigma^2_{p}) \\
&= M\mu_N(\mu_p(1-\mu_p)-\sigma^2_p)
\end{align}

I'm not really satisfied with this answer, to be honest. One thing that bothers me in particular is that the variance of my set of measurements decreases as the variance of the "success probabilities" increases. This can't be right!
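A quick simulation (a sketch, not from the original post; the Gaussian $p$ is clipped to $[0,1]$ and the Gaussian $N$ rounded to a positive integer, and all parameter values are placeholders) makes the gap concrete: the expression above is only the expected conditional variance, and the empirical per-measurement variance exceeds it by the $\operatorname{Var}[N_i p_i]$ term from the law of total variance, as derived in the accepted answer:

```python
import numpy as np

rng = np.random.default_rng(1)

M = 10**6
mu_N, sigma_N = 100, 10      # placeholder values
mu_p, sigma_p = 0.7, 0.05    # placeholder values

N = np.maximum(rng.normal(mu_N, sigma_N, M).round().astype(int), 1)
p = np.clip(rng.normal(mu_p, sigma_p, M), 0, 1)
S = rng.binomial(N, p)

# The expression derived above: expected *conditional* variance per run.
e_cond_var = mu_N * (mu_p * (1 - mu_p) - sigma_p**2)

# The missing law-of-total-variance term: Var[Np] for independent N, p.
var_Np = (sigma_N**2 * sigma_p**2 + sigma_N**2 * mu_p**2
          + sigma_p**2 * mu_N**2)

print(S.var())                  # empirical variance of the measurements (~95)
print(e_cond_var)               # underestimates it (~20.75)...
print(e_cond_var + var_Np)      # ...until Var[Np] is added back (~95)
```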

  • How can $p_i$ be a normal random variable?
    – mm8511
    Jan 4 at 18:51

  • @mm8511 this describes a physical experiment. The $i$th run of the experiment follows a binomial distribution with probability of success $p_i$ and number of trials $N_i$. Each run of the experiment has a different probability of success $p$ and number of trials $N$. The success probability $p$ and number of trials $N$ are both normally distributed.
    – SabrinaChoice
    Jan 4 at 18:54

  • How are you using a normal distribution to sample a number between 0 and 1? I'd imagine the transformation you're applying will have some effect on your calculations.
    – mm8511
    Jan 4 at 18:57

  • @mm8511 $p$ has some mean less than 1 (say, 0.7) and a variance which gives it a spread less than 1. I'm not sure how to answer this question, I'm sorry.
    – SabrinaChoice
    Jan 4 at 19:03

  • Okay. I think, in general, it might be a good idea to choose a distribution for $p$ that is bounded (although it may be rare, there is a non-zero probability that the sampled $p$ is negative, or arbitrarily large). You could, for example, assume that $p_{i} \sim \operatorname{Bin}(N,p)/N$ to get such a distribution. Assuming $p$ is normal is probably fine if you're just looking for an approximation though.
    – mm8511
    Jan 4 at 19:08













Your Answer





StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");

StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "69"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});

function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});


}
});














draft saved

draft discarded


















StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3044515%2fdistribution-of-repeated-binomial-processes-where-success-probability-and-number%23new-answer', 'question_page');
}
);

Post as a guest















Required, but never shown

























2 Answers
2






active

oldest

votes








2 Answers
2






active

oldest

votes









active

oldest

votes






active

oldest

votes









1














In general, if $p_1, p_2, ldots, p_m in (0,1)$ are IID realizations from some probability distribution with mean $mu_p$ and standard deviation $sigma_p$, and $n_1, n_2, ldots, n_m in mathbb Z^+$ are IID realizations of from another probability distribution with mean $mu_n$ and standard deviation $sigma_n$, and for each $i = 1, 2, ldots, m$, we have random variables $$X_i sim operatorname{Binomial}(n_i, p_i),$$ and we are interested in the distribution of $S = sum_{i=1}^m X_i$, then we have by linearity of expectation $$operatorname{E}[S] = sum_{i=1}^m operatorname{E}[X_i].$$ In turn, for each $X_i$, we have by the law of total expectation $$operatorname{E}[X_i] = operatorname{E}[operatorname{E}[X_i mid (n_i cap p_i)]] = operatorname{E}[n_i p_i] = operatorname{E}[n_i]operatorname{E}[p_i] = mu_n mu_p;$$ thus $$operatorname{E}[S] = mmu_n mu_p.$$ This assumes that $n_i$ and $p_i$ are independent for each $i$ (from which it follows that each $X_i$ is independent). The variance calculation is done in a similar fashion; $$operatorname{Var}[S] overset{text{ind}}{=} sum_{i=1}^m operatorname{Var}[X_i],$$ whence by the law of total variance
$$begin{align*}
operatorname{Var}[X_i]
&= operatorname{Var}[operatorname{E}[X_i mid (n_i cap p_i)]] + operatorname{E}[operatorname{Var}[X_i mid (n_i cap p_i)]] \
&= operatorname{Var}[n_i p_i] + operatorname{E}[n_i p_i (1-p_i)] \
&= (sigma_n^2 sigma_p^2 + sigma_n^2 mu_p^2 + sigma_p^2 mu_n^2) + mu_n operatorname{E}[p_i(1-p_i)] \
&= (sigma_n^2 sigma_p^2 + sigma_n^2 mu_p^2 + sigma_p^2 mu_n^2) + mu_n (mu_p - (sigma_p^2 + mu_p^2)).
end{align*}$$



To understand the variance of $n_i p_i$, note that for two independent random variables $A$, $B$, with means and standard deviations $mu_A, sigma_A, mu_B, sigma_B$, respectively,
$$begin{align*}operatorname{Var}[AB]
&= operatorname{E}[(AB)^2] - operatorname{E}[AB]^2 \
&= operatorname{E}[A^2 B^2] - operatorname{E}[A]^2 operatorname{E}[B]^2 \
&= operatorname{E}[A^2]operatorname{E}[B^2] - mu_A^2 mu_B^2 \
&= (operatorname{Var}[A] + operatorname{E}[A]^2)(operatorname{Var}[B] + operatorname{E}[B]^2) - mu_A^2 mu_B^2 \
&= (sigma_A^2 + mu_A^2)(sigma_B^2 + mu_B^2) - mu_A^2 mu_B^2 \
&= sigma_A^2 sigma_B^2 + sigma_A^2 mu_B^2 + sigma_B^2 mu_A^2. end{align*}$$



Note that my computation of the variance differs from yours. I have substantiated my results by simulating $m = 10^6$ observations from $X_i$ where $n_i sim operatorname{Poisson}(lambda)$ and $p_i sim operatorname{Beta}(a,b)$, for $lambda = 11$ and $(a,b) = (7,3)$. This should result in $operatorname{Var}[X_i] = 1001/100$; your results do not match. I should also point out that the reason that your computation does not work is because the total variance of each $X_i$ is not merely due to the expectation of the conditional variance of $X_i$ given $n_i$ and $p_i$; the other term in the law of total variance must also be included, which captures the variability of the conditional expectation of $X_i$. In other words, there is variation in $X_i$ coming from the binomial variance even when $n_i$ and $p_i$ are fixed, but there is also additional variation in the location of $X_i$ arising from the fact that $n_i$ and $p_i$ are not fixed.






share|cite|improve this answer























  • Thank you very much. I'm unclear on a key point: am I really interested in the distribution of the sum of binomial random variables? Each run of the experiment is a measurement. I am interested in the distribution of the set of measurements. I'm not sure this is the same as the distribution of the sum of the set of measurements. I honestly don't know. Does this comment make sense?
    – SabrinaChoice
    Jan 4 at 22:06








  • 2




    I think it is worth pointing out that the choices of distributions for the binomial number of trials and the success probabilities in this answer are the appropriate ones. The OP uses normal distribution for both.
    – Just_to_Answer
    Jan 4 at 22:06










  • I agree. Thanks. I put in Normal RV's as I thought that might ease the discussion, though I see now that is not the case.
    – SabrinaChoice
    Jan 4 at 22:08










  • @SabrinaChoice yes your comment does make sense; however, because you were not clear about this in your original statement of the question, and the computation in your answer suggested that you were attempting to compute the variance of a sum, this is how I addressed the question. The individual means and variances of each $X_i$ remain valid even if we are not looking at $S$. That said, the joint distribution of the $X_i$ is high-dimensional. If your goal is to make inferences about $n$ and $p$ based on the sample, that is an entirely different question.
    – heropup
    Jan 4 at 22:22










  • Thank you very much. I have been having trouble formulating my question properly as I am confused by it! I'll edit my original statement of the question to try to make this more clear.
    – SabrinaChoice
    Jan 4 at 22:52
















1














In general, if $p_1, p_2, ldots, p_m in (0,1)$ are IID realizations from some probability distribution with mean $mu_p$ and standard deviation $sigma_p$, and $n_1, n_2, ldots, n_m in mathbb Z^+$ are IID realizations of from another probability distribution with mean $mu_n$ and standard deviation $sigma_n$, and for each $i = 1, 2, ldots, m$, we have random variables $$X_i sim operatorname{Binomial}(n_i, p_i),$$ and we are interested in the distribution of $S = sum_{i=1}^m X_i$, then we have by linearity of expectation $$operatorname{E}[S] = sum_{i=1}^m operatorname{E}[X_i].$$ In turn, for each $X_i$, we have by the law of total expectation $$operatorname{E}[X_i] = operatorname{E}[operatorname{E}[X_i mid (n_i cap p_i)]] = operatorname{E}[n_i p_i] = operatorname{E}[n_i]operatorname{E}[p_i] = mu_n mu_p;$$ thus $$operatorname{E}[S] = mmu_n mu_p.$$ This assumes that $n_i$ and $p_i$ are independent for each $i$ (from which it follows that each $X_i$ is independent). The variance calculation is done in a similar fashion; $$operatorname{Var}[S] overset{text{ind}}{=} sum_{i=1}^m operatorname{Var}[X_i],$$ whence by the law of total variance
$$begin{align*}
operatorname{Var}[X_i]
&= operatorname{Var}[operatorname{E}[X_i mid (n_i cap p_i)]] + operatorname{E}[operatorname{Var}[X_i mid (n_i cap p_i)]] \
&= operatorname{Var}[n_i p_i] + operatorname{E}[n_i p_i (1-p_i)] \
&= (sigma_n^2 sigma_p^2 + sigma_n^2 mu_p^2 + sigma_p^2 mu_n^2) + mu_n operatorname{E}[p_i(1-p_i)] \
&= (sigma_n^2 sigma_p^2 + sigma_n^2 mu_p^2 + sigma_p^2 mu_n^2) + mu_n (mu_p - (sigma_p^2 + mu_p^2)).
end{align*}$$



To understand the variance of $n_i p_i$, note that for two independent random variables $A$, $B$, with means and standard deviations $mu_A, sigma_A, mu_B, sigma_B$, respectively,
$$begin{align*}operatorname{Var}[AB]
&= operatorname{E}[(AB)^2] - operatorname{E}[AB]^2 \
&= operatorname{E}[A^2 B^2] - operatorname{E}[A]^2 operatorname{E}[B]^2 \
&= operatorname{E}[A^2]operatorname{E}[B^2] - mu_A^2 mu_B^2 \
&= (operatorname{Var}[A] + operatorname{E}[A]^2)(operatorname{Var}[B] + operatorname{E}[B]^2) - mu_A^2 mu_B^2 \
&= (sigma_A^2 + mu_A^2)(sigma_B^2 + mu_B^2) - mu_A^2 mu_B^2 \
&= sigma_A^2 sigma_B^2 + sigma_A^2 mu_B^2 + sigma_B^2 mu_A^2. end{align*}$$



Note that my computation of the variance differs from yours. I have substantiated my results by simulating $m = 10^6$ observations from $X_i$ where $n_i sim operatorname{Poisson}(lambda)$ and $p_i sim operatorname{Beta}(a,b)$, for $lambda = 11$ and $(a,b) = (7,3)$. This should result in $operatorname{Var}[X_i] = 1001/100$; your results do not match. I should also point out that the reason that your computation does not work is because the total variance of each $X_i$ is not merely due to the expectation of the conditional variance of $X_i$ given $n_i$ and $p_i$; the other term in the law of total variance must also be included, which captures the variability of the conditional expectation of $X_i$. In other words, there is variation in $X_i$ coming from the binomial variance even when $n_i$ and $p_i$ are fixed, but there is also additional variation in the location of $X_i$ arising from the fact that $n_i$ and $p_i$ are not fixed.






share|cite|improve this answer























  • Thank you very much. I'm unclear on a key point: am I really interested in the distribution of the sum of binomial random variables? Each run of the experiment is a measurement. I am interested in the distribution of the set of measurements. I'm not sure this is the same as the distribution of the sum of the set of measurements. I honestly don't know. Does this comment make sense?
    – SabrinaChoice
    Jan 4 at 22:06








  • 2




    I think it is worth pointing out that the choices of distributions for the binomial number of trials and the success probabilities in this answer are the appropriate ones. The OP uses normal distribution for both.
    – Just_to_Answer
    Jan 4 at 22:06










  • I agree. Thanks. I put in Normal RV's as I thought that might ease the discussion, though I see now that is not the case.
    – SabrinaChoice
    Jan 4 at 22:08










  • @SabrinaChoice yes your comment does make sense; however, because you were not clear about this in your original statement of the question, and the computation in your answer suggested that you were attempting to compute the variance of a sum, this is how I addressed the question. The individual means and variances of each $X_i$ remain valid even if we are not looking at $S$. That said, the joint distribution of the $X_i$ is high-dimensional. If your goal is to make inferences about $n$ and $p$ based on the sample, that is an entirely different question.
    – heropup
    Jan 4 at 22:22










  • Thank you very much. I have been having trouble formulating my question properly as I am confused by it! I'll edit my original statement of the question to try to make this more clear.
    – SabrinaChoice
    Jan 4 at 22:52














1












1








1






In general, if $p_1, p_2, ldots, p_m in (0,1)$ are IID realizations from some probability distribution with mean $mu_p$ and standard deviation $sigma_p$, and $n_1, n_2, ldots, n_m in mathbb Z^+$ are IID realizations of from another probability distribution with mean $mu_n$ and standard deviation $sigma_n$, and for each $i = 1, 2, ldots, m$, we have random variables $$X_i sim operatorname{Binomial}(n_i, p_i),$$ and we are interested in the distribution of $S = sum_{i=1}^m X_i$, then we have by linearity of expectation $$operatorname{E}[S] = sum_{i=1}^m operatorname{E}[X_i].$$ In turn, for each $X_i$, we have by the law of total expectation $$operatorname{E}[X_i] = operatorname{E}[operatorname{E}[X_i mid (n_i cap p_i)]] = operatorname{E}[n_i p_i] = operatorname{E}[n_i]operatorname{E}[p_i] = mu_n mu_p;$$ thus $$operatorname{E}[S] = mmu_n mu_p.$$ This assumes that $n_i$ and $p_i$ are independent for each $i$ (from which it follows that each $X_i$ is independent). The variance calculation is done in a similar fashion; $$operatorname{Var}[S] overset{text{ind}}{=} sum_{i=1}^m operatorname{Var}[X_i],$$ whence by the law of total variance
$$begin{align*}
operatorname{Var}[X_i]
&= operatorname{Var}[operatorname{E}[X_i mid (n_i cap p_i)]] + operatorname{E}[operatorname{Var}[X_i mid (n_i cap p_i)]] \
&= operatorname{Var}[n_i p_i] + operatorname{E}[n_i p_i (1-p_i)] \
&= (sigma_n^2 sigma_p^2 + sigma_n^2 mu_p^2 + sigma_p^2 mu_n^2) + mu_n operatorname{E}[p_i(1-p_i)] \
&= (sigma_n^2 sigma_p^2 + sigma_n^2 mu_p^2 + sigma_p^2 mu_n^2) + mu_n (mu_p - (sigma_p^2 + mu_p^2)).
end{align*}$$



To understand the variance of $n_i p_i$, note that for two independent random variables $A$, $B$, with means and standard deviations $mu_A, sigma_A, mu_B, sigma_B$, respectively,
$$begin{align*}operatorname{Var}[AB]
&= operatorname{E}[(AB)^2] - operatorname{E}[AB]^2 \
&= operatorname{E}[A^2 B^2] - operatorname{E}[A]^2 operatorname{E}[B]^2 \
&= operatorname{E}[A^2]operatorname{E}[B^2] - mu_A^2 mu_B^2 \
&= (operatorname{Var}[A] + operatorname{E}[A]^2)(operatorname{Var}[B] + operatorname{E}[B]^2) - mu_A^2 mu_B^2 \
&= (sigma_A^2 + mu_A^2)(sigma_B^2 + mu_B^2) - mu_A^2 mu_B^2 \
&= sigma_A^2 sigma_B^2 + sigma_A^2 mu_B^2 + sigma_B^2 mu_A^2. end{align*}$$



Note that my computation of the variance differs from yours. I have substantiated my results by simulating $m = 10^6$ observations from $X_i$ where $n_i sim operatorname{Poisson}(lambda)$ and $p_i sim operatorname{Beta}(a,b)$, for $lambda = 11$ and $(a,b) = (7,3)$. This should result in $operatorname{Var}[X_i] = 1001/100$; your results do not match. I should also point out that the reason that your computation does not work is because the total variance of each $X_i$ is not merely due to the expectation of the conditional variance of $X_i$ given $n_i$ and $p_i$; the other term in the law of total variance must also be included, which captures the variability of the conditional expectation of $X_i$. In other words, there is variation in $X_i$ coming from the binomial variance even when $n_i$ and $p_i$ are fixed, but there is also additional variation in the location of $X_i$ arising from the fact that $n_i$ and $p_i$ are not fixed.






share|cite|improve this answer














In general, if $p_1, p_2, ldots, p_m in (0,1)$ are IID realizations from some probability distribution with mean $mu_p$ and standard deviation $sigma_p$, and $n_1, n_2, ldots, n_m in mathbb Z^+$ are IID realizations of from another probability distribution with mean $mu_n$ and standard deviation $sigma_n$, and for each $i = 1, 2, ldots, m$, we have random variables $$X_i sim operatorname{Binomial}(n_i, p_i),$$ and we are interested in the distribution of $S = sum_{i=1}^m X_i$, then we have by linearity of expectation $$operatorname{E}[S] = sum_{i=1}^m operatorname{E}[X_i].$$ In turn, for each $X_i$, we have by the law of total expectation $$operatorname{E}[X_i] = operatorname{E}[operatorname{E}[X_i mid (n_i cap p_i)]] = operatorname{E}[n_i p_i] = operatorname{E}[n_i]operatorname{E}[p_i] = mu_n mu_p;$$ thus $$operatorname{E}[S] = mmu_n mu_p.$$ This assumes that $n_i$ and $p_i$ are independent for each $i$ (from which it follows that each $X_i$ is independent). The variance calculation is done in a similar fashion; $$operatorname{Var}[S] overset{text{ind}}{=} sum_{i=1}^m operatorname{Var}[X_i],$$ whence by the law of total variance
$$begin{align*}
operatorname{Var}[X_i]
&= operatorname{Var}[operatorname{E}[X_i mid (n_i cap p_i)]] + operatorname{E}[operatorname{Var}[X_i mid (n_i cap p_i)]] \
&= operatorname{Var}[n_i p_i] + operatorname{E}[n_i p_i (1-p_i)] \
&= (sigma_n^2 sigma_p^2 + sigma_n^2 mu_p^2 + sigma_p^2 mu_n^2) + mu_n operatorname{E}[p_i(1-p_i)] \
&= (sigma_n^2 sigma_p^2 + sigma_n^2 mu_p^2 + sigma_p^2 mu_n^2) + mu_n (mu_p - (sigma_p^2 + mu_p^2)).
end{align*}$$



To understand the variance of $n_i p_i$, note that for two independent random variables $A$, $B$, with means and standard deviations $mu_A, sigma_A, mu_B, sigma_B$, respectively,
$$begin{align*}operatorname{Var}[AB]
&= operatorname{E}[(AB)^2] - operatorname{E}[AB]^2 \
&= operatorname{E}[A^2 B^2] - operatorname{E}[A]^2 operatorname{E}[B]^2 \
&= operatorname{E}[A^2]operatorname{E}[B^2] - mu_A^2 mu_B^2 \
&= (operatorname{Var}[A] + operatorname{E}[A]^2)(operatorname{Var}[B] + operatorname{E}[B]^2) - mu_A^2 mu_B^2 \
&= (sigma_A^2 + mu_A^2)(sigma_B^2 + mu_B^2) - mu_A^2 mu_B^2 \
&= sigma_A^2 sigma_B^2 + sigma_A^2 mu_B^2 + sigma_B^2 mu_A^2. end{align*}$$



Note that my computation of the variance differs from yours. I have substantiated my results by simulating $m = 10^6$ observations from $X_i$ where $n_i sim operatorname{Poisson}(lambda)$ and $p_i sim operatorname{Beta}(a,b)$, for $lambda = 11$ and $(a,b) = (7,3)$. This should result in $operatorname{Var}[X_i] = 1001/100$; your results do not match. I should also point out that the reason that your computation does not work is because the total variance of each $X_i$ is not merely due to the expectation of the conditional variance of $X_i$ given $n_i$ and $p_i$; the other term in the law of total variance must also be included, which captures the variability of the conditional expectation of $X_i$. In other words, there is variation in $X_i$ coming from the binomial variance even when $n_i$ and $p_i$ are fixed, but there is also additional variation in the location of $X_i$ arising from the fact that $n_i$ and $p_i$ are not fixed.







share|cite|improve this answer














share|cite|improve this answer



share|cite|improve this answer








edited Jan 4 at 21:40

























answered Jan 4 at 21:29









heropupheropup

62.7k66099




62.7k66099












  • Thank you very much. I'm unclear on a key point: am I really interested in the distribution of the sum of binomial random variables? Each run of the experiment is a measurement. I am interested in the distribution of the set of measurements. I'm not sure this is the same as the distribution of the sum of the set of measurements. I honestly don't know. Does this comment make sense?
    – SabrinaChoice
    Jan 4 at 22:06








  • 2




    I think it is worth pointing out that the choices of distributions for the binomial number of trials and the success probabilities in this answer are the appropriate ones. The OP uses normal distribution for both.
    – Just_to_Answer
    Jan 4 at 22:06










  • I agree. Thanks. I put in Normal RV's as I thought that might ease the discussion, though I see now that is not the case.
    – SabrinaChoice
    Jan 4 at 22:08










  • @SabrinaChoice yes your comment does make sense; however, because you were not clear about this in your original statement of the question, and the computation in your answer suggested that you were attempting to compute the variance of a sum, this is how I addressed the question. The individual means and variances of each $X_i$ remain valid even if we are not looking at $S$. That said, the joint distribution of the $X_i$ is high-dimensional. If your goal is to make inferences about $n$ and $p$ based on the sample, that is an entirely different question.
    – heropup
    Jan 4 at 22:22










  • Thank you very much. I have been having trouble formulating my question properly as I am confused by it! I'll edit my original statement of the question to try to make this more clear.
    – SabrinaChoice
    Jan 4 at 22:52


















  • Thank you very much. I'm unclear on a key point: am I really interested in the distribution of the sum of binomial random variables? Each run of the experiment is a measurement. I am interested in the distribution of the set of measurements. I'm not sure this is the same as the distribution of the sum of the set of measurements. I honestly don't know. Does this comment make sense?
    – SabrinaChoice
    Jan 4 at 22:06








  • 2




    I think it is worth pointing out that the choices of distributions for the binomial number of trials and the success probabilities in this answer are the appropriate ones. The OP uses normal distribution for both.
    – Just_to_Answer
    Jan 4 at 22:06










  • I agree. Thanks. I put in Normal RV's as I thought that might ease the discussion, though I see now that is not the case.
    – SabrinaChoice
    Jan 4 at 22:08










  • @SabrinaChoice yes your comment does make sense; however, because you were not clear about this in your original statement of the question, and the computation in your answer suggested that you were attempting to compute the variance of a sum, this is how I addressed the question. The individual means and variances of each $X_i$ remain valid even if we are not looking at $S$. That said, the joint distribution of the $X_i$ is high-dimensional. If your goal is to make inferences about $n$ and $p$ based on the sample, that is an entirely different question.
    – heropup
    Jan 4 at 22:22










  • Thank you very much. I have been having trouble formulating my question properly as I am confused by it! I'll edit my original statement of the question to try to make this more clear.
    – SabrinaChoice
    Jan 4 at 22:52
















Thank you very much. I'm unclear on a key point: am I really interested in the distribution of the sum of binomial random variables? Each run of the experiment is a measurement. I am interested in the distribution of the set of measurements. I'm not sure this is the same as the distribution of the sum of the set of measurements. I honestly don't know. Does this comment make sense?
– SabrinaChoice
Jan 4 at 22:06






Thank you very much. I'm unclear on a key point: am I really interested in the distribution of the sum of binomial random variables? Each run of the experiment is a measurement. I am interested in the distribution of the set of measurements. I'm not sure this is the same as the distribution of the sum of the set of measurements. I honestly don't know. Does this comment make sense?
– SabrinaChoice
Jan 4 at 22:06






2




2




I think it is worth pointing out that the choices of distributions for the binomial number of trials and the success probabilities in this answer are the appropriate ones. The OP uses normal distribution for both.
– Just_to_Answer
Jan 4 at 22:06




I think it is worth pointing out that the choices of distributions for the binomial number of trials and the success probabilities in this answer are the appropriate ones. The OP uses normal distribution for both.
– Just_to_Answer
Jan 4 at 22:06












I agree. Thanks. I put in Normal RV's as I thought that might ease the discussion, though I see now that is not the case.
– SabrinaChoice
Jan 4 at 22:08




I agree. Thanks. I put in Normal RV's as I thought that might ease the discussion, though I see now that is not the case.
– SabrinaChoice
Jan 4 at 22:08












@SabrinaChoice yes your comment does make sense; however, because you were not clear about this in your original statement of the question, and the computation in your answer suggested that you were attempting to compute the variance of a sum, this is how I addressed the question. The individual means and variances of each $X_i$ remain valid even if we are not looking at $S$. That said, the joint distribution of the $X_i$ is high-dimensional. If your goal is to make inferences about $n$ and $p$ based on the sample, that is an entirely different question.
– heropup
Jan 4 at 22:22




@SabrinaChoice yes your comment does make sense; however, because you were not clear about this in your original statement of the question, and the computation in your answer suggested that you were attempting to compute the variance of a sum, this is how I addressed the question. The individual means and variances of each $X_i$ remain valid even if we are not looking at $S$. That said, the joint distribution of the $X_i$ is high-dimensional. If your goal is to make inferences about $n$ and $p$ based on the sample, that is an entirely different question.
– heropup
Jan 4 at 22:22












Thank you very much. I have been having trouble formulating my question properly as I am confused by it! I'll edit my original statement of the question to try to make this more clear.
– SabrinaChoice
Jan 4 at 22:52




Thank you very much. I have been having trouble formulating my question properly as I am confused by it! I'll edit my original statement of the question to try to make this more clear.
– SabrinaChoice
Jan 4 at 22:52











0














The best I've been able to do so far is compute the expected value of the variance. I'm not sure this is the correct approach.



We have 2 Gaussian random variables
$Nsimmathcal{N}(mu_N,sigma^2_N)$ and $psimmathcal{N}(mu_p,sigma^2_p)$. They are independent of one another.
We have an expression for the variance of an experiment in terms of samples of these random variables
$$sigma^2 = sum_i^{M_{runs}} N_i p_i(1-p_i)$$
where M is the number of `runs' of the experiment. M is very large.



So the expected value of this variance is



begin{align}
mathbb{E}[sigma^2] &= sum_i^{M_{runs}}mathbb{E}[ N_i p_i(1-p_i)] \
&=sum_i^{M_{runs}}mathbb{E}[ N_i] mathbb{E}[ p_i(1-p_i)] \
&=sum_i^{M_{runs}}mathbb{E}[ N_i]( mathbb{E}[ p_i]-mathbb{E}[p_i^2])\
&=sum_i^{M_{runs}}mathbb{E}[ N_i](mathbb{E}[p_i]-[sigma^2_{pi}+mathbb{E}[p_i]^2])\
&=sum_i^{M_{runs}}mu_N(mu_p-[sigma^2_{p}+mu_p^2])\
&= Mmu_N(mu_p- mu_p^2-sigma^2_{p})\
&=Mmu_N(mu_p(1-mu_p)-sigma^2_p)
end{align}



I'm not really satisfied with this answer, to be honest. One thing that bothers me in particular is that the variance of my set of measurements decreases as the variance of the `success probabilities' increases. This can't be right!






share|cite|improve this answer























  • How can $p_i$ be a normal random variable?
    – mm8511
    Jan 4 at 18:51










  • @mm8511 this describes a physical experiment. The $i$th run of the experiment follows a binomial distribution with probability of success $p_i$ and number of trials $N_i$. Each run of the experiment has a different probability of success $p$ and number of trials $N$. The success probability $p$ and number of trials $N$ are both normally distributed.
    – SabrinaChoice
    Jan 4 at 18:54










  • How are you using a normal distribution to sample a number between 0 and 1? I'd imagine the transformation you're applying will have some affect on your calculations.
    – mm8511
    Jan 4 at 18:57










  • @mm8511 $p$ has some mean less than 1 (say, o.7) and a variance which gives it a spread less than 1. I'm not sure how to answer this question, I'm sorry.
    – SabrinaChoice
    Jan 4 at 19:03










  • Okay. I think, in general, it might be a good idea to choose a distribution for $p$ that is bounded (although it may be rare, there is a non-zero probability that the sampled $p$ is negative, or arbitrarily large). You could, for example, assume that $p_{i} sim Bin(N,p)/N$ to get such a distribution. Assuming $p$ is normal is probably fine if you're just looking for an approximation though.
    – mm8511
    Jan 4 at 19:08


























edited Jan 4 at 23:15

























answered Jan 4 at 18:49









SabrinaChoice

246









































































