Quantifying dependence of Cauchy random variables
Given two Cauchy random variables $\theta_1 \sim \mathrm{Cauchy}(x_0^{(1)}, \gamma^{(1)})$ and $\theta_2 \sim \mathrm{Cauchy}(x_0^{(2)}, \gamma^{(2)})$ that are not independent. The dependence structure of random variables can often be quantified with their covariance or correlation coefficient. However, Cauchy random variables have no moments, so their covariance and correlation do not exist.
Are there other ways of representing the dependence of the random variables? Is it possible to estimate those with Monte Carlo?
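For concreteness, here is a small NumPy sketch (my own illustration, not part of the original question) of why the naive Monte Carlo estimate fails: sample moments of Cauchy draws do not settle down as $n$ grows.

```python
import numpy as np

# Cauchy variables have no mean or variance, so sample moments are
# meaningless: they keep jumping around no matter how large n gets.
rng = np.random.default_rng(0)
for n in (10**3, 10**5, 10**7):
    x = rng.standard_cauchy(n)
    print(n, x.mean(), x.var())
```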
covariance independence copula heavy-tailed
May consider general dependence metrics such as mutual information: en.wikipedia.org/wiki/Mutual_information
– John Madden
yesterday
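Mutual information can indeed be estimated from samples. A rough histogram-based sketch (my own, not from the comment) follows; it rank-transforms the data first, which tames the Cauchy tails without changing the answer, since mutual information is invariant under monotone maps.

```python
import numpy as np

def mutual_information(x, y, bins=20):
    """Histogram estimate of mutual information in nats."""
    # Rank-transform to Uniform-like margins before binning.
    u = np.argsort(np.argsort(x)) / len(x)
    v = np.argsort(np.argsort(y)) / len(y)
    pxy, _, _ = np.histogram2d(u, v, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
n = 100_000
x = rng.standard_cauchy(n)
y = x + 0.5 * rng.standard_cauchy(n)   # dependent, still Cauchy marginal
print(mutual_information(x, y))        # positive for a dependent pair
print(mutual_information(x, rng.standard_cauchy(n)))  # near zero (bias only)
```

The histogram estimator is biased upward for independent data, so "near zero" means on the order of $\text{bins}^2/(2n)$, not exactly zero.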
asked yesterday – Jonas
2 Answers
Just because they don't have a covariance doesn't mean that the basic $x^T\Sigma^{-1} x$ structure usually associated with covariances can't be used. In fact, the multivariate ($k$-dimensional) Cauchy can be written as:
$$f(\mathbf{x}; \boldsymbol{\mu},\boldsymbol{\Sigma}, k)= \frac{\Gamma\left(\frac{1+k}{2}\right)}{\Gamma(\frac{1}{2})\pi^{\frac{k}{2}}\left|\boldsymbol{\Sigma}\right|^{\frac{1}{2}}\left[1+(\mathbf{x}-\boldsymbol{\mu})^T\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})\right]^{\frac{1+k}{2}}}$$
which I have lifted from the Wikipedia page. This is just a multivariate Student-$t$ distribution with one degree of freedom.
For the purposes of developing intuition, I would just use the normalized off-diagonal elements of $\Sigma$ as if they were correlations, even though they are not. They reflect the strength of the linear relationship between the variables in a way very similar to that of a correlation; $\Sigma$ has to be positive definite symmetric; if $\Sigma$ is diagonal, the variates are independent, etc.
Maximum likelihood estimation of the parameters can be done using the E-M algorithm, which in this case is easily implemented. The log of the likelihood function is (up to a constant):
$$\mathcal{L}(\mu, \Sigma) = -\frac{n}{2}\log|\Sigma| - \frac{k+1}{2}\sum_{i=1}^n\log(1+s_i)$$
where $s_i = (x_i-\mu)^T\Sigma^{-1}(x_i-\mu)$. Differentiating leads to the following simple expressions:
$$\mu = \sum w_ix_i\Big/\sum w_i$$
$$\Sigma = \frac{1}{n}\sum w_i(x_i-\mu)(x_i-\mu)^T$$
$$w_i = \frac{1+k}{1+s_i}$$
The E-M algorithm just iterates over these three expressions, substituting the most recent estimates of all the parameters at each step.
For more on this, see *Estimation Methods for the Multivariate t Distribution*, Nadarajah and Kotz, 2008.
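A minimal NumPy sketch of this E-M iteration (my own implementation, assuming the data sit in an $n \times k$ array; the demo generates an elliptical Cauchy sample as a correlated Gaussian divided by an independent standard Normal):

```python
import numpy as np

def fit_mv_cauchy(X, iters=200):
    """EM for the multivariate Cauchy (multivariate t with 1 df):
    iterate the three fixed-point equations from the answer."""
    n, k = X.shape
    mu = np.median(X, axis=0)        # robust starting point
    Sigma = np.eye(k)
    for _ in range(iters):
        D = X - mu
        s = np.einsum('ij,jk,ik->i', D, np.linalg.inv(Sigma), D)
        w = (1 + k) / (1 + s)        # E-step weights
        mu = (w[:, None] * X).sum(0) / w.sum()
        D = X - mu
        Sigma = (w[:, None, None] * (D[:, :, None] * D[:, None, :])).mean(0)
    return mu, Sigma

# Demo: elliptical Cauchy with Sigma = A, mu = 0
rng = np.random.default_rng(2)
A = np.array([[1.0, 0.8], [0.8, 1.0]])
X = rng.multivariate_normal(np.zeros(2), A, size=5000) / rng.standard_normal((5000, 1))
mu_hat, Sigma_hat = fit_mv_cauchy(X)
rho = Sigma_hat[0, 1] / np.sqrt(Sigma_hat[0, 0] * Sigma_hat[1, 1])
print(mu_hat, rho)   # pseudo-correlation, near the true 0.8
```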
That is a very good plan and a very detailed answer. One more question may be: Is it possible to write any joint Cauchy distribution like you did? For Gaussians, a similar answer is yes. But also for Gaussians correlation and dependence are equivalent. Is that also the case for Cauchy?
– Jonas
9 hours ago
Yes, this is the standard way of writing a multivariate Cauchy density. For the MV Cauchy, pseudo-correlation and dependence are also equivalent; all your intuitions carry over. $\sigma_{ij} = \sigma_i\sigma_j$ implies $x_i$ always $= x_j$, etc.
– jbowman
2 hours ago
While $\text{cov}(X,Y)$ does not exist for a pair of variates with Cauchy marginals, $\text{cov}(\Phi(X),\Phi(Y))$ does exist for, e.g., bounded functions $\Phi(\cdot)$. Actually, the notion of covariance matrix is not well-suited to describe joint distributions in every setting, as it is not invariant under transformations.
Borrowing from the concept of copulas (which may also help in defining a joint distribution¹ for $(X,Y)$), one can turn $X$ and $Y$ into Uniform$(0,1)$ variates by using their marginal cdfs, $\Phi_X(X)\sim\mathcal{U}(0,1)$ and $\Phi_Y(Y)\sim\mathcal{U}(0,1)$, and look at the covariance or correlation of the resulting variates.
¹For instance, when $X$ and $Y$ are both standard Cauchys, $$Z_X=\Phi^{-1}\left(\tfrac{1}{2}+\arctan(X)/\pi\right)$$ is distributed as a standard Normal, and the joint distribution of $(Z_X,Z_Y)$ can be chosen to be a joint Normal:
$$(Z_X,Z_Y) \sim \mathcal{N}_2(0_2,\Sigma)$$
This is a Gaussian copula.
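A short NumPy/SciPy sketch of this construction (my own illustration): simulate the Gaussian copula, push the uniforms through the Cauchy quantile function to get Cauchy marginals, then measure dependence on the uniformized scale, where correlation is well-defined.

```python
import numpy as np
from scipy.special import ndtr   # standard Normal cdf

rng = np.random.default_rng(3)
rho = 0.7
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
Z = L @ rng.standard_normal((2, 100_000))   # correlated Normals
U = ndtr(Z)                                 # Uniform(0,1) marginals
X, Y = np.tan(np.pi * (U - 0.5))            # standard Cauchy marginals

# Dependence of (X, Y): correlate the uniformized variates, which
# exists even though corrcoef(X, Y) is meaningless.
u = 0.5 + np.arctan(X) / np.pi              # cdf of a standard Cauchy
v = 0.5 + np.arctan(Y) / np.pi
print(np.corrcoef(u, v)[0, 1])
```

For a Gaussian copula, the correlation of the uniformized variates (the grade, or Spearman, correlation) equals $(6/\pi)\arcsin(\rho/2)$, about $0.68$ here.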
Thank you for your answer. I am not entirely sure, though, whether this is the right way to go. Values sampled from the Cauchy distribution will potentially be very large. When transforming them like this to a Gaussian, we probably end up putting all values in a very small set at the tail of the Gaussian, in which case we can still estimate a covariance, but I guess the correlation would be close to 1.
– Jonas
8 hours ago
My point is that correlation is a linear measure of dependence that depends on the parametrisation of the distribution. And once the two Cauchy variates are turned into Gaussians, their correlation can be anything between -1 and 1. Check the copula keyword on Wikipedia.
– Xi'an
8 hours ago
answered yesterday (edited yesterday) – jbowman
answered yesterday (edited yesterday) – Xi'an