Finite difference - Explicit / Implicit / Crank-Nicolson - Does the implicit method require the least memory?
Examine a dynamic 2D heat equation $\dot{u} = \Delta u$ with zero boundary temperature. A standard finite difference approach is used on a rectangle using an $n\times n$ grid. For the resulting linear systems a banded Cholesky solver is used. Compare the three methods explicit, implicit and Crank-Nicolson for the time stepping.
- Explicit method
$$
\begin{gathered}
\frac{u_{i+1,j} - u_{i,j}}{\Delta t} = \kappa \frac{u_{i,j-1} - 2u_{i,j} + u_{i,j+1}}{(\Delta x)^2} \\
u_{i+1,j} = u_{i,j} + \frac{\kappa \Delta t}{(\Delta x)^2}(u_{i,j-1} - 2u_{i,j} + u_{i,j+1})\\
\vec{u}_{i+1} = \vec{u}_i - \kappa \Delta t\, \mathbf{A}_n \cdot \vec{u}_{i}
\end{gathered}
$$
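To make the memory footprint concrete, here is a minimal sketch of the explicit update (the grid size, diffusivity, and step count are my assumptions, not from the question). Note that only the current iterate is stored and $\mathbf{A}_n$ is never formed explicitly:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the question)
n, kappa = 20, 1.0
dx = 1.0 / (n + 1)
dt = 0.2 * dx**2 / kappa          # within the stability limit dt <= dx^2 / (4*kappa)

u = np.zeros((n + 2, n + 2))      # pad with the zero Dirichlet boundary
u[1:-1, 1:-1] = np.random.rand(n, n)

for _ in range(100):
    # 5-point Laplacian applied stencil-wise, no matrix stored
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4.0 * u[1:-1, 1:-1]) / dx**2
    u[1:-1, 1:-1] += kappa * dt * lap   # u_{i+1} = u_i + kappa*dt * Lap(u_i)
```

So the explicit method needs only the $O(n^2)$ array `u` plus one temporary of the same size.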
- Implicit method
$$
\begin{gathered}
\frac{u_{i+1,j} - u_{i,j}}{\Delta t} = \kappa \frac{u_{i+1,j-1} - 2u_{i+1,j} + u_{i+1,j+1}}{(\Delta x)^2} \\
\vec{u}_{i+1} = \vec{u}_i - \kappa \Delta t\, \mathbf{A}_n \cdot \vec{u}_{i+1}\\
\vec{u}_i = (\mathbb{I}_n + \kappa \Delta t\, \mathbf{A}_n)^{-i}\cdot \vec{u}_0
\end{gathered}
$$
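A hedged sketch of the backward Euler stepping (the Kronecker-product construction of $\mathbf{A}_n$ and all parameters are my assumptions): the system $(\mathbb{I} + \kappa\Delta t\,\mathbf{A}_n)\vec{u}_{i+1} = \vec{u}_i$ is solved each step. A dense solve stands in for the question's banded Cholesky solver, for clarity only:

```python
import numpy as np

# Assumed parameters, for illustration only
n, kappa, dt = 10, 1.0, 0.01
dx = 1.0 / (n + 1)

# A_n = discrete -Laplacian on the flattened n x n grid, via Kronecker products
T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2   # 1D stencil
A = np.kron(np.eye(n), T) + np.kron(T, np.eye(n))                  # n^2 x n^2, SPD

M = np.eye(n * n) + kappa * dt * A     # (I + kappa*dt*A_n); factor once in practice
u = np.random.rand(n * n)
for _ in range(50):
    u = np.linalg.solve(M, u)          # unconditionally stable for any dt > 0
```

In practice `M` would be factored once (e.g. by the banded Cholesky solver) and the factor reused at every step.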
- Crank-Nicolson
$$
\begin{gathered}
\frac{u_{i+1,j}-u_{i,j}}{\Delta t} = \kappa\left(\frac{u_{i,j-1} - 2u_{i,j} + u_{i,j+1}}{2(\Delta x)^2} + \frac{u_{i+1,j-1} - 2u_{i+1,j} + u_{i+1,j+1}}{2(\Delta x)^2}\right)\\
\vec{u}_{i+1} - \vec{u}_i = -\frac{\kappa \Delta t}{2}(\mathbf{A}_n \cdot \vec{u}_{i+1} + \mathbf{A}_n \cdot \vec{u}_i)
\end{gathered}
$$
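The Crank-Nicolson step can be sketched the same way (same assumed matrix construction and parameters as above): both $\mathbb{I} + \frac{\kappa\Delta t}{2}\mathbf{A}_n$ and $\mathbb{I} - \frac{\kappa\Delta t}{2}\mathbf{A}_n$ appear, but they share the sparsity pattern of $\mathbf{A}_n$, so the storage is the same order as for the implicit method:

```python
import numpy as np

# Assumed parameters, for illustration only
n, kappa, dt = 10, 1.0, 0.01
dx = 1.0 / (n + 1)
T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2
A = np.kron(np.eye(n), T) + np.kron(T, np.eye(n))      # discrete -Laplacian, SPD

I = np.eye(n * n)
lhs = I + 0.5 * kappa * dt * A     # applied to u_{i+1}
rhs = I - 0.5 * kappa * dt * A     # applied to u_i
u = np.random.rand(n * n)
for _ in range(50):
    u = np.linalg.solve(lhs, rhs @ u)   # second-order accurate in time
```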
Here is the question:
Does the implicit method require the least memory?
Here is my proposition for an answer:
I would say that the statement is FALSE.
The implicit method requires more computational effort to give an answer, because the matrix $\mathbf{A}_n$ needs to be inverted. However, I would say that this is not the reason why the statement is false: for the implicit method there is $\textbf{no extra/less storage needed}$ (compared to the explicit method, for example) because no extra data is generated by the computation (for example, no other matrix is generated). Is the answer correct, as well as the reasoning behind it?
Following the comments of @zimbra314, I posted the question on Computational Science Beta.
differential-equations finite-groups
The explicit method should also have $u_i$ on the very right-hand side of the bottom equation; then you can solve for $u_{i+1}$ explicitly.
– tch
2 days ago
I think your reasoning is right. All the methods require you to store a current iterate and the matrix. It is possible that solving a linear system will require some additional memory, but that wouldn't mean the implicit method uses less. Also, everything you do in the implicit method you need to do in the Crank-Nicolson method anyway, so there is no reason the implicit method would use less memory than Crank-Nicolson.
– tch
2 days ago
Thanks for the comments @tch. I corrected the question according to your first comment. Regarding the second one, that's indeed the point which was not clear to me, but it seems we have the same intuition.
– ecjb
yesterday
this question might be better suited for scicomp.stackexchange.com
– zimbra314
yesterday
edited yesterday
asked 2 days ago
ecjb
1 Answer
It depends on how you apply the inversion. If you use a direct method such as LU factorization, the factored matrix will have more non-zero entries due to fill-in. For an $n\times n$ 2D grid, the $A$ matrix will have $\mathcal{O}(n^2)$ entries while the factored matrix will have $\mathcal{O}(n^{2} \log n)$ entries; so it would require more memory than the explicit method.
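A small numerical check of this fill-in claim (the dense arithmetic, grid size, and time step are my assumptions; a real banded solver would store only the band): count the nonzeros of $A$ against those of a Cholesky factor of the implicit matrix $\mathbb{I} + \kappa\Delta t\,A$:

```python
import numpy as np

n = 20                                  # assumed grid size for the demo
dx = 1.0 / (n + 1)
T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2
A = np.kron(np.eye(n), T) + np.kron(T, np.eye(n))   # 5 nonzeros per row: O(n^2) total

L = np.linalg.cholesky(np.eye(n * n) + 0.01 * A)    # factor of the implicit matrix
nnz_A = np.count_nonzero(A)                         # 5*n^2 - 4*n = 1920 here
nnz_L = np.count_nonzero(np.abs(L) > 1e-12)         # fill-in inside the bandwidth-n band
print(nnz_A, nnz_L)                                 # the factor holds many more entries
```

The factor's extra nonzeros are exactly the fill-in the answer describes: $A$ is sparse, but its factor fills the whole band of width $n$.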
Thanks for the helpful comment @zimbra314. Then, just to make the comparison straightforward: what would be the memory cost of the explicit method in your example?
– ecjb
yesterday
$O(n^{2})$ on an $n\times n$ 2D grid
– zimbra314
yesterday
answered yesterday
zimbra314