typos and clarifications
rmsrosa committed Jul 3, 2023
1 parent 7f96fed commit 92da164
Showing 4 changed files with 23 additions and 23 deletions.
Binary file modified latex/rode_conv_em.pdf
24 changes: 12 additions & 12 deletions latex/rode_conv_em.tex
@@ -907,22 +907,22 @@ \section{Numerical examples}
\begin{equation}
\epsilon^N = \max_{j=0, \ldots, N} \frac{1}{M}\sum_{m=1}^M \left|X_{t_j}(\omega_m) - X_{t_j}^N(\omega_m)\right|.
\end{equation}
Then we fit the errors $\epsilon^N$ to the power law $C\Delta_N^p$, in order to find $p$, along with the 95\% confidence interval.
Then we fit the errors $\epsilon^N$ to the power law $C\Delta t_N^p$, in order to find $p$, along with the 95\% confidence interval.
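
As an illustration of this fitting step, the following minimal Python sketch performs the least-squares fit in log scale and returns the exponent $p$ together with the half-width of its 95\% confidence interval. It is not the code used for the experiments; the function name fit_power_law, the argument names dts and errors, and the use of SciPy for the Student-$t$ quantile are illustrative choices.

```python
import numpy as np
from scipy import stats

def fit_power_law(dts, errors, confidence=0.95):
    """Fit errors ~ C * dt**p by linear least squares in log scale and return
    (C, p, halfwidth), where halfwidth is the half-width of the confidence
    interval for the exponent p."""
    x = np.log(np.asarray(dts, dtype=float))
    y = np.log(np.asarray(errors, dtype=float))
    A = np.column_stack([np.ones_like(x), x])   # Vandermonde matrix [1, ln(dt)]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    lnC, p = coef
    dof = len(x) - 2                            # degrees of freedom of the residuals
    resid = y - A @ coef
    cov = (resid @ resid / dof) * np.linalg.inv(A.T @ A)
    tq = stats.t.ppf(0.5 + confidence / 2, dof)
    return np.exp(lnC), p, tq * np.sqrt(cov[1, 1])
```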

Here are the main parameters for the error estimate:
\begin{enumerate}
\item $M\in\mathbb{N}$ is the number of samples for the Monte Carlo estimate of the strong error.
\item The time interval $[0, T]$ for the initial-value problem.
\item The initial condition $X_0$.
\item A series of time steps $\Delta_i = T/(N_i-1)$, usually with $N_i=2^{n_i}$.
\item A number $N_{\mathrm{tgt}}$ of mesh points for a finer discretization to compute a target solution path, typically $N_{\mathrm{tgt}} = \max_i\{N_i^2\}$, unless an exact pathwise solution is available, in which case a coarser mesh of the order of $\max_i\{N_i\}$ can be used.
\item The number $M\in\mathbb{N}$ of samples for the Monte Carlo estimate of the strong error.
\item The time interval $[0, T], $ $T > 0,$ for the initial-value problem.
\item The distribution law for the random initial condition $X_0$.
\item A series of time steps $\Delta t_{N_i} = T/N_i$, with $N_i=2^{n_i},$ for some $n_i\in\mathbb{N},$ so that finer meshes are refinements of coarser meshes.
\item A number $N_{\mathrm{tgt}}$ for a finer resolution to compute a target solution path, typically $N_{\mathrm{tgt}} = \max_i\{N_i^2\}$, unless an exact pathwise solution is available, in which case a coarser mesh of the order of $\max_i\{N_i\}$ can be used.
\end{enumerate}
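
A minimal sketch of this setup follows; the values of T, M and the exponents n_i below are placeholders, not the values used in the experiments.

```python
T = 1.0                         # length of the time interval [0, T] (placeholder)
M = 100                         # number of Monte Carlo samples (placeholder)
ns = [4, 5, 6, 7, 8]            # exponents n_i (placeholder)
Ns = [2**n for n in ns]         # time resolutions N_i = 2^{n_i}
dts = [T / N for N in Ns]       # time steps dt_{N_i} = T / N_i
Ntgt = max(Ns) ** 2             # target resolution N_tgt = max_i N_i^2
```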

And here is the method:
\begin{enumerate}
\item For each sample $m=1, \ldots, M$, we first generate a discretization $\{Y_{t_j}\}_{j=0, N_{\mathrm{tgt}}}$ of a sample path of the noise on the finest grid $\{t_j^{N_{\mathrm{tgt}}}\}$, with $N_{\mathrm{tgt}}$ points, using either an exact distribution for the noise or an approximation in a much finer mesh.
\item Next, we use the values of the noise at the target time mesh to generate the target solution $\{X_{t_j}\}_{j=0, N_{\mathrm{tgt}}}$, still on the fine mesh. This is constructed either using the Euler approximation itself, keeping in mind that the mesh is sufficiently fine, or an exact distributions of the solution, when available.
\item Then, for each time step $N_i$ in the selected range, we compute the Euler approximation using the computed noise values at the corresponding coarser mesh.
\item For each sample $\omega_m,$ $m=1, \ldots, M$, we first generate a discretization of a sample path of the noise, $\{Y_{t_j}\}_{j=0, \ldots, N_{\mathrm{tgt}}},$ on the finest grid $t_j = j \Delta t_{N_{\textrm{tgt}}}$, $j = 0, \ldots, N_{\mathrm{tgt}},$ using an exact distribution for the noise.
\item Next, we use the values of the noise at the target time mesh to generate the target solution $\{X_{t_j}\}_{j=0, \ldots, N_{\mathrm{tgt}}}$, still on the fine mesh. This is constructed either using the Euler approximation itself, keeping in mind that the mesh is sufficiently fine, or an exact distribution of the solution, when available.
\item Then, for each time resolution $N_i,$ we compute the Euler approximation using the computed noise values at the corresponding coarser mesh $t_j = j\Delta t_{N_i},$ $j=0, \ldots, N_i.$
\item We then compare each approximation $\{X_{t_j}^{N_i}\}_{j=0, \ldots, N_i}$ to the values of the target path on that coarse mesh and update the strong error
\[
\epsilon_{t_j}^{N_i} = \frac{1}{M}\sum_{m=1}^M \left|X_{t_j}(\omega_m) - X_{t_j}^{N_i}(\omega_m)\right|,
@@ -932,10 +932,10 @@ \section{Numerical examples}
\[
\epsilon^{N_i} = \max_{j=0, \ldots, N_i} \epsilon_{t_j}^{N_i}.
\]
\item We fit $(\Delta_i, \epsilon^{N_i})$ to the power law $C\Delta_i^p$, via linear least-square regression in log scale, so that $\ln \epsilon^{N_i} \sim \ln C + p \ln \Delta_i$, for suitable $C$ and $p$, with $p$ giving the order of convergence. This amounts to solving the normal equation $(A^tA)\mathbf{v} = A^t\ln(\boldsymbol{\epsilon})$, where $\mathbf{v}$ is the vector $\mathbf{v} = (\ln(C), p)$, $A$ is the Vandermonde matrix associated with the logarithm of the mesh steps $(\Delta_{N_i})_i$, and $\ln(\boldsymbol{\epsilon})$ is the vector $\ln(\boldsymbol{\epsilon}) = (\ln(\epsilon^{N_i}))_i$.
\item We fit $(\Delta t_{N_i}, \epsilon^{N_i})$ to the power law $C\Delta t_{N_i}^p$, via linear least-square regression in log scale, so that $\ln \epsilon^{N_i} \sim \ln C + p \ln \Delta t_{N_i}$, for suitable $C$ and $p$, with $p$ giving the order of convergence. This amounts to solving the normal equation $(A^tA)\mathbf{v} = A^t\ln(\boldsymbol{\epsilon})$, where $\mathbf{v}$ is the vector $\mathbf{v} = (\ln(C), p)$, $A$ is the Vandermonde matrix associated with the logarithm of the mesh steps $(\Delta t_{N_i})_i$, and $\ln(\boldsymbol{\epsilon})$ is the vector $\ln(\boldsymbol{\epsilon}) = (\ln(\epsilon^{N_i}))_i$.
\item We also compute the standard error of the Monte-Carlo sample at each time step,
\[
s_{t_j}^{N_i} = \frac{\sigma_{t_j}^{N_i}}{\sqrt{N_i}},
s_{t_j}^{N_i} = \frac{\sigma_{t_j}^{N_i}}{\sqrt{M}},
\]
where $\sigma_{t_j}^{N_i}$ is the sample standard deviation given by
\[
@@ -1683,7 +1683,7 @@ \subsection{A random Fisher-KPP nonlinear PDE driven by boundary noise}

The equation involves a nonlinear term which is not globally Lipschitz continuous, but, similarly to the population dynamics model considered in \cref{secpopdyn}, the region $0 \leq u(t, x) \leq u_m$ is invariant, so that the nonlinear term can be modified outside this region in order to satisfy the required uniform global estimates without affecting the dynamics within this region. The initial condition is chosen to be within this region almost surely. The procedure is the same as that done in \cref{secpopdyn}, so the details are omitted.
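
A sketch of such a modification is given below, assuming the logistic reaction term $\lambda u (1 - u/u_m)$ typical of Fisher-KPP models; the actual form of the nonlinearity and the parameter names in the paper may differ.

```python
def reaction_clipped(u, lam=1.0, u_max=1.0):
    """Logistic reaction term lam * u * (1 - u / u_max), frozen outside the
    invariant region [0, u_max].  The clamped version is bounded and globally
    Lipschitz, while agreeing with the original term for 0 <= u <= u_max."""
    v = min(max(u, 0.0), u_max)
    return lam * v * (1.0 - v / u_max)
```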

For the time-mesh parameters, we set $N_{\textrm{tgt}} = 2^{18}$ and $N_i = 2^5, 2^7, 2^9$. The spatial discretization is done with finite differences with the number of mesh points depending on the time mesh, for stability and convergence reasons. Indeed, the Von Neumann stability analysis requires that $2\mu\Delta t / \Delta_x^2 \leq 1.$ With that in mind, for each $N_i = 2^5, 2^7, 2^9$, we take $2^3 + 1 = 9, 2^4 + 1 = 17,$ and $2^5 + 1 = 33$ spatial points, respectively, while for the target solution, we use $2^9 + 1 = 513$ spatial points.
For the time-mesh parameters, we set $N_{\textrm{tgt}} = 2^{18}$ and $N_i = 2^5, 2^7, 2^9.$ The spatial discretization is done with finite differences, with the number of spatial points depending on the time mesh, for stability and convergence reasons. Indeed, the von Neumann stability analysis requires that $2\mu\Delta t / \Delta_x^2 \leq 1.$ With that in mind, for each $N_i = 2^5, 2^7, 2^9$, we take the $K_i + 1$ spatial points $0 = x_0 < \ldots < x_{K_i},$ with $K_i = 2^3,$ $2^4,$ and $2^5,$ respectively, while for the target solution, we use $K_{\textrm{tgt}} = 2^9.$
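
The largest admissible number of spatial subintervals can be read off directly from this stability condition; the helper below is a sketch, with the diffusion coefficient, the time step and the (assumed) domain length passed as parameters.

```python
import math

def max_spatial_subintervals(mu, dt, length=1.0):
    """Largest number K of spatial subintervals (dx = length / K) for which the
    explicit scheme satisfies the stability condition 2 * mu * dt / dx**2 <= 1."""
    return math.floor(length / math.sqrt(2.0 * mu * dt))

# With an illustrative mu = 0.1 and dt = 1 / 2**9, this returns 50,
# so taking K = 2**5 = 32 subintervals satisfies the condition.
```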

For the Monte-Carlo estimate of the strong error, we choose $M = 40.$ Table \ref{tablefisherkpp} shows the estimated strong error obtained with this setup, while \cref{figfisherkpp} illustrates the order of convergence.
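
Putting the steps of the general procedure together, the following sketch outlines the whole pipeline for a scalar equation. The test equation $\mathrm{d}X_t/\mathrm{d}t = -X_t + \sin(Y_t)$, the uniform initial condition, the Wiener process used as a stand-in for the exact noise distribution, and all function names are illustrative assumptions. The standard-error bookkeeping described above is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(12345)

def sample_noise_path(Ntgt, T):
    """Sample a noise path on the fine mesh t_j = j * T / Ntgt.  A standard
    Wiener process (exact Gaussian increments) is used here as a placeholder
    for whatever noise drives the equation."""
    dW = rng.normal(0.0, np.sqrt(T / Ntgt), Ntgt)
    return np.concatenate(([0.0], np.cumsum(dW)))

def euler(f, x0, Y, N, T, stride):
    """Euler approximation on the mesh t_j = j * T / N, reading the noise from
    the fine-mesh path Y with the given stride (stride = Ntgt // N)."""
    dt = T / N
    X = np.empty(N + 1)
    X[0] = x0
    for j in range(N):
        X[j + 1] = X[j] + dt * f(j * dt, X[j], Y[j * stride])
    return X

def strong_errors(f, sample_x0, T, Ns, Ntgt, M):
    """Monte Carlo estimate eps^{N_i} of the strong error at each resolution N_i,
    using a fine-mesh Euler solution at resolution Ntgt as the target path."""
    acc = {N: np.zeros(N + 1) for N in Ns}
    for _ in range(M):
        x0 = sample_x0()
        Y = sample_noise_path(Ntgt, T)
        Xtgt = euler(f, x0, Y, Ntgt, T, 1)              # target path on the fine mesh
        for N in Ns:
            stride = Ntgt // N
            XN = euler(f, x0, Y, N, T, stride)
            acc[N] += np.abs(Xtgt[::stride] - XN)       # accumulate |X_{t_j} - X^N_{t_j}|
    return np.array([acc[N].max() / M for N in Ns])     # eps^N = max_j (1/M) sum_m |...|

# Illustrative test problem: dX/dt = -X + sin(Y_t), X_0 ~ Uniform(0, 1).
f = lambda t, x, y: -x + np.sin(y)
Ns = [2**n for n in (4, 5, 6, 7, 8)]
Ntgt = max(Ns) ** 2
eps = strong_errors(f, rng.uniform, T=1.0, Ns=Ns, Ntgt=Ntgt, M=100)
# The pairs (T / N_i, eps^{N_i}) can then be fitted to C * dt**p as in the
# fitting sketch given earlier in the section.
```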

Binary file modified latex/rode_conv_em_short.pdf
22 changes: 11 additions & 11 deletions latex/rode_conv_em_short.tex
@@ -907,22 +907,22 @@ \section{Numerical examples}
\begin{equation}
\epsilon^N = \max_{j=0, \ldots, N} \frac{1}{M}\sum_{m=1}^M \left|X_{t_j}(\omega_m) - X_{t_j}^N(\omega_m)\right|.
\end{equation}
Then we fit the errors $\epsilon^N$ to the power law $C\Delta_N^p$, in order to find $p$, along with the 95\% confidence interval.
Then we fit the errors $\epsilon^N$ to the power law $C\Delta t_N^p$, in order to find $p$, along with the 95\% confidence interval.

Here are the main parameters for the error estimate:
\begin{enumerate}
\item $M\in\mathbb{N}$ is the number of samples for the Monte Carlo estimate of the strong error.
\item The time interval $[0, T]$ for the initial-value problem.
\item The initial condition $X_0$.
\item A series of time steps $\Delta_i = T/(N_i-1)$, usually with $N_i=2^{n_i}$.
\item A number $N_{\mathrm{tgt}}$ of mesh points for a finer discretization to compute a target solution path, typically $N_{\mathrm{tgt}} = \max_i\{N_i^2\}$, unless an exact pathwise solution is available, in which case a coarser mesh of the order of $\max_i\{N_i\}$ can be used.
\item The number $M\in\mathbb{N}$ of samples for the Monte Carlo estimate of the strong error.
\item The time interval $[0, T], $ $T > 0,$ for the initial-value problem.
\item The distribution law for the random initial condition $X_0$.
\item A series of time steps $\Delta t_{N_i} = T/N_i$, with $N_i=2^{n_i},$ for some $n_i\in\mathbb{N},$ so that finer meshes are refinements of coarser meshes.
\item A number $N_{\mathrm{tgt}}$ for a finer resolution to compute a target solution path, typically $N_{\mathrm{tgt}} = \max_i\{N_i^2\}$, unless an exact pathwise solution is available, in which case a coarser mesh of the order of $\max_i\{N_i\}$ can be used.
\end{enumerate}

And here is the method:
\begin{enumerate}
\item For each sample $m=1, \ldots, M$, we first generate a discretization $\{Y_{t_j}\}_{j=0, N_{\mathrm{tgt}}}$ of a sample path of the noise on the finest grid $\{t_j^{N_{\mathrm{tgt}}}\}$, with $N_{\mathrm{tgt}}$ points, using either an exact distribution for the noise or an approximation in a much finer mesh.
\item Next, we use the values of the noise at the target time mesh to generate the target solution $\{X_{t_j}\}_{j=0, N_{\mathrm{tgt}}}$, still on the fine mesh. This is constructed either using the Euler approximation itself, keeping in mind that the mesh is sufficiently fine, or an exact distributions of the solution, when available.
\item Then, for each time step $N_i$ in the selected range, we compute the Euler approximation using the computed noise values at the corresponding coarser mesh.
\item For each sample $\omega_m,$ $m=1, \ldots, M$, we first generate a discretization of a sample path of the noise, $\{Y_{t_j}\}_{j=0, \ldots, N_{\mathrm{tgt}}},$ on the finest grid $t_j = j \Delta t_{N_{\textrm{tgt}}}$, $j = 0, \ldots, N_{\mathrm{tgt}},$ using an exact distribution for the noise.
\item Next, we use the values of the noise at the target time mesh to generate the target solution $\{X_{t_j}\}_{j=0, \ldots, N_{\mathrm{tgt}}}$, still on the fine mesh. This is constructed either using the Euler approximation itself, keeping in mind that the mesh is sufficiently fine, or an exact distribution of the solution, when available.
\item Then, for each time resolution $N_i,$ we compute the Euler approximation using the computed noise values at the corresponding coarser mesh $t_j = j\Delta t_{N_i},$ $j=0, \ldots, N_i.$
\item We then compare each approximation $\{X_{t_j}^{N_i}\}_{j=0, \ldots, N_i}$ to the values of the target path on that coarse mesh and update the strong error
\[
\epsilon_{t_j}^{N_i} = \frac{1}{M}\sum_{m=1}^M \left|X_{t_j}(\omega_m) - X_{t_j}^{N_i}(\omega_m)\right|,
@@ -932,10 +932,10 @@ \section{Numerical examples}
\[
\epsilon^{N_i} = \max_{j=0, \ldots, N_i} \epsilon_{t_j}^{N_i}.
\]
\item We fit $(\Delta_i, \epsilon^{N_i})$ to the power law $C\Delta_i^p$, via linear least-square regression in log scale, so that $\ln \epsilon^{N_i} \sim \ln C + p \ln \Delta_i$, for suitable $C$ and $p$, with $p$ giving the order of convergence. This amounts to solving the normal equation $(A^tA)\mathbf{v} = A^t\ln(\boldsymbol{\epsilon})$, where $\mathbf{v}$ is the vector $\mathbf{v} = (\ln(C), p)$, $A$ is the Vandermonde matrix associated with the logarithm of the mesh steps $(\Delta_{N_i})_i$, and $\ln(\boldsymbol{\epsilon})$ is the vector $\ln(\boldsymbol{\epsilon}) = (\ln(\epsilon^{N_i}))_i$.
\item We fit $(\Delta t_{N_i}, \epsilon^{N_i})$ to the power law $C\Delta t_{N_i}^p$, via linear least-square regression in log scale, so that $\ln \epsilon^{N_i} \sim \ln C + p \ln \Delta t_{N_i}$, for suitable $C$ and $p$, with $p$ giving the order of convergence. This amounts to solving the normal equation $(A^tA)\mathbf{v} = A^t\ln(\boldsymbol{\epsilon})$, where $\mathbf{v}$ is the vector $\mathbf{v} = (\ln(C), p)$, $A$ is the Vandermonde matrix associated with the logarithm of the mesh steps $(\Delta t_{N_i})_i$, and $\ln(\boldsymbol{\epsilon})$ is the vector $\ln(\boldsymbol{\epsilon}) = (\ln(\epsilon^{N_i}))_i$.
\item We also compute the standard error of the Monte-Carlo sample at each time step,
\[
s_{t_j}^{N_i} = \frac{\sigma_{t_j}^{N_i}}{\sqrt{N_i}},
s_{t_j}^{N_i} = \frac{\sigma_{t_j}^{N_i}}{\sqrt{M}},
\]
where $\sigma_{t_j}^{N_i}$ is the sample standard deviation given by
\[
