diff --git a/docs/Project.toml b/docs/Project.toml index f45f9bb..0273538 100644 --- a/docs/Project.toml +++ b/docs/Project.toml @@ -1,7 +1,5 @@ [deps] BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf" -DiffEqDevTools = "f3b72e0c-5b89-59e1-b016-84e28bfd966d" -DiffEqNoiseProcess = "77a26b50-5914-5dd7-bc55-306e6241c503" Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f" Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4" FFTW = "7a1cc6ca-52ef-59f5-83cd-3a7055c09341" @@ -13,4 +11,3 @@ RODEConvergence = "3e0417cb-ed7b-43e1-a831-672d2fe3e1aa" Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c" Revise = "295af30f-e4ad-537b-8983-00126c2a3abe" Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2" -StochasticDiffEq = "789caeaf-c7a9-5a7d-9973-96adeb23e2a0" diff --git a/docs/make.jl b/docs/make.jl index 7d38bb4..909273f 100644 --- a/docs/make.jl +++ b/docs/make.jl @@ -42,10 +42,6 @@ include(joinpath(@__DIR__(), "literate.jl")) "examples/09-fisherkpp.md", "examples/10-risk.md" ], - "DifferentialEquations.jl" => [ - "Nonhomogenous Wiener noise" => "sciml/wiener_nonhomogeneous.md", - "Homogenous Wiener noise" => "sciml/wiener_homogeneous.md", - ], "Noises" => [ "noises/noiseintro.md", "noises/homlin.md", diff --git a/docs/src/index.md b/docs/src/index.md index f65908f..c1115c6 100644 --- a/docs/src/index.md +++ b/docs/src/index.md @@ -12,4 +12,4 @@ We use a few standard libraries ([Random](https://docs.julialang.org/en/v1/stdli This documentation makes use of [Documenter.jl](https://documenter.juliadocs.org/stable/) and [Literate.jl](https://fredrikekre.github.io/Literate.jl/stable/), with the help of [LiveServer.jl](https://tlienart.github.io/LiveServer.jl/stable/) and [Revise.jl](https://timholy.github.io/Revise.jl/stable/), during development. -Some extra material uses [JuliaCI/BenchmarkToolsjl](https://juliaci.github.io/BenchmarkTools.jl/stable) and the [SciML ecosystem](https://sciml.ai). +Some extra material uses [JuliaCI/BenchmarkToolsjl](https://juliaci.github.io/BenchmarkTools.jl/stable). diff --git a/docs/src/sciml/wiener_homogeneous.md b/docs/src/sciml/wiener_homogeneous.md deleted file mode 100644 index 56dc2b5..0000000 --- a/docs/src/sciml/wiener_homogeneous.md +++ /dev/null @@ -1,259 +0,0 @@ -# Homogenous linear RODE with a Wiener process as coefficient - -```@meta -Draft = false -``` - -Now we consider a homogeneous linear equation in which the coefficient is a Wiener process. - -## The equation - -More precisely, we consider the RODE -```math - \begin{cases} - \displaystyle \frac{\mathrm{d}X_t}{\mathrm{d} t} = W_t X_t, \qquad 0 \leq t \leq T, \\ - \left. X_t \right|_{t = 0} = X_0, - \end{cases} -``` -where $\{W_t\}_{t\geq 0}$ is a Wiener process. The exact solution reads -```math -X_t = e^{\int_0^t W_s \;\mathrm{d}s}X_0. -``` - -## Computing the exact solution - -Similarly to the example of [Nonhomogenous Wiener noise](wiener_nonhomogeneous.md), we cannot compute the integral $\int_0^{t_j} W_s\;\mathrm{d}s$ exactly just from the values $W_{t_j}$ of the noise on the mesh points, but we can compute its expectation. - -As before, on each mesh interval, we consider the Brownian bridge on the interval $[t_i, t_{i+1}]$, -```math -B_t^i = W_t - W_{t_i} - \frac{t - t_i}{t_{i+1}-t_i}(W_{t_{i+1}} - W_{t_i}) -``` -which yields -```math -\mathrm{d}W_t = \mathrm{d}B_t^i + \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\;\mathrm{d}t. 
-``` - -This time, we use that -```math -\begin{align*} -\mathrm{d}(tW_t) & = W_t\;\mathrm{d}t + t\;\mathrm{d}W_t \\ -& = W_t\;\mathrm{d}t + t \left(\mathrm{d}B_t^i + \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\;\mathrm{d}t\right), -\end{align*} -``` -so that -```math -\begin{align*} -\int_{t_i}^{t_{i+1}} W_s\;\mathrm{d}s & = t_{i+1}W_{t_{i+1}} - t_iW_{t_i} - \int_{t_i}^{t_{i+1}} s\;\mathrm{d}B_s^i - \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\int_{t_i}^{t_{i+1}} s\;\mathrm{d}s \\ -& = t_{i+1}W_{t_{i+1}} - t_iW_{t_i} - \int_{t_i}^{t_{i+1}} s\;\mathrm{d}B_s^i - \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\frac{t_{i+1}^2 - t_i^2}{2} \\ -& = \frac{1}{2}(W_{t_{i+1}}+W_{t_i})(t_{i+1}-t_i) + Z_i, -\end{align*} -``` -where -```math -Z_i = - \int_{t_i}^{t_{i+1}} s\;\mathrm{d}B_s^i. -``` -Notice the first term is precisely the trapezoidal rule. Morever, $Z_i$ is a normal variable (being the limit of Riemann sums, which are linear combinations of the normal variables $\Delta B_s^i$) with zero expectation. Its variance can be computed using the Itô isometry, which also applies to Brownian bridges: -```math -\mathbb{E}[Z_i^2] = \int_{t_i}^{t_{i+1}} s^2 \;\mathrm{d}s = \frac{t_{i+1}^3 - t_i^3}{3}. -``` -Thus, -```math -Z_i \sim \mathcal{N}\left(0, \frac{t_{i+1}^3 - t_i^3}{3}\right) = \frac{\sqrt{t_{i+1}^3 - t_i^3}}{\sqrt{3}}\mathcal{N}(0, 1). -``` - -Thus, the exact solution at the mesh points can be computed in the form -```math -X_{t_j} = e^{\int_0^{t_j} W_s \;\mathrm{d}s}X_0, -``` -with -```math -\int_0^{t_j} W_s \;\mathrm{d}s = \sum_{i=0}^{j-1} \left(\frac{1}{2}(W_{t_{i+1}}+W_{t_i})(t_{i+1}-t_i) + Z_i\right), -``` -where -```math -Z_i \sim \frac{\sqrt{t_{i+1}^3 - t_i^3}}{\sqrt{3}}\mathcal{N}(0, 1). -``` - -For a normal variable $N \sim \mathcal{N}(\mu, \sigma)$, the expectation of the random variable $e^N$ is $\mathbb{E}[e^N] = e^{\mu + \sigma^2/2}$. Hence, -```math -\mathbb{E}[e^Z_i] = e^{(t_{i+1}^3 - t_i^3)/6}. -``` - -## Numerical approximation - -As before, we first we load the necessary packages. -```julia homlinrode -using StochasticDiffEq, DiffEqDevTools, Plots, Random -``` - -### Setting up the problem - -Now we set up the RODE problem. We define the right hand side of the equation as -```julia homlinrode -f(u, p, t, W) = W * u -``` - -Next we define the function that yields the (expected) analytic solution for a given computed solution `sol`, that contains the noise `sol.W` and the info about the (still to be defined) RODE problem `prob`. - -In the numerical implementation of the expected exact solution, the summation in the expression for the time integral of the Wiener noise can be computed recursively. Indeed, if -```math -I_j = \sum_{i=0}^{j-1} \left(\frac{1}{2}(W_{t_{i+1}}+W_{t_i})(t_{i+1}-t_i) + Z_i\right), -``` -then $I_0 = 0$ and, for $j = 1, \ldots, n$, -```math -I_j = \frac{1}{2}(W_{t_{j}}+W_{t_{j-1}})(t_{j}-t_{j-1}) + Z_{j-1} + I_{j-1}, -``` -with a new $Z_{j-1}$ drawn from a normal distribution at each iteration. - -Then, we set $X_{t_j}$ to -```math -X_{t_j} = e^{I_j}X_0. -``` - -This is implemented below. 
-```julia homlinrode -function f_analytic!(sol) - empty!(sol.u_analytic) - - u0 = sol.prob.u0 - push!(sol.u_analytic, u0) - - ti1, Wi1 = sol.W.t[1], sol.W.W[1] - integral = 0.0 - for i in 2:length(sol) - ti, Wi = sol.W.t[i], sol.W.W[i] - integral += (Wi + Wi1) * (ti - ti1) / 2 + 0 * (ti - ti1)^3 / 24 + randn() * sqrt((ti - ti1)^3 / 12) - push!(sol.u_analytic, u0 * exp(integral + ti^3 / 6)) - ti1, Wi1 = ti, Wi - end -end -``` - -With the right-hand-side and the analytic solutions defined, we construct the `RODEFunction` to be passed on to the RODE problem builder. - -```julia homlinrode -ff = RODEFunction( - f, - analytic = f_analytic!, - analytic_full=true -) -``` - -Now we set up the RODE problem, with initial condition `X0 = 1.0`, and time span `tspan = (0.0, 1.0)`: - -```julia homlinrode -X0 = 1.0 -tspan = (0.0, 1.0) - -prob = RODEProblem(ff, X0, tspan) -``` - -### An illustrative sample path - -Just for the sake of illustration, we solve for a solution path: -```julia homlinrode -sol = solve(prob, RandomEM(), dt = 1/100, seed = 123) -``` -and display the result -```julia homlinrode -plot( - sol, - title = "Sample path of \$\\mathrm{d}X_t/\\mathrm{d}t = W_t X_t\$", - titlefont = 12, - xaxis = "\$t\$", - yaxis = "\$x\$", - label="\$X_t\$" -) -``` - -### An illustrative ensemble of solutions - -```julia homlinrode -ensprob = EnsembleProblem(prob) -``` - -```julia homlinrode -enssol = solve(ensprob, RandomEM(), dt = 1/100, trajectories=100) -``` - -```julia homlinrode -enssumm = EnsembleSummary(enssol; quantiles=[0.25,0.75]) -plot(enssumm, - title = "Ensemble of solution paths of \$\\mathrm{d}X_t/\\mathrm{d}t = W_t X_t\$\nwith 50% confidence interval", - titlefont = 12, - xaxis = "\$t\$", - yaxis = "\$x\$" -) -``` - -## Order of convergence - -Now, we use the development tools in [SciML/DiffEqDevTools](https://github.com/SciML/DiffEqDevTools.jl) to set up a set of ensemble solves and obtain the order of convergence from them. - -The `WorkPrecisionSet` with `error_estimate=:l∞`, that we use, actually compute the strong norm in the form -```math -\mathbb{E}[\max_{i = 0, \ldots, n} |X_i - X_i^N|]. -``` -instead of the form -```math -\max_{i=0, \ldots, n}\mathbb{E}[|X_i - X_i^N|]. -``` - -For that, we choose a sequence of time steps and relative and absolute tolerances to check how the error decays along the sequence. - -```julia homlinrode -reltols = 1.0 ./ 10.0 .^ (1:5) -abstols = reltols -dts = 1.0./5.0.^((1:length(reltols)) .+ 1) -N = 100 -``` - -With that, we set up and solve the `WorkPrecisionSet`: -```julia homlinrode -setups = [ - Dict(:alg=>RandomEM(), :dts => dts) -] - -wp = WorkPrecisionSet(prob,abstols,reltols,setups;numruns=N,maxiters=1e7,error_estimate=:l∞) -``` - -There is already a plot recipe for the result of a `WorkPrecisionSet` that displays the order of convergence: -```julia homlinrode -plot(wp, view=:dt_convergence,title="Strong convergence with \$\\mathrm{d}X_t/\\mathrm{d}t = W_tX_t\$", titlefont=12, legend=:topleft) -``` - -## Benchmark - -We complement the above convergence order with a benchmark comparing the Euler method with the tamed Euler method and the Heun method. They all seem to achieve strong order 1, but with the Heun method being a bit more efficient. 
- -```julia homlinrode -setups = [ - Dict(:alg=>RandomEM(), :dts => dts) - Dict(:alg=>RandomTamedEM(), :dts => dts) - Dict(:alg=>RandomHeun(), :dts => dts) -] -wp = WorkPrecisionSet(prob,abstols,reltols,setups;numruns=N,maxiters=1e7,error_estimate=:l∞) -plot(wp, title="Benchmark with \$\\mathrm{d}X_t/\\mathrm{d}t = W_tX_t\$", titlefont=12) -``` - -Built-in recipe for order of convergence. -```julia homlinrode -plot(wp, view=:dt_convergence,title="Strong convergence with \$\\mathrm{d}X_t/\\mathrm{d}t = W_tX_t\$", titlefont=12, legend=:topleft) -``` - -## More - -```julia homlinrode -setups = [ - Dict(:alg=>RandomEM(), :dts => dts) - Dict(:alg=>RandomTamedEM(), :dts => dts) - Dict(:alg=>RandomHeun(), :dts => dts) -] -wps = WorkPrecisionSet(EnsembleProblem(prob),abstols,reltols,setups;trajectories=20, numruns=10,maxiters=1e7,error_estimate=:l∞) -plot(wps, title="Benchmark with \$\\mathrm{d}X_t/\\mathrm{d}t = W_tX_t\$", titlefont=12) -``` - -Built-in recipe for order of convergence. -```julia homlinrode -plot(wps, view=:dt_convergence,title="Strong convergence with \$\\mathrm{d}X_t/\\mathrm{d}t = W_tX_t\$", titlefont=12, legend=:topleft) -``` diff --git a/docs/src/sciml/wiener_nonhomogeneous.md b/docs/src/sciml/wiener_nonhomogeneous.md deleted file mode 100644 index ac38b13..0000000 --- a/docs/src/sciml/wiener_nonhomogeneous.md +++ /dev/null @@ -1,268 +0,0 @@ -# Linear RODE with non-homogenous Wiener noise - -```@meta -Draft = false -``` - -We start by considering the Euler approximation of one of the simplest linear random ordinary differential equation, in which the noise is just a Wiener process, as the nonhomogeneous term. - -## The equation - -More precisely, we consider the RODE -```math - \begin{cases} - \displaystyle \frac{\mathrm{d}X_t}{\mathrm{d} t} = - X_t + W_t, \qquad 0 \leq t \leq T, \\ - \left. X_t \right|_{t = 0} = X_0. - \end{cases} -``` -This is one of the simplest examples of RODEs and has the explicit solution -```math -X_t = e^{-t}X_0 + \int_0^t e^{-(t - s)} W_s \;\mathrm{d}s. -``` - -## Computing the exact solution - -For estimating the order of convergence, we use the Monte Carlo method, computing a number of numerical approximations of pathwise solutions and taking the average of their absolute differences. - -The computed solution is calculated from realizations $W_{t_j}(\omega_k)$, for samples paths $\{W_t(\omega_k)\}_{t\geq 0}$, with $k = 1, \ldots, K,$ and on the mesh points $t_0, \ldots, t_n$. We cannot compute the integral $\int_0^{t_j} e^{-(t_j - s)}W_s\;\mathrm{d}s$ exactly just from the values on the mesh points, but we can compute its expectation. First we break down the sum into parts: -```math -\int_0^{t_j} e^{-(t_j - s)}W_s\;\mathrm{d}s = \sum_{i = 0}^{j-1} \int_{t_i}^{t_{i+1}} e^{-(t_j - s)}W_s\;\mathrm{d}s. -``` -On each mesh interval, we use that -```math -B_t = W_t - W_{t_i} - \frac{t - t_i}{t_{i+1}-t_i}(W_{t_{i+1}} - W_{t_i}) -``` -is a Brownian bridge on the interval $[t_i, t_{i+1}]$, independent of $\{W_t\}_{t\geq 0}$ (see [Almost Sure: Brownian Bridges](https://almostsuremath.com/2021/03/29/brownian-bridges/) for more on Brownian bridges). Notice that -```math -\mathrm{d}W_t = \mathrm{d}B_t + \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\;\mathrm{d}t. 
-``` - -Thus, -```math -\begin{align*} -\mathrm{d}(e^{-(t_j-t)}W_t) & = e^{-(t_j-t)}W_t\;\mathrm{d}t + e^{-(t_j-t)}\;\mathrm{d}W_t \\ -& = e^{-(t_j-t)}W_t\;\mathrm{d}t + e^{-(t_j-t)} \left(\mathrm{d}B_t + \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\;\mathrm{d}t\right), -\end{align*} -``` -so that -```math -\begin{align*} -\int_{t_i}^{t_{i+1}} e^{-(t_j - s)}W_s\;\mathrm{d}s & = e^{-(t_j-t_{i+1})}W_{t_{i+1}} - e^{-(t_j-t_i)}W_{t_i} - \int_{t_i}^{t_{i+1}} e^{-(t_j - s)}\;\mathrm{d}B_s \\ -& \qquad - \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\int_{t_i}^{t_{i+1}} e^{-(t_j - s)}\;\mathrm{d}s. -\end{align*} -``` - -Taking the expectation, using that the expectation of an Itô integral with respect to a Brownian bridge with zero endpoints is zero, and using that the values at the mesh points are given, we find that -```math -\mathbb{E}\left[ \int_{t_i}^{t_{i+1}} e^{-(t_j - s)}W_s\;\mathrm{d}s\right] = e^{-(t_j-t_{i+1})}W_{t_{i+1}} - e^{-(t_j-t_i)}W_{t_i} - \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\left( e^{-(t_j - t_{i+1})} - e^{-(t_j - t_i)}\right). -``` - -Hence, given the realizations of a Wiener noise on the mesh points, -```math -\begin{align*} -\mathbb{E}[X_{t_j}] & = e^{-t_j}X_0 + \sum_{i=0}^{j-1} \left(e^{-(t_j-t_{i+1})}W_{t_{i+1}} - e^{-(t_j-t_i)}W_{t_i}\right) \\ -& \qquad - \sum_{i=0}^{j-1} \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\left( e^{-(t_j - t_{i+1})} - e^{-(t_j - t_i)}\right). -\end{align*} -``` - -The first summation telescopes out and, since $W_0 = 0$, we are left with -```math -\mathbb{E}[X_{t_j}] = e^{-t_j}X_0 + W_{t_j} - e^{-t_j}\sum_{i=0}^{j-1} \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\left( e^{t_{i+1}} - e^{-t_i}\right). -``` - -Thus, we estimate the error by calculating the difference from the numerical approximation to the above expectation. Another way of seeing this is that the exact solution is -```math -X_{t_j} = e^{-t_j}X_0 + W_{t_j} - e^{-t_j}\sum_{i=0}^{j-1} \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\left( e^{t_{i+1}} - e^{-t_i}\right) + Z_j, -``` -where the random variables $Z_j$ average out to zero, so the strong error to $X_{t_j}$ is the same as that to $\mathbb{E}[X_{t_j}]$. - -## Numerical approximation - -First we load the necessary packages. We use [SciML/StochasticDiffEq.jl](https://github.com/SciML/StochasticDiffEq.jl) for solving the RODEs, [SciML/DiffEqDevTools.jl](https://github.com/SciML/DiffEqDevTools.jl) for facilitate the benchmark, [JuliaPlots/Plots.jl](https://github.com/JuliaPlots/Plots.jl) for plotting the results, and [Random](https://docs.julialang.org/en/v1/stdlib/Random/) for setting the seeds for reproducibility. - -```julia nonhomlinrode -using StochasticDiffEq, DiffEqDevTools, Plots, Random -``` - -### Setting up the problem - -Now we set up the RODE problem. In [SciML/DifferentialEquations](https://diffeq.sciml.ai/stable/), the [Rode Problems](https://diffeq.sciml.ai/stable/types/rode_types/) are given in the form -```math -\frac{\mathrm{d}u}{\mathrm{d}t} = f(u, t, p, W) -``` -In our example, the right hand side takes the form -```julia nonhomlinrode -f(u, p, t, W) = u + W -``` - -Next we define the function that yields the (expected) analytic solution for a given computed solution `sol`, that contains the noise `sol.W` and the info about the (still to be defined) RODE problem `prob`. - -In the numerical implementation of the expected exact solution, the summation in the expression can be computed recursively. 
Indeed, if -```math -I_j = \sum_{i=0}^{j-1} \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\left( e^{t_{i+1}} - e^{t_i}\right), -``` -then $I_0 = 0$ and, for $j = 1, \ldots, n$, -```math -\begin{aligned} -I_j & = \sum_{i=0}^{j-1} \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\left( e^{t_{i+1}} - e^{t_i}\right) \\ -& = \frac{W_{t_{j}}-W_{t_{j-1}}}{t_{j}-t_{j-1}}\left( e^{t_{j}} - e^{t_{j-1}}\right) + \sum_{i=0}^{j-2} \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\left( e^{t_{i+1}} - e^{t_i}\right), -\end{aligned} -``` -so that -```math -I_j = \frac{W_{t_{j}}-W_{t_{j-1}}}{t_{j}-t_{j-1}}\left( e^{t_{j}} - e^{t_{j-1}}\right) + I_{j-1}. -``` - -Then, we set $X_{t_j}$ to -```math -X_{t_j} = W_{t_j} + e^{-t_j}\left(X_0 + I_j\right). -``` - -This is implemented below. -```julia nonhomlinrode -function f_analytic!(sol) - empty!(sol.u_analytic) - - u0 = sol.prob.u0 - push!(sol.u_analytic, u0) - - ti1, Wi1 = sol.W.t[1], sol.W.W[1] - integral = 0.0 - for i in 2:length(sol) - ti, Wi = sol.W.t[i], sol.W.W[i] - integral += - (Wi - Wi1) / (ti - ti1) * (exp(ti) - exp(ti1)) - push!(sol.u_analytic, Wi + exp(-ti) * (u0 + integral)) - ti1, Wi1 = ti, Wi - end -end -``` - -With the right-hand-side and the analytic solutions defined, we construct the `RODEFunction` to be passed on to the RODE problem builder. - -```julia nonhomlinrode -ff = RODEFunction( - f, - analytic = f_analytic!, - analytic_full=true -) -``` - -Now we set up the RODE problem, with initial condition `X0 = 1.0`, and time span `tspan = (0.0, 1.0)`: - -```julia nonhomlinrode -X0 = 1.0 -tspan = (0.0, 1.0) - -prob = RODEProblem(ff, X0, tspan) -``` - -### An illustrative sample path - -Just for the sake of illustration, we solve for a solution path, using the Euler method, which in SciML is provided as `RandomEM()`, and with a fixed time step. We fix the `seed` just for the sake of reproducibility: -```julia nonhomlinrode -sol = solve(prob, RandomEM(), dt = 1/100, seed = 123) -``` - -and display the result -```julia nonhomlinrode -plot( - sol, - title = "Sample path of \$\\mathrm{d}X_t/\\mathrm{d}t = -X_t + W_t\$", - titlefont = 12, - xaxis = "\$t\$", - yaxis = "\$x\$", - label="\$X_t\$" -) -``` - -### An illustrative ensemble of solutions - -```julia nonhomlinrode -ensprob = EnsembleProblem(prob) -``` - -```julia nonhomlinrode -enssol = solve(ensprob, RandomEM(), dt = 1/100, trajectories=1000) -``` - -```julia nonhomlinrode -enssumm = EnsembleSummary(enssol; quantiles=[0.25,0.75]) -plot(enssumm, - title = "Ensemble of solution paths of \$\\mathrm{d}X_t/\\mathrm{d}t = -X_t + W_t\$\nwith 50% confidence interval", - titlefont = 12, - xaxis = "\$t\$", - yaxis = "\$x\$" -) -``` - -## Order of convergence - -Now, we use the development tools in [SciML/DiffEqDevTools](https://github.com/SciML/DiffEqDevTools.jl) to set up a set of ensemble solves and obtain the order of convergence from them. - -The `WorkPrecisionSet` with `error_estimate=:l∞`, that we use, actually compute the strong norm in the form -```math -\mathbb{E}[\max_{i = 0, \ldots, n} |X_i - X_i^N|]. -``` -instead of the form -```math -\max_{i=0, \ldots, n}\mathbb{E}[|X_i - X_i^N|]. -``` - -For that, we choose a sequence of time steps and relative and absolute tolerances to check how the error decays along the sequence. 
- -```julia nonhomlinrode -reltols = 1.0 ./ 10.0 .^ (1:5) -abstols = reltols -dts = 1.0./5.0.^((1:length(reltols)) .+ 1) -N = 1_000 -``` - -With that, we set up and solve the `WorkPrecisionSet`: -```julia nonhomlinrode -setups = [ - Dict(:alg=>RandomEM(), :dts => dts) -] - -wp = WorkPrecisionSet(prob,abstols,reltols,setups;numruns=N,maxiters=1e7,error_estimate=:l∞) -``` - -There is already a plot recipe for the result of a `WorkPrecisionSet` that displays the order of convergence: -```julia nonhomlinrode -plot(wp, view=:dt_convergence,title="Strong convergence with \$\\mathrm{d}X_t/\\mathrm{d}t = -X_t + W_t\$", titlefont=12, legend=:topleft) -``` - -## Benchmark - -We complement the above convergence order with a benchmark comparing the Euler method with the tamed Euler method and the Heun method. They all seem to achieve strong order 1, but with the Heun method being a bit more efficient. - -```julia nonhomlinrode -setups = [ - Dict(:alg=>RandomEM(), :dts => dts) - Dict(:alg=>RandomTamedEM(), :dts => dts) - Dict(:alg=>RandomHeun(), :dts => dts) -] -wp = WorkPrecisionSet(prob,abstols,reltols,setups;numruns=N,maxiters=1e7,error_estimate=:l∞) -plot(wp, title="Benchmark with \$\\mathrm{d}X_t/\\mathrm{d}t = -X_t + W_t\$", titlefont=12) -``` - -Built-in recipe for order of convergence. -```julia nonhomlinrode -plot(wp, view=:dt_convergence,title="Strong convergence with \$\\mathrm{d}X_t/\\mathrm{d}t = -X_t + W_t\$", titlefont=12, legend=:topleft) -``` - -## More - -```julia nonhomlinrode -setups = [ - Dict(:alg=>RandomEM(), :dts => dts) - Dict(:alg=>RandomTamedEM(), :dts => dts) - Dict(:alg=>RandomHeun(), :dts => dts) -] -wps = WorkPrecisionSet(EnsembleProblem(prob),abstols,reltols,setups;trajectories=20, numruns=10,maxiters=1e7,error_estimate=:l∞) -plot(wps, title="Benchmark with \$\\mathrm{d}X_t/\\mathrm{d}t = -X_t + W_t\$", titlefont=12) -``` - -Built-in recipe for order of convergence. -```julia nonhomlinrode -plot(wps, view=:dt_convergence,title="Strong convergence with \$\\mathrm{d}X_t/\\mathrm{d}t = -X_t + W_t\$", titlefont=12, legend=:topleft) -```
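
For reference, here is a minimal, self-contained sketch of the expected exact solution of $\mathrm{d}X_t/\mathrm{d}t = -X_t + W_t$ derived in the removed `wiener_nonhomogeneous.md`, written in plain Julia so it does not rely on the StochasticDiffEq, DiffEqNoiseProcess, and DiffEqDevTools dependencies dropped from `docs/Project.toml`. It implements the telescoped formula $X_{t_j} \approx e^{-t_j}X_0 + W_{t_j} - e^{-t_j}\sum_{i} \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}(e^{t_{i+1}} - e^{t_i})$ and compares it with a forward Euler pass on the same sampled Wiener path. The helper names (`expected_exact`, `euler`), the mesh size, and the seed are illustrative choices, not part of the original files.

```julia
using Random

# Expected exact solution at the mesh points, given the Wiener values W on the
# mesh tt, via the telescoped Brownian-bridge formula from the removed example.
function expected_exact(tt, W, X0)
    X = similar(W)
    X[1] = X0
    integral = 0.0   # I_j = sum_i (ΔW_i / Δt_i) * (e^{t_{i+1}} - e^{t_i})
    for j in 2:length(tt)
        integral += (W[j] - W[j-1]) / (tt[j] - tt[j-1]) * (exp(tt[j]) - exp(tt[j-1]))
        X[j] = W[j] + exp(-tt[j]) * (X0 - integral)
    end
    return X
end

# Forward Euler for dX/dt = -X + W on the same mesh, using the left-endpoint noise.
function euler(tt, W, X0)
    X = similar(W)
    X[1] = X0
    for j in 2:length(tt)
        dt = tt[j] - tt[j-1]
        X[j] = X[j-1] + dt * (-X[j-1] + W[j-1])
    end
    return X
end

rng = MersenneTwister(123)
nsteps = 2^10
tt = range(0.0, 1.0, length = nsteps + 1)
dt = step(tt)
W = [0.0; cumsum(sqrt(dt) .* randn(rng, nsteps))]   # sampled Wiener path on the mesh

X0 = 1.0
@show maximum(abs, expected_exact(tt, W, X0) .- euler(tt, W, X0))
```

Halving the time step (doubling `nsteps`) should shrink the reported maximum error roughly in proportion, in line with the strong convergence behaviour reported in the removed example.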