<h1 id="sciml-ecosystem-update-automated-model-discovery-with-datadrivendiffeqjl-and-reservoircomputingjl">SciML Ecosystem Update: Automated Model Discovery with DataDrivenDiffEq.jl and ReservoirComputing.jl</h1>
<p>You give us data and we give you back LaTeX for the differential equation system
that generated the data. That may sound like the future, but the future is here.
In this SciML ecosystem update I am pleased to announce that a lot of our
data-driven modeling components are finally released with full documentation.
Let’s dive right in!</p>
<h2 id="datadrivendiffeqjl-dynamic-mode-decomposition-and-sparse-identification-of-models">DataDrivenDiffEq.jl: Dynamic Mode Decomposition and Sparse Identification of Models</h2>
<p><a href="https://github.com/SciML/DataDrivenDiffEq.jl">DataDrivenDiffEq.jl</a> has arrived, complete with <a href="https://datadriven.sciml.ai/dev/">documentation</a>
and a <a href="https://github.com/SciML/DataDrivenDiffEq.jl/tree/master/examples">full set of examples</a>.
Thanks to Julius Martensen (@AlCap23) for really driving this effort.
You can use this library to identify the sparse functional form of a differential
equation via variants of the <a href="https://www.pnas.org/content/113/15/3932">SInDy method</a>
given data and discover large linear ODEs on a basis of chosen observables through
variants of <a href="https://en.wikipedia.org/wiki/Dynamic_mode_decomposition">dynamic mode decomposition</a>.
This library has many options for how the sparsification and optimization are
performed to ensure it’s robust, and integrates with
<a href="https://github.com/SciML/ModelingToolkit.jl">ModelingToolkit.jl</a> so that the
trained basis functions work with symbolic libraries and have automatic
LaTeXification via <a href="https://github.com/korsbo/Latexify.jl">Latexify.jl</a>. And,
as demonstrated in the <a href="https://arxiv.org/abs/2001.04385">universal differential equations paper</a>
and highlighted in <a href="https://www.youtube.com/watch?v=SEhMWkgcTOI">this presentation on generalized physics-informed learning</a>,
these techniques can also be mixed with DiffEqFlux.jl and neural networks to
allow for pre-specifying known physics and discovering parts of models in a
robust fashion.</p>
<p>As a demonstration, let’s generate some data from a pendulum:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">using</span> <span class="n">DataDrivenDiffEq</span>
<span class="k">using</span> <span class="n">ModelingToolkit</span>
<span class="k">using</span> <span class="n">OrdinaryDiffEq</span>
<span class="k">using</span> <span class="n">LinearAlgebra</span>
<span class="k">using</span> <span class="n">Plots</span>
<span class="n">gr</span><span class="x">()</span>
<span class="k">function</span><span class="nf"> pendulum</span><span class="x">(</span><span class="n">u</span><span class="x">,</span> <span class="n">p</span><span class="x">,</span> <span class="n">t</span><span class="x">)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span>
<span class="n">y</span> <span class="o">=</span> <span class="o">-</span><span class="mf">9.81</span><span class="n">sin</span><span class="x">(</span><span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">])</span> <span class="o">-</span> <span class="mf">0.1</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span>
<span class="k">return</span> <span class="x">[</span><span class="n">x</span><span class="x">;</span><span class="n">y</span><span class="x">]</span>
<span class="k">end</span>
<span class="n">u0</span> <span class="o">=</span> <span class="x">[</span><span class="mf">0.4</span><span class="nb">π</span><span class="x">;</span> <span class="mf">1.0</span><span class="x">]</span>
<span class="n">tspan</span> <span class="o">=</span> <span class="x">(</span><span class="mf">0.0</span><span class="x">,</span> <span class="mf">20.0</span><span class="x">)</span>
<span class="n">problem</span> <span class="o">=</span> <span class="n">ODEProblem</span><span class="x">(</span><span class="n">pendulum</span><span class="x">,</span> <span class="n">u0</span><span class="x">,</span> <span class="n">tspan</span><span class="x">)</span>
<span class="n">solution</span> <span class="o">=</span> <span class="n">solve</span><span class="x">(</span><span class="n">problem</span><span class="x">,</span> <span class="n">Tsit5</span><span class="x">(),</span> <span class="n">abstol</span> <span class="o">=</span> <span class="mf">1e-8</span><span class="x">,</span> <span class="n">reltol</span> <span class="o">=</span> <span class="mf">1e-8</span><span class="x">,</span> <span class="n">saveat</span> <span class="o">=</span> <span class="mf">0.001</span><span class="x">)</span>
<span class="n">X</span> <span class="o">=</span> <span class="kt">Array</span><span class="x">(</span><span class="n">solution</span><span class="x">)</span>
<span class="n">DX</span> <span class="o">=</span> <span class="n">solution</span><span class="x">(</span><span class="n">solution</span><span class="o">.</span><span class="n">t</span><span class="x">,</span> <span class="kt">Val</span><span class="x">{</span><span class="mi">1</span><span class="x">})</span>
</code></pre></div></div>
<p>Now let’s automatically discover that differential equation from its timeseries.
To perform SInDy, we define a set of candidate basis functions via ModelingToolkit.jl:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nd">@variables</span> <span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="o">:</span><span class="mi">2</span><span class="x">]</span>
<span class="n">h</span> <span class="o">=</span> <span class="n">Operation</span><span class="x">[</span><span class="n">u</span><span class="x">;</span> <span class="n">u</span><span class="o">.^</span><span class="mi">2</span><span class="x">;</span> <span class="n">u</span><span class="o">.^</span><span class="mi">3</span><span class="x">;</span> <span class="n">sin</span><span class="o">.</span><span class="x">(</span><span class="n">u</span><span class="x">);</span> <span class="n">cos</span><span class="o">.</span><span class="x">(</span><span class="n">u</span><span class="x">);</span> <span class="mi">1</span><span class="x">]</span>
<span class="n">basis</span> <span class="o">=</span> <span class="n">Basis</span><span class="x">(</span><span class="n">h</span><span class="x">,</span> <span class="n">u</span><span class="x">)</span>
</code></pre></div></div>
<p>Here we included a bunch of polynomials up to third order and some trigonometric
functions. Now we tell SInDy what the timeseries data is and what the basis is
and it’ll spit out the differential equation system:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">opt</span> <span class="o">=</span> <span class="n">SR3</span><span class="x">(</span><span class="mf">3e-1</span><span class="x">,</span> <span class="mf">1.0</span><span class="x">)</span>
<span class="n">Ψ</span> <span class="o">=</span> <span class="n">SInDy</span><span class="x">(</span><span class="n">X</span><span class="x">[</span><span class="o">:</span><span class="x">,</span> <span class="mi">1</span><span class="o">:</span><span class="mi">1000</span><span class="x">],</span> <span class="n">DX</span><span class="x">[</span><span class="o">:</span><span class="x">,</span> <span class="mi">1</span><span class="o">:</span><span class="mi">1000</span><span class="x">],</span> <span class="n">basis</span><span class="x">,</span> <span class="n">maxiter</span> <span class="o">=</span> <span class="mi">10000</span><span class="x">,</span> <span class="n">opt</span> <span class="o">=</span> <span class="n">opt</span><span class="x">,</span> <span class="n">normalize</span> <span class="o">=</span> <span class="nb">true</span><span class="x">)</span>
</code></pre></div></div>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="mi">2</span> <span class="n">dimensional</span> <span class="n">basis</span> <span class="k">in</span> <span class="x">[</span><span class="s">"u₁"</span><span class="x">,</span> <span class="s">"u₂"</span><span class="x">]</span>
<span class="n">du₁</span> <span class="o">=</span> <span class="n">p₁</span> <span class="o">*</span> <span class="n">u₂</span>
<span class="n">du₂</span> <span class="o">=</span> <span class="n">sin</span><span class="x">(</span><span class="n">u₁</span><span class="x">)</span> <span class="o">*</span> <span class="n">p₃</span> <span class="o">+</span> <span class="n">p₂</span> <span class="o">*</span> <span class="n">u₂</span>
</code></pre></div></div>
<p>And there you go: notice that it was able to find the right structural equations!
<code class="language-plaintext highlighter-rouge">Ψ</code> is now of the form of the right differential equation, just from the data.
We can then transform this back into DifferentialEquations.jl code to see how
well we’ve identified the system and its coefficients:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">sys</span> <span class="o">=</span> <span class="n">ODESystem</span><span class="x">(</span><span class="n">Ψ</span><span class="x">)</span>
<span class="n">p</span> <span class="o">=</span> <span class="n">parameters</span><span class="x">(</span><span class="n">Ψ</span><span class="x">)</span>
<span class="n">dudt</span> <span class="o">=</span> <span class="n">ODEFunction</span><span class="x">(</span><span class="n">sys</span><span class="x">)</span>
<span class="n">estimator</span> <span class="o">=</span> <span class="n">ODEProblem</span><span class="x">(</span><span class="n">dudt</span><span class="x">,</span> <span class="n">u0</span><span class="x">,</span> <span class="n">tspan</span><span class="x">,</span> <span class="n">p</span><span class="x">)</span>
<span class="n">estimation</span> <span class="o">=</span> <span class="n">solve</span><span class="x">(</span><span class="n">estimator</span><span class="x">,</span> <span class="n">Tsit5</span><span class="x">(),</span> <span class="n">saveat</span> <span class="o">=</span> <span class="n">solution</span><span class="o">.</span><span class="n">t</span><span class="x">)</span>
</code></pre></div></div>
<p><img src="https://user-images.githubusercontent.com/1814174/81472998-c9e67880-91c9-11ea-919b-b712f17abc80.png" alt="" /></p>
<p>We can now do things like reoptimize the parameters with DiffEqParamEstim.jl
or DiffEqFlux.jl, examine the AIC/BIC of the fit, and more. See the
<a href="https://datadriven.sciml.ai/dev/">DataDrivenDiffEq.jl documentation</a> for
more details on all that you can do. We hope that by directly incorporating this
into the SciML ecosystem that it will become a standard part of the scientific
modeling workflow and will continue to improve its methods.</p>
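<p>For instance, a parameter refinement step with DiffEqParamEstim.jl could look like the following sketch. It reuses the <code class="language-plaintext highlighter-rouge">estimator</code> problem and <code class="language-plaintext highlighter-rouge">solution</code> data from the snippets above, and the starting vector <code class="language-plaintext highlighter-rouge">p₀</code> is a hypothetical stand-in for the coefficients SInDy identified:</p>

```julia
using DiffEqParamEstim, Optim, OrdinaryDiffEq

# Hypothetical starting values near the identified coefficients:
# du₁ = p₁*u₂, du₂ = p₂*u₂ + p₃*sin(u₁)
p₀ = [1.0, -0.1, -9.81]

# Build an L2 loss between the estimator problem's solution and the data,
# then refine the coefficients with BFGS.
cost = build_loss_objective(estimator, Tsit5(),
                            L2Loss(solution.t, Array(solution));
                            saveat = solution.t, maxiters = 10000)
result = Optim.optimize(cost, p₀, BFGS())
result.minimizer  # refined parameter estimates
```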
<h2 id="automatic-discovery-of-chaotic-systems-via-reservoircomputingjl">Automatic Discovery of Chaotic Systems via ReservoirComputing.jl</h2>
<p>Traditional methods of neural differential equations do not do so well on chaotic
systems, but the Echo State Network techniques in
<a href="https://github.com/SciML/ReservoirComputing.jl">ReservoirComputing.jl</a> do!
Big thanks to Francesco Martinuzzi (@MartinuzziFrancesco), who has been driving this effort.
This library is able to train neural networks that learn attractor behavior and
then predict the evolution of chaotic systems. More development will soon follow
on this library as it was
<a href="https://summerofcode.withgoogle.com/organizations/6363760870031360/?sp-page=2#5374375945043968">chosen to be one of the JuliaLang Google Summer of Code projects</a>.</p>
<p><img src="https://user-images.githubusercontent.com/10376688/72997095-1913c380-3dfc-11ea-9702-a9734a375b96.png" alt="" /></p>
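<p>To give a flavor of what is happening under the hood, here is a minimal echo state network sketch in plain Julia. This is the underlying math (reservoir update <code class="language-plaintext highlighter-rouge">r(t+1) = tanh(W*r(t) + Win*u(t))</code> with a ridge-regression readout), not ReservoirComputing.jl's API; all sizes and the toy signal are made up for illustration:</p>

```julia
using LinearAlgebra, Random

# Minimal echo state network (ESN): a fixed random reservoir driven by the
# input, with only the linear readout Wout trained (via ridge regression).
function esn_train(U; n = 300, ρ = 1.2, β = 1e-6)
    m, T = size(U)
    W = randn(n, n)
    W .*= ρ / maximum(abs, eigvals(W))   # rescale to spectral radius ρ
    Win = 0.1randn(n, m)
    R = zeros(n, T)
    r = zeros(n)
    for t in 1:T
        r = tanh.(W*r .+ Win*U[:, t])    # reservoir state update
        R[:, t] = r
    end
    # Fit Wout so that Wout*r(t) ≈ u(t+1)
    X, Y = R[:, 1:end-1], U[:, 2:end]
    Wout = (Y*X') / (X*X' + β*I)
    return Wout, R
end

Random.seed!(1)
U = permutedims(hcat(sin.(0.05 .* (1:1000)), cos.(0.05 .* (1:1000))))  # toy 2-d signal
Wout, R = esn_train(U)
û = Wout * R[:, end]   # one-step-ahead prediction from the last reservoir state
```

Iterating this prediction (feeding <code class="language-plaintext highlighter-rouge">û</code> back in as the next input) is what lets a trained reservoir autonomously continue an attractor.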
<h2 id="high-weak-order-sde-integrators">High Weak Order SDE Integrators</h2>
<p>As part of our continued work on <a href="https://docs.sciml.ai/latest/">DifferentialEquations.jl</a>
we have added new stochastic differential equation integrators, <code class="language-plaintext highlighter-rouge">DRI1</code> and <code class="language-plaintext highlighter-rouge">RI1</code>,
which are able to better estimate the expected value of the solution without
requiring the computational overhead of getting high order strong convergence.
This is only the start of a much larger project that we have accepted for
<a href="https://summerofcode.withgoogle.com/organizations/6363760870031360/#5505348691034112">JuliaLang’s Google Summer of Code</a>.
Thanks to Frank Schäfer (@frankschae) for driving this effort. He will be continuing
to add methods for high weak convergence and fast methods for SDE adjoints to
further improve DiffEqFlux.jl’s neural stochastic differential equation support.</p>
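<p>As a hedged sketch of what this enables, one can estimate an expected value <code class="language-plaintext highlighter-rouge">E[u(T)]</code> by averaging an ensemble of trajectories solved with <code class="language-plaintext highlighter-rouge">DRI1</code>. The geometric Brownian motion model, step size, and trajectory count below are made up for illustration:</p>

```julia
using StochasticDiffEq, Statistics

# Scalar geometric Brownian motion: du = μ*u*dt + σ*u*dW,
# whose exact mean is E[u(T)] = u0*exp(μ*T).
μ, σ, u0, T = 1.5, 0.1, 0.5, 1.0
f(u, p, t) = μ*u
g(u, p, t) = σ*u
prob = SDEProblem(f, g, u0, (0.0, T))

# Weak-order methods target statistics of the solution, so we average over
# many trajectories rather than refining any single sample path.
sim = solve(EnsembleProblem(prob), DRI1(), EnsembleThreads();
            dt = 1/16, adaptive = false, trajectories = 10_000)
mean_uT = mean(sol.u[end] for sol in sim)  # compare against u0*exp(μ*T)
```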
<h2 id="sundials-5-and-lapack-integration">Sundials 5 and LAPACK Integration</h2>
<p>Sundials.jl now utilizes the latest version of Sundials, Sundials 5, for its
calculations. Thanks to Jose Daniel Lara (@jd-lara) for driving this effort.
Included with this update is the ability to use LAPACK/BLAS. This is not enabled
by default because it’s slower on small matrices, but if you’re handling a large
problem with Sundials, you can now do <code class="language-plaintext highlighter-rouge">CVODE_BDF(linear_solver=:LapackDense)</code>
and boom, now all of the linear algebra is multithreaded BLASy goodness.</p>
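<p>As a sketch, the switch is a single keyword argument (the large stiff linear system here is made up for illustration):</p>

```julia
using Sundials, LinearAlgebra

# A made-up large stiff linear system: du/dt = A*u with strongly negative
# eigenvalues, just to give the dense linear solver something to chew on.
A = randn(200, 200) - 200I
f(du, u, p, t) = mul!(du, A, u)
prob = ODEProblem(f, ones(200), (0.0, 1.0))

sol_default = solve(prob, CVODE_BDF())                              # Sundials' own dense solver
sol_lapack  = solve(prob, CVODE_BDF(linear_solver = :LapackDense))  # LAPACK/BLAS-backed
```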
<h2 id="diffeqbayes-updates">DiffEqBayes Updates</h2>
<p>Thanks to extensive maintenance efforts by Vaibhav Dixit (@Vaibhavdixit02),
David Widmann (@devmotion), Kai Xu (@xukai92), Mohamed Tarek (@mohamed82008),
and Rob Goedman (@goedman), the DiffEqBayes.jl library has received plenty of
updates to utilize the most up-to-date versions of the <a href="https://github.com/TuringLang/Turing.jl">Turing.jl</a>,
<a href="https://github.com/tpapp/DynamicHMC.jl">DynamicHMC.jl</a>, and <a href="https://mc-stan.org/users/interfaces/julia-stan">Stan</a>
probabilistic programming libraries (<a href="https://github.com/SciML/ModelingToolkit.jl">ModelingToolkit.jl</a>
automatically transforms Julia differential equation code to Stan code). Together,
these updates make it easy for users not versed in Bayesian methods to perform
Bayesian parameter estimation with just one function call.
<a href="https://docs.sciml.ai/latest/analysis/parameter_estimation/">See the parameter estimation documentation for more details</a>.</p>
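<p>As a sketch of that one-function workflow with the Turing.jl backend via <code class="language-plaintext highlighter-rouge">turing_inference</code> (the toy decay model, synthetic data, and prior below are all made up for illustration):</p>

```julia
using DiffEqBayes, OrdinaryDiffEq, Distributions

# Toy model: exponential decay u' = -p*u with true rate 1.3.
f(u, p, t) = -p[1] .* u
prob = ODEProblem(f, [10.0], (0.0, 4.0), [1.3])

# Synthetic noisy observations, shaped (states × times).
t = collect(0.5:0.5:4.0)
data = reshape([10.0*exp(-1.3*ti) + 0.1randn() for ti in t], 1, :)

# One function call: prior on the rate, MCMC sampling under the hood.
priors = [Normal(1.0, 0.5)]
result = turing_inference(prob, Tsit5(), t, data, priors; num_samples = 1000)
```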
<p>As a quick update to the probabilistic programming space, we would like to note
that the Turing.jl library performs exceptionally well in comparison to the
other libraries. A lot of work had to be done to
<a href="https://github.com/SciML/DiffEqBayes.jl/pull/154">track down robustness issues specific to Stan</a>
and <a href="https://github.com/SciML/DiffEqBayes.jl/pull/155">make the priors more constrained</a>,
while Turing.jl has had no issues. This has shown up in other places as well,
where <a href="https://github.com/SciML/DiffEqBenchmarks.jl/blob/510c3683aa00ffa8e96e5c25bb07ef9301a06251/pdf/ParameterEstimation/DiffEqBayesLorenz.pdf">we have not been able to update our Bayesian Lorenz parameter estimation benchmarks due to robustness issues with Stan diverging</a>.
Additionally, <a href="https://benchmarks.sciml.ai/html/ParameterEstimation/DiffEqBayesLotkaVolterra.html">benchmarks on</a>
<a href="https://benchmarks.sciml.ai/html/ParameterEstimation/DiffEqBayesFitzHughNagumo.html">other ODE systems</a>
demonstrate a 5x and 3x performance advantage for Turing over Stan. Thus our
examples showcase Turing.jl as being unequivocally more robust for Bayesian
parameter estimation of differential equation systems. We hope that, with the
automatic differential equation conversion making testing between all of these
libraries easy, we can easily track performance and robustness improvements to
these probabilistic programming backends over time and ensure that users can
continue to know and use the best tools for the job.</p>
<h1 id="next-directions">Next Directions</h1>
<p>Here’s some things to look forward to:</p>
<ul>
<li>SuperLU_MT support in Sundials.jl</li>
<li>The full release of ModelingToolkit.jl</li>
<li>Automated matrix-free finite difference PDE operators</li>
<li>High Strong Order Methods for Non-Commutative Noise SDEs</li>
<li>Stochastic delay differential equations</li>
</ul>
<p><em>Originally published Sat, 09 May 2020, at https://sciml.ai/2020/05/09/ModelDiscovery.html</em></p>
<h1 id="sciml-an-open-source-software-organization-for-scientific-machine-learning">SciML: An Open Source Software Organization for Scientific Machine Learning</h1>
<p>Computational scientific discovery is at an interesting juncture. While we have
mechanistic models of lots of different scientific phenomena, and reams of data
being generated from experiments, our computational capabilities are unable to
keep up. Our problems are too large for realistic simulation. Our problems
are multiscale and too stiff. Our problems require tedious work like
calculating gradients and getting code to run on GPUs and supercomputers.
Our next step forward is a combination of science and machine learning, which
combines mechanistic models with data based reasoning, presented as a unified
set of abstractions and a high performance implementation. We refer to this as
scientific machine learning.</p>
<p><a href="https://www.osti.gov/servlets/purl/1478744">Scientific Machine Learning, abbreviated SciML, has been taking the academic
world by storm as an interesting blend of traditional scientific mechanistic
modeling (differential equations) with machine learning methodologies like
deep learning.</a> While traditional
deep learning methodologies have had difficulties with scientific issues like
stiffness, interpretability, and enforcing physical constraints, this blend
with numerical analysis and differential equations has evolved into a field of
research with new methods, architectures, and algorithms which overcome these
problems while adding the data-driven automatic learning features of modern
deep learning. Many successes have already been found, with tools like
<a href="https://www.sciencedirect.com/science/article/pii/S0021999118307125">physics-informed neural networks</a>,
<a href="https://link.springer.com/article/10.1007%2Fs40304-017-0117-6">deep BSDE solvers for high dimensional PDEs</a>,
and <a href="https://arxiv.org/pdf/2001.08055.pdf">neural surrogates</a> showcasing how
deep learning can greatly improve scientific modeling practice. At the same time,
researchers are quickly finding that our training techniques will need to be
modified in order to work on difficult scientific models. For example, the original method of
<a href="https://arxiv.org/abs/1902.10298">reversing an ODE for an adjoint or relying on backpropagation through the solver
is not numerically stable for neural ODEs</a>,
and <a href="https://arxiv.org/abs/2001.04536">traditional optimizers made for machine learning, like Stochastic
Gradient Descent and ADAM have difficulties handling the ill-conditioned Hessians
of physics-informed neural networks</a>.
New software will be required in order to accommodate the unique numerical
difficulties that occur in this field, and facilitate the connection between
scientific simulators and scientific machine learning training loops.</p>
<p>SciML is an open source software organization for the development
and maintenance of a feature-filled and high performance set of tooling for
scientific machine learning. This includes the full gamut of tools from
differential equation solvers to scientific simulators and tools for automatically
discovering scientific models. What I want to do with this post is introduce
the organization by explaining a few things:</p>
<ul>
<li>What SciML provides</li>
<li>What our goals are</li>
<li>Our next steps</li>
<li>How you can join in the process</li>
</ul>
<h1 id="the-software-that-sciml-provides">The Software that SciML Provides</h1>
<h2 id="we-provide-best-in-class-tooling-for-solving-differential-equations">We provide best-in-class tooling for solving differential equations</h2>
<p>We will continue to have <a href="https://docs.sciml.ai/dev/">DifferentialEquations.jl</a> at
the core of the organization to support high performance solving of the differential
equations that show up in scientific models. This means we plan to continue the
research and development in:</p>
<ul>
<li>Discrete equations (function maps, discrete stochastic (Gillespie/Markov) simulations)</li>
<li>Ordinary differential equations (ODEs)</li>
<li>Split and partitioned ODEs (Symplectic integrators, IMEX Methods)</li>
<li>Stochastic ordinary differential equations (SODEs or SDEs)</li>
<li>Random differential equations (RODEs or RDEs)</li>
<li>Differential algebraic equations (DAEs)</li>
<li>Delay differential equations (DDEs)</li>
<li>Mixed discrete and continuous equations (Hybrid Equations, Jump Diffusions)</li>
<li>(Stochastic) partial differential equations ((S)PDEs) (with both finite difference and finite element methods)</li>
</ul>
<p>along with continuing to push towards new domains, like stochastic delay differential
equations, fractional differential equations, and beyond. However, optimal control,
(Bayesian) parameter estimation, and automated model discovery all require every
possible bit of performance, and thus we will continue to add functionality that improves
the performance for solving both large and small differential equation models.
This includes features like:</p>
<ul>
<li>GPU acceleration through CUDAnative.jl and CuArrays.jl</li>
<li>Automated sparsity detection with <a href="https://github.com/SciML/SparsityDetection.jl">SparsityDetection.jl</a></li>
<li>Automatic Jacobian coloring with <a href="https://github.com/SciML/SparseDiffTools.jl">SparseDiffTools.jl</a>, allowing for fast solutions
to problems with sparse or structured (Tridiagonal, Banded, BlockBanded, etc.) Jacobians</li>
<li>Progress meter integration with the Juno IDE for estimated time to solution</li>
<li><a href="https://docs.SciML.ai/dev/features/ensemble/">Automatic distributed, multithreaded, and GPU parallelism of ensemble trajectories</a></li>
<li><a href="https://docs.SciML.ai/dev/analysis/sensitivity/">Forward and adjoint local sensitivity analysis</a> for fast gradient computations</li>
<li>Built-in interpolations for differential equation solutions</li>
<li>Wrappers for common C/Fortran methods like Sundials and Hairer’s radau</li>
<li>Arbitrary precision with BigFloats and Arbfloats</li>
<li>Arbitrary array types, allowing the solution of differential equations on matrices and distributed arrays</li>
</ul>
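<p>As one concrete sketch of the ensemble parallelism listed above (the logistic model, randomized initial conditions, and trajectory count are illustrative), distributing 1000 trajectories across threads is a one-argument change:</p>

```julia
using OrdinaryDiffEq

# Logistic growth with a randomized initial condition per trajectory.
f(u, p, t) = u .* (1 .- u)
prob = ODEProblem(f, [0.1], (0.0, 10.0))
prob_func(prob, i, repeat) = remake(prob, u0 = rand(1))

ens = EnsembleProblem(prob, prob_func = prob_func)
# Swap EnsembleThreads() for EnsembleDistributed() (or a GPU ensemble
# backend) to change the parallelism mode without touching the model.
sim = solve(ens, Tsit5(), EnsembleThreads(), trajectories = 1000)
```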
<p>We plan to continue our research into these topics and make sure our software is
best in class. We plan to keep improving the performance of
DifferentialEquations.jl until it is best-in-class in every benchmark we have,
and then we plan to add more benchmarks to find more behaviors and handle those
as well. Here is a current benchmark showcasing native DifferentialEquations.jl
methods outperforming classical Fortran methods like LSODA by 5x on a 20
equation stiff ODE benchmark:</p>
<p><img src="https://user-images.githubusercontent.com/1814174/77687352-a0082800-6f74-11ea-924d-442a0836be6d.PNG" alt="" />
<a href="https://benchmarks.sciml.ai/html/StiffODE/Pollution.html">Reference: Pollution Model Benchmarks</a></p>
<h2 id="we-provide-tools-for-deriving-and-fitting-scientific-models">We provide tools for deriving and fitting scientific models</h2>
<p>It is very rare that someone thinks their model is perfect. Thus a large portion
of the focus of our organization is to help scientific modelers derive equations
and fit models. This includes tools for:</p>
<ul>
<li><a href="https://docs.sciml.ai/dev/analysis/parameter_estimation/">Maximum likelihood and Bayesian parameter estimation</a></li>
<li><a href="https://docs.sciml.ai/dev/analysis/sensitivity/">Forward and adjoint local sensitivity analysis</a> for fast gradients</li>
<li><a href="https://docs.sciml.ai/dev/analysis/global_sensitivity/">Global sensitivity analysis</a></li>
<li><a href="https://surrogates.sciml.ai/latest/">Building surrogates of models</a></li>
<li><a href="https://docs.sciml.ai/dev/analysis/uncertainty_quantification/">Uncertainty quantification</a></li>
</ul>
<p>Some of our newer tooling like <a href="https://github.com/SciML/DataDrivenDiffEq.jl">DataDrivenDiffEq.jl</a>
can even take in timeseries data and generate LaTeX code for the best fitting model
(for a recent demonstration, see <a href="https://drive.google.com/file/d/1NxFOtpNHl7oXpdSLM06TEN_oO8QylPYx/view">this fitting of a COVID-19 epidemic model</a>).</p>
<p>We note that while these tools will continue to be tested with differential
equation models, many of these tools apply to scientific models in
general. For example, while our global sensitivity analysis tools have been
documented in the differential equation solver, these methods actually work on
any function <code class="language-plaintext highlighter-rouge">f(p)</code>:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">using</span> <span class="n">QuasiMonteCarlo</span><span class="x">,</span> <span class="n">DiffEqSensitivity</span>
<span class="k">function</span><span class="nf"> ishi</span><span class="x">(</span><span class="n">X</span><span class="x">)</span>
<span class="n">A</span> <span class="o">=</span> <span class="mi">7</span>
<span class="n">B</span> <span class="o">=</span> <span class="mf">0.1</span>
<span class="n">sin</span><span class="x">(</span><span class="n">X</span><span class="x">[</span><span class="mi">1</span><span class="x">])</span> <span class="o">+</span> <span class="n">A</span><span class="o">*</span><span class="n">sin</span><span class="x">(</span><span class="n">X</span><span class="x">[</span><span class="mi">2</span><span class="x">])</span><span class="o">^</span><span class="mi">2</span> <span class="o">+</span> <span class="n">B</span><span class="o">*</span><span class="n">X</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span><span class="o">^</span><span class="mi">4</span><span class="o">*</span><span class="n">sin</span><span class="x">(</span><span class="n">X</span><span class="x">[</span><span class="mi">1</span><span class="x">])</span>
<span class="k">end</span>
<span class="n">n</span> <span class="o">=</span> <span class="mi">600000</span>
<span class="n">lb</span> <span class="o">=</span> <span class="o">-</span><span class="n">ones</span><span class="x">(</span><span class="mi">4</span><span class="x">)</span><span class="o">*</span><span class="nb">π</span>
<span class="n">ub</span> <span class="o">=</span> <span class="n">ones</span><span class="x">(</span><span class="mi">4</span><span class="x">)</span><span class="o">*</span><span class="nb">π</span>
<span class="n">sampler</span> <span class="o">=</span> <span class="n">SobolSample</span><span class="x">()</span>
<span class="n">A</span><span class="x">,</span><span class="n">B</span> <span class="o">=</span> <span class="n">QuasiMonteCarlo</span><span class="o">.</span><span class="n">generate_design_matrices</span><span class="x">(</span><span class="n">n</span><span class="x">,</span><span class="n">lb</span><span class="x">,</span><span class="n">ub</span><span class="x">,</span><span class="n">sampler</span><span class="x">)</span>
<span class="n">res1</span> <span class="o">=</span> <span class="n">gsa</span><span class="x">(</span><span class="n">ishi</span><span class="x">,</span><span class="n">Sobol</span><span class="x">(),</span><span class="n">A</span><span class="x">,</span><span class="n">B</span><span class="x">)</span>
</code></pre></div></div>
<p>Reorganizing under the SciML umbrella will make it easier for users to discover
and apply our global sensitivity analysis methods outside of differential equation
contexts, such as with neural networks.</p>
<h2 id="we-provide-high-level-domain-specific-modeling-tools-to-make-scientific-modeling-more-accessible">We provide high-level domain-specific modeling tools to make scientific modeling more accessible</h2>
<p>Differential equations appear in nearly every scientific domain, but most
scientific domains have their own specialized idioms and terminology.
A physicist, biologist, or chemist should be able to pick up our tools and make
use of high performance scientific machine learning methods without needing to
understand every internal component, using abstractions that make sense to
their field. To make this a reality, we provide high-level
domain-specific modeling tools as frontends for building and generating models.</p>
<p><a href="https://github.com/SciML/DiffEqBiological.jl">DiffEqBiological.jl</a> is a prime
example which generates high performance simulations from a description of the
chemical reactions. For example, the following solves the Michaelis-Menten model
using an ODE and then a Gillespie model:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">rs</span> <span class="o">=</span> <span class="nd">@reaction_network</span> <span class="k">begin</span>
<span class="n">c1</span><span class="x">,</span> <span class="n">S</span> <span class="o">+</span> <span class="n">E</span> <span class="o">--></span> <span class="n">SE</span>
<span class="n">c2</span><span class="x">,</span> <span class="n">SE</span> <span class="o">--></span> <span class="n">S</span> <span class="o">+</span> <span class="n">E</span>
<span class="n">c3</span><span class="x">,</span> <span class="n">SE</span> <span class="o">--></span> <span class="n">P</span> <span class="o">+</span> <span class="n">E</span>
<span class="k">end</span> <span class="n">c1</span> <span class="n">c2</span> <span class="n">c3</span>
<span class="n">p</span> <span class="o">=</span> <span class="x">(</span><span class="mf">0.00166</span><span class="x">,</span><span class="mf">0.0001</span><span class="x">,</span><span class="mf">0.1</span><span class="x">)</span>
<span class="n">tspan</span> <span class="o">=</span> <span class="x">(</span><span class="mf">0.</span><span class="x">,</span> <span class="mf">100.</span><span class="x">)</span>
<span class="n">u0</span> <span class="o">=</span> <span class="x">[</span><span class="mf">301.</span><span class="x">,</span> <span class="mf">100.</span><span class="x">,</span> <span class="mf">0.</span><span class="x">,</span> <span class="mf">0.</span><span class="x">]</span> <span class="c"># S = 301, E = 100, SE = 0, P = 0</span>
<span class="c"># solve ODEs</span>
<span class="n">oprob</span> <span class="o">=</span> <span class="n">ODEProblem</span><span class="x">(</span><span class="n">rs</span><span class="x">,</span> <span class="n">u0</span><span class="x">,</span> <span class="n">tspan</span><span class="x">,</span> <span class="n">p</span><span class="x">)</span>
<span class="n">osol</span> <span class="o">=</span> <span class="n">solve</span><span class="x">(</span><span class="n">oprob</span><span class="x">,</span> <span class="n">Tsit5</span><span class="x">())</span>
<span class="c"># solve JumpProblem</span>
<span class="n">u0</span> <span class="o">=</span> <span class="x">[</span><span class="mi">301</span><span class="x">,</span> <span class="mi">100</span><span class="x">,</span> <span class="mi">0</span><span class="x">,</span> <span class="mi">0</span><span class="x">]</span>
<span class="n">dprob</span> <span class="o">=</span> <span class="n">DiscreteProblem</span><span class="x">(</span><span class="n">rs</span><span class="x">,</span> <span class="n">u0</span><span class="x">,</span> <span class="n">tspan</span><span class="x">,</span> <span class="n">p</span><span class="x">)</span>
<span class="n">jprob</span> <span class="o">=</span> <span class="n">JumpProblem</span><span class="x">(</span><span class="n">dprob</span><span class="x">,</span> <span class="n">Direct</span><span class="x">(),</span> <span class="n">rs</span><span class="x">)</span>
<span class="n">jsol</span> <span class="o">=</span> <span class="n">solve</span><span class="x">(</span><span class="n">jprob</span><span class="x">,</span> <span class="n">SSAStepper</span><span class="x">())</span>
</code></pre></div></div>
<p>This builds a specific form that can then use optimized methods like <code class="language-plaintext highlighter-rouge">DirectCR</code>
and achieve an order of magnitude better performance than the classic Gillespie
SSA methods:</p>
<p><img src="https://user-images.githubusercontent.com/1814174/77689050-6d136380-6f77-11ea-9248-175de8c1c8e6.PNG" alt="" />
<a href="https://benchmarks.sciml.ai/html/Jumps/Diffusion_CTRW.html">Reference: Diffusion Model Benchmarks</a></p>
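<p>For instance, continuing from the snippet above, swapping in <code class="language-plaintext highlighter-rouge">DirectCR</code> is a one-line change. A minimal sketch (it assumes the <code class="language-plaintext highlighter-rouge">dprob</code> and <code class="language-plaintext highlighter-rouge">rs</code> objects defined earlier are in scope and that the jump aggregators are loaded):</p>

```julia
# Sketch: use the DirectCR aggregator in place of the classic Direct SSA.
# Assumes `dprob` (the DiscreteProblem) and `rs` (the reaction network)
# from the snippet above.
jprob_cr = JumpProblem(dprob, DirectCR(), rs)
jsol_cr  = solve(jprob_cr, SSAStepper())
```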
<p>Additionally, we have physics-based tooling and support external libraries like:</p>
<ul>
<li><a href="https://github.com/SciML/NBodySimulator.jl">NBodySimulator.jl</a> for N-body systems (molecular dynamics, astrophysics)</li>
<li><a href="https://github.com/JuliaRobotics/RigidBodySim.jl">RigidBodySim.jl</a> for robotics</li>
<li><a href="https://qojulia.org/">QuantumOptics.jl</a> for quantum phenomena</li>
<li><a href="https://juliadynamics.github.io/DynamicalSystems.jl/latest/">DynamicalSystems.jl</a> for chaotic dynamics</li>
</ul>
<p>We support commercial tooling built on our software, like the <a href="https://pumas.ai/">Pumas</a>
software for pharmaceutical modeling and simulation, which is being adopted
throughout the industry. We make it easy to generate models of multi-scale
systems using tools like <a href="https://github.com/SciML/MultiScaleArrays.jl">MultiScaleArrays.jl</a>:</p>
<p><img src="https://user-images.githubusercontent.com/1814174/27211626-79fe1b9a-520f-11e7-87f1-1cb33da91609.PNG" alt="" /></p>
<p>and build compilers like <a href="https://github.com/SciML/ModelingToolkit.jl">ModelingToolkit.jl</a>
that provide automatic analysis and optimization of model code. By adding automated
code parallelization and BLT transforms to ModelingToolkit, users of DiffEqBiological,
<a href="https://pumas.ai/">Pumas</a>, <a href="https://github.com/SciML/ParameterizedFunctions.jl">ParameterizedFunctions.jl</a>,
etc. will all see their code automatically become more efficient.</p>
<h2 id="we-provide-high-level-implementations-of-the-latest-algorithms-in-scientific-machine-learning">We provide high-level implementations of the latest algorithms in scientific machine learning</h2>
<p>The translational step of bringing new methods of computational science to
scientists in application areas is what will allow next-generation exploration to occur. We
provide libraries like:</p>
<ul>
<li><a href="https://github.com/SciML/DiffEqFlux.jl">DiffEqFlux.jl</a> for neural and universal differential equations</li>
<li><a href="https://github.com/SciML/DataDrivenDiffEq.jl">DataDrivenDiffEq.jl</a> for automated equation generation with Dynamic Mode Decomposition (DMD) and SInDy type methods</li>
<li><a href="https://github.com/SciML/ReservoirComputing.jl">ReservoirComputing.jl</a> for echo state networks and prediction of chaotic systems</li>
<li><a href="https://github.com/SciML/NeuralNetDiffEq.jl">NeuralNetDiffEq.jl</a> for Physics-Informed Neural Networks (PINNs) and Deep BSDE solvers of 100 dimensional PDEs</li>
</ul>
<p>We will continue to expand this portion of our offering, building tools that
automatically solve PDEs from a symbolic description using neural networks,
and generate mesh-free discretizers.</p>
<h2 id="we-provide-users-of-all-common-scientific-programming-languages-the-ability-to-use-our-tooling">We provide users of all common scientific programming languages the ability to use our tooling</h2>
<p>While the main source of our tooling is centralized in the <a href="https://sciml.ai/">Julia programming language</a>,
we see Julia as a “language of libraries”, like C++ or Fortran, for developing
scientific libraries that can be widely used across the whole community. We
have previously demonstrated this capability with tools like <a href="https://github.com/SciML/diffeqpy">diffeqpy</a>
and <a href="https://cran.r-project.org/web/packages/diffeqr/index.html">diffeqr</a> for
using DifferentialEquations.jl from Python and R respectively, and we plan to
continue along these lines to allow as much of our tooling as possible to be accessible
from as many languages as possible. While there will always be some optimizations
that can only occur when used from the Julia programming language, DSL builders
like <a href="https://github.com/SciML/ModelingToolkit.jl">ModelingToolkit.jl</a> will be
used to further expand the capabilities and performance of our wrappers.</p>
<p>Here’s an example which solves stochastic differential equations with high order adaptive methods
from Python:</p>
<div class="language-py highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># pip install diffeqpy
</span><span class="kn">from</span> <span class="nn">diffeqpy</span> <span class="kn">import</span> <span class="n">de</span>
<span class="c1"># diffeqpy.install()
</span>
<span class="kn">import</span> <span class="nn">numba</span>
<span class="kn">import</span> <span class="nn">numpy</span>
<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="k">as</span> <span class="n">plt</span>
<span class="k">def</span> <span class="nf">f</span><span class="p">(</span><span class="n">du</span><span class="p">,</span><span class="n">u</span><span class="p">,</span><span class="n">p</span><span class="p">,</span><span class="n">t</span><span class="p">):</span>
<span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="n">z</span> <span class="o">=</span> <span class="n">u</span>
<span class="n">sigma</span><span class="p">,</span> <span class="n">rho</span><span class="p">,</span> <span class="n">beta</span> <span class="o">=</span> <span class="n">p</span>
<span class="n">du</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">=</span> <span class="n">sigma</span> <span class="o">*</span> <span class="p">(</span><span class="n">y</span> <span class="o">-</span> <span class="n">x</span><span class="p">)</span>
<span class="n">du</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="o">=</span> <span class="n">x</span> <span class="o">*</span> <span class="p">(</span><span class="n">rho</span> <span class="o">-</span> <span class="n">z</span><span class="p">)</span> <span class="o">-</span> <span class="n">y</span>
<span class="n">du</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span> <span class="o">=</span> <span class="n">x</span> <span class="o">*</span> <span class="n">y</span> <span class="o">-</span> <span class="n">beta</span> <span class="o">*</span> <span class="n">z</span>
<span class="k">def</span> <span class="nf">g</span><span class="p">(</span><span class="n">du</span><span class="p">,</span><span class="n">u</span><span class="p">,</span><span class="n">p</span><span class="p">,</span><span class="n">t</span><span class="p">):</span>
<span class="n">du</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">=</span> <span class="mf">0.3</span><span class="o">*</span><span class="n">u</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="n">du</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="o">=</span> <span class="mf">0.3</span><span class="o">*</span><span class="n">u</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="n">du</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span> <span class="o">=</span> <span class="mf">0.3</span><span class="o">*</span><span class="n">u</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span>
<span class="n">numba_f</span> <span class="o">=</span> <span class="n">numba</span><span class="o">.</span><span class="n">jit</span><span class="p">(</span><span class="n">f</span><span class="p">)</span>
<span class="n">numba_g</span> <span class="o">=</span> <span class="n">numba</span><span class="o">.</span><span class="n">jit</span><span class="p">(</span><span class="n">g</span><span class="p">)</span>
<span class="n">u0</span> <span class="o">=</span> <span class="p">[</span><span class="mf">1.0</span><span class="p">,</span><span class="mf">0.0</span><span class="p">,</span><span class="mf">0.0</span><span class="p">]</span>
<span class="n">tspan</span> <span class="o">=</span> <span class="p">(</span><span class="mf">0.</span><span class="p">,</span> <span class="mf">100.</span><span class="p">)</span>
<span class="n">p</span> <span class="o">=</span> <span class="p">[</span><span class="mf">10.0</span><span class="p">,</span><span class="mf">28.0</span><span class="p">,</span><span class="mf">2.66</span><span class="p">]</span>
<span class="n">prob</span> <span class="o">=</span> <span class="n">de</span><span class="o">.</span><span class="n">SDEProblem</span><span class="p">(</span><span class="n">numba_f</span><span class="p">,</span> <span class="n">numba_g</span><span class="p">,</span> <span class="n">u0</span><span class="p">,</span> <span class="n">tspan</span><span class="p">,</span> <span class="n">p</span><span class="p">)</span>
<span class="n">sol</span> <span class="o">=</span> <span class="n">de</span><span class="o">.</span><span class="n">solve</span><span class="p">(</span><span class="n">prob</span><span class="p">)</span>
<span class="c1"># Now let's draw a phase plot
</span>
<span class="n">ut</span> <span class="o">=</span> <span class="n">numpy</span><span class="o">.</span><span class="n">transpose</span><span class="p">(</span><span class="n">sol</span><span class="o">.</span><span class="n">u</span><span class="p">)</span>
<span class="kn">from</span> <span class="nn">mpl_toolkits.mplot3d</span> <span class="kn">import</span> <span class="n">Axes3D</span>
<span class="n">fig</span> <span class="o">=</span> <span class="n">plt</span><span class="o">.</span><span class="n">figure</span><span class="p">()</span>
<span class="n">ax</span> <span class="o">=</span> <span class="n">fig</span><span class="o">.</span><span class="n">add_subplot</span><span class="p">(</span><span class="mi">111</span><span class="p">,</span> <span class="n">projection</span><span class="o">=</span><span class="s">'3d'</span><span class="p">)</span>
<span class="n">ax</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">ut</span><span class="p">[</span><span class="mi">0</span><span class="p">,:],</span><span class="n">ut</span><span class="p">[</span><span class="mi">1</span><span class="p">,:],</span><span class="n">ut</span><span class="p">[</span><span class="mi">2</span><span class="p">,:])</span>
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
</code></pre></div></div>
<h2 id="we-provide-tools-for-researching-methods-in-scientific-machine-learning">We provide tools for researching methods in scientific machine learning</h2>
<p>Last but not least, we support the research activities of practitioners in
scientific machine learning. Tools like <a href="https://github.com/SciML/DiffEqDevTools.jl">DiffEqDevTools.jl</a>
and <a href="https://github.com/SciML/RootedTrees.jl">RootedTrees.jl</a> make it easy to
create and benchmark new methods and accelerate the publication
process for numerical researchers. Our wrappers for external tools like
<a href="https://github.com/SciML/FEniCS.jl">FEniCS.jl</a> and
<a href="https://github.com/SciML/SciPyDiffEq.jl">SciPyDiffEq.jl</a> make it easy to perform
cross-platform comparisons. Our stack is entirely written within Julia,
which means every piece can be tweaked on the fly, making it easy to mix
and match Hamiltonian integrators with neural networks to discover new scientific
applications. Our issues and <a href="https://gitter.im/JuliaDiffEq/Lobby">chat channel</a>
serve as places to not just debug existing software, but also discuss new
methods and help create high performance implementations.</p>
<p>In addition, we support many student activities to bring new researchers into the
community. Many of the maintainers of our packages, such as Yingbo Ma, Vaibhav Dixit,
Kanav Gupta, and Kirill Zubov, started among our more than 50 student developers
from past <a href="https://summerofcode.withgoogle.com/">Google Summer of Code</a> and other
<a href="https://julialang.org/jsoc/">Julia Seasons of Contributions</a> programs.</p>
<h1 id="the-goal-of-sciml">The Goal of SciML</h1>
<p>When you read a paper that is
<a href="https://arxiv.org/abs/2001.04385">mixing neural networks with differential equations (our recent paper, available as a preprint)</a>
or <a href="https://arxiv.org/abs/1903.00033">designing new neural networks that satisfy incompressibility for modeling Navier-Stokes</a>,
you should be able to go online and find tweakable, high quality, and highly
maintained package implementations of these methodologies to either start using
for your scientific research, or utilize as a starting point for furthering the
methods of scientific machine learning. For this reason, the goal of the SciML
OSS organization is to be a hub for the development of robust cross-language
scientific machine learning software. <strong>In order to make this a reality, we as
an organization commit to the following principles</strong>:</p>
<h2 id="everything-that-we-build-is-compatible-with-automatic-differentiation">Everything that we build is compatible with automatic differentiation</h2>
<p>Putting an arbitrary piece of code from the SciML group into a training loop of
some machine learning library like <a href="https://fluxml.ai/">Flux</a> will naturally work.
This means we plan to enforce coding styles that are compatible with language-wide differentiable programming
tools like <a href="https://github.com/FluxML/Zygote.jl">Zygote</a>, or provide pre-defined
forward/adjoint rules via the derivative rule package <a href="https://github.com/JuliaDiff/ChainRules.jl">ChainRules.jl</a>.</p>
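<p>As a minimal sketch of what this looks like in practice (assuming OrdinaryDiffEq.jl, DiffEqSensitivity.jl, and Zygote.jl are installed; the toy ODE and the <code class="language-plaintext highlighter-rouge">loss</code> function here are purely illustrative):</p>

```julia
# Sketch: Zygote differentiates straight through an ODE solve,
# with DiffEqSensitivity supplying the adjoint rules.
using OrdinaryDiffEq, DiffEqSensitivity, Zygote

f(u, p, t) = p[1] * u                        # toy linear ODE: du/dt = p*u
prob = ODEProblem(f, 1.0, (0.0, 1.0), [1.5])

# Loss is the final state; the gradient is taken with respect to the parameters.
loss(p) = solve(prob, Tsit5(), p=p, saveat=0.1)[end]
dp = Zygote.gradient(loss, [1.5])[1]
```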
<p>As demonstrated in the following animation, you can take our stochastic
differential equation solvers and train a circuit to control the solution
by simply piecing together compatible packages.</p>
<p><img src="https://user-images.githubusercontent.com/1814174/51399524-2c6abf80-1b14-11e9-96ae-0192f7debd03.gif" alt="" /></p>
<h2 id="performance-is-considered-a-priority-and-performance-issues-are-considered-bugs">Performance is considered a priority, and performance issues are considered bugs</h2>
<p>No questions asked. If you can find something else that performs better, we
consider that an issue that should get fixed. High performance is required for
scientific machine learning to scale, and so we take performance seriously.</p>
<h2 id="our-packages-are-routinely-and-robustly-tested-with-the-tools-for-both-scientific-simulation-and-machine-learning">Our packages are routinely and robustly tested with the tools for both scientific simulation and machine learning</h2>
<p>This means we will continue to develop tools like
<a href="https://github.com/SciML/DiffEqFlux.jl">DiffEqFlux.jl</a> which supports the
connection between the <a href="https://docs.sciml.ai/dev/">DifferentialEquations.jl</a>
differential equation solvers and the <a href="https://fluxml.ai/">Flux</a> deep learning
library. Another example includes our
<a href="https://surrogates.sciml.ai/dev/">surrogate modeling library, Surrogates.jl</a>
which is routinely tested with DifferentialEquations.jl and machine learning
AD tooling like Zygote.jl, meaning that you can be sure that our surrogate
modeling tools can train on differential equations and then be used inside
of deep learning stacks. It is this interconnectivity that will allow
next-generation SciML methodologies to get productionized in a way that will
impact “big science” and industrial use.</p>
<h2 id="we-keep-up-with-advances-in-computational-hardware-to-ensure-compatibility-with-the-latest-high-performance-computing-tools">We keep up with advances in computational hardware to ensure compatibility with the latest high performance computing tools.</h2>
<p>Today, Intel CPUs and NVIDIA GPUs are the dominant platforms, but that won’t always
be the case. <a href="https://www.anandtech.com/show/15581/el-capitan-supercomputer-detailed-amd-cpus-gpus-2-exaflops">One of the upcoming top supercomputers will be entirely AMD-based, with AMD CPUs and AMD GPUs</a>. In addition,
<a href="https://www.anandtech.com/show/15120/intels-2021-exascale-vision-in-aurora-two-sapphire-rapids-cpus-with-six-ponte-vecchio-gpus">Intel GPUs</a>
are scheduled to be a component in future supercomputers. We are
committed to maintaining a SciML toolchain that works on all major platforms,
updating our compiler backends as new technology is released.</p>
<h1 id="our-next-steps">Our Next Steps</h1>
<p>To further our focus on SciML, the next steps that we are looking at
are the following:</p>
<ul>
<li>We will continue to advance differential equation solving in many different
directions, such as adding support for stochastic delay differential equations
and improving our methods for DAEs.</li>
<li>We plan to create a new documentation setup. Instead of having everything
inside of the <a href="https://docs.sciml.ai/latest/">DifferentialEquations.jl documentation</a>,
we plan to split out some of the SciML tools to their own complete documentation.
We have already done this for <a href="https://surrogates.sciml.ai/latest/">Surrogates.jl</a>.
Next on the list is <a href="https://github.com/SciML/DiffEqFlux.jl">DiffEqFlux.jl</a>,
which, as a glance at its README makes clear, is in need of its own full docs.
Following that we plan to fully document <a href="https://github.com/SciML/NeuralNetDiffEq.jl">NeuralNetDiffEq.jl</a>
and its Physics-Informed Neural Networks (PINN) functionality,
<a href="https://github.com/SciML/DataDrivenDiffEq.jl">DataDrivenDiffEq.jl</a>, etc.
Because it does not require differential equations, we plan to split out the
documentation of <a href="https://docs.sciml.ai/latest/analysis/global_sensitivity/">Global Sensitivity Analysis</a>
to better facilitate its wider usage.</li>
<li>We plan to continue improving the <a href="https://github.com/SciML/ModelingToolkit.jl">ModelingToolkit</a>
ecosystem utilizing its symbolic nature for <a href="https://github.com/SciML/DifferentialEquations.jl/issues/469">generic specification of PDEs</a>.
This would then be used as a backend with Auto-ML as an automated way to solve
any PDE with Physics-Informed Neural Networks.</li>
<li>We plan to continue benchmarking everything, and improve our setup to include
automatic updates to the benchmarks for better performance regression tracking.
We plan to continue adding to our benchmarks, including benchmarks with MPI
and GPUs.</li>
<li>We plan to improve the installation of the Python and R side tooling, making
it automatically download precompiled Julia binaries so that users can
utilize the tooling just by installing the package with CRAN or pip. We
plan to extend our Python and R offerings to include our neural network
infused software like DiffEqFlux and NeuralNetDiffEq.</li>
<li>We plan to get feature-completeness in data driven modeling techniques like
<a href="https://surrogates.sciml.ai/latest/">Radial Basis Function (RBF) surrogates</a>,
<a href="https://github.com/SciML/DataDrivenDiffEq.jl">Dynamic Mode Decomposition and SInDy type methods</a>,
and <a href="https://github.com/SciML/ModelingToolkit.jl">Model Order Reduction</a>.</li>
<li>We plan to stay tightly coupled to the latest techniques in SciML, implementing
new physically-constrained neural architectures, optimizers, etc. as they
are developed.</li>
</ul>
<h1 id="how-you-can-join-in-the-process">How You Can Join in the Process</h1>
<p>If you want to be a part of SciML, that’s great, you’re in! Here are some things
you can start doing:</p>
<ul>
<li>Star our libraries like <a href="https://github.com/SciML/DifferentialEquations.jl">DifferentialEquations.jl</a>.
Such recognition drives our growth to sustain the project.</li>
<li><a href="https://gitter.im/JuliaDiffEq/Lobby">Join our chatroom</a> to discuss with us.</li>
<li>If you’re a student, <a href="https://sciml.ai/jsoc/projects/">find a summer project that interests you</a>
and apply for funding through Google Summer of Code or other processes (contact
us if you are interested)</li>
<li>Start contributing! We recommend opening up an issue to discuss first, and we
can help you get started.</li>
<li>Help update our websites, tutorials, benchmarks, and documentation</li>
<li>Help answer questions on Stack Overflow, the <a href="https://discourse.sciml.ai/">Julia Discourse</a>,
and other sites!</li>
<li>Hold workshops to train others on our tools.</li>
</ul>
<p>There are many ways to get involved, so if you’d like some help figuring out
how, please get in touch with us.</p>
Sun, 29 Mar 2020 10:00:00 +0000
https://sciml.ai/2020/03/29/SciML.html
DifferentialEquations.jl v6.12.0: DAE Extravaganza<p>This release is the long-awaited DAE extravaganza! We are releasing fully implicit
DAE integrators written in pure Julia, and thus compatible with things like
GPUs and arbitrary precision. We have various DAE initialization schemes to
allow for automatically finding consistent initial conditions, and have also
upgraded our solvers to handle state- and time-dependent mass matrices. These
results have also trickled over to DiffEqFlux, with the new neural ODE structs
which support singular mass matrices (DAEs). Together this is a very comprehensive
push into the DAE world.</p>
<h2 id="dimpliciteuler-and-dbdf2-fully-implicit-dae-solvers-in-pure-julia">DImplicitEuler and DBDF2: Fully Implicit DAE Solvers in Pure Julia</h2>
<p>Yes, you saw that correctly. There is now a fully implicit DAE solver setup
in pure Julia, meaning that high performance, GPU, arbitrary precision,
uncertainty quantification, automatic differentiation, etc. all exist on a set
of fully implicit ODEs. All of the standard features, like callback support
and linear solver choices, also apply. Currently we only offer the first and
second order BDF methods, but this was the difficult part, and a fully implicit
adaptive order BDF method for DAEs is coming soon, likely this summer. This checks off one of
the longest-standing requests for the JuliaDiffEq ecosystem. Thank Kanav Gupta
(@kanav99) for this wonderful addition.</p>
<p><a href="https://docs.juliadiffeq.org/latest/solvers/dae_solve/">The documentation for DAE solvers has been redone, so please check it out!</a></p>
<h2 id="dae-initialization-choices">DAE Initialization Choices</h2>
<p>Along with the new DAE solvers, there’s now a setup for initialization algorithms
for finding consistent initial conditions. These work on semi-explicit mass
matrix ODEs (i.e. singular mass matrices) and fully implicit ODEs in
<code class="language-plaintext highlighter-rouge">f(u',u,p,t)=0</code> form. A dispatch system on initialization algorithms was created
so we can keep enhancing the system iteratively. We currently have implemented
the method from Brown (i.e. the DASSL approach) for initializing only the algebraic
variables, and a collocation method from Shampine that initializes both the
differential and algebraic equations. We will continue to add to this selection
over time. A large part of this is due to Kanav Gupta (@kanav99).</p>
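<p>As a sketch of how these pieces fit together (assuming OrdinaryDiffEq.jl is loaded; the classic Robertson problem is used purely as an illustration):</p>

```julia
# Sketch: a fully implicit DAE in f(du,u,p,t) = 0 residual form, solved with
# the pure-Julia DImplicitEuler and Brown's (DASSL-style) initialization.
using OrdinaryDiffEq

function robertson(out, du, u, p, t)
    out[1] = -0.04u[1] + 1e4*u[2]*u[3] - du[1]
    out[2] =  0.04u[1] - 3e7*u[2]^2 - 1e4*u[2]*u[3] - du[2]
    out[3] = u[1] + u[2] + u[3] - 1.0      # algebraic constraint
end

u0  = [1.0, 0.0, 0.0]
du0 = [-0.04, 0.04, 0.0]
prob = DAEProblem(robertson, du0, u0, (0.0, 1e5),
                  differential_vars=[true, true, false])
sol = solve(prob, DImplicitEuler(), initializealg=BrownFullBasicInit())
```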
<h2 id="state-and-time-dependent-mass-matrices-ie-muptufupt">State and time dependent mass matrices, i.e. M(u,p,t)u’=f(u,p,t)</h2>
<p>Mass matrices can now be made state and time dependent using DiffEqOperators.
For example, the following is a valid mass matrix system:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">function</span><span class="nf"> f</span><span class="x">(</span><span class="n">du</span><span class="x">,</span><span class="n">u</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span>
<span class="n">du</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span>
<span class="n">du</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span> <span class="o">=</span> <span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span>
<span class="n">du</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span> <span class="o">=</span> <span class="n">u</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span>
<span class="k">end</span>
<span class="k">function</span><span class="nf"> update_func</span><span class="x">(</span><span class="n">A</span><span class="x">,</span><span class="n">u</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span>
<span class="n">A</span><span class="x">[</span><span class="mi">1</span><span class="x">,</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="n">cos</span><span class="x">(</span><span class="n">t</span><span class="x">)</span>
<span class="n">A</span><span class="x">[</span><span class="mi">2</span><span class="x">,</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="n">sin</span><span class="x">(</span><span class="n">t</span><span class="x">)</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span>
<span class="n">A</span><span class="x">[</span><span class="mi">3</span><span class="x">,</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="n">t</span><span class="o">^</span><span class="mi">2</span>
<span class="n">A</span><span class="x">[</span><span class="mi">1</span><span class="x">,</span><span class="mi">2</span><span class="x">]</span> <span class="o">=</span> <span class="n">cos</span><span class="x">(</span><span class="n">t</span><span class="x">)</span><span class="o">*</span><span class="n">sin</span><span class="x">(</span><span class="n">t</span><span class="x">)</span>
<span class="n">A</span><span class="x">[</span><span class="mi">2</span><span class="x">,</span><span class="mi">2</span><span class="x">]</span> <span class="o">=</span> <span class="n">cos</span><span class="x">(</span><span class="n">t</span><span class="x">)</span><span class="o">^</span><span class="mi">2</span> <span class="o">+</span> <span class="n">u</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span>
<span class="n">A</span><span class="x">[</span><span class="mi">3</span><span class="x">,</span><span class="mi">2</span><span class="x">]</span> <span class="o">=</span> <span class="n">sin</span><span class="x">(</span><span class="n">t</span><span class="x">)</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span>
<span class="n">A</span><span class="x">[</span><span class="mi">1</span><span class="x">,</span><span class="mi">3</span><span class="x">]</span> <span class="o">=</span> <span class="n">sin</span><span class="x">(</span><span class="n">t</span><span class="x">)</span>
<span class="n">A</span><span class="x">[</span><span class="mi">2</span><span class="x">,</span><span class="mi">3</span><span class="x">]</span> <span class="o">=</span> <span class="n">t</span><span class="o">^</span><span class="mi">2</span>
<span class="n">A</span><span class="x">[</span><span class="mi">3</span><span class="x">,</span><span class="mi">3</span><span class="x">]</span> <span class="o">=</span> <span class="n">t</span><span class="o">*</span><span class="n">cos</span><span class="x">(</span><span class="n">t</span><span class="x">)</span> <span class="o">+</span> <span class="mi">1</span>
<span class="k">end</span>
<span class="n">u0</span> <span class="o">=</span> <span class="n">ones</span><span class="x">(</span><span class="mi">3</span><span class="x">)</span>
<span class="n">tspan</span> <span class="o">=</span> <span class="x">(</span><span class="mf">0.0</span><span class="x">,</span> <span class="mf">1.0</span><span class="x">)</span>
<span class="n">dependent_M</span> <span class="o">=</span> <span class="n">DiffEqArrayOperator</span><span class="x">(</span><span class="n">ones</span><span class="x">(</span><span class="mi">3</span><span class="x">,</span><span class="mi">3</span><span class="x">),</span><span class="n">update_func</span><span class="o">=</span><span class="n">update_func</span><span class="x">)</span>
<span class="n">prob</span> <span class="o">=</span> <span class="n">ODEProblem</span><span class="x">(</span><span class="n">ODEFunction</span><span class="x">(</span><span class="n">f</span><span class="x">,</span> <span class="n">mass_matrix</span><span class="o">=</span><span class="n">dependent_M</span><span class="x">),</span> <span class="n">u0</span><span class="x">,</span> <span class="n">tspan</span><span class="x">)</span>
</code></pre></div></div>
<p>is a valid specification of a <code class="language-plaintext highlighter-rouge">M(u,p,t)u'=f(u,p,t)</code> system which can then be solved
with methods <a href="https://docs.juliadiffeq.org/latest/solvers/dae_solve/">as described on the DAE solver page</a>.
We have found that Yingbo Ma’s RadauIIA implementation in OrdinaryDiffEq.jl works quite well for
such systems, so do give it a try!</p>
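<p>With the problem constructed as above, the solve is the usual one-liner (a sketch; <code class="language-plaintext highlighter-rouge">RadauIIA5</code> is the fifth-order RadauIIA method in OrdinaryDiffEq.jl):</p>

```julia
# Sketch: solve the M(u,p,t)u' = f(u,p,t) system defined above.
# Assumes `prob` from the previous snippet and OrdinaryDiffEq loaded.
sol = solve(prob, RadauIIA5())
```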
<h2 id="neural-dae-structs-in-diffeqflux">Neural DAE Structs in DiffEqFlux</h2>
<p>Continuing with the DAE theme, we now have <code class="language-plaintext highlighter-rouge">NeuralODEMM</code> inside of DiffEqFlux.jl
for specifying semi-explicit mass matrix ODEs in order to impose constraint
equations in the time evolution of the system. For example, the following is a
neural DAE where the sum of the 3 ODE variables is constrained to 1:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">dudt2</span> <span class="o">=</span> <span class="n">FastChain</span><span class="x">(</span><span class="n">FastDense</span><span class="x">(</span><span class="mi">3</span><span class="x">,</span><span class="mi">64</span><span class="x">,</span><span class="n">tanh</span><span class="x">),</span><span class="n">FastDense</span><span class="x">(</span><span class="mi">64</span><span class="x">,</span><span class="mi">2</span><span class="x">))</span>
<span class="n">tspan</span> <span class="o">=</span> <span class="x">(</span><span class="mf">0.0f0</span><span class="x">,</span><span class="mf">1.5f0</span><span class="x">)</span>
<span class="n">M</span> <span class="o">=</span> <span class="x">[</span><span class="mf">1.</span> <span class="mi">0</span> <span class="mi">0</span><span class="x">;</span> <span class="mi">0</span> <span class="mf">1.</span> <span class="mi">0</span><span class="x">;</span> <span class="mi">0</span> <span class="mi">0</span> <span class="mi">0</span><span class="x">]</span> <span class="c"># singular mass matrix: last row encodes the constraint</span>
<span class="n">ndae</span> <span class="o">=</span> <span class="n">NeuralODEMM</span><span class="x">(</span><span class="n">dudt2</span><span class="x">,</span> <span class="x">(</span><span class="n">u</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span> <span class="o">-></span> <span class="x">[</span><span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span> <span class="o">+</span> <span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span> <span class="o">+</span> <span class="n">u</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span> <span class="o">-</span> <span class="mi">1</span><span class="x">],</span> <span class="n">tspan</span><span class="x">,</span> <span class="n">M</span><span class="x">,</span> <span class="n">Rodas5</span><span class="x">(</span><span class="n">autodiff</span><span class="o">=</span><span class="nb">false</span><span class="x">),</span><span class="n">saveat</span><span class="o">=</span><span class="mf">0.1</span><span class="x">)</span>
</code></pre></div></div>
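<p>The snippet above assumes a mass matrix <code class="language-plaintext highlighter-rouge">M</code> and <code class="language-plaintext highlighter-rouge">tspan</code> defined earlier. A self-contained sketch might look like the following; the sizes and the initial condition here are hypothetical, chosen only to satisfy the constraint:</p>

```julia
using DiffEqFlux, OrdinaryDiffEq

# Singular mass matrix: the zero third row turns the third equation
# into the algebraic constraint returned by the constraint function.
M = Float32[1 0 0; 0 1 0; 0 0 0]
tspan = (0.0f0, 1.0f0)

dudt2 = FastChain(FastDense(3,64,tanh),FastDense(64,2))
ndae = NeuralODEMM(dudt2, (u,p,t) -> [u[1] + u[2] + u[3] - 1], tspan, M,
                   Rodas5(autodiff=false), saveat=0.1)
u0 = Float32[0.5, 0.3, 0.2]  # hypothetical initial state with sum(u0) == 1
sol = ndae(u0)
```

<p>The DAE solver then enforces <code class="language-plaintext highlighter-rouge">sum(u) == 1</code> along the entire trajectory, regardless of what the neural network outputs.</p>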
<p>We are excited to see what kinds of applications people will come up with given
such a tool, since properties like conservation of energy can now be directly
encoded into the trained system.</p>
<h2 id="mass-matrix-dae-adjoints">Mass Matrix DAE Adjoints</h2>
<p>Complementing the constrained neural ODEs enabled by mass matrices, we have
released new additions to the adjoint methods which allow them to
support singular mass matrices. This is another great addition by Yingbo Ma
(@YingboMa).</p>
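<p>As a sketch of what this enables, one can now differentiate directly through a stiff DAE solve. This example assumes the classic Robertson problem in mass-matrix form and leaves the adjoint choice to the default <code class="language-plaintext highlighter-rouge">sensealg</code> dispatch:</p>

```julia
using OrdinaryDiffEq, DiffEqSensitivity, Zygote

# Robertson chemistry in mass-matrix DAE form: the zero row of M makes
# the third equation the conservation constraint y1 + y2 + y3 = 1.
function rober(du,u,p,t)
    y1,y2,y3 = u
    k1,k2,k3 = p
    du[1] = -k1*y1 + k3*y2*y3
    du[2] =  k1*y1 - k3*y2*y3 - k2*y2^2
    du[3] =  y1 + y2 + y3 - 1
end
M = [1.0 0 0; 0 1.0 0; 0 0 0]
f = ODEFunction(rober, mass_matrix=M)
p = [0.04, 3e7, 1e4]; u0 = [1.0, 0.0, 0.0]
prob = ODEProblem(f, u0, (0.0, 1e-2), p)

# The new adjoints make this gradient well-defined despite det(M) == 0.
dp = Zygote.gradient(p -> sum(concrete_solve(prob, Rodas5(autodiff=false),
                                             u0, p, saveat=1e-3)), p)[1]
```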
<h2 id="massive-neural-ode-performance-improvements">Massive Neural ODE Performance Improvements</h2>
<p>There has been another set of massive neural ODE performance improvements.
Strategic use of ReverseDiff.jl, avoiding Flux allocations, and fast paths for
common adjoints all contributed. We saw another 2x speedup from these advances.</p>
<h2 id="second-order-sensitivity-analysis-and-sciml_train-newton-methods">Second Order Sensitivity Analysis and sciml_train Newton Methods</h2>
<p>Second order sensitivity analysis has been added to the DiffEqSensitivity.jl
library. One can either query for fast Hessian calculations or for fast
Hessian-vector products. These utilize a mixture of AD and adjoint methods
for performing the computation in a time- and memory-efficient manner. For example,
the following returns the Hessian and the Hessian-vector product of a loss on the
ODE solution with respect to the parameters:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">function</span><span class="nf"> fb</span><span class="x">(</span><span class="n">du</span><span class="x">,</span><span class="n">u</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span>
<span class="n">du</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="n">dx</span> <span class="o">=</span> <span class="n">p</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span> <span class="o">-</span> <span class="n">p</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span>
<span class="n">du</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span> <span class="o">=</span> <span class="n">dy</span> <span class="o">=</span> <span class="o">-</span><span class="n">p</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span> <span class="o">+</span> <span class="n">p</span><span class="x">[</span><span class="mi">4</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span>
<span class="k">end</span>
<span class="k">function</span><span class="nf"> jac</span><span class="x">(</span><span class="n">J</span><span class="x">,</span><span class="n">u</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span>
<span class="x">(</span><span class="n">x</span><span class="x">,</span> <span class="n">y</span><span class="x">,</span> <span class="n">a</span><span class="x">,</span> <span class="n">b</span><span class="x">,</span> <span class="n">c</span><span class="x">,</span> <span class="n">d</span><span class="x">)</span> <span class="o">=</span> <span class="x">(</span><span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">],</span> <span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">],</span> <span class="n">p</span><span class="x">[</span><span class="mi">1</span><span class="x">],</span> <span class="n">p</span><span class="x">[</span><span class="mi">2</span><span class="x">],</span> <span class="n">p</span><span class="x">[</span><span class="mi">3</span><span class="x">],</span> <span class="n">p</span><span class="x">[</span><span class="mi">4</span><span class="x">])</span>
<span class="n">J</span><span class="x">[</span><span class="mi">1</span><span class="x">,</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="n">a</span> <span class="o">-</span> <span class="n">b</span> <span class="o">*</span> <span class="n">y</span>
<span class="n">J</span><span class="x">[</span><span class="mi">2</span><span class="x">,</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="n">d</span> <span class="o">*</span> <span class="n">y</span>
<span class="n">J</span><span class="x">[</span><span class="mi">1</span><span class="x">,</span><span class="mi">2</span><span class="x">]</span> <span class="o">=</span> <span class="o">-</span><span class="n">b</span> <span class="o">*</span> <span class="n">x</span>
<span class="n">J</span><span class="x">[</span><span class="mi">2</span><span class="x">,</span><span class="mi">2</span><span class="x">]</span> <span class="o">=</span> <span class="o">-</span><span class="n">c</span> <span class="o">+</span> <span class="n">d</span> <span class="o">*</span> <span class="n">x</span>
<span class="k">end</span>
<span class="n">f</span> <span class="o">=</span> <span class="n">ODEFunction</span><span class="x">(</span><span class="n">fb</span><span class="x">,</span><span class="n">jac</span><span class="o">=</span><span class="n">jac</span><span class="x">)</span>
<span class="n">p</span> <span class="o">=</span> <span class="x">[</span><span class="mf">1.5</span><span class="x">,</span><span class="mf">1.0</span><span class="x">,</span><span class="mf">3.0</span><span class="x">,</span><span class="mf">1.0</span><span class="x">];</span> <span class="n">u0</span> <span class="o">=</span> <span class="x">[</span><span class="mf">1.0</span><span class="x">;</span><span class="mf">1.0</span><span class="x">]</span>
<span class="n">prob</span> <span class="o">=</span> <span class="n">ODEProblem</span><span class="x">(</span><span class="n">f</span><span class="x">,</span><span class="n">u0</span><span class="x">,(</span><span class="mf">0.0</span><span class="x">,</span><span class="mf">10.0</span><span class="x">),</span><span class="n">p</span><span class="x">)</span>
<span class="n">loss</span><span class="x">(</span><span class="n">sol</span><span class="x">)</span> <span class="o">=</span> <span class="n">sum</span><span class="x">(</span><span class="n">sol</span><span class="x">)</span>
<span class="n">v</span> <span class="o">=</span> <span class="n">ones</span><span class="x">(</span><span class="mi">4</span><span class="x">)</span>
<span class="n">H</span> <span class="o">=</span> <span class="n">second_order_sensitivities</span><span class="x">(</span><span class="n">loss</span><span class="x">,</span><span class="n">prob</span><span class="x">,</span><span class="n">Vern9</span><span class="x">(),</span><span class="n">saveat</span><span class="o">=</span><span class="mf">0.1</span><span class="x">,</span><span class="n">abstol</span><span class="o">=</span><span class="mf">1e-12</span><span class="x">,</span><span class="n">reltol</span><span class="o">=</span><span class="mf">1e-12</span><span class="x">)</span>
<span class="n">Hv</span> <span class="o">=</span> <span class="n">second_order_sensitivity_product</span><span class="x">(</span><span class="n">loss</span><span class="x">,</span><span class="n">v</span><span class="x">,</span><span class="n">prob</span><span class="x">,</span><span class="n">Vern9</span><span class="x">(),</span><span class="n">saveat</span><span class="o">=</span><span class="mf">0.1</span><span class="x">,</span><span class="n">abstol</span><span class="o">=</span><span class="mf">1e-12</span><span class="x">,</span><span class="n">reltol</span><span class="o">=</span><span class="mf">1e-12</span><span class="x">)</span>
</code></pre></div></div>
<h2 id="magnus-integrators-for-uatu-and-lie-group-integrators-for-uautu">Magnus Integrators for u’=A(t)u and Lie Group Integrators for u’=A(u,t)u</h2>
<p>If your system is described by a time-dependent linear operator, like many PDE
systems, the integration can be greatly improved by exploiting this structure
of the problem. OrdinaryDiffEq.jl now supports Magnus integrators which
utilize the Krylov exponential tooling of exponential integrators in order to
support large-scale time-dependent systems in a way that preserves the solution
manifold. For state-dependent problems, a similar set of methods, the Lie group
methods, has also been started: the infrastructure is in place along with an
implementation of the LieEuler method. Adding more methods is now the easy part,
and we expect a whole litany of methods in these two categories by the next
release.</p>
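<p>As a minimal sketch of the interface (assuming the <code class="language-plaintext highlighter-rouge">DiffEqArrayOperator</code> update-function style from DiffEqOperators.jl and the <code class="language-plaintext highlighter-rouge">MagnusMidpoint</code> method name), a rotation-type problem u’=A(t)u can be set up as:</p>

```julia
using OrdinaryDiffEq, DiffEqOperators

# u' = A(t)u with a skew-symmetric, time-dependent A; the update
# function mutates the operator's matrix in place at each time t.
update!(A,u,p,t) = (A .= [0.0 t; -t 0.0])
A = DiffEqArrayOperator(zeros(2,2), update_func=update!)
prob = ODEProblem(A, [1.0, 0.0], (0.0, 1.0))
sol = solve(prob, MagnusMidpoint(), dt=1/20)
```

<p>Because A(t) is skew-symmetric here, the exact flow is orthogonal, and the Magnus method preserves the norm of the solution, which is exactly the "solution manifold" structure mentioned above.</p>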
<h1 id="next-directions">Next Directions</h1>
<p>Here are some things to look forward to:</p>
<ul>
<li>Automated matrix-free finite difference PDE operators</li>
<li>Jacobian reuse efficiency in Rosenbrock-W methods</li>
<li>High strong order methods for non-commutative noise SDEs</li>
<li>Stochastic delay differential equations</li>
</ul>
Mon, 23 Mar 2020 10:00:00 +0000
https://sciml.ai/2020/03/23/DAE.html
DifferentialEquations.jl v6.11.0: Universal Differential Equation Overhaul<p>After the release of the paper
<a href="https://arxiv.org/abs/2001.04385">Universal Differential Equations for Scientific Machine Learning</a>,
we have had very good feedback and have seen plenty of new users joining the
Julia differential equation ecosystem and utilizing the tools for scientific
machine learning. A lot of our work in this last release focuses on these
capabilities, mixing them with GPU support and global sensitivity analysis to
augment the normal local tools of SciML.</p>
<h2 id="1000-stars-for-differentialequationsjl">1,000 Stars for DifferentialEquations.jl!</h2>
<p>Before the bigger updates, I wanted to announce that DifferentialEquations.jl
surpassed the 1,000-star milestone in this round. This is a helpful indicator
of community utility. If you haven’t done so yet, please
<a href="https://github.com/JuliaDiffEq/DifferentialEquations.jl">star DifferentialEquations.jl</a>,
as it is a valuable indicator for future grants and funding for student projects.</p>
<h2 id="local-sensitivity-analysis-overhaul-concrete_solve-and-sensealg">Local Sensitivity Analysis Overhaul: <code class="language-plaintext highlighter-rouge">concrete_solve</code> and <code class="language-plaintext highlighter-rouge">sensealg</code></h2>
<p>With major help from Yingbo Ma (@YingboMa), we have overhauled our sensitivity
analysis algorithms to give a lot more choice and implementation flexibility.
While all of the lower level interface is still in place, a new higher level
interface will make users especially happy. This interface is <code class="language-plaintext highlighter-rouge">concrete_solve</code>.
It’s a version of <code class="language-plaintext highlighter-rouge">solve</code> (limitation: no post-solution interpolation)
which explicitly takes in <code class="language-plaintext highlighter-rouge">u0</code> and <code class="language-plaintext highlighter-rouge">p</code>, and is set up with Zygote to automatically
utilize our built-in <code class="language-plaintext highlighter-rouge">SensitivityAlgorithm</code> methods whenever Zygote (or any
ChainRules.jl-based AD system) asks for a gradient. For example:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">using</span> <span class="n">DiffEqSensitivity</span><span class="x">,</span> <span class="n">OrdinaryDiffEq</span><span class="x">,</span> <span class="n">Zygote</span>
<span class="k">function</span><span class="nf"> fiip</span><span class="x">(</span><span class="n">du</span><span class="x">,</span><span class="n">u</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span>
<span class="n">du</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="n">dx</span> <span class="o">=</span> <span class="n">p</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span> <span class="o">-</span> <span class="n">p</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span>
<span class="n">du</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span> <span class="o">=</span> <span class="n">dy</span> <span class="o">=</span> <span class="o">-</span><span class="n">p</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span> <span class="o">+</span> <span class="n">p</span><span class="x">[</span><span class="mi">4</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span>
<span class="k">end</span>
<span class="n">p</span> <span class="o">=</span> <span class="x">[</span><span class="mf">1.5</span><span class="x">,</span><span class="mf">1.0</span><span class="x">,</span><span class="mf">3.0</span><span class="x">,</span><span class="mf">1.0</span><span class="x">];</span> <span class="n">u0</span> <span class="o">=</span> <span class="x">[</span><span class="mf">1.0</span><span class="x">;</span><span class="mf">1.0</span><span class="x">]</span>
<span class="n">prob</span> <span class="o">=</span> <span class="n">ODEProblem</span><span class="x">(</span><span class="n">fiip</span><span class="x">,</span><span class="n">u0</span><span class="x">,(</span><span class="mf">0.0</span><span class="x">,</span><span class="mf">10.0</span><span class="x">),</span><span class="n">p</span><span class="x">)</span>
<span class="n">sol</span> <span class="o">=</span> <span class="n">concrete_solve</span><span class="x">(</span><span class="n">prob</span><span class="x">,</span><span class="n">Tsit5</span><span class="x">())</span>
</code></pre></div></div>
<p>solves the equation, while:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">du0</span><span class="x">,</span><span class="n">dp</span> <span class="o">=</span> <span class="n">Zygote</span><span class="o">.</span><span class="n">gradient</span><span class="x">((</span><span class="n">u0</span><span class="x">,</span><span class="n">p</span><span class="x">)</span><span class="o">-></span><span class="n">sum</span><span class="x">(</span><span class="n">concrete_solve</span><span class="x">(</span><span class="n">prob</span><span class="x">,</span><span class="n">Tsit5</span><span class="x">(),</span><span class="n">u0</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">saveat</span><span class="o">=</span><span class="mf">0.1</span><span class="x">,</span><span class="n">sensealg</span><span class="o">=</span><span class="n">QuadratureAdjoint</span><span class="x">())),</span><span class="n">u0</span><span class="x">,</span><span class="n">p</span><span class="x">)</span>
</code></pre></div></div>
<p>computes <code class="language-plaintext highlighter-rouge">du0</code> and <code class="language-plaintext highlighter-rouge">dp</code>: the gradient of the cost function with respect to the
initial condition and parameters. Notice here we have a choice of <code class="language-plaintext highlighter-rouge">sensealg</code>,
which allows the choice of a sensitivity analysis method for Zygote to use. The
choices are vast and growing, with each having pros and cons. You can ask it
to use forward sensitivity analysis, forward mode AD, Tracker.jl, O(1) adjoints
via backsolve, <strong>checkpointed adjoints</strong>, etc., all just by changing the <code class="language-plaintext highlighter-rouge">sensealg</code>
keyword argument. This makes it the first system to offer such flexibility,
allowing the most efficient gradient calculation to be chosen for each specific
problem.</p>
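<p>For instance, swapping methods is a one-keyword change. The sketch below redefines the Lotka-Volterra setup for self-containment; the algorithm names come from DiffEqSensitivity, and note that <code class="language-plaintext highlighter-rouge">BacksolveAdjoint</code> can be unstable on some problems:</p>

```julia
using DiffEqSensitivity, OrdinaryDiffEq, Zygote

function fiip(du,u,p,t)
  du[1] = p[1]*u[1] - p[2]*u[1]*u[2]
  du[2] = -p[3]*u[2] + p[4]*u[1]*u[2]
end
p = [1.5,1.0,3.0,1.0]; u0 = [1.0;1.0]
prob = ODEProblem(fiip,u0,(0.0,10.0),p)
loss(u0,p,alg) = sum(concrete_solve(prob,Tsit5(),u0,p,saveat=0.1,sensealg=alg))

# Same gradient, three different differentiation strategies:
g1 = Zygote.gradient((u0,p)->loss(u0,p,ForwardDiffSensitivity()),u0,p)
g2 = Zygote.gradient((u0,p)->loss(u0,p,InterpolatingAdjoint(checkpointing=true)),u0,p)
g3 = Zygote.gradient((u0,p)->loss(u0,p,BacksolveAdjoint()),u0,p)
```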
<p>We’ve seen some pretty massive performance and stability gains by utilizing
this system!</p>
<h2 id="diffeqflux-overhaul-zygote-support-sciml_train-interface-and-fast-layers">DiffEqFlux Overhaul: Zygote Support, <code class="language-plaintext highlighter-rouge">sciml_train</code> Interface, and Fast Layers</h2>
<p>Given the workflows that we saw in <a href="https://arxiv.org/abs/2001.04385">the UDE paper</a>,
we have overhauled DiffEqFlux. The new interface, <code class="language-plaintext highlighter-rouge">sciml_train</code>, is better suited
to scientific machine learning. We have introduced the <code class="language-plaintext highlighter-rouge">Fast</code> layer setup, i.e.
<code class="language-plaintext highlighter-rouge">FastChain</code> and <code class="language-plaintext highlighter-rouge">FastDense</code>, which give a 10x speed improvement over Flux.jl
neural architectures by avoiding expensive restructure/destructure calls. Additionally,
<code class="language-plaintext highlighter-rouge">sciml_train</code> links not just to the Flux.jl deep learning optimizer library,
but also to Optim.jl for stability-enhanced methods like L-BFGS. Lastly, this
new interface uses explicit parameters, which has fixed a lot of the issues
users have had. Together, these let us train a neural ODE
in around 30 seconds in this example that mixes the ADAM and LBFGS optimizers:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">using</span> <span class="n">DiffEqFlux</span><span class="x">,</span> <span class="n">OrdinaryDiffEq</span><span class="x">,</span> <span class="n">Flux</span><span class="x">,</span> <span class="n">Optim</span><span class="x">,</span> <span class="n">Plots</span>
<span class="n">u0</span> <span class="o">=</span> <span class="kt">Float32</span><span class="x">[</span><span class="mf">2.</span><span class="x">;</span> <span class="mf">0.</span><span class="x">]</span>
<span class="n">datasize</span> <span class="o">=</span> <span class="mi">30</span>
<span class="n">tspan</span> <span class="o">=</span> <span class="x">(</span><span class="mf">0.0f0</span><span class="x">,</span><span class="mf">1.5f0</span><span class="x">)</span>
<span class="k">function</span><span class="nf"> trueODEfunc</span><span class="x">(</span><span class="n">du</span><span class="x">,</span><span class="n">u</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span>
<span class="n">true_A</span> <span class="o">=</span> <span class="x">[</span><span class="o">-</span><span class="mf">0.1</span> <span class="mf">2.0</span><span class="x">;</span> <span class="o">-</span><span class="mf">2.0</span> <span class="o">-</span><span class="mf">0.1</span><span class="x">]</span>
<span class="n">du</span> <span class="o">.=</span> <span class="x">((</span><span class="n">u</span><span class="o">.^</span><span class="mi">3</span><span class="x">)</span><span class="err">'</span><span class="n">true_A</span><span class="x">)</span><span class="err">'</span>
<span class="k">end</span>
<span class="n">t</span> <span class="o">=</span> <span class="n">range</span><span class="x">(</span><span class="n">tspan</span><span class="x">[</span><span class="mi">1</span><span class="x">],</span><span class="n">tspan</span><span class="x">[</span><span class="mi">2</span><span class="x">],</span><span class="n">length</span><span class="o">=</span><span class="n">datasize</span><span class="x">)</span>
<span class="n">prob</span> <span class="o">=</span> <span class="n">ODEProblem</span><span class="x">(</span><span class="n">trueODEfunc</span><span class="x">,</span><span class="n">u0</span><span class="x">,</span><span class="n">tspan</span><span class="x">)</span>
<span class="n">ode_data</span> <span class="o">=</span> <span class="kt">Array</span><span class="x">(</span><span class="n">solve</span><span class="x">(</span><span class="n">prob</span><span class="x">,</span><span class="n">Tsit5</span><span class="x">(),</span><span class="n">saveat</span><span class="o">=</span><span class="n">t</span><span class="x">))</span>
<span class="n">dudt2</span> <span class="o">=</span> <span class="n">FastChain</span><span class="x">((</span><span class="n">x</span><span class="x">,</span><span class="n">p</span><span class="x">)</span> <span class="o">-></span> <span class="n">x</span><span class="o">.^</span><span class="mi">3</span><span class="x">,</span>
<span class="n">FastDense</span><span class="x">(</span><span class="mi">2</span><span class="x">,</span><span class="mi">50</span><span class="x">,</span><span class="n">tanh</span><span class="x">),</span>
<span class="n">FastDense</span><span class="x">(</span><span class="mi">50</span><span class="x">,</span><span class="mi">2</span><span class="x">))</span>
<span class="n">n_ode</span> <span class="o">=</span> <span class="n">NeuralODE</span><span class="x">(</span><span class="n">dudt2</span><span class="x">,</span><span class="n">tspan</span><span class="x">,</span><span class="n">Tsit5</span><span class="x">(),</span><span class="n">saveat</span><span class="o">=</span><span class="n">t</span><span class="x">)</span>
<span class="k">function</span><span class="nf"> predict_n_ode</span><span class="x">(</span><span class="n">p</span><span class="x">)</span>
<span class="n">n_ode</span><span class="x">(</span><span class="n">u0</span><span class="x">,</span><span class="n">p</span><span class="x">)</span>
<span class="k">end</span>
<span class="k">function</span><span class="nf"> loss_n_ode</span><span class="x">(</span><span class="n">p</span><span class="x">)</span>
<span class="n">pred</span> <span class="o">=</span> <span class="n">predict_n_ode</span><span class="x">(</span><span class="n">p</span><span class="x">)</span>
<span class="n">loss</span> <span class="o">=</span> <span class="n">sum</span><span class="x">(</span><span class="n">abs2</span><span class="x">,</span><span class="n">ode_data</span> <span class="o">.-</span> <span class="n">pred</span><span class="x">)</span>
<span class="n">loss</span><span class="x">,</span><span class="n">pred</span>
<span class="k">end</span>
<span class="n">loss_n_ode</span><span class="x">(</span><span class="n">n_ode</span><span class="o">.</span><span class="n">p</span><span class="x">)</span> <span class="c"># n_ode.p stores the initial parameters of the neural ODE</span>
<span class="n">cb</span> <span class="o">=</span> <span class="k">function</span><span class="nf"> </span><span class="o">(</span><span class="n">p</span><span class="x">,</span><span class="n">l</span><span class="x">,</span><span class="n">pred</span><span class="x">;</span><span class="n">doplot</span><span class="o">=</span><span class="nb">false</span><span class="x">)</span> <span class="c">#callback function to observe training</span>
<span class="n">display</span><span class="x">(</span><span class="n">l</span><span class="x">)</span>
<span class="c"># plot current prediction against data</span>
<span class="k">if</span> <span class="n">doplot</span>
<span class="n">pl</span> <span class="o">=</span> <span class="n">scatter</span><span class="x">(</span><span class="n">t</span><span class="x">,</span><span class="n">ode_data</span><span class="x">[</span><span class="mi">1</span><span class="x">,</span><span class="o">:</span><span class="x">],</span><span class="n">label</span><span class="o">=</span><span class="s">"data"</span><span class="x">)</span>
<span class="n">scatter!</span><span class="x">(</span><span class="n">pl</span><span class="x">,</span><span class="n">t</span><span class="x">,</span><span class="n">pred</span><span class="x">[</span><span class="mi">1</span><span class="x">,</span><span class="o">:</span><span class="x">],</span><span class="n">label</span><span class="o">=</span><span class="s">"prediction"</span><span class="x">)</span>
<span class="n">display</span><span class="x">(</span><span class="n">plot</span><span class="x">(</span><span class="n">pl</span><span class="x">))</span>
<span class="k">end</span>
<span class="k">return</span> <span class="nb">false</span>
<span class="k">end</span>
<span class="c"># Display the ODE with the initial parameter values.</span>
<span class="n">cb</span><span class="x">(</span><span class="n">n_ode</span><span class="o">.</span><span class="n">p</span><span class="x">,</span><span class="n">loss_n_ode</span><span class="x">(</span><span class="n">n_ode</span><span class="o">.</span><span class="n">p</span><span class="x">)</span><span class="o">...</span><span class="x">)</span>
<span class="n">res1</span> <span class="o">=</span> <span class="n">DiffEqFlux</span><span class="o">.</span><span class="n">sciml_train</span><span class="x">(</span><span class="n">loss_n_ode</span><span class="x">,</span> <span class="n">n_ode</span><span class="o">.</span><span class="n">p</span><span class="x">,</span> <span class="n">ADAM</span><span class="x">(</span><span class="mf">0.05</span><span class="x">),</span> <span class="n">cb</span> <span class="o">=</span> <span class="n">cb</span><span class="x">,</span> <span class="n">maxiters</span> <span class="o">=</span> <span class="mi">300</span><span class="x">)</span>
<span class="n">cb</span><span class="x">(</span><span class="n">res1</span><span class="o">.</span><span class="n">minimizer</span><span class="x">,</span><span class="n">loss_n_ode</span><span class="x">(</span><span class="n">res1</span><span class="o">.</span><span class="n">minimizer</span><span class="x">)</span><span class="o">...</span><span class="x">;</span><span class="n">doplot</span><span class="o">=</span><span class="nb">true</span><span class="x">)</span>
<span class="n">res2</span> <span class="o">=</span> <span class="n">DiffEqFlux</span><span class="o">.</span><span class="n">sciml_train</span><span class="x">(</span><span class="n">loss_n_ode</span><span class="x">,</span> <span class="n">res1</span><span class="o">.</span><span class="n">minimizer</span><span class="x">,</span> <span class="n">LBFGS</span><span class="x">(),</span> <span class="n">cb</span> <span class="o">=</span> <span class="n">cb</span><span class="x">)</span>
<span class="n">cb</span><span class="x">(</span><span class="n">res2</span><span class="o">.</span><span class="n">minimizer</span><span class="x">,</span><span class="n">loss_n_ode</span><span class="x">(</span><span class="n">res2</span><span class="o">.</span><span class="n">minimizer</span><span class="x">)</span><span class="o">...</span><span class="x">;</span><span class="n">doplot</span><span class="o">=</span><span class="nb">true</span><span class="x">)</span>
</code></pre></div></div>
<h2 id="sdes-and-ad-on-diffeqgpujl">SDEs and AD on DiffEqGPU.jl</h2>
<p><a href="https://github.com/JuliaDiffEq/DiffEqGPU.jl">DiffEqGPU.jl, the library for automated parallelization of small differential equations across GPUs</a>, now supports SDEs and ForwardDiff dual numbers. This
means you can use adaptive SDE solvers to solve 100,000 simultaneous SDEs on
GPUs, or solve ODEs defined by dual numbers in order to do forward sensitivity
analysis of many parameters at once. Once again, the interface is as simple as
adding <code class="language-plaintext highlighter-rouge">EnsembleGPUArray()</code> to your ensemble solve; essentially no code change
is required to make use of these features!</p>
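<p>A sketch of the SDE usage, assuming a CUDA-capable GPU and using the stochastic Lorenz system as a stand-in model; the per-trajectory parameter randomization here is a hypothetical <code class="language-plaintext highlighter-rouge">prob_func</code>, not from the post:</p>

```julia
using DiffEqGPU, StochasticDiffEq

function lorenz(du,u,p,t)
    du[1] = p[1]*(u[2]-u[1])
    du[2] = u[1]*(p[2]-u[3]) - u[2]
    du[3] = u[1]*u[2] - p[3]*u[3]
end
noise(du,u,p,t) = (du .= 0.1f0)  # additive noise on every state
u0 = Float32[1.0, 0.0, 0.0]; p = Float32[10.0, 28.0, 8/3]
prob = SDEProblem(lorenz, noise, u0, (0.0f0, 10.0f0), p)

# Give each trajectory its own randomized parameters, then solve all
# of them on the GPU with an adaptive SDE method.
prob_func = (prob,i,repeat) -> remake(prob, p=rand(Float32,3) .* p)
monteprob = EnsembleProblem(prob, prob_func=prob_func)
sol = solve(monteprob, SOSRI(), EnsembleGPUArray(), trajectories=100_000,
            saveat=1.0f0)
```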
<h2 id="global-sensitivity-analysis-overhaul-common-interface-and-parallelism">Global Sensitivity Analysis Overhaul: Common interface and Parallelism</h2>
<p>Thanks to Vaibhav Dixit (@vaibhavdixit02), we now have a new interface for
global sensitivity analysis which allows for specifying a function that is
compatible with all forms of GSA and allows for parallelism. For example, we
can look at the global sensitivity of the mean and the maximum of the Lotka-Volterra
ODE by defining a function of the parameters <code class="language-plaintext highlighter-rouge">p</code>:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">using</span> <span class="n">DiffEqSensitivity</span><span class="x">,</span> <span class="n">Statistics</span><span class="x">,</span> <span class="n">OrdinaryDiffEq</span> <span class="c">#load packages</span>
<span class="k">function</span><span class="nf"> f</span><span class="x">(</span><span class="n">du</span><span class="x">,</span><span class="n">u</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span>
<span class="n">du</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="n">p</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span> <span class="o">-</span> <span class="n">p</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span> <span class="c">#prey</span>
<span class="n">du</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span> <span class="o">=</span> <span class="o">-</span><span class="n">p</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span> <span class="o">+</span> <span class="n">p</span><span class="x">[</span><span class="mi">4</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span> <span class="c">#predator</span>
<span class="k">end</span>
<span class="n">u0</span> <span class="o">=</span> <span class="x">[</span><span class="mf">1.0</span><span class="x">;</span><span class="mf">1.0</span><span class="x">]</span>
<span class="n">tspan</span> <span class="o">=</span> <span class="x">(</span><span class="mf">0.0</span><span class="x">,</span><span class="mf">10.0</span><span class="x">)</span>
<span class="n">p</span> <span class="o">=</span> <span class="x">[</span><span class="mf">1.5</span><span class="x">,</span><span class="mf">1.0</span><span class="x">,</span><span class="mf">3.0</span><span class="x">,</span><span class="mf">1.0</span><span class="x">]</span>
<span class="n">prob</span> <span class="o">=</span> <span class="n">ODEProblem</span><span class="x">(</span><span class="n">f</span><span class="x">,</span><span class="n">u0</span><span class="x">,</span><span class="n">tspan</span><span class="x">,</span><span class="n">p</span><span class="x">)</span>
<span class="n">t</span> <span class="o">=</span> <span class="n">collect</span><span class="x">(</span><span class="n">range</span><span class="x">(</span><span class="mi">0</span><span class="x">,</span> <span class="n">stop</span><span class="o">=</span><span class="mi">10</span><span class="x">,</span> <span class="n">length</span><span class="o">=</span><span class="mi">200</span><span class="x">))</span>
<span class="n">f1</span> <span class="o">=</span> <span class="k">function</span><span class="nf"> </span><span class="o">(</span><span class="n">p</span><span class="x">)</span>
<span class="n">prob1</span> <span class="o">=</span> <span class="n">remake</span><span class="x">(</span><span class="n">prob</span><span class="x">;</span><span class="n">p</span><span class="o">=</span><span class="n">p</span><span class="x">)</span>
<span class="n">sol</span> <span class="o">=</span> <span class="n">solve</span><span class="x">(</span><span class="n">prob1</span><span class="x">,</span><span class="n">Tsit5</span><span class="x">();</span><span class="n">saveat</span><span class="o">=</span><span class="n">t</span><span class="x">)</span>
<span class="x">[</span><span class="n">mean</span><span class="x">(</span><span class="n">sol</span><span class="x">[</span><span class="mi">1</span><span class="x">,</span><span class="o">:</span><span class="x">]),</span> <span class="n">maximum</span><span class="x">(</span><span class="n">sol</span><span class="x">[</span><span class="mi">2</span><span class="x">,</span><span class="o">:</span><span class="x">])]</span>
<span class="k">end</span>
</code></pre></div></div>
<p>And from here we can call <code class="language-plaintext highlighter-rouge">gsa</code>:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">m</span> <span class="o">=</span> <span class="n">gsa</span><span class="x">(</span><span class="n">f1</span><span class="x">,</span><span class="n">Morris</span><span class="x">(</span><span class="n">total_num_trajectory</span><span class="o">=</span><span class="mi">1000</span><span class="x">,</span><span class="n">num_trajectory</span><span class="o">=</span><span class="mi">150</span><span class="x">),[[</span><span class="mi">1</span><span class="x">,</span><span class="mi">5</span><span class="x">],[</span><span class="mi">1</span><span class="x">,</span><span class="mi">5</span><span class="x">],[</span><span class="mi">1</span><span class="x">,</span><span class="mi">5</span><span class="x">],[</span><span class="mi">1</span><span class="x">,</span><span class="mi">5</span><span class="x">]])</span>
</code></pre></div></div>
<p>That’s GSA with the Morris method. But now Sobol is one line away, and eFAST, etc.
are all simple variations.</p>
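<p>To make that concrete, here is a sketch of that one-line switch, assuming QuasiMonteCarlo.jl is used to build the two Sobol design matrices (the sample count and parameter bounds below are illustrative, not values from this post):</p>

```julia
using QuasiMonteCarlo, DiffEqSensitivity

# Build the two design matrices of parameter sample points that the
# Sobol method consumes.
N  = 1000                   # illustrative sample count
lb = [1.0, 1.0, 1.0, 1.0]   # lower bounds for the four parameters
ub = [5.0, 5.0, 5.0, 5.0]   # upper bounds
sampler = SobolSample()
A, B = QuasiMonteCarlo.generate_design_matrices(N, lb, ub, sampler)

# Same `f1` as in the Morris call above; only the method object changes.
sobol_result = gsa(f1, Sobol(), A, B)
```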
<p>In addition, there is a parallel batching interface that works nicely with the
Ensemble interface. All that happens is that <code class="language-plaintext highlighter-rouge">p</code> becomes a matrix where each
row <code class="language-plaintext highlighter-rouge">p[i,:]</code> is a set of parameters. For example, the following does the same
global sensitivity analysis but with Sobol sensitivity and automatic GPU
parallelism:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">using</span> <span class="n">DiffEqGPU</span>
<span class="n">f1</span> <span class="o">=</span> <span class="k">function</span><span class="nf"> </span><span class="o">(</span><span class="n">p</span><span class="x">)</span>
<span class="n">prob_func</span><span class="x">(</span><span class="n">prob</span><span class="x">,</span><span class="n">i</span><span class="x">,</span><span class="n">repeat</span><span class="x">)</span> <span class="o">=</span> <span class="n">remake</span><span class="x">(</span><span class="n">prob</span><span class="x">;</span><span class="n">p</span><span class="o">=</span><span class="n">p</span><span class="x">[</span><span class="n">i</span><span class="x">,</span><span class="o">:</span><span class="x">])</span>
<span class="n">ensemble_prob</span> <span class="o">=</span> <span class="n">EnsembleProblem</span><span class="x">(</span><span class="n">prob</span><span class="x">,</span><span class="n">prob_func</span><span class="o">=</span><span class="n">prob_func</span><span class="x">)</span>
<span class="n">sol</span> <span class="o">=</span> <span class="n">solve</span><span class="x">(</span><span class="n">ensemble_prob</span><span class="x">,</span><span class="n">Tsit5</span><span class="x">(),</span><span class="n">EnsembleGPUArray</span><span class="x">();</span><span class="n">saveat</span><span class="o">=</span><span class="n">t</span><span class="x">)</span>
<span class="c"># Now sol[i] is the solution for the ith set of parameters</span>
<span class="n">out</span> <span class="o">=</span> <span class="n">zeros</span><span class="x">(</span><span class="n">size</span><span class="x">(</span><span class="n">p</span><span class="x">,</span><span class="mi">1</span><span class="x">),</span><span class="mi">2</span><span class="x">)</span>
<span class="k">for</span> <span class="n">i</span> <span class="k">in</span> <span class="mi">1</span><span class="o">:</span><span class="n">size</span><span class="x">(</span><span class="n">p</span><span class="x">,</span><span class="mi">1</span><span class="x">)</span>
<span class="n">out</span><span class="x">[</span><span class="n">i</span><span class="x">,</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="n">mean</span><span class="x">(</span><span class="n">sol</span><span class="x">[</span><span class="n">i</span><span class="x">][</span><span class="mi">1</span><span class="x">,</span><span class="o">:</span><span class="x">])</span>
<span class="n">out</span><span class="x">[</span><span class="n">i</span><span class="x">,</span><span class="mi">2</span><span class="x">]</span> <span class="o">=</span> <span class="n">maximum</span><span class="x">(</span><span class="n">sol</span><span class="x">[</span><span class="n">i</span><span class="x">][</span><span class="mi">2</span><span class="x">,</span><span class="o">:</span><span class="x">])</span>
<span class="k">end</span>
<span class="n">out</span>
<span class="k">end</span>
<span class="n">sobol_result</span> <span class="o">=</span> <span class="n">gsa</span><span class="x">(</span><span class="n">f1</span><span class="x">,</span><span class="n">Sobol</span><span class="x">(),</span><span class="n">A</span><span class="x">,</span><span class="n">B</span><span class="x">,</span><span class="n">batch</span><span class="o">=</span><span class="nb">true</span><span class="x">)</span>
</code></pre></div></div>
<h2 id="efast-global-sensitivity-analysis">eFAST Global Sensitivity Analysis</h2>
<p>A new global sensitivity analysis method with fast convergence, eFAST, has been
added to the library. It works on the same <code class="language-plaintext highlighter-rouge">gsa</code> interface, so code using more
traditional Sobol or Morris techniques can switch over to this faster converging
method with just a few lines changed!</p>
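<p>For example, the Morris analysis from earlier becomes an eFAST analysis by swapping the method object (a sketch: the keyword name for the sample count is an assumption here, so check the DiffEqSensitivity documentation for the exact signature):</p>

```julia
# Reuse `f1` and the parameter ranges from the Morris example above;
# only the method object changes. eFAST returns first-order and
# total-order sensitivity indices for each parameter.
efast_result = gsa(f1, eFAST(), [[1,5],[1,5],[1,5],[1,5]], n=1000)
```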
<h1 id="next-directions">Next Directions</h1>
<p>Here’s some things to look forward to:</p>
<ul>
<li>Automated matrix-free finite difference PDE operators</li>
<li>Jacobian reuse efficiency in Rosenbrock-W methods</li>
<li>Native Julia fully implicit ODE (DAE) solving in OrdinaryDiffEq.jl</li>
<li>High Strong Order Methods for Non-Commutative Noise SDEs</li>
<li>Stochastic delay differential equations</li>
</ul>
Tue, 18 Feb 2020 10:00:00 +0000
https://sciml.ai/2020/02/18/Universal.html
https://sciml.ai/2020/02/18/Universal.html
DifferentialEquations.jl v6.9.0: Automated Multi-GPU Implicit ODE Solving, SciPy/R Bindings<h2 id="cluster-multi-gpu-support-in-diffeqgpu">Cluster Multi-GPU Support in DiffEqGPU</h2>
<p>The DiffEqGPU automated GPU parallelism tools now support multiple GPUs. The
README now shows that one can do things like:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Setup processes with different CUDA devices</span>
<span class="k">using</span> <span class="n">Distributed</span>
<span class="n">addprocs</span><span class="x">(</span><span class="n">numgpus</span><span class="x">)</span>
<span class="k">import</span> <span class="n">CUDAdrv</span><span class="x">,</span> <span class="n">CUDAnative</span>
<span class="n">let</span> <span class="n">gpuworkers</span> <span class="o">=</span> <span class="n">asyncmap</span><span class="x">(</span><span class="n">collect</span><span class="x">(</span><span class="n">zip</span><span class="x">(</span><span class="n">workers</span><span class="x">(),</span> <span class="n">CUDAdrv</span><span class="o">.</span><span class="n">devices</span><span class="x">())))</span> <span class="k">do</span> <span class="x">(</span><span class="n">p</span><span class="x">,</span> <span class="n">d</span><span class="x">)</span>
<span class="n">remotecall_wait</span><span class="x">(</span><span class="n">CUDAnative</span><span class="o">.</span><span class="n">device!</span><span class="x">,</span> <span class="n">p</span><span class="x">,</span> <span class="n">d</span><span class="x">)</span>
<span class="n">p</span>
<span class="k">end</span>
</code></pre></div></div>
<p>to set up each individual process with a separate GPU, and then the standard
usage of DiffEqGPU.jl:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">function</span><span class="nf"> lorenz</span><span class="x">(</span><span class="n">du</span><span class="x">,</span><span class="n">u</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span>
<span class="nd">@inbounds</span> <span class="k">begin</span>
<span class="n">du</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="n">p</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="x">(</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span><span class="o">-</span><span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">])</span>
<span class="n">du</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span> <span class="o">=</span> <span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="x">(</span><span class="n">p</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span><span class="o">-</span><span class="n">u</span><span class="x">[</span><span class="mi">3</span><span class="x">])</span> <span class="o">-</span> <span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span>
<span class="n">du</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span> <span class="o">=</span> <span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span> <span class="o">-</span> <span class="n">p</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span>
<span class="k">end</span>
<span class="nb">nothing</span>
<span class="k">end</span>
<span class="n">u0</span> <span class="o">=</span> <span class="kt">Float32</span><span class="x">[</span><span class="mf">1.0</span><span class="x">;</span><span class="mf">0.0</span><span class="x">;</span><span class="mf">0.0</span><span class="x">]</span>
<span class="n">tspan</span> <span class="o">=</span> <span class="x">(</span><span class="mf">0.0f0</span><span class="x">,</span><span class="mf">100.0f0</span><span class="x">)</span>
<span class="n">p</span> <span class="o">=</span> <span class="x">(</span><span class="mf">10.0f0</span><span class="x">,</span><span class="mf">28.0f0</span><span class="x">,</span><span class="mi">8</span><span class="o">/</span><span class="mf">3f0</span><span class="x">)</span>
<span class="n">prob</span> <span class="o">=</span> <span class="n">ODEProblem</span><span class="x">(</span><span class="n">lorenz</span><span class="x">,</span><span class="n">u0</span><span class="x">,</span><span class="n">tspan</span><span class="x">,</span><span class="n">p</span><span class="x">)</span>
<span class="n">prob_func</span> <span class="o">=</span> <span class="x">(</span><span class="n">prob</span><span class="x">,</span><span class="n">i</span><span class="x">,</span><span class="n">repeat</span><span class="x">)</span> <span class="o">-></span> <span class="n">remake</span><span class="x">(</span><span class="n">prob</span><span class="x">,</span><span class="n">p</span><span class="o">=</span><span class="n">rand</span><span class="x">(</span><span class="kt">Float32</span><span class="x">,</span><span class="mi">3</span><span class="x">)</span><span class="o">.*</span><span class="n">p</span><span class="x">)</span>
<span class="n">monteprob</span> <span class="o">=</span> <span class="n">EnsembleProblem</span><span class="x">(</span><span class="n">prob</span><span class="x">,</span> <span class="n">prob_func</span> <span class="o">=</span> <span class="n">prob_func</span><span class="x">)</span>
<span class="nd">@time</span> <span class="n">sol</span> <span class="o">=</span> <span class="n">solve</span><span class="x">(</span><span class="n">monteprob</span><span class="x">,</span><span class="n">Tsit5</span><span class="x">(),</span><span class="n">EnsembleGPUArray</span><span class="x">(),</span><span class="n">trajectories</span><span class="o">=</span><span class="mi">100_000</span><span class="x">,</span>
<span class="n">batch_size</span> <span class="o">=</span> <span class="mi">10_000</span><span class="x">,</span> <span class="n">saveat</span><span class="o">=</span><span class="mf">1.0f0</span><span class="x">)</span>
</code></pre></div></div>
<p>will now make use of these GPUs per batch of trajectories. We can see effective
parallel solving of over 100,000 ODEs all simultaneously using this approach
on just a few compute nodes!</p>
<h2 id="scipy-and-desolve-r-updated-matlab-common-interface-bindings-for-ease-of-translation">SciPy and deSolve (R) (+Updated MATLAB) Common Interface Bindings for Ease of Translation</h2>
<p>With the new <a href="https://github.com/JuliaDiffEq/SciPyDiffEq.jl">SciPyDiffEq.jl</a>,
<a href="https://github.com/JuliaDiffEq/deSolveDiffEq.jl">deSolveDiffEq.jl</a>, and the
updated <a href="https://github.com/JuliaDiffEq/MATLABDiffEq.jl">MATLABDiffEq.jl</a> bindings,
you can now solve common interface defined ordinary differential equations using
the solver suites from Python, R, and MATLAB, respectively. These libraries were
developed in response to popular demand: a large influx of users from these
communities want to ensure that their Julia-translated models are correct.
Now, one can install these solvers and check their models against the
original libraries to verify that the translation is correct.</p>
<p>To see this in action, the following solves the Lorenz equations with SciPy’s
<code class="language-plaintext highlighter-rouge">solve_ivp</code>’s <code class="language-plaintext highlighter-rouge">RK45</code>, deSolve’s (R) <code class="language-plaintext highlighter-rouge">lsoda</code> wrapper, and MATLAB’s <code class="language-plaintext highlighter-rouge">ode45</code>:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">using</span> <span class="n">SciPyDiffEq</span><span class="x">,</span> <span class="n">MATLABDiffEq</span><span class="x">,</span> <span class="n">deSolveDiffEq</span>
<span class="k">function</span><span class="nf"> lorenz</span><span class="x">(</span><span class="n">u</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span>
<span class="n">du1</span> <span class="o">=</span> <span class="mf">10.0</span><span class="x">(</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span><span class="o">-</span><span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">])</span>
<span class="n">du2</span> <span class="o">=</span> <span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="x">(</span><span class="mf">28.0</span><span class="o">-</span><span class="n">u</span><span class="x">[</span><span class="mi">3</span><span class="x">])</span> <span class="o">-</span> <span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span>
<span class="n">du3</span> <span class="o">=</span> <span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span> <span class="o">-</span> <span class="x">(</span><span class="mi">8</span><span class="o">/</span><span class="mi">3</span><span class="x">)</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span>
<span class="x">[</span><span class="n">du1</span><span class="x">,</span> <span class="n">du2</span><span class="x">,</span> <span class="n">du3</span><span class="x">]</span>
<span class="k">end</span>
<span class="n">tspan</span> <span class="o">=</span> <span class="x">(</span><span class="mf">0.0</span><span class="x">,</span><span class="mf">10.0</span><span class="x">)</span>
<span class="n">u0</span> <span class="o">=</span> <span class="x">[</span><span class="mf">1.0</span><span class="x">,</span><span class="mf">0.0</span><span class="x">,</span><span class="mf">0.0</span><span class="x">]</span>
<span class="n">prob</span> <span class="o">=</span> <span class="n">ODEProblem</span><span class="x">(</span><span class="n">lorenz</span><span class="x">,</span><span class="n">u0</span><span class="x">,</span><span class="n">tspan</span><span class="x">)</span>
<span class="n">sol</span> <span class="o">=</span> <span class="n">solve</span><span class="x">(</span><span class="n">prob</span><span class="x">,</span><span class="n">SciPyDiffEq</span><span class="o">.</span><span class="n">RK45</span><span class="x">())</span>
<span class="n">sol</span> <span class="o">=</span> <span class="n">solve</span><span class="x">(</span><span class="n">prob</span><span class="x">,</span><span class="n">MATLABDiffEq</span><span class="o">.</span><span class="n">ode45</span><span class="x">())</span>
<span class="n">sol</span> <span class="o">=</span> <span class="n">solve</span><span class="x">(</span><span class="n">prob</span><span class="x">,</span><span class="n">deSolveDiffEq</span><span class="o">.</span><span class="n">lsoda</span><span class="x">())</span>
</code></pre></div></div>
<p>As an added bonus, this gives us a fairly simple way to track performance
differences between the common ODE solver packages of each language. A new
<a href="https://benchmarks.juliadiffeq.org/html/MultiLanguage/wrapper_packages">benchmark page is focused on cross language wrapper overhead</a> and showcases the performance differences
between these language’s differential equation suites on 4 ODE test problems
(non-stiff and stiff). For example, on a system of 7 stiff ODEs, we see the
following:</p>
<p><img src="https://user-images.githubusercontent.com/1814174/69501114-bec7b680-0ecf-11ea-9095-7b7f2e98d514.png" alt="ODE benchmarks" /></p>
<p>which showcases the native Julia solvers as the fastest, benchmarking close to
50x faster than MATLAB, 100x faster than deSolve (R), and nearly 10,000x faster
than SciPy. Thus, with these new tools, a one-line change lets users both
ensure that their models have been translated correctly and understand the true
performance difference in their real-world context.</p>
<h2 id="automated-gpu-based-parameter-parallelism-support-for-stiff-odes-and-event-handling">Automated GPU-based Parameter Parallelism Support for Stiff ODEs and Event Handling</h2>
<p>DiffEqGPU now supports stiff ODEs through implicit and Rosenbrock methods, and
callbacks (both <code class="language-plaintext highlighter-rouge">ContinuousCallback</code> and <code class="language-plaintext highlighter-rouge">DiscreteCallback</code>) are allowed. To
see this in action, one could for example do the following:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">function</span><span class="nf"> lorenz</span><span class="x">(</span><span class="n">du</span><span class="x">,</span><span class="n">u</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span>
<span class="nd">@inbounds</span> <span class="k">begin</span>
<span class="n">du</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="n">p</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="x">(</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span><span class="o">-</span><span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">])</span>
<span class="n">du</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span> <span class="o">=</span> <span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="x">(</span><span class="n">p</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span><span class="o">-</span><span class="n">u</span><span class="x">[</span><span class="mi">3</span><span class="x">])</span> <span class="o">-</span> <span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span>
<span class="n">du</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span> <span class="o">=</span> <span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span> <span class="o">-</span> <span class="n">p</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span><span class="o">*</span><span class="n">u</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span>
<span class="k">end</span>
<span class="nb">nothing</span>
<span class="k">end</span>
<span class="n">u0</span> <span class="o">=</span> <span class="kt">Float32</span><span class="x">[</span><span class="mf">1.0</span><span class="x">;</span><span class="mf">0.0</span><span class="x">;</span><span class="mf">0.0</span><span class="x">]</span>
<span class="n">tspan</span> <span class="o">=</span> <span class="x">(</span><span class="mf">0.0f0</span><span class="x">,</span><span class="mf">100.0f0</span><span class="x">)</span>
<span class="n">p</span> <span class="o">=</span> <span class="x">(</span><span class="mf">10.0f0</span><span class="x">,</span><span class="mf">28.0f0</span><span class="x">,</span><span class="mi">8</span><span class="o">/</span><span class="mf">3f0</span><span class="x">)</span>
<span class="k">function</span><span class="nf"> lorenz_jac</span><span class="x">(</span><span class="n">J</span><span class="x">,</span><span class="n">u</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span>
<span class="nd">@inbounds</span> <span class="k">begin</span>
<span class="n">σ</span> <span class="o">=</span> <span class="n">p</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span>
<span class="n">ρ</span> <span class="o">=</span> <span class="n">p</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span>
<span class="n">β</span> <span class="o">=</span> <span class="n">p</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">u</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span>
<span class="n">y</span> <span class="o">=</span> <span class="n">u</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span>
<span class="n">z</span> <span class="o">=</span> <span class="n">u</span><span class="x">[</span><span class="mi">3</span><span class="x">]</span>
<span class="n">J</span><span class="x">[</span><span class="mi">1</span><span class="x">,</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="o">-</span><span class="n">σ</span>
<span class="n">J</span><span class="x">[</span><span class="mi">2</span><span class="x">,</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="n">ρ</span> <span class="o">-</span> <span class="n">z</span>
<span class="n">J</span><span class="x">[</span><span class="mi">3</span><span class="x">,</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="n">y</span>
<span class="n">J</span><span class="x">[</span><span class="mi">1</span><span class="x">,</span><span class="mi">2</span><span class="x">]</span> <span class="o">=</span> <span class="n">σ</span>
<span class="n">J</span><span class="x">[</span><span class="mi">2</span><span class="x">,</span><span class="mi">2</span><span class="x">]</span> <span class="o">=</span> <span class="o">-</span><span class="mi">1</span>
<span class="n">J</span><span class="x">[</span><span class="mi">3</span><span class="x">,</span><span class="mi">2</span><span class="x">]</span> <span class="o">=</span> <span class="n">x</span>
<span class="n">J</span><span class="x">[</span><span class="mi">1</span><span class="x">,</span><span class="mi">3</span><span class="x">]</span> <span class="o">=</span> <span class="mi">0</span>
<span class="n">J</span><span class="x">[</span><span class="mi">2</span><span class="x">,</span><span class="mi">3</span><span class="x">]</span> <span class="o">=</span> <span class="o">-</span><span class="n">x</span>
<span class="n">J</span><span class="x">[</span><span class="mi">3</span><span class="x">,</span><span class="mi">3</span><span class="x">]</span> <span class="o">=</span> <span class="o">-</span><span class="n">β</span>
<span class="k">end</span>
<span class="nb">nothing</span>
<span class="k">end</span>
<span class="k">function</span><span class="nf"> lorenz_tgrad</span><span class="x">(</span><span class="n">J</span><span class="x">,</span><span class="n">u</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span>
<span class="nb">nothing</span>
<span class="k">end</span>
<span class="n">func</span> <span class="o">=</span> <span class="n">ODEFunction</span><span class="x">(</span><span class="n">lorenz</span><span class="x">,</span><span class="n">jac</span><span class="o">=</span><span class="n">lorenz_jac</span><span class="x">,</span><span class="n">tgrad</span><span class="o">=</span><span class="n">lorenz_tgrad</span><span class="x">)</span>
<span class="n">prob_jac</span> <span class="o">=</span> <span class="n">ODEProblem</span><span class="x">(</span><span class="n">func</span><span class="x">,</span><span class="n">u0</span><span class="x">,</span><span class="n">tspan</span><span class="x">,</span><span class="n">p</span><span class="x">)</span>
<span class="n">prob_func</span> <span class="o">=</span> <span class="x">(</span><span class="n">prob</span><span class="x">,</span><span class="n">i</span><span class="x">,</span><span class="n">repeat</span><span class="x">)</span> <span class="o">-></span> <span class="n">remake</span><span class="x">(</span><span class="n">prob</span><span class="x">,</span><span class="n">p</span><span class="o">=</span><span class="n">rand</span><span class="x">(</span><span class="kt">Float32</span><span class="x">,</span><span class="mi">3</span><span class="x">)</span><span class="o">.*</span><span class="n">p</span><span class="x">)</span>
<span class="n">monteprob_jac</span> <span class="o">=</span> <span class="n">EnsembleProblem</span><span class="x">(</span><span class="n">prob_jac</span><span class="x">,</span> <span class="n">prob_func</span> <span class="o">=</span> <span class="n">prob_func</span><span class="x">)</span>
<span class="n">solve</span><span class="x">(</span><span class="n">monteprob_jac</span><span class="x">,</span><span class="n">Rodas5</span><span class="x">(</span><span class="n">linsolve</span><span class="o">=</span><span class="n">LinSolveGPUSplitFactorize</span><span class="x">()),</span><span class="n">EnsembleGPUArray</span><span class="x">(),</span><span class="n">dt</span><span class="o">=</span><span class="mf">0.1</span><span class="x">,</span><span class="n">trajectories</span><span class="o">=</span><span class="mi">10_000</span><span class="x">,</span><span class="n">saveat</span><span class="o">=</span><span class="mf">1.0f0</span><span class="x">)</span>
<span class="n">solve</span><span class="x">(</span><span class="n">monteprob_jac</span><span class="x">,</span><span class="n">TRBDF2</span><span class="x">(</span><span class="n">linsolve</span><span class="o">=</span><span class="n">LinSolveGPUSplitFactorize</span><span class="x">()),</span><span class="n">EnsembleGPUArray</span><span class="x">(),</span><span class="n">dt</span><span class="o">=</span><span class="mf">0.1</span><span class="x">,</span><span class="n">trajectories</span><span class="o">=</span><span class="mi">10_000</span><span class="x">,</span><span class="n">saveat</span><span class="o">=</span><span class="mf">1.0f0</span><span class="x">)</span>
</code></pre></div></div>
<p>This solves the Lorenz equations with Rosenbrock and implicit ODE solvers for
10,000 different parameters. On an example stiff ODE we’ve been testing
(26 ODEs), a single RTX 2080 card was 5x faster than a multithreaded 16-core
Xeon computer, meaning the time savings to do a parameter sweep with just one
GPU can be tremendous, even (especially) on a stiff ODE.</p>
<h2 id="stiff-ode-linear-solver-performance-improvements">Stiff ODE Linear Solver Performance Improvements</h2>
<p>Thanks to Yingbo Ma (@YingboMa), our implicit ODE solvers got a pretty major
improvement in certain stiff ODEs which have fast oscillatory terms. Now it’s
hard to find a stiff ODE benchmark where a native Julia method isn’t performing
the best, except for super large systems where Newton-Krylov methods are used.
Our next goal is to better enhance the performance of our Newton-Krylov support.</p>
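<p>Today, a Newton-Krylov method for such super large systems can be selected via the Sundials wrapper, for example (a sketch for an <code class="language-plaintext highlighter-rouge">ODEProblem</code> <code class="language-plaintext highlighter-rouge">prob</code> like the ones above; the tolerances are illustrative):</p>

```julia
using Sundials

# CVODE_BDF with a matrix-free GMRES linear solver: a Newton-Krylov
# iteration that never factorizes the Jacobian, which is what pays off
# on very large stiff systems.
sol = solve(prob, CVODE_BDF(linear_solver=:GMRES), abstol=1e-8, reltol=1e-8)
```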
<h2 id="more-precise-package-maintenance-strict-versioning-and-bounds">More Precise Package Maintenance: Strict Versioning and Bounds</h2>
<p>All of JuliaDiffEq now has upper bounds on its packages, along with CompatHelper
installed so that every dependency change gets an automatic pull request and a
notification to the JuliaDiffEq maintainers to inform us about changes in the
wider Julia ecosystem. This should help us stay on top of all changes and keep
the system stable.</p>
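<p>Concretely, this means every package's Project.toml now carries compat entries along these lines (the version numbers here are illustrative, not the actual bounds of any particular JuliaDiffEq package):</p>

```toml
[compat]
julia = "1"
# Allow any 6.x release, but not a breaking 7.0
OrdinaryDiffEq = "6"
# Semver ranges can also be spelled out explicitly
DiffEqBase = ">= 5.0.0, < 7.0.0"
```

CompatHelper then opens a pull request whenever a dependency releases a version outside these bounds, so updates are opt-in and reviewed rather than silently breaking.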
<h1 id="next-directions">Next Directions</h1>
<p>Here’s some things to look forward to:</p>
<ul>
<li>Automated matrix-free finite difference PDE operators</li>
<li>Jacobian reuse efficiency in Rosenbrock-W methods</li>
<li>Native Julia fully implicit ODE (DAE) solving in OrdinaryDiffEq.jl</li>
<li>High Strong Order Methods for Non-Commutative Noise SDEs</li>
<li>Stochastic delay differential equations</li>
</ul>
Tue, 03 Dec 2019 12:00:00 +0000
https://sciml.ai/2019/12/03/MultiGPU.html
https://sciml.ai/2019/12/03/MultiGPU.htmlDifferentialEquations.jl v6.8.0: Advanced Stiff Differential Equation Solving<p>This release covers the completion of another successful summer. We have now
completed a new round of tooling for solving large stiff and sparse differential
equations. Most of this is covered in the exciting….</p>
<h2 id="new-tutorial-solving-stiff-equations-for-advanced-users">New Tutorial: Solving Stiff Equations for Advanced Users!</h2>
<p>That is right, we now have a new tutorial added to the documentation on
<a href="https://docs.juliadiffeq.org/latest/tutorials/advanced_ode_example">solving stiff differential equations</a>.
This tutorial goes into depth, showing how to use our recent developments to
do things like automatically detect and optimize a solver with respect to
sparsity pattern, or automatically symbolically calculate a Jacobian from a
numerical code. This should serve as a great resource for the advanced users
who want to know how to get started with those finer details like sparsity
patterns and mass matrices.</p>
<h2 id="automatic-colorization-and-optimization-for-structured-matrices">Automatic Colorization and Optimization for Structured Matrices</h2>
<p>As showcased in the tutorial, if you have <code class="language-plaintext highlighter-rouge">jac_prototype</code> be a structured matrix,
then the <code class="language-plaintext highlighter-rouge">colorvec</code> is automatically computed, meaning that things like
<code class="language-plaintext highlighter-rouge">BandedMatrix</code> are now automatically optimized. The default linear solvers make
use of their special methods, meaning that DiffEq has full support for these
structured matrix objects in an optimal manner.</p>
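<p>A minimal sketch of what this looks like, assuming BandedMatrices.jl for the structured type (names follow the tutorial linked above, but treat the details as illustrative):</p>

```julia
using OrdinaryDiffEq, BandedMatrices

function rhs!(du, u, p, t)   # a simple system with a tridiagonal Jacobian
    n = length(u)
    du[1] = -2u[1] + u[2]
    for i in 2:n-1
        du[i] = u[i-1] - 2u[i] + u[i+1]
    end
    du[n] = u[n-1] - 2u[n]
    nothing
end

# Declaring the Jacobian structure is all that's needed: the color vector
# and the specialized banded linear solve are then chosen automatically.
jp = BandedMatrix{Float64}(undef, (256, 256), (1, 1))
f = ODEFunction(rhs!, jac_prototype = jp)
prob = ODEProblem(f, rand(256), (0.0, 1.0))
sol = solve(prob, Rodas5())
```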
<h2 id="implicit-extrapolation-and-parallel-dirk-for-stiff-odes">Implicit Extrapolation and Parallel DIRK for Stiff ODEs</h2>
<p>At the tail end of the summer, a set of implicit extrapolation methods were
completed. We plan to parallelize these over the next year, seeing what can
happen on small stiff ODEs if parallel W-factorizations are allowed.</p>
<h2 id="automatic-conversion-of-numerical-to-symbolic-code-with-modelingtoolkitize">Automatic Conversion of Numerical to Symbolic Code with Modelingtoolkitize</h2>
<p>This is just really cool and showcased in the new tutorial. If you give us a
function for numerically computing the ODE, we can now automatically convert
said function into a symbolic form in order to compute quantities like the
Jacobian and then build Julia code for the generated Jacobian. Check out the
new tutorial if you’re curious, because although it sounds crazy… this is
now a standard feature!</p>
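<p>A rough sketch of the workflow (the exact calls are in the tutorial; treat this as illustrative):</p>

```julia
using OrdinaryDiffEq, ModelingToolkit

function lorenz!(du, u, p, t)   # an ordinary numerical ODE definition
    du[1] = p[1] * (u[2] - u[1])
    du[2] = u[1] * (p[2] - u[3]) - u[2]
    du[3] = u[1] * u[2] - p[3] * u[3]
end
prob = ODEProblem(lorenz!, [1.0, 0.0, 0.0], (0.0, 100.0), (10.0, 28.0, 8/3))

# Trace the numerical function into a symbolic system...
sys = modelingtoolkitize(prob)
# ...and re-build the problem with a symbolically generated Jacobian.
prob_jac = ODEProblem(sys, [], (0.0, 100.0), jac = true)
sol = solve(prob_jac, Rodas5())
```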
<h2 id="gpu-optimized-sparse-colored-automatic-and-finite-differentiation">GPU-Optimized Sparse (Colored) Automatic and Finite Differentiation</h2>
<p>SparseDiffTools.jl and DiffEqDiffTools.jl were made GPU-optimized, meaning that
the stiff ODE solvers now do not have a rate-limiting step at the Jacobian
construction.</p>
<h2 id="diffeqbiologicaljl-homotopy-continuation">DiffEqBiological.jl: Homotopy Continuation</h2>
<p>DiffEqBiological got support for automatic bifurcation plot generation by
connecting with HomotopyContinuation.jl. See <a href="https://github.com/JuliaDiffEq/DiffEqBiological.jl#making-bifurcation-diagram">the new tutorial</a>.</p>
<h2 id="greatly-improved-delay-differential-equation-solving">Greatly improved delay differential equation solving</h2>
<p>David Widmann (@devmotion) greatly improved the delay differential equation
solver’s implicit step handling, along with adding a bunch of tests to show
that it passes the special RADAR5 test suite!</p>
<h2 id="color-differentiation-integration-with-native-julia-de-solvers">Color Differentiation Integration with Native Julia DE Solvers</h2>
<p>The <code class="language-plaintext highlighter-rouge">ODEFunction</code>, <code class="language-plaintext highlighter-rouge">DDEFunction</code>, <code class="language-plaintext highlighter-rouge">SDEFunction</code>, <code class="language-plaintext highlighter-rouge">DAEFunction</code>, etc. constructors
now allow you to specify a color vector. This will reduce the number of <code class="language-plaintext highlighter-rouge">f</code>
calls required to compute a sparse Jacobian, giving a massive speedup to the
computation of a Jacobian and thus of an implicit differential equation solve.
The color vectors can be computed automatically using the SparseDiffTools.jl
library’s <code class="language-plaintext highlighter-rouge">matrix_colors</code> function. Thanks to JSoC student Langwen Huang
(@huanglangwen) for this contribution.</p>
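<p>In practice this looks roughly like the following sketch, where the sparsity pattern is assumed known ahead of time:</p>

```julia
using OrdinaryDiffEq, SparseDiffTools, SparseArrays, LinearAlgebra

n = 100
# Known tridiagonal sparsity pattern for the Jacobian
jac_sparsity = sparse(Tridiagonal(ones(n-1), ones(n), ones(n-1)))
colorvec = matrix_colors(jac_sparsity)   # 3 colors for a tridiagonal matrix

function rhs!(du, u, p, t)
    du[1] = -2u[1] + u[2]
    for i in 2:length(u)-1
        du[i] = u[i-1] - 2u[i] + u[i+1]
    end
    du[end] = u[end-1] - 2u[end]
    nothing
end

# Only 3 f calls (instead of n) are now needed per sparse Jacobian.
f = ODEFunction(rhs!, jac_prototype = jac_sparsity, colorvec = colorvec)
prob = ODEProblem(f, rand(n), (0.0, 1.0))
sol = solve(prob, TRBDF2())
```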
<h2 id="improved-compile-times">Improved compile times</h2>
<p>Compile times should be majorly improved now thanks to work from David
Widmann (@devmotion) and others.</p>
<h1 id="next-directions">Next Directions</h1>
<p>Our current development is very much driven by the ongoing GSoC/JSoC projects,
which is a good thing because they are outputting some really amazing results!</p>
<p>Here’s some things to look forward to:</p>
<ul>
<li>Automated matrix-free finite difference PDE operators</li>
<li>Jacobian reuse efficiency in Rosenbrock-W methods</li>
<li>Native Julia fully implicit ODE (DAE) solving in OrdinaryDiffEq.jl</li>
<li>High Strong Order Methods for Non-Commutative Noise SDEs</li>
<li>Stochastic delay differential equations</li>
</ul>
Thu, 07 Nov 2019 12:00:00 +0000
https://sciml.ai/2019/11/07/ParallelStiff.html
https://sciml.ai/2019/11/07/ParallelStiff.htmlDifferentialEquations.jl v6.7.0: GPU-based Ensembles and Automatic Sparsity<p>Let’s just jump right in! This time we have a bunch of new GPU tools and
sparsity handling.</p>
<h2 id="breaking-with-deprecations-diffeqgpu-gpu-based-ensemble-simulations">(Breaking with Deprecations) DiffEqGPU: GPU-based Ensemble Simulations</h2>
<p>The <code class="language-plaintext highlighter-rouge">MonteCarloProblem</code> interface received an overhaul. First of all, the
interface has been renamed to <code class="language-plaintext highlighter-rouge">Ensemble</code>. The changes are:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">MonteCarloProblem</code> -> <code class="language-plaintext highlighter-rouge">EnsembleProblem</code></li>
<li><code class="language-plaintext highlighter-rouge">MonteCarloSolution</code> -> <code class="language-plaintext highlighter-rouge">EnsembleSolution</code></li>
<li><code class="language-plaintext highlighter-rouge">MonteCarloSummary</code> -> <code class="language-plaintext highlighter-rouge">EnsembleSummary</code></li>
<li><code class="language-plaintext highlighter-rouge">num_monte</code> -> <code class="language-plaintext highlighter-rouge">trajectories</code></li>
</ul>
<p><strong>Specifying <code class="language-plaintext highlighter-rouge">parallel_type</code> has been deprecated</strong> and a deprecation warning is
thrown mentioning this. So don’t worry: your code will work but will give
warnings as to what to change. Additionally, <strong>the DiffEqMonteCarlo.jl package
is no longer necessary for any of this functionality</strong>.</p>
<p>Now, <code class="language-plaintext highlighter-rouge">solve</code> of an <code class="language-plaintext highlighter-rouge">EnsembleProblem</code> works on the same dispatch mechanism as the
rest of DiffEq, which looks like <code class="language-plaintext highlighter-rouge">solve(ensembleprob,Tsit5(),EnsembleThreads(),trajectories=n)</code>
where the third argument is an ensembling algorithm to specify the
threading-based form. Code with the deprecation warning will work until the
release of DiffEq 7.0, at which time the alternative path will be removed.</p>
<p>See the <a href="https://docs.juliadiffeq.org/latest/features/ensemble">updated ensembles page for more details</a>.</p>
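<p>For example, a threaded parameter sweep with the new interface looks roughly like this (the ODE and the parameter draw are placeholders):</p>

```julia
using OrdinaryDiffEq

function lotka!(du, u, p, t)
    du[1] =  p[1]*u[1] - p[2]*u[1]*u[2]
    du[2] = -p[3]*u[2] + p[4]*u[1]*u[2]
end
prob = ODEProblem(lotka!, [1.0, 1.0], (0.0, 10.0), [1.5, 1.0, 3.0, 1.0])

# Each trajectory gets a freshly perturbed parameter set
prob_func = (prob, i, repeat) -> remake(prob, p = prob.p .* (0.9 .+ 0.2rand(4)))
ensprob = EnsembleProblem(prob, prob_func = prob_func)
sol = solve(ensprob, Tsit5(), EnsembleThreads(), trajectories = 1000)
```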
<p>The change to dispatch was done for a reason: it allows us to build new libraries
specifically for sophisticated handling of many trajectory ODE solves without
introducing massive new dependencies to the standard DifferentialEquations.jl
user. However, many people might be interested in the first project to make
use of this: <a href="https://github.com/JuliaDiffEq/DiffEqGPU.jl">DiffEqGPU.jl</a>.
DiffEqGPU.jl lets you define a problem, like an <code class="language-plaintext highlighter-rouge">ODEProblem</code>, and then solve
thousands of trajectories in parallel using your GPU. The syntax looks like:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">monteprob</span> <span class="o">=</span> <span class="n">EnsembleProblem</span><span class="x">(</span><span class="n">my_ode_prob</span><span class="x">)</span>
<span class="n">solve</span><span class="x">(</span><span class="n">monteprob</span><span class="x">,</span><span class="n">Tsit5</span><span class="x">(),</span><span class="n">EnsembleGPUArray</span><span class="x">(),</span><span class="n">trajectories</span><span class="o">=</span><span class="mi">100_000</span><span class="x">)</span>
</code></pre></div></div>
<p>and it will return 100,000 ODE solves. <strong>We have seen between a 12x and 90x speedup
depending on the GPU of the test systems</strong>, meaning that this can be a massive
improvement for parameter space exploration on smaller systems of ODEs.
Currently there are a few limitations of this method, including that events
cannot be used, but those will be solved shortly. Additional methods for
GPU-based parameter parallelism are coming soon to the same interface. Also
planned are GPU-accelerated multi-level Monte Carlo methods for faster weak
convergence of SDEs.</p>
<p>Again, this is utilizing compilation tricks to take the user-defined <code class="language-plaintext highlighter-rouge">f</code>
and recompile it on the fly to a <code class="language-plaintext highlighter-rouge">.ptx</code> kernel, and generating kernel-optimized
array-based formulations of the existing ODE solvers.</p>
<h2 id="automated-sparsity-detection">Automated Sparsity Detection</h2>
<p>Shashi Gowda (@shashigowda) implemented a sparsity detection algorithm which
digs through user-defined Julia functions with Cassette.jl to find out what
inputs influence the output. The basic version checks at a given trace, but
a more sophisticated version, which we are calling Concolic Combinatoric Analysis,
looks at all possible branch choices and utilizes this to conclusively build a
Jacobian whose sparsity pattern captures the possible variable interactions.</p>
<p>The nice part is that this functionality is very straightforward to use.
For example, let’s say we had the following function:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">function</span><span class="nf"> f</span><span class="x">(</span><span class="n">dx</span><span class="x">,</span><span class="n">x</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span>
<span class="k">for</span> <span class="n">i</span> <span class="k">in</span> <span class="mi">2</span><span class="o">:</span><span class="n">length</span><span class="x">(</span><span class="n">x</span><span class="x">)</span><span class="o">-</span><span class="mi">1</span>
<span class="n">dx</span><span class="x">[</span><span class="n">i</span><span class="x">]</span> <span class="o">=</span> <span class="n">x</span><span class="x">[</span><span class="n">i</span><span class="o">-</span><span class="mi">1</span><span class="x">]</span> <span class="o">-</span> <span class="mi">2</span><span class="n">x</span><span class="x">[</span><span class="n">i</span><span class="x">]</span> <span class="o">+</span> <span class="n">x</span><span class="x">[</span><span class="n">i</span><span class="o">+</span><span class="mi">1</span><span class="x">]</span>
<span class="k">end</span>
<span class="n">dx</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="o">-</span><span class="mi">2</span><span class="n">x</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span> <span class="o">+</span> <span class="n">x</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span>
<span class="n">dx</span><span class="x">[</span><span class="k">end</span><span class="x">]</span> <span class="o">=</span> <span class="n">x</span><span class="x">[</span><span class="k">end</span><span class="o">-</span><span class="mi">1</span><span class="x">]</span> <span class="o">-</span> <span class="mi">2</span><span class="n">x</span><span class="x">[</span><span class="k">end</span><span class="x">]</span>
<span class="nb">nothing</span>
<span class="k">end</span>
</code></pre></div></div>
<p>If we want to find out the sparsity pattern of <code class="language-plaintext highlighter-rouge">f</code>, we would simply call:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">sparsity_pattern</span> <span class="o">=</span> <span class="n">sparsity!</span><span class="x">(</span><span class="n">f</span><span class="x">,</span><span class="n">output</span><span class="x">,</span><span class="n">input</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span>
</code></pre></div></div>
<p>where <code class="language-plaintext highlighter-rouge">output</code> is an array like <code class="language-plaintext highlighter-rouge">dx</code>, <code class="language-plaintext highlighter-rouge">input</code> is an array like <code class="language-plaintext highlighter-rouge">x</code>, <code class="language-plaintext highlighter-rouge">p</code>
are possible parameters, and <code class="language-plaintext highlighter-rouge">t</code> is a possible <code class="language-plaintext highlighter-rouge">t</code>. The function will then
be analyzed and <code class="language-plaintext highlighter-rouge">sparsity_pattern</code> will return a <code class="language-plaintext highlighter-rouge">Sparsity</code> type of <code class="language-plaintext highlighter-rouge">I</code> and <code class="language-plaintext highlighter-rouge">J</code>
which denotes the terms in the Jacobian with non-zero elements. By doing
<code class="language-plaintext highlighter-rouge">sparse(sparsity_pattern)</code> we can turn this into a <code class="language-plaintext highlighter-rouge">SparseMatrixCSC</code> with the
correct sparsity pattern.</p>
<p>This functionality highlights the power of Julia since there is no way to
conclusively determine the Jacobian of an arbitrary program <code class="language-plaintext highlighter-rouge">f</code> using numerical
techniques, since all sorts of scenarios lead to “fake zeros” (cancellation,
not checking a place in parameter space where a branch is false, etc.). However,
by directly utilizing Julia’s compiler and the SSA provided by a Julia function
definition we can perform a non-standard interpretation that tells all of the
possible numerical ways the program can act, thus conclusively determining
all of the possible variable interactions.</p>
<p>Of course, you can still specify analytical Jacobians and sparsity patterns
if you want, but if you’re lazy… :)</p>
<p>See <a href="https://github.com/JuliaDiffEq/SparsityDetection.jl">SparsityDetection.jl’s README for more details</a>.</p>
<h2 id="gpu-offloading-in-implicit-de-solving">GPU Offloading in Implicit DE Solving</h2>
<p>We are pleased to announce the <code class="language-plaintext highlighter-rouge">LinSolveGPUFactorize</code> option which allows for
automatic offloading of linear solves to the GPU. For a problem with a large
enough dense Jacobian, using <code class="language-plaintext highlighter-rouge">linsolve=LinSolveGPUFactorize()</code> will now
automatically perform the factorization and back-substitution on the GPU,
allowing for better scaling. For example:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">using</span> <span class="n">CuArrays</span>
<span class="n">Rodas5</span><span class="x">(</span><span class="n">linsolve</span> <span class="o">=</span> <span class="n">LinSolveGPUFactorize</span><span class="x">())</span>
</code></pre></div></div>
<p>This simply requires a working installation of CuArrays.jl. See
<a href="https://docs.juliadiffeq.org/latest/features/linear_nonlinear">the linear solver documentation for more details</a>.</p>
<h2 id="experimental-automated-accelerator-gpu-offloading">Experimental: Automated Accelerator (GPU) Offloading</h2>
<p>We have been dabbling in automated accelerator (GPU, multithreading,
distributed, TPU, etc.) offloading, triggered when the right hardware is detected and the
problem size is large enough to suggest a possible speedup.
<a href="https://github.com/JuliaDiffEq/DiffEqBase.jl/pull/273">A working implementation exists as a PR for DiffEqBase</a>
which would allow automated acceleration of linear solves in implicit DE solving.
However, this is a rather invasive default, and very architecture-dependent,
so it is unlikely we will release it soon. Instead, we are investigating
this concept in more detail in <a href="https://github.com/JuliaDiffEq/AutoOffload.jl">AutoOffload.jl</a>. If you’re interested in Julia-wide automatic acceleration,
please take a look at the repo and help us get something going!</p>
<h2 id="a-complete-set-of-iterative-solver-routines-for-implicit-des">A Complete Set of Iterative Solver Routines for Implicit DEs</h2>
<p>Previous releases had only a pre-built GMRES implementation. However, as
detailed on the <a href="https://docs.juliadiffeq.org/latest/features/linear_nonlinear">linear solver page</a>,
we now have an array of iterative solvers readily available, including:</p>
<ul>
<li>LinSolveGMRES – GMRES</li>
<li>LinSolveCG – CG (Conjugate Gradient)</li>
<li>LinSolveBiCGStabl – BiCGStab(l) (stabilized bi-conjugate gradient)</li>
<li>LinSolveChebyshev – Chebyshev</li>
<li>LinSolveMINRES – MINRES</li>
</ul>
<p>These are all compatible with matrix-free implementations of an
<code class="language-plaintext highlighter-rouge">AbstractDiffEqOperator</code>.</p>
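<p>A sketch of pairing one of these iterative solvers with a matrix-free operator (names as in the linear solver docs; the right-hand side here is a placeholder):</p>

```julia
using OrdinaryDiffEq, DiffEqOperators

function rhs!(du, u, p, t)
    du .= -u   # placeholder right-hand side
end
u0 = rand(1000)

# Matrix-free Jacobian-vector products instead of an explicit Jacobian
ff = ODEFunction(rhs!, jac_prototype = JacVecOperator{Float64}(rhs!, u0))
prob = ODEProblem(ff, u0, (0.0, 1.0))

# Newton-Krylov: GMRES only ever needs J*v products, never J itself
sol = solve(prob, TRBDF2(linsolve = LinSolveGMRES()))
```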
<h2 id="exponential-integrator-improvements">Exponential integrator improvements</h2>
<p>Thanks to Yingbo Ma (@YingboMa), the exprb methods have been greatly improved.</p>
<h1 id="next-directions">Next Directions</h1>
<p>Our current development is very much driven by the ongoing GSoC/JSoC projects,
which is a good thing because they are outputting some really amazing results!</p>
<p>Here’s some things to look forward to:</p>
<ul>
<li>Automated matrix-free finite difference PDE operators</li>
<li>Surrogate optimization</li>
<li>Jacobian reuse efficiency in Rosenbrock-W methods</li>
<li>Native Julia fully implicit ODE (DAE) solving in OrdinaryDiffEq.jl</li>
<li>High Strong Order Methods for Non-Commutative Noise SDEs</li>
<li>GPU-Optimized Sparse (Colored) Automatic Differentiation</li>
<li>Parallelized Implicit Extrapolation of ODEs</li>
</ul>
Fri, 05 Jul 2019 12:00:00 +0000
https://sciml.ai/2019/07/05/AutomaticSparsity.html
https://sciml.ai/2019/07/05/AutomaticSparsity.htmlDifferentialEquations.jl v6.6.0: Sparse Jacobian Coloring, Quantum Computer ODE Solvers, and Stiff SDEs<h2 id="sparsity-performance-jacobian-coloring-with-numerical-and-forward-differentiation">Sparsity Performance: Jacobian coloring with numerical and forward differentiation</h2>
<p>If you have a function <code class="language-plaintext highlighter-rouge">f!(du,u)</code> which has a Tridiagonal Jacobian, you could
calculate that Jacobian by mixing perturbations. For example, instead of doing
<code class="language-plaintext highlighter-rouge">u .+ [epsilon,0,0,0,0,0,0,0,...]</code>, you’d do <code class="language-plaintext highlighter-rouge">u .+ [epsilon,0,0,epsilon,0,0,...]</code>.
Because the <code class="language-plaintext highlighter-rouge">epsilons</code> will never overlap, you can then decode this “compressed”
Jacobian into the sparse form. Do that 3 times and boom, full Jacobian in
4 calls to <code class="language-plaintext highlighter-rouge">f!</code> no matter the size of <code class="language-plaintext highlighter-rouge">u</code>! Without a color vector, this matrix
would take <code class="language-plaintext highlighter-rouge">1+length(u)</code> <code class="language-plaintext highlighter-rouge">f!</code> calls, so I’d say that’s a pretty good speedup.</p>
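<p>Here is a toy, hand-rolled version of that compress-and-decompress step for the tridiagonal case; the real libraries (DiffEqDiffTools.jl, SparseDiffTools.jl) do this generically and robustly:</p>

```julia
# Tridiagonal test function: J has nonzeros only on the three diagonals
function f!(du, u)
    n = length(u)
    du[1] = -2u[1] + u[2]
    for i in 2:n-1
        du[i] = u[i-1] - 2u[i] + u[i+1]
    end
    du[n] = u[n-1] - 2u[n]
    nothing
end

n = 6
u = rand(n); du0 = zeros(n); f!(du0, u)   # 1 base call
h = sqrt(eps())
colors = [mod1(i, 3) for i in 1:n]        # [1,2,3,1,2,3]: 3 colors suffice
J = zeros(n, n); du = zeros(n)
for c in 1:3                              # 3 perturbed calls + 1 base = 4 total
    up = copy(u); up[colors .== c] .+= h  # perturb all columns of color c at once
    f!(du, up)
    diff = (du .- du0) ./ h
    for j in findall(==(c), colors), i in max(1, j-1):min(n, j+1)
        J[i, j] = diff[i]                 # decompress within the band
    end
end
```

Since same-colored columns are at least three apart, the rows they touch never overlap, which is exactly why the compressed differences can be decoded unambiguously.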
<p>This is called Jacobian coloring. <code class="language-plaintext highlighter-rouge">[1,2,3,1,2,3,1,2,3,...]</code> are the colors in
this example, and places with the same color can be differentiated simultaneously.
Now, the DiffEqDiffTools.jl internals allow for passing a color vector into the
numerical differentiation libraries and automatically decompressing into a
sparse Jacobian. This means that DifferentialEquations.jl will soon be compatible
with this dramatic speedup technique. In addition, other libraries in Julia which
rely on our utility libraries, like Optim.jl, could soon make good use of this.</p>
<p>What if you don’t know a good color vector for your Jacobian? No sweat! The
soon to be released SparseDiffTools.jl repository has methods for automatically
generating color vectors using heuristic graphical techniques.
DifferentialEquations.jl will soon make use of this automatically if you specify
a sparse matrix for your Jacobian!</p>
<p>Note that the SparseDiffTools.jl repository also includes functions for calculating
the sparse Jacobians using color vectors and forward-mode automatic differentiation
(using Dual numbers provided by ForwardDiff.jl). In this case, the number of Dual
partials is equal to the number of colors, which can be dramatically lower than
the <code class="language-plaintext highlighter-rouge">length(u)</code> (the dense default!), thereby dramatically reducing compile
and run time.</p>
<p>Stay tuned for the next releases which begin to auto-specialize everything
along the way based on sparsity structure. Thanks to JSoC student Pankaj (@pkj-m)
for this work.</p>
<h2 id="higher-weak-order-srock-methods-for-stiff-sdes">Higher weak order SROCK methods for stiff SDEs</h2>
<p>Deepesh Thakur (@deeepeshthakur) continues his roll with stiff stochastic
differential equation solvers by implementing not 1 but 7 new high weak order
stiff SDE solvers. SROCK1 with generalized noise, SKSROCK, and a bunch of
variants of SROCK2. Benchmark updates will come soon, but I have a feeling
that these new methods may be by far the most stable methods in the library,
and the ones which achieve the lowest error in the mean solution most efficiently.</p>
<h2 id="diffeqbot">DiffEqBot</h2>
<p>GSoC student Kanav Gupta (@kanav99) implemented a bot for the JuliaDiffEq
team that allows us to run performance regression benchmarks on demand with
preset Gitlab runners. Right now this has a dedicated machine for CPU and
parallelism performance testing, and soon we’ll have a second machine
up and running for performance testing on GPUs. If you haven’t seen the Julialang
blog post on this topic, <a href="https://sciml.ai/blog/2019/06/diffeqbot">please check it out!</a>.</p>
<h2 id="quantum-ode-solver-qulde">Quantum ODE Solver QuLDE</h2>
<p>If you happen to have a quantum computer handy, hold your horses. <code class="language-plaintext highlighter-rouge">QuLDE</code> from
QuDiffEq.jl is an ODE solver designed for quantum computers. It utilizes the
Yao.jl quantum circuit simulator to run, but once Yao.jl supports QASM then
this will compile to something compatible with (future) quantum computing
hardware. This means that, in order to enter the new age of computing, all
you have to do is change <code class="language-plaintext highlighter-rouge">solve(prob,Tsit5())</code> to <code class="language-plaintext highlighter-rouge">solve(prob,QuLDE())</code> and you’re
there. Is it practical? Who knows (please let us know). Is it cool? Oh yeah!</p>
<p>See <a href="https://nextjournal.com/dgan181/julia-soc-19-quantum-algorithms-for-differential-equations">the quantum ODE solver blog post for more details</a>.</p>
<h2 id="commutative-noise-gpu-compatibility">Commutative Noise GPU compatibility</h2>
<p>The commutative noise SDE solvers are now GPU-compatible thanks to GSoC student
Deepesh Thakur (@deeepeshthakur). The next step will be to implement high order
non-commutative noise SDE solvers and the associated iterated integral
approximations in a manner that is GPU-compatible.</p>
<h2 id="new-benchmark-and-tutorial-repository-setups">New benchmark and tutorial repository setups</h2>
<p>DiffEqBenchmarks.jl and DiffEqTutorials.jl are now fully updated to a Weave.jl
form. We still need to fix up a few benchmarks, but it’s in a state that is ready
for new contributions.</p>
<h2 id="optimized-multithreaded-extrapolation">Optimized multithreaded extrapolation</h2>
<p>The GBS extrapolation methods have been optimized, and they are now among
the most efficient methods at the lower tolerances of the Float64 range for
non-stiff ODEs:</p>
<p><img src="https://user-images.githubusercontent.com/1814174/59899185-d56a5e80-93c1-11e9-86a0-ea09bfaa59ed.png" alt="non-stiff extrapolation" /></p>
<p>Thank you to Konstantin Althaus (@AlthausKonstantin) for contributing the first
version of this algorithm and GSoC student Saurabh Agarwal (@saurabhkgp21) for
adding automatic parallelization of the method.</p>
<p>This method will soon see improvements as multithreading will soon be improved
in Julia v1.2. The new PARTR features will allow our internal <code class="language-plaintext highlighter-rouge">@threads</code> loop
to perform dynamic work-stealing which will definitely be a good improvement to
the current parallelism structure. So stay tuned: this will likely benchmark
even better in a few months.</p>
<h2 id="fully-non-allocating-exp-in-exponential-integrators">Fully non-allocating exp! in exponential integrators</h2>
<p>Thanks to Yingbo Ma (@YingboMa) for making the internal <code class="language-plaintext highlighter-rouge">exp</code> calls of the
exponential integrators non-allocating. Continued improvements to this category
of methods is starting to show promise in the area of semilinear PDEs.</p>
<h2 id="rosenbrock-w-methods">Rosenbrock-W methods</h2>
<p>JSoC student Langwen Huang (@huanglangwen) has added the Rosenbrock-W class of
methods to OrdinaryDiffEq.jl. These methods are like the Rosenbrock methods
but are able to reuse their W matrix for multiple steps, allowing the method
to scale to larger ODEs more efficiently. Since the Rosenbrock methods
benchmark as the fastest methods for small ODEs right now, this is an exciting
new set of methods which will get optimized over the course of the summer.
Efficient Jacobian reuse techniques and the ability to utilize the sparse
differentiation tooling are next on this project.</p>
<h1 id="next-directions">Next Directions</h1>
<p>Our current development is very much driven by the ongoing GSoC/JSoC projects,
which is a good thing because they are outputting some really amazing results!</p>
<p>Here’s some things to look forward to:</p>
<ul>
<li>Higher order SDE methods for non-commutative noise</li>
<li>Parallelized methods for stiff ODEs</li>
<li>Integration of sparse colored differentiation into the differential equation solvers</li>
<li>Jacobian reuse efficiency in Rosenbrock-W methods</li>
<li>Exponential integrator improvements</li>
<li>Native Julia fully implicit ODE (DAE) solving in OrdinaryDiffEq.jl</li>
<li>Automated matrix-free finite difference PDE operators</li>
<li>Surrogate optimization</li>
<li>GPU-based Monte Carlo parallelism</li>
</ul>
Mon, 24 Jun 2019 12:00:00 +0000
https://sciml.ai/2019/06/24/coloring.html
https://sciml.ai/2019/06/24/coloring.htmlDifferentialEquations.jl v6.5.0: Stiff SDEs, VectorContinuousCallback, Multithreaded Extrapolation<p>Well, we zoomed towards this one. In this release we have a lot of very compelling
new features for performance in specific domains. Large ODEs, stiff SDEs, high
accuracy ODE solving, many callbacks, etc. are all specialized on and greatly
improved in this release.</p>
Thu, 06 Jun 2019 12:00:00 +0000
https://sciml.ai/2019/06/06/StiffSDEs.html
https://sciml.ai/2019/06/06/StiffSDEs.htmlDifferentialEquations.jl v6.4.0: Full GPU ODE, Performance, ModelingToolkit<p>This is a huge release. We should take the time to thank every contributor
to the JuliaDiffEq package ecosystem. A lot of this release focuses on performance
features. The ability to use stiff ODE solvers on the GPU, with automated
tooling for matrix-free Newton-Krylov, faster broadcast, better Jacobian
re-use algorithms, memory use reduction, etc. All of these combined give some
pretty massive performance boosts in the area of medium to large sized highly
stiff ODE systems. In addition, numerous robustness fixes have enhanced the
usability of these tools, along with a few new features like an implementation
of extrapolation for ODEs and the release of ModelingToolkit.jl.</p>
<p>Let’s start by summing up this release with an example.</p>
<h3 id="comprehensive-example">Comprehensive Example</h3>
<p>Here’s a nice showcase of DifferentialEquations.jl: Neural ODE with batching on
the GPU (without internal data transfers) with high order adaptive implicit ODE
solvers for stiff equations using matrix-free Newton-Krylov via preconditioned
GMRES and trained using checkpointed adjoint equations. Few programs work
directly with neural networks and allow for batching, few utilize GPUs, few
have methods applicable to highly stiff equations, few allow for large stiff
equations via matrix-free Newton-Krylov, and finally few have checkpointed
adjoints. This is all done in a high level programming language. What does the
code for this look like?</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">using</span> <span class="n">OrdinaryDiffEq</span><span class="x">,</span> <span class="n">Flux</span><span class="x">,</span> <span class="n">DiffEqFlux</span><span class="x">,</span> <span class="n">DiffEqOperators</span><span class="x">,</span> <span class="n">CuArrays</span>
<span class="n">x</span> <span class="o">=</span> <span class="kt">Float32</span><span class="x">[</span><span class="mf">2.</span><span class="x">;</span> <span class="mf">0.</span><span class="x">]</span><span class="o">|></span><span class="n">gpu</span>
<span class="n">tspan</span> <span class="o">=</span> <span class="kt">Float32</span><span class="o">.</span><span class="x">((</span><span class="mf">0.0f0</span><span class="x">,</span><span class="mf">25.0f0</span><span class="x">))</span>
<span class="n">dudt</span> <span class="o">=</span> <span class="n">Chain</span><span class="x">(</span><span class="n">Dense</span><span class="x">(</span><span class="mi">2</span><span class="x">,</span><span class="mi">50</span><span class="x">,</span><span class="n">tanh</span><span class="x">),</span><span class="n">Dense</span><span class="x">(</span><span class="mi">50</span><span class="x">,</span><span class="mi">2</span><span class="x">))</span><span class="o">|></span><span class="n">gpu</span>
<span class="n">p</span> <span class="o">=</span> <span class="n">DiffEqFlux</span><span class="o">.</span><span class="n">destructure</span><span class="x">(</span><span class="n">dudt</span><span class="x">)</span>
<span class="n">dudt_</span><span class="x">(</span><span class="n">du</span><span class="x">,</span><span class="n">u</span><span class="o">::</span><span class="n">TrackedArray</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span> <span class="o">=</span> <span class="n">du</span> <span class="o">.=</span> <span class="n">DiffEqFlux</span><span class="o">.</span><span class="n">restructure</span><span class="x">(</span><span class="n">dudt</span><span class="x">,</span><span class="n">p</span><span class="x">)(</span><span class="n">u</span><span class="x">)</span>
<span class="n">dudt_</span><span class="x">(</span><span class="n">du</span><span class="x">,</span><span class="n">u</span><span class="o">::</span><span class="kt">AbstractArray</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span> <span class="o">=</span> <span class="n">du</span> <span class="o">.=</span> <span class="n">Flux</span><span class="o">.</span><span class="n">data</span><span class="x">(</span><span class="n">DiffEqFlux</span><span class="o">.</span><span class="n">restructure</span><span class="x">(</span><span class="n">dudt</span><span class="x">,</span><span class="n">p</span><span class="x">)(</span><span class="n">u</span><span class="x">))</span>
<span class="n">ff</span> <span class="o">=</span> <span class="n">ODEFunction</span><span class="x">(</span><span class="n">dudt_</span><span class="x">,</span><span class="n">jac_prototype</span> <span class="o">=</span> <span class="n">JacVecOperator</span><span class="x">(</span><span class="n">dudt_</span><span class="x">,</span><span class="n">x</span><span class="x">))</span>
<span class="n">prob</span> <span class="o">=</span> <span class="n">ODEProblem</span><span class="x">(</span><span class="n">ff</span><span class="x">,</span><span class="n">x</span><span class="x">,</span><span class="n">tspan</span><span class="x">,</span><span class="n">p</span><span class="x">)</span>
<span class="n">diffeq_adjoint</span><span class="x">(</span><span class="n">p</span><span class="x">,</span><span class="n">prob</span><span class="x">,</span><span class="n">KenCarp4</span><span class="x">(</span><span class="n">linsolve</span><span class="o">=</span><span class="n">LinSolveGMRES</span><span class="x">());</span><span class="n">u0</span><span class="o">=</span><span class="n">x</span><span class="x">,</span>
<span class="n">saveat</span><span class="o">=</span><span class="mf">0.0</span><span class="o">:</span><span class="mf">0.1</span><span class="o">:</span><span class="mf">25.0</span><span class="x">,</span><span class="n">backsolve</span><span class="o">=</span><span class="nb">false</span><span class="x">)</span>
</code></pre></div></div>
<p>That is 10 lines of code, and we can continue to make it even more succinct.</p>
<p>Now, onto the release highlights.</p>
<h2 id="full-gpu-support-in-ode-solvers">Full GPU Support in ODE Solvers</h2>
<p>Not just the non-stiff ODE solvers but now also the stiff ODE solvers allow
the initial condition to be a GPUArray, with the internal methods performing
no indexing so that all computations take place on the GPU without data
transfers. This allows expensive right-hand side calculations, like those in
neural ODEs or PDE discretizations, to utilize GPU acceleration without
worrying about whether the cost of data transfers will overtake the gains in
solver speed.</p>
<p>One might worry that the pervasive use of broadcast throughout the solvers
would hurt performance…</p>
<h2 id="fast-diffeq-specific-broadcast">Fast DiffEq-Specific Broadcast</h2>
<p>Yingbo Ma (@YingboMa) implemented a fancy broadcast wrapper that allows all
sorts of information to be passed to the compiler in the differential
equation solver’s internals, making no-aliasing and sizing assumptions
that are normally not possible. The internals now all use a
special <code class="language-plaintext highlighter-rouge">@..</code>, which turns out to be faster than standard loops, and this is the
magic that really enabled the GPU support to happen without performance
regressions (in fact, we got some speedups from this, close to 2x in some
cases!).</p>
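<p>As a rough illustration of what fused broadcasting buys, here is a sketch using
only Base Julia’s fusing <code class="language-plaintext highlighter-rouge">@.</code> macro; the internal <code class="language-plaintext highlighter-rouge">@..</code> plays the same role but
additionally encodes the no-aliasing and sizing assumptions, so this shows the
idea rather than the actual implementation:</p>

```julia
# A fused broadcast compiles the whole right-hand side into a single loop
# with no temporary arrays, which is what makes broadcast-based solver
# internals competitive with hand-written loops.
u  = rand(1000)
k1 = rand(1000)
k2 = rand(1000)
out = similar(u)
dt = 0.1

# One fused kernel: no intermediate allocations for the sub-expressions.
@. out = u + dt * (0.5 * k1 + 0.5 * k2)
```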
<h2 id="smart-linsolve-defaults-and-linsolvegmres">Smart linsolve defaults and LinSolveGMRES</h2>
<p>One of the biggest performance-based features of this release is smarter
linsolve defaults. With dense arrays on a standard Julia build, OpenBLAS does
not perform recursive LU factorizations, which we found to be suboptimal by
about 5x in some cases. Our default linear solver now automatically detects
the BLAS installation and utilizes RecursiveFactorizations.jl to give this
speedup for many standard stiff ODE cases. In addition, if you pass a
sparse Jacobian as the <code class="language-plaintext highlighter-rouge">jac_prototype</code>, the linear solver now automatically
switches to a form that works for sparse Jacobians. If you use an
<code class="language-plaintext highlighter-rouge">AbstractDiffEqOperator</code>, the default linear solver automatically switches to
a Krylov subspace method (GMRES) and utilizes the matrix-free operator directly.
Banded matrices and Jacobians on the GPU are now automatically handled as well.</p>
<p>Of course, those are just the defaults; most of this was possible before but
has now been made more accessible. In addition, the ability to easily switch
to GMRES was added via <code class="language-plaintext highlighter-rouge">LinSolveGMRES</code>. Just add
<code class="language-plaintext highlighter-rouge">linsolve = LinSolveGMRES()</code> to any native Julia algorithm with a swappable
linear solver and it will switch to using GMRES. Here you can also pass options
for preconditioners and tolerances. We will continue to integrate this
better into our integrators, as doing so will enhance the efficiency of
solving large sparse systems.</p>
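<p>To make the Krylov path concrete, here is a minimal unpreconditioned,
unrestarted GMRES sketch that only ever applies the operator as a function and
never materializes a matrix. The function name and structure are illustrative,
not the library’s API:</p>

```julia
using LinearAlgebra

# Minimal unrestarted GMRES: solve A*x = b given only the action v -> A*v.
function gmres_sketch(Av, b; tol = 1e-10, maxiter = length(b))
    n = length(b)
    Q = zeros(n, maxiter + 1)        # Arnoldi basis vectors
    H = zeros(maxiter + 1, maxiter)  # upper Hessenberg matrix
    β = norm(b)
    Q[:, 1] = b / β
    for k in 1:maxiter
        w = Av(Q[:, k])              # the only way A is ever used
        for j in 1:k                 # modified Gram-Schmidt
            H[j, k] = dot(Q[:, j], w)
            w -= H[j, k] * Q[:, j]
        end
        H[k + 1, k] = norm(w)
        if H[k + 1, k] > 0
            Q[:, k + 1] = w / H[k + 1, k]
        end
        # Small least-squares problem: min ||β*e1 - H*y||
        e1 = zeros(k + 1); e1[1] = β
        y = H[1:k + 1, 1:k] \ e1
        x = Q[:, 1:k] * y
        norm(Av(x) - b) < tol && return x
    end
    error("GMRES sketch did not converge")
end
```

<p>A preconditioner would be applied inside <code class="language-plaintext highlighter-rouge">Av</code>, which is why passing one to the
GMRES linear solver composes naturally with matrix-free operators.</p>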
<h2 id="automated-jv-products-via-autodifferentiation">Automated J*v Products via Autodifferentiation</h2>
<p>When using <code class="language-plaintext highlighter-rouge">GMRES</code>, one does not need to construct the full Jacobian matrix:
only the directional derivative in the direction of <code class="language-plaintext highlighter-rouge">v</code> is needed to compute
<code class="language-plaintext highlighter-rouge">J*v</code>. This has now been put into an operator form
via <code class="language-plaintext highlighter-rouge">JacVecOperator(dudt_,x)</code>, so users can directly request this behavior with
one line. It allows either autodifferentiation or numerical differentiation
to calculate the <code class="language-plaintext highlighter-rouge">J*v</code> product.</p>
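<p>Conceptually the matrix-free product can be as simple as a single forward
difference; the real <code class="language-plaintext highlighter-rouge">JacVecOperator</code> uses autodifferentiation or carefully
scaled finite differences, so treat this as a sketch of the idea with
hypothetical names:</p>

```julia
using LinearAlgebra

# Approximate J(u)*v for f: R^n -> R^n without ever forming J,
# at the cost of one extra evaluation of f.
function jacvec_fd(f, u, v; ϵ = sqrt(eps(eltype(u))) * (1 + norm(u)))
    (f(u .+ ϵ .* v) .- f(u)) ./ ϵ
end

f(u) = [u[1]^2 + u[2], 3u[2]]  # toy system with Jacobian [2u₁ 1; 0 3]
u = [1.0, 2.0]
v = [1.0, 0.0]
jv = jacvec_fd(f, u, v)        # approximates the first Jacobian column
```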
<h2 id="destats">DEStats</h2>
<p>One of the nichest but nicest new features is DEStats. If you do <code class="language-plaintext highlighter-rouge">sol.destats</code>,
you will see a load of information on how many steps were taken, how many
<code class="language-plaintext highlighter-rouge">f</code> calls were made, etc., giving a broad overview of the performance of the
algorithm. Thanks to Kanav Gupta (@kanav99) and Yingbo Ma (@YingboMa) for really
driving this feature, since it has allowed a lot of these optimizations to
be more thoroughly investigated. You can expect DiffEq development to
accelerate with this information!</p>
<h2 id="improved-jacobian-reuse">Improved Jacobian Reuse</h2>
<p>One of the things noticed using DEStats was that the number of Jacobian
calculations and inversions could be severely reduced. Yingbo Ma (@YingboMa)
did just that, greatly increasing the performance of all implicit methods like
<code class="language-plaintext highlighter-rouge">KenCarp4</code>, with cases in the 1000+ ODE range where OrdinaryDiffEq’s native
methods outperformed Sundials’ CVODE_BDF. This still has plenty of room for
improvement.</p>
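<p>The core trick behind Jacobian reuse is the simplified Newton iteration:
factorize the Jacobian once and reuse the factorization across iterations (and,
inside an implicit solver, across steps) until convergence degrades. A minimal
sketch with hypothetical names:</p>

```julia
using LinearAlgebra

# Simplified Newton: one LU factorization, many cheap back-substitutions.
# A real solver refactorizes only when the iteration stops contracting.
function reused_newton(f, J, x; tol = 1e-10, maxiter = 50)
    F = lu(J(x))            # factorize once at the initial guess
    for _ in 1:maxiter
        δ = F \ f(x)        # back-substitution only; no new factorization
        x -= δ
        norm(δ) < tol && return x
    end
    error("no convergence; a real solver would refactorize here")
end

f(x) = [x[1]^2 + x[2]^2 - 4, x[1] - x[2]]
J(x) = [2x[1] 2x[2]; 1.0 -1.0]
root = reused_newton(f, J, [1.4, 1.4])  # converges to (√2, √2)
```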
<h2 id="diffeqbiological-performance-improvements-for-large-networks-speed-and-sparsity">DiffEqBiological performance improvements for large networks (speed and sparsity)</h2>
<p>Samuel Isaacson (@isaacson) has been instrumental in improving DiffEqBiological.jl
and its ability to handle large reaction networks. It can now parse the networks
much faster and can build Jacobians which utilize sparse matrices. It pairs
with his ParseRxns(???) library and has been a major source of large stiff
test problems!</p>
<h2 id="partial-neural-odes-batching-and-gpu-fixes">Partial Neural ODEs, Batching and GPU Fixes</h2>
<p>We now have working examples of partial neural differential equations:
equations which have pre-specified portions that are known, while other
portions are learnable neural networks. These also allow for batched data and
GPU acceleration. Not much else to say except let your neural diffeqs go wild!</p>
<h2 id="low-memory-rk-optimality-and-alias_u0">Low Memory RK Optimality and Alias_u0</h2>
<p>Kanav Gupta (@kanav99) and Hendrik Ranocha (@ranocha) did an amazing job
optimizing the memory use of low-memory Runge-Kutta methods for hyperbolic or
advection-dominated PDEs. These methods use the minimal number of registers
theoretically required by the method. Kanav added some tricks to the implementation
(using a fun <code class="language-plaintext highlighter-rouge">=</code> -> <code class="language-plaintext highlighter-rouge">+=</code> overload idea) and Hendrik added the <code class="language-plaintext highlighter-rouge">alias_u0</code> argument
to allow using the passed-in initial condition as one of the registers. Unit
tests confirm that our implementations achieve the minimum possible number of
registers, allowing large PDE discretizations to make use of
DifferentialEquations.jl without loss of memory efficiency. We hope to see
this in use in some large-scale simulation software!</p>
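<p>To make “minimal number of registers” concrete, here is a sketch of a
Williamson-style 2N-storage Runge-Kutta step (the classic three-stage,
third-order low-storage scheme): the entire method runs in two persistent
arrays regardless of the number of stages. This is an illustration of the
technique, not OrdinaryDiffEq’s implementation:</p>

```julia
# Williamson (1980) 2N-storage RK3: only the registers u and k persist
# across stages, plus the scratch du for the in-place f evaluation.
function lsrk3_step!(du, u, k, f!, t, dt)
    Acoef = (0.0, -5/9, -153/128)
    Bcoef = (1/3, 15/16, 8/15)
    c     = (0.0, 1/3, 3/4)
    for i in 1:3
        f!(du, u, t + c[i] * dt)   # du = f(u, t + cᵢ dt)
        a, b = Acoef[i], Bcoef[i]
        @. k = a * k + dt * du     # register 1 update
        @. u = u + b * k           # register 2 update
    end
    return u
end
```

<p>One step of this scheme on u′ = u reproduces the third-order Taylor expansion
1 + h + h²/2 + h³/6 exactly, while touching only two persistent arrays.</p>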
<h2 id="more-robust-callbacks">More Robust Callbacks</h2>
<p>Our <code class="language-plaintext highlighter-rouge">ContinuousCallback</code> implementation now has increased robustness in double
event detection, using a new strategy. Try to break it.</p>
<h2 id="gbs-extrapolation">GBS Extrapolation</h2>
<p>New contributor Konstantin Althaus (@AlthausKonstantin) implemented midpoint
extrapolation methods for ODEs using barycentric formulas and different
adaptivity behaviors. We will be investigating these methods for their
parallelizability via multithreading in the context of stiff and non-stiff ODEs.</p>
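<p>For flavor, the backbone of these methods is Gragg’s modified midpoint rule,
whose error expansion contains only even powers of the step size, so Richardson
extrapolation over a sequence of substep counts raises the order by two per
level. A self-contained sketch (names are illustrative; the actual package adds
barycentric interpolation and adaptivity on top of this idea):</p>

```julia
# Gragg's modified midpoint rule over [t, t+H] with n substeps.
function modified_midpoint(f, u0, t, H, n)
    h = H / n
    u_prev = u0
    u = u0 + h * f(t, u0)
    for i in 1:n-1
        u_prev, u = u, u_prev + 2h * f(t + i * h, u)
    end
    (u_prev + u + h * f(t + H, u)) / 2   # Gragg's smoothing step
end

# Richardson (Aitken-Neville) extrapolation in h^2 over n = 2, 4, 6, ...
function gbs_step(f, u0, t, H; levels = 4)
    ns = [2k for k in 1:levels]
    T = [modified_midpoint(f, u0, t, H, n) for n in ns]
    for j in 1:levels-1, i in levels:-1:j+1
        T[i] += (T[i] - T[i-1]) / ((ns[i] / ns[i-j])^2 - 1)
    end
    T[end]
end
```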
<h2 id="modelingtoolkitjl-release">ModelingToolkit.jl Release</h2>
<p>ModelingToolkit.jl has now received a stable release. A lot of credit
goes to Harrison Grodin (@HarrisonGrodin). While it has
already been out there and found quite a bit of use, it has really picked up
steam over the last year as a modeling framework suitable for the flexibility of
DifferentialEquations.jl. We hope to continue its development and add features
like event handling to its IR.</p>
<h2 id="sundials-jv-interface-stats-and-preconditioners">SUNDIALS J*v interface, stats, and preconditioners</h2>
<p>While we are phasing out Sundials from our standard DifferentialEquations.jl
practice, Sundials.jl continues to improve as we add more features to
benchmark against. Sundials’ J*v interface has now been exposed, so adding a
DiffEqOperator to the <code class="language-plaintext highlighter-rouge">jac_prototype</code> will work with Sundials. <code class="language-plaintext highlighter-rouge">DEStats</code> is
hooked up to Sundials, and you can now pass preconditioners to its internal
Newton-Krylov methods.</p>
<h1 id="next-directions">Next Directions</h1>
<ul>
<li>Improved nonlinear solvers for stiff SDE handling</li>
<li>More adaptive methods for SDEs</li>
<li>Better boundary condition handling in DiffEqOperators.jl</li>
<li>More native implicit ODE (DAE) solvers</li>
<li>Adaptivity in the MIRK BVP solvers</li>
<li>LSODA integrator interface</li>
<li>Improved BDF</li>
</ul>
Thu, 09 May 2019 13:00:00 +0000
https://sciml.ai/2019/05/09/GPU.html
https://sciml.ai/2019/05/09/GPU.htmlDifferentialEquations.jl 6.0: Radau5, Hyperbolic PDEs, Dependency Reductions<p>This marks the release of DifferentialEquations.jl v6.0.0. Here’s a low down
of what has happened in the timeframe.</p>
Sat, 02 Feb 2019 10:00:00 +0000
https://sciml.ai/2019/02/02/RadauAnderson.html
https://sciml.ai/2019/02/02/RadauAnderson.htmlDifferentialEquations.jl 5.0: v1.0, Jacobian Types, EPIRK<p>This marks the release of DifferentialEquations.jl. There will be an accompanying
summary blog post which goes into more detail about our current state and sets
the focus for the organization’s v6.0 release. However, for now I would like
to describe some of the large-scale changes which have been included in this
release. Much thanks goes to the Google Summer of Code students who heavily
contributed to these advances.</p>
Mon, 20 Aug 2018 10:00:00 +0000
https://sciml.ai/2018/08/20/FunctionInputEPIRK.html
https://sciml.ai/2018/08/20/FunctionInputEPIRK.htmlDifferentialEquations.jl 4.6: Global Sensitivity Analysis, Variable Order Adams<p>Tons of improvements due to Google Summer of Code. Here’s what’s happened.</p>
Thu, 05 Jul 2018 10:00:00 +0000
https://sciml.ai/2018/07/05/GSAVariableOrder.html
https://sciml.ai/2018/07/05/GSAVariableOrder.htmlDifferentialEquations.jl 4.5: ABC, Adaptive Multistep, Maximum A Posteriori<p>Once again we stayed true to form and didn’t solve the problems in the
development list but added a ton of new features anyway. Now that Google
Summer of Code (GSoC) is in full force, a lot of these updates are due to
our very awesome and productive students. Here’s what we got.</p>
Sat, 26 May 2018 10:00:00 +0000
https://sciml.ai/2018/05/26/ABCMore.html
https://sciml.ai/2018/05/26/ABCMore.htmlA "Jupyter" of DiffEq: Introducing Python and R Bindings for DifferentialEquations.jl<p>Differential equations are used for modeling throughout the sciences from astrophysical calculations to simulations of biochemical interactions. These models have to be simulated numerically due to the complexity of the resulting equations. However, numerically solving differential equations presents interesting software engineering challenges. On one hand, speed is of utmost importance. PDE discretizations quickly turn into ODEs that take days/weeks/months to solve, so reducing time by 5x or 10x can be the difference between a doable and an impractical computation. But these methods are difficult to optimize in a higher level language since a lot of the computations are small, hard-to-vectorize loops with a user-defined function directly in the middle (one SciPy developer described it as a <a href="https://github.com/scipy/scipy/pull/6326#issuecomment-336877517">“worst case scenario for Python”</a>). Thus higher level languages and problem-solving environments have resorted to a strategy of wrapping C++ and Fortran packages, and as described in a survey of differential equation solving suites, <a href="https://www.stochasticlifestyle.com/comparison-differential-equation-solver-suites-matlab-r-julia-python-c-fortran/">most differential equation packages are wrapping the same few methods</a>.</p>
Mon, 30 Apr 2018 09:00:00 +0000
https://sciml.ai/2018/04/30/Jupyter.html
https://sciml.ai/2018/04/30/Jupyter.htmlDifferentialEquations.jl 4.4: Enhanced Stability and IMEX SDE Integrators<p>These are features long hinted at. The
<a href="https://arxiv.org/abs/1804.04344">Arxiv paper</a> is finally up and the new
methods from that paper are in this release. In this paper I wanted to “complete”
the methods for additive noise and attempt to start enhancing the methods for
diagonal noise SDEs. Thus while it focuses on a constrained form of noise, this
is a form of noise present in a lot of models and, by using the constrained form,
allows for extremely optimized methods. See the
<a href="https://docs.juliadiffeq.org/latest/solvers/sde_solve">updated SDE solvers documentation</a>
for details on the new methods. Here’s what’s up!</p>
Sun, 15 Apr 2018 08:00:00 +0000
https://sciml.ai/2018/04/15/StableSDE.html
https://sciml.ai/2018/04/15/StableSDE.htmlDifferentialEquations.jl 4.3: Automatic Stiffness Detection and Switching<p>Okay, this is a quick release. However, there’s so much good stuff coming out
that I don’t want them to overlap and steal each other’s thunder! This release
has two long awaited features for increasing the ability to automatically solve
difficult differential equations with less user input.</p>
Mon, 09 Apr 2018 08:00:00 +0000
https://sciml.ai/2018/04/09/AutoSwitch.html
https://sciml.ai/2018/04/09/AutoSwitch.htmlDifferentialEquations.jl 4.2: Krylov Exponential Integrators, Non-Diagonal Adaptive SDEs, Tau-Leaping<p>This is a jam packed release. A lot of new integration methods were developed
in the last month to address specific issues of community members. Some of these
methods are one of a kind!</p>
Sat, 31 Mar 2018 10:00:00 +0000
https://sciml.ai/2018/03/31/AdaptiveLowSDE.html
https://sciml.ai/2018/03/31/AdaptiveLowSDE.htmlDifferentialEquations.jl 4.1: New ReactionDSL and KLU Sundials<p>Alright, that syntax change was painful but now everything seems to have
calmed down. We thank everyone for sticking with us and helping file issues
as necessary. It seems most people have done the syntax update and now we’re
moving on. In this release we are back to our usual and focused on feature
updates. There are changes, but this time we can once again handle them
through deprecations, so it’s much easier on users.</p>
Sat, 17 Feb 2018 10:00:00 +0000
https://sciml.ai/2018/02/17/Reactions.html
https://sciml.ai/2018/02/17/Reactions.htmlDifferentialEquations.jl 4.0: Breaking Syntax Changes, Adjoint Sensitivity, Bayesian Estimation, and ETDRK4<p>In this release we have a big exciting breaking change to our API. We are taking
a “now or never” approach to fixing all of the API cruft we’ve gathered as we’ve
expanded to different domains. Now that we cover the space of problems we wish
to solve, we realize many inconsistencies we’ve introduced in our syntax.
Instead of keeping them, we’ve decided to do a breaking change to fix these
problems.</p>
Wed, 24 Jan 2018 07:30:00 +0000
https://sciml.ai/2018/01/24/Parameters.html
https://sciml.ai/2018/01/24/Parameters.htmlDifferentialEquations.jl 3.4: Sundials 3.1, ARKODE, Static Arrays<p>In this release we have a big exciting breaking change to Sundials and some
performance increases.</p>
Mon, 15 Jan 2018 11:30:00 +0000
https://sciml.ai/2018/01/15/Sundials.html
https://sciml.ai/2018/01/15/Sundials.htmlDifferentialEquations.jl 3.3: IMEX Solvers<p>What’s a better way to ring in the new year than to announce new features?
This ecosystem 3.3 release we have a few exciting developments, and at the
top of the list is new IMEX schemes. Let’s get right to it.</p>
Mon, 01 Jan 2018 11:30:00 +0000
https://sciml.ai/2018/01/01/IMEX.html
https://sciml.ai/2018/01/01/IMEX.htmlDifferentialEquations.jl 3.2: Expansion of Event Compatibility<p>DifferentialEquations.jl 3.2 is just a nice feature update. This hits a few
long requested features.</p>
Mon, 11 Dec 2017 00:30:00 +0000
https://sciml.ai/2017/12/11/Events.html
https://sciml.ai/2017/12/11/Events.htmlDifferentialEquations.jl 3.1: Jacobian Passing<p>The DifferentialEquations.jl 3.0 release had most of the big features and was
<a href="https://www.stochasticlifestyle.com/differentialequations-jl-3-0-roadmap-4-0/">featured in a separate blog post</a>.
Now in this release we had a few big incremental developments. We expanded
the capabilities of our wrapped libraries and completed one of the most
requested features: passing Jacobians into the IDA and DASKR DAE solvers.
Let’s just get started there:</p>
Fri, 24 Nov 2017 01:30:00 +0000
https://sciml.ai/2017/11/24/Jacobians.html
https://sciml.ai/2017/11/24/Jacobians.htmlStiff SDE and DDE Solvers<p>The end of the summer cycle means that many things, including Google Summer of
Code projects, are being released. A large part of the current focus has been to
develop tools to make solving PDEs easier, and also creating efficient tools
for generalized stiff differential equations. I think we can claim to be one of
the first libraries to include methods for stiff SDEs, one of the first for stiff
DDEs, and one of the first to include higher order adaptive Runge-Kutta Nystrom
schemes. And that’s not even looking at a lot of the more unique stuff in this
release. Take a look.</p>
Sat, 09 Sep 2017 01:30:00 +0000
https://sciml.ai/2017/09/09/StiffDDESDE.html
https://sciml.ai/2017/09/09/StiffDDESDE.htmlSDIRK Methods<p>This has been a very productive summer! Let me start by saying that a relative
newcomer to the JuliaDiffEq team, David Widmann, has been doing some impressive
work that has really expanded the internal capabilities of the ordinary and
delay differential equation solvers. Much of the code has been streamlined
due to his efforts which has helped increase our productivity, along with helping
us identify and solve potential areas of floating point inaccuracies. In addition,
in this release we are starting to roll out some of the results of the Google
Summer of Code projects. Together, there’s some really exciting stuff!</p>
Sun, 13 Aug 2017 01:30:00 +0000
https://sciml.ai/2017/08/13/SDIRK.html
https://sciml.ai/2017/08/13/SDIRK.htmlHigh Order Rosenbrock and Symplectic Methods<p>For a while I have been saying that JuliaDiffEq really needs some fast high
accuracy stiff solvers and symplectic methods to take it to the next level.
I am happy to report that these features have arrived, along with some other
exciting updates. And yes, they benchmark really well. With new Rosenbrock methods
specifically designed for stiff nonlinear parabolic PDE discretizations, SSPRK
enhancements specifically for hyperbolic PDEs, and symplectic methods for Hamiltonian
systems, physicists can look at these release notes with glee. Here’s the full ecosystem
release notes.</p>
Fri, 07 Jul 2017 01:30:00 +0000
https://sciml.ai/2017/07/07/SymplecticRosenbrock.html
https://sciml.ai/2017/07/07/SymplecticRosenbrock.htmlFilling In The Interop Packages and Rosenbrock<p>In the <a href="https://www.stochasticlifestyle.com/differentialequations-jl-2-0-state-ecosystem/">2.0 state of the ecosystem post</a>
it was noted that, now that we have a clearly laid out and expansive common API,
the next goal is to fill it in. This set of releases tackles the lowest hanging
fruits in that battle. Specifically, the interop packages were setup to be as
complete in their interfaces as possible, and the existing methods which could
expand were expanded. Time for specifics.</p>
Thu, 18 May 2017 01:30:00 +0000
https://sciml.ai/2017/05/18/Filling_in.html
https://sciml.ai/2017/05/18/Filling_in.htmlDifferentialEquations.jl 2.0<p>This marks the release of ecosystem version 2.0. All of the issues got looked
over. All (yes all!) of the API suggestions that were recorded in issues in
JuliaDiffEq packages have been addressed! Below are the API changes that have occurred.
This marks a really good moment for the JuliaDiffEq ecosystem because it means all
of the long-standing planned API changes are complete. Of course new things may come
up, but there are no more planned changes to core functionality. This means that we can simply
work on new features in the future (and of course field bug reports as they come).
A blog post detailing our full 2.0 achievements plus our 3.0 goals will come out at
our one year anniversary. But for now I want to address what the API changes are,
and the new features of this latest update.</p>
Sun, 30 Apr 2017 01:30:00 +0000
https://sciml.ai/2017/04/30/API_changes.html
https://sciml.ai/2017/04/30/API_changes.htmlDifferentialEquations.jl v1.9.1<p>DifferentialEquations v1.9.1 is a feature update which, well, brings a lot of new
features. But before we get started, there is one thing to highlight:</p>
Fri, 07 Apr 2017 01:30:00 +0000
https://sciml.ai/2017/04/07/features.html
https://sciml.ai/2017/04/07/features.htmlDifferentialEquations.jl Workshop at JuliaCon 2017<p><a href="https://juliacon.org/2017/talks.html">There will be a workshop on DifferentialEquations.jl at this year’s JuliaCon!</a>
The title is “The Unique Features and Performance of DifferentialEquations.jl”.
The goal will be to teach new users how to solve a wide variety of differential
equations, and show how to achieve the best possible performance. I hope to lead
users through an example problem: start with ODEs and build a simple model. I
will show the tools for analyzing the solution to ODEs, show how to choose the
best solver for your problem, show how to use non-standard features like arbitrary
precision arithmetic. From there, we seamlessly flow into more in-depth
analysis and models. We will start estimating parameters of the ODEs, and then
make the models more realistic by adding delays, stochasticity (randomness), and
Gillespie models (discrete stochastic models related to differential equations),
and running stochastic Monte Carlo experiments in parallel (in a way that
automatically parallelizes across multiple nodes of an HPC!).</p>
Tue, 04 Apr 2017 11:00:00 +0000
https://sciml.ai/2017/04/04/juliacon.html
https://sciml.ai/2017/04/04/juliacon.htmlDifferentialEquations.jl v1.8.0<p>DifferentialEquations.jl v1.8.0 is a new release for the JuliaDiffEq ecosystem.
As promised, the API is stable and there should be no breaking changes. The tag
PRs have been opened and it will take a couple of days/weeks for this to be available.
For an early preview, see <a href="https://docs.juliadiffeq.org/dev/">the in-development documentation</a>.
When the release is available, a new version of the documentation will be tagged.</p>
Thu, 09 Feb 2017 17:00:00 +0000
https://sciml.ai/2017/02/09/interps.html
https://sciml.ai/2017/02/09/interps.htmlDifferentialEquations.jl v1.6.0<p>DifferentialEquations.jl v1.6.0 is a stable version of the JuliaDiffEq ecosystem.
This tag includes many new features, including:</p>
Sat, 14 Jan 2017 17:00:00 +0000
https://sciml.ai/2017/01/14/stable.html
https://sciml.ai/2017/01/14/stable.htmlBase<p>A new set of tags will be going through over the next week. I am working with Tony to make sure there is no breakage, and for the most part the API has not changed. What has changed is the API for events and callbacks; there is a PR in DiffEqDocs.jl for the new API. The translation to the new API should be really easy: it’s almost the exact same thing but now a type-based API instead of a macro-based API (and will be cross-package). Also included is a new “integrator” interface which gives step-wise control over integration routines, starting with support from OrdinaryDiffEq.jl.</p>
Sun, 08 Jan 2017 00:00:00 +0000
https://sciml.ai/2017/01/08/base.html
https://sciml.ai/2017/01/08/base.htmlPDEs Update<p>Tags since the last blog post:</p>
Wed, 21 Dec 2016 17:00:00 +0000
https://sciml.ai/2016/12/21/fem.html
https://sciml.ai/2016/12/21/fem.htmlOrdinaryDiffEq v0.5<p>OrdinaryDiffEq.jl has received two tags. This latest tag, v0.5, adds compatibility with the latest Julia v0.6 nightly (similar changes have been added to many solvers like StochasticDiffEq.jl on master, but have not yet been tagged).</p>
Wed, 21 Dec 2016 17:00:00 +0000
https://sciml.ai/2016/12/21/saveat.html
https://sciml.ai/2016/12/21/saveat.htmlHello world<p>Hello world! This is the first post from the JuliaDiffEq organization. This
blog will be used to share the most recent updates to the JuliaDiffEq ecosystem.
Hopefully this will make it easier for everyone to follow our developments.</p>
Sat, 26 Nov 2016 17:00:00 +0000
https://sciml.ai/2016/11/26/hello.html
https://sciml.ai/2016/11/26/hello.html