Numerical solutions to ordinary differential equations

Basics

Ordinary differential equations are equations that involve ordinary derivatives of a function and the function itself. The order of a differential equation is given by the highest order derivative it contains. ODEs are used to model a variety of systems like the stock market, orbiting planets or predator-prey systems. In this article, we will explore a few numerical methods for solving the following first order equation in JavaScript.

y' = f(t, y),   y(t_0) = y_0

Numerical methods

Since many differential equations are hard or impossible to solve analytically, numerical methods have been developed to compute approximate solutions.

Euler integration

Euler integration is the simplest method for numerical integration. It is the equivalent of following the slope of a function in tiny steps. At each timestep, we calculate the current derivative, multiply it by the step size h to get the change in y-direction, and add it to the previous y value to arrive at the next state.

y_{n+1} = y_n + h · f(t_n, y_n)

In the following JS implementation of the Euler method, ts is an array of linearly spaced values ranging from the initial time t_0 to the end time t_1. To store the results, we create an array ys and initialize its first entry to the initial condition y_0.

const f = (t, y) => -y  // example right-hand side; any f(t, y) works here
const N = 100, t_0 = 0, t_1 = 1, y_0 = 2
const h = (t_1 - t_0) / N  // time step size
const ts = Array.from(Array(N + 1), (_, k) => k * h + t_0)
const ys = Array(N + 1).fill(0)  // array for the results
ys[0] = y_0  // initial conditions

for (let i = 0; i < N; i++) {
  ys[i + 1] =  ys[i] + f(ts[i], ys[i]) * h
}

The biggest drawback of the Euler method is that the error per step scales with O(h²), which amounts to O(h) for the total solution, and the method therefore often isn't stable. An example of the emerging instability can be seen in this demo when the resolution is decreased.
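To make the instability concrete, here's a small sketch (the ODE y' = -15y and the step sizes are purely illustrative choices): with a step size that's too large, every Euler step overshoots the true solution, and the error grows instead of decaying.

```javascript
// Euler integration of the test equation y' = -15y, y(0) = 1
// (exact solution: y = e^(-15t), which smoothly decays to zero).
const f = (t, y) => -15 * y

function euler(h, steps) {
  let y = 1
  for (let i = 0; i < steps; i++) {
    y += f(i * h, y) * h  // one Euler step
  }
  return y
}

// h = 0.25: each step multiplies y by (1 - 15 * 0.25) = -2.75 → blows up
const unstable = euler(0.25, 8)
// h = 0.01: each step multiplies y by 0.85 → decays toward zero as expected
const stable = euler(0.01, 200)

console.log(unstable) // huge in magnitude, sign alternates
console.log(stable)   // very close to zero
```

Both runs use the same method and the same equation; only the step size differs, which is exactly what the resolution slider in the demo controls.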

Midpoint method

The midpoint method improves upon the Euler method by taking the "average slope" between the current location and the next one, instead of the slope at the current location. As the name suggests, it evaluates the derivative at the midpoint between two steps. To do this, a regular Euler iteration with half the step size is performed and used to evaluate the slope. The next y value is then calculated by moving a full step along this "midpoint slope" from the previous position.

k_1 = f(t_n, y_n)
k_2 = f(t_n + h/2, y_n + (h/2) · k_1)
y_{n+1} = y_n + h · k_2

Using this method, the order of the error decreases to O(h³) per step and O(h²) for the total solution. The midpoint method increases stability significantly while only requiring two evaluations of the ODE per step. In real-time applications, this is often a good balance between accuracy and performance. Here's a sample implementation in JavaScript:

for (let i = 0; i < N; i++) {
  const k1 = f(ts[i], ys[i])
  const k2 = f(ts[i] + h/2, ys[i] + k1 * h/2)
  ys[i + 1] = ys[i] + k2 * h
}

Runge Kutta 4th Order

Now we're getting into the fun stuff. The Runge-Kutta family of methods takes the concept further by combining several evaluations of the slope per step into a weighted average. For the vast majority of applications, the fourth order method, also called the classical Runge-Kutta method (RK4), is accurate enough. Even at large step sizes it retains high stability, since the error per step only scales with O(h⁵).

k_1 = f(t_n, y_n)
k_2 = f(t_n + h/2, y_n + (h/2) · k_1)
k_3 = f(t_n + h/2, y_n + (h/2) · k_2)
k_4 = f(t_n + h, y_n + h · k_3)
y_{n+1} = y_n + (h/6) · (k_1 + 2k_2 + 2k_3 + k_4)

The RK4 method really starts to shine when working with periodic solutions and nonlinear ODEs, since the lower order methods oftentimes diverge in those situations, even for very small step sizes.
A simple JS implementation of the RK4 method could look like this:

for (let i = 0; i < N; i++) {
  const k1 = f(ts[i], ys[i])

  const s1 = ys[i] + k1 * h/2
  const k2 = f(ts[i] + h/2, s1)

  const s2 = ys[i] + k2 * h/2
  const k3 = f(ts[i] + h/2, s2) 

  const s3 = ys[i] + k3 * h
  const k4 = f(ts[i] + h, s3) // f(t + h, y_n + k3*h)
  ys[i + 1] = ys[i] + (k1/6 + k2/3 + k3/3 + k4/6) * h
}

The demo below shows how the accuracy of all three methods scales with the step size. Use the slider below to change their resolution!
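As a non-interactive stand-in for the demo, the following sketch integrates the test equation y' = -y, y(0) = 1 from t = 0 to t = 1 with all three methods at the same step size and compares each result against the exact value e^(-1). The ODE and the step count are just illustrative choices.

```javascript
const f = (t, y) => -y  // test ODE y' = -y, exact solution y = e^(-t)
const N = 50, t_0 = 0, t_1 = 1, y_0 = 1
const h = (t_1 - t_0) / N

// Run N steps of a given single-step update rule
function integrate(step) {
  let y = y_0
  for (let i = 0; i < N; i++) y = step(t_0 + i * h, y)
  return y
}

const eulerStep = (t, y) => y + f(t, y) * h
const midpointStep = (t, y) => {
  const k1 = f(t, y)
  return y + f(t + h/2, y + k1 * h/2) * h
}
const rk4Step = (t, y) => {
  const k1 = f(t, y)
  const k2 = f(t + h/2, y + k1 * h/2)
  const k3 = f(t + h/2, y + k2 * h/2)
  const k4 = f(t + h, y + k3 * h)
  return y + (k1/6 + k2/3 + k3/3 + k4/6) * h
}

const exact = Math.exp(-1)
const errEuler = Math.abs(integrate(eulerStep) - exact)
const errMidpoint = Math.abs(integrate(midpointStep) - exact)
const errRK4 = Math.abs(integrate(rk4Step) - exact)
console.log(errEuler, errMidpoint, errRK4)
```

At the same cost per outer step, each method is dramatically more accurate than the previous one: Euler's error sits around 10⁻³ here, the midpoint method's around 10⁻⁵, and RK4's below 10⁻⁹.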

The Limits of Resolution

As you can see, a smaller step size leads to more accurate results. Unfortunately, this way of increasing accuracy has its limits. At some point, roundoff errors caused by the computer's limited floating point precision start outweighing the improvements a smaller step size gives us. To reach even higher accuracy, higher order methods have to be used.

Absolute error of a numerical differentiation example as a function of step size

Moving into higher orders

Until now, we have only looked at solving first order differential equations. To solve higher order ODEs like the wave equation, we first have to turn them into a system of first order ODEs. As an example, let's look at the harmonic oscillator.

y'' = -ω² · y

To turn this into a system of first order ODEs, we write the second order derivative y'' as the derivative of a new variable v = y'. Now we can represent the state as a vector consisting of the value and its first derivative, like so

y' = v
v' = -ω² · y

This requires some changes in the implementation, since y isn't a number anymore, but rather a vector. To see one possible implementation, either inspect the site or take a look at the GitHub repository for all the demos. There are a number of other improvements that could be made, like moving from a fixed resolution to an adaptive step size, as I did in this solar system simulation, or implementing a multistep method that takes more than one previous point into account.
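As a minimal sketch of what that change could look like (this is my own version, not the exact code from the demos): the state becomes an array [y, v], f returns an array of derivatives, and the RK4 update is applied componentwise. For the harmonic oscillator with ω = 1, y(0) = 1 and y'(0) = 0, the exact solution is cos(t), so after one full period the state should return to where it started.

```javascript
// State vector: [y, v] with v = y'. For y'' = -ω²y with ω = 1:
const f = (t, [y, v]) => [v, -y]

// Helper: componentwise a + b * s
const addScaled = (a, b, s) => a.map((ai, j) => ai + b[j] * s)

const N = 200, t_0 = 0, t_1 = 2 * Math.PI  // integrate over one period
const h = (t_1 - t_0) / N
let state = [1, 0]  // y(0) = 1, y'(0) = 0 → exact solution y = cos(t)

for (let i = 0; i < N; i++) {
  const t = t_0 + i * h
  const k1 = f(t, state)
  const k2 = f(t + h/2, addScaled(state, k1, h/2))
  const k3 = f(t + h/2, addScaled(state, k2, h/2))
  const k4 = f(t + h, addScaled(state, k3, h))
  state = state.map((s, j) =>
    s + (k1[j]/6 + k2[j]/3 + k3[j]/3 + k4[j]/6) * h)
}

console.log(state) // close to the initial state [1, 0]
```

The same structure works for any system: a planet's position and velocity, the displacements of a discretized wave, and so on; only f and the length of the state vector change.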

...and beyond

Using these methods, you can now solve a variety of problems from physics, mathematics and even biology. Here's a list of problems I find particularly interesting and an interactive implementation to go along with each one. If you enjoy these kinds of projects, why don't you check out my website!