For convenience, `odehybrid` includes a configurable set of ordinary differential equation solvers, `rkfixed` and `rkadapt`. Using these two solvers, one can implement any explicit Runge-Kutta method by simply passing in the right coefficients. Further, without any coefficients, both default to very popular solvers.

These solvers were created for use with `odehybrid`, but they can be used anywhere. Their interface is approximately the same as that of `ode45`. For a derivative function such as:

```
function x_dot = f(t, x)
    ...
end
```

We can propagate this via:

```
ts = [0 10]; % Start and stop times
x0 = [1; 2]; % Initial state
[t, x] = rkadapt(@f, ts, x0); % Propagate.
plot(t, x); % Plot.
```

They can take options from `odeset`, including an `OutputFcn`, `MaxStep`, `InitialStep`, `RelTol`, and `AbsTol`. See `rkadapt` and `rkfixed` for more.

```
options = odeset('MaxStep', 0.1);
[t, x] = rkadapt(@f, ts, x0, options);
```

These functions can also take a Butcher tableau as additional arguments, discussed more below.

Fixed-step solvers work by dividing a specified time step into smaller sub-intervals, propagating the derivatives from one point to the next, and combining the full set of derivatives into one update across the entire time step. This can achieve great accuracy and stability in a very easy-to-use and fast way.
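As a rough sketch of this idea (in Python for illustration; `rkfixed` itself is a MATLAB function, and the names below are hypothetical), a fixed-step loop using the classical fourth-order coefficients looks like:

```python
import math

def f(t, x):
    # Example dynamics: dx/dt = -x, with exact solution x(t) = x0 * exp(-t).
    return -x

def propagate_fixed(f, t0, tf, x0, dt):
    """Propagate x' = f(t, x) from t0 to tf with classical RK4 steps."""
    t, x = t0, x0
    while t < tf - 1e-12:
        h = min(dt, tf - t)          # Don't step past the final time.
        k1 = f(t,       x)
        k2 = f(t + h/2, x + h/2*k1)  # Sub-interval evaluations...
        k3 = f(t + h/2, x + h/2*k2)
        k4 = f(t + h,   x + h*k3)
        x += h/6 * (k1 + 2*k2 + 2*k3 + k4)  # ...combined into one update.
        t += h
    return t, x

t_end, x_end = propagate_fixed(f, 0.0, 10.0, 1.0, 0.1)
```

With a step of 0.1, the final state agrees with the exact solution, \(e^{-10}\), to many digits, which is the "great accuracy" referred to above.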

Because fixed-step solvers require the user to specify a time step, `rkfixed` requires a fourth argument: either a time step or an `options` structure from `odeset` with the `MaxStep` property set.

```
dt = 0.1; % Time step
[t, x] = rkfixed(@f, ts, x0, dt);
options = odeset('MaxStep', 0.1);
[t, x] = rkfixed(@f, ts, x0, options);
```

By default, `rkfixed` implements a fourth-order Runge-Kutta method using the “3/8” rule, but with additional arguments, it can be used for any explicit fixed-step Runge-Kutta method.
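For reference, the “3/8” rule's coefficients can be written out as follows (a Python sketch of the standard published tableau; this shows the numbers themselves, not necessarily the exact storage format `rkfixed` expects):

```python
# Butcher tableau for the fourth-order "3/8" rule.
# Row i of a holds the coefficients combining earlier stages k_1..k_{i-1}.
c = [0, 1/3, 2/3, 1]
a = [[0,    0, 0, 0],
     [1/3,  0, 0, 0],
     [-1/3, 1, 0, 0],
     [1,   -1, 1, 0]]
b = [1/8, 3/8, 3/8, 1/8]

# Basic consistency checks: the b weights sum to 1, and each c_i equals
# the sum of the corresponding row of a.
assert abs(sum(b) - 1) < 1e-15
assert all(abs(sum(a[i]) - c[i]) < 1e-15 for i in range(4))
```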

Finally, because of its popularity, the most common fixed-step propagator is also included in its own function: “the” fourth-order Runge-Kutta method, in `rk4`.

```
dt = 0.1; % Time step
[t, x] = rk4(@f, ts, x0, dt);
```

Adaptive-step solvers similarly divide a larger time step into smaller sub-steps to build an accurate and stable update strategy; however, they go a step further. They use multiple sub-intervals to construct *two* different updates simultaneously. If the two updates agree closely, the update considered more accurate is accepted. If the two updates differ, the adaptive-step strategy will throw out the time step and try again with a smaller step. In this way, the time step changes over the course of the propagation according to how “fast” the dynamics represented by \(f\) are.
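To make the accept/reject idea concrete, here is a minimal Python sketch using the simplest embedded pair, Heun-Euler 2(1). This illustrates the strategy only; it is not how `rkadapt` is implemented internally, and all names are hypothetical.

```python
def heun_euler_step(f, t, x, dt, tol):
    """One trial step of the embedded Heun-Euler 2(1) pair.

    Returns (accepted, t_new, x_new, dt_next)."""
    k1 = f(t, x)
    k2 = f(t + dt, x + dt*k1)
    x2 = x + dt/2 * (k1 + k2)  # Second-order (Heun) update.
    x1 = x + dt * k1           # First-order (Euler) update.
    err = abs(x2 - x1)         # Local error estimate from the disagreement.
    if err <= tol:
        # Accept the more accurate update; try a larger step next time.
        return True, t + dt, x2, 1.5*dt
    # Reject: throw out this step and retry with a smaller one.
    return False, t, x, 0.5*dt

# Propagate dx/dt = -x from t = 0 to t = 1 with adaptive steps.
t, x, dt = 0.0, 1.0, 0.5
while t < 1.0:
    dt = min(dt, 1.0 - t)  # Don't step past the final time.
    accepted, t, x, dt = heun_euler_step(lambda t, x: -x, t, x, dt, 1e-5)
```

Note that rejected trial steps call \(f\) and are then discarded entirely, which is exactly why side effects inside \(f\) are dangerous, as discussed below.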

By default, `rkadapt` implements Dormand-Prince 5(4), a fifth-order method with embedded fourth-order error checking.

For both fixed- and adaptive-step methods, there are numerous calls to \(f\) which should be considered trial points and not “real” updates. For this reason, it's crucial that nothing happen inside \(f\) that cannot be thrown out, such as logging. If one were to log something in each call to \(f\) and then the time step were thrown out, one would be left with false information in the log. When using logging via `odehybrid`, this is taken care of automatically. However, if one is using these solvers by themselves (not as part of `odehybrid`), one can use an `OutputFcn`, which is only called with the valid updates. Here is an example `OutputFcn`, as discussed in the documentation for MATLAB's built-in `odeset`.

```
function status = example_outputfcn(t, x, flag)
    % Return true to terminate the propagation.
    status = 0;
    switch flag
        % On init, we receive the time span and init. states.
        case 'init'
            fprintf('Simulating from %fs to %fs.\n', t);
        % When done, times and states are all empty.
        case 'done'
            fprintf('Finished the simulation.\n');
        % Otherwise, we receive 1 or more samples.
        otherwise
            status = x(1) < 0; % End the simulation when x(1) < 0.
    end
end
```

One sets this with:

```
options = odeset('OutputFcn', @example_outputfcn);
[t, x] = rkadapt(@f, ts, x0, options);
```

Both `rkfixed` and `rkadapt` can be used for any explicit Runge-Kutta method by providing the Butcher tableau as additional arguments, consisting of matrices `a`, `b`, and `c`. These coefficients describe where the sub-intervals of the time step are taken and how the derivatives are combined for partial updates and the final state update.

For each stage \(i\) from 1 to \(s\), the function, \(f\), is called with a combination of the previous state and previously calculated function outputs:

$$ k_i = f\left(t + c_i \Delta t,\; x(t) + \Delta t \left( a_{i,1} k_1 + \ldots + a_{i,i-1} k_{i-1} \right) \right) $$

The final state update is constructed as:

$$ x(t + \Delta t) = x(t) + \Delta t \left( b_1 k_1 + b_2 k_2 + \ldots + b_s k_s \right) $$

For adaptive-step solvers, there's a second row to the \(b\) coefficients, the \(\hat{b}\) coefficients. These create the secondary update, used to calculate the local error by comparison with the update from the first row.
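The stage and update equations can be sketched generically (in Python for illustration; the coefficient layout mirrors the formulas above and is not necessarily the exact argument format of the MATLAB functions):

```python
def explicit_rk_step(f, t, x, dt, a, b, c):
    """One explicit Runge-Kutta step built from a Butcher tableau.

    a[i][j] holds a_{i+1,j+1}; only the strictly lower triangle is used."""
    s = len(b)
    k = []
    for i in range(s):
        # Combine previously computed stages per row i of the tableau.
        xi = x + dt * sum(a[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i]*dt, xi))
    # Final state update from the b weights.
    return x + dt * sum(b[i] * k[i] for i in range(s))

# Example tableau: the midpoint method (second order).
a = [[0, 0], [0.5, 0]]
b = [0, 1]
c = [0, 0.5]

# Propagate dx/dt = -x from t = 0 to t = 1 in 100 steps.
x = 1.0
for n in range(100):
    x = explicit_rk_step(lambda t, x: -x, n*0.01, x, 0.01, a, b, c)
```

Swapping in a different tableau changes the method without changing the stepping code, which is exactly the flexibility `rkfixed` and `rkadapt` provide.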

The functions are then called as:

```
[t, x] = rkfixed(@f, ts, x0, dt, a, b, c);
[t, x] = rkadapt(@f, ts, x0, a, b, c, p);
```

where `p` is the order of the lower-order update represented by the two rows of `b`. Note that \(c_1\) and the upper-right triangle of \(a\) (including the diagonal) are expected to be 0.

*Caution*: Various texts list the \(b\) coefficients differently; for these implementations, the coefficients used for the state update must be on the first row.

See `rkadapt` and `rkfixed` for examples.

Some solvers have an excellent property: the final row of \(a\) matches the first (or only) row of \(b\). This means the derivatives calculated at \(k_s\) of the previous step will be the same as those of \(k_1\) after the state is updated. This is called the “first-same-as-last” property, and it allows the solver to skip one call to \(f\) per step for greater efficiency. `rkadapt` looks for this and takes advantage of it when it exists.
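For instance, in the standard published Dormand-Prince 5(4) tableau, the last row of \(a\) equals the first row of \(b\), which is what makes it first-same-as-last (a Python sketch, using exact fractions to avoid rounding):

```python
from fractions import Fraction as F

# First row of b: the fifth-order weights of Dormand-Prince 5(4).
b = [F(35, 384), F(0), F(500, 1113), F(125, 192),
     F(-2187, 6784), F(11, 84), F(0)]

# Last row of a: the coefficients that form the seventh (final) stage.
a_last = [F(35, 384), F(0), F(500, 1113), F(125, 192),
          F(-2187, 6784), F(11, 84)]

# First-same-as-last: the final stage is evaluated exactly at the new
# state, so it can be reused as k_1 of the next step.
fsal = a_last == b[:len(a_last)]
assert fsal
assert sum(b) == 1  # Consistency: the update weights sum to exactly 1.
```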

*Advanced Engineering Mathematics* by Erwin Kreyszig contains a good introduction to the most common solvers implemented here. Check out used copies of the ninth or eighth editions.

*How Simulations Work* by Tucker McClure has a general discussion of ordinary differential equation propagation.

Runge-Kutta methods on Wikipedia also has a good discussion.

January 18th, 2017

©2017 An Uncommon Lab