# The JiTCDDE module¶

Note that some relevant information can be found in the common JiTC*DE documentation. This includes installation instructions, compiler issues and optimisation, general performance considerations, how to implement network dynamics, and a small FAQ.

## Introduction¶

JiTCDDE (just-in-time compilation for delay differential equations) is a standalone Python implementation of the DDE integration method proposed by Shampine and Thompson [ST01], which in turn employs the Bogacki–Shampine Runge–Kutta pair [BS89]. JiTCDDE is designed in analogy to JiTCODE (which is handled very similarly to SciPy’s ODE (scipy.integrate.ode)): It takes an iterable (or generator function or dictionary) of symbolic expressions, translates them to C code, compiles them and an integrator wrapped around them on the fly, and allows you to operate this integrator from Python. Symbolic expressions are mostly handled by SymEngine, SymPy’s compiled-backend-to-be (see SymPy vs. SymEngine for details).

This approach has the following advantages:

• Speed boost through compilation: Evaluating the derivative and the core operations of the Runge–Kutta integration happens in compiled C code and thus very efficiently.
• Speed boost through symbolic optimisation: If your derivative is automatically generated by some routine, you can simplify it symbolically to boost the speed. In fact, blatant optimisations such as $$y·(x-x)=0$$ are performed on the fly by SymEngine. This is for example interesting if you want to simulate dynamics on a sparse network, as non-existing links are not taken into account when the derivative is calculated during the integration. Moreover, multiple delay terms with the same delay can be handled efficiently, requiring only one look-up (see below).
• Automatically calculated Lyapunov exponents: As the derivative is provided symbolically, SymEngine’s automatic differentiation routines can be employed to obtain the DDEs for the tangent vectors, which in turn are required for calculating the Lyapunov exponents (see Calculating Lyapunov exponents with jitcdde_lyap).
• Symbolic interface: You can enter your differential equations almost like you would on paper. Also, if you are working with SymPy or SymEngine anyway – e.g., to calculate fixed points – you do not need to bother much with translating your equations.

If compilation fails to work for whatever reason, pure Python functions can be employed as a fallback (which is much slower, however).

## A brief mathematical background¶

This documentation assumes that the delay differential equation (DDE) you want to solve is:

$\dot{y} = f(t, y, y(t-τ_1), y(t-τ_2), …)$

The gist of Shampine’s and Thompson’s method [ST01] is this: The differential equation is integrated adaptively with the Bogacki–Shampine pair [BS89], like an ODE. After every successful integration step, the state and derivative of the integration (which is an automatic by-product) are stored. Whenever the derivative $$(f)$$ is evaluated, the required past states $$\left ( y(t-τ_1), y(t-τ_2), … \right )$$ are obtained through piece-wise cubic Hermite interpolation, using previously stored pairs of state and derivative (“anchor”). In some extreme cases, they may also be extrapolated.
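To make the interpolation concrete, here is a minimal stand-alone sketch for a scalar state (the formula is the standard cubic Hermite basis; JiTCDDE’s actual implementation is compiled C code and vectorised):

```python
def hermite_interpolate(t, anchor0, anchor1):
	"""Cubic Hermite interpolation between two anchors, each being a
	(time, state, derivative) triple, as JiTCDDE stores them."""
	t0, y0, d0 = anchor0
	t1, y1, d1 = anchor1
	Δ = t1 - t0
	s = (t-t0)/Δ  # normalised position within the interval
	h00 = 2*s**3 - 3*s**2 + 1
	h10 = s**3 - 2*s**2 + s
	h01 = -2*s**3 + 3*s**2
	h11 = s**3 - s**2
	return h00*y0 + h10*Δ*d0 + h01*y1 + h11*Δ*d1
```

Since cubic Hermite interpolation is exact for polynomials up to third order, the two anchors (0, 0, 0) and (2, 4, 4) of the function $$y(t)=t^2$$ reproduce it exactly in between.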

Note that unlike most other DDE software, JiTCDDE stores and accesses the initial past in exactly this way, i.e., as anchor points. Thus, if you want to have maximum control, you have to initiate the past in exactly this way, i.e., you have to give at least two such anchor points (via add_past_point). If you do not want or need this, there are utility functions available that automatically determine the anchors from a given function (past_from_function) or just set the past to a fixed value (constant_past). You can also use the get_state method to obtain a representation of the past that you can dissect and modify using CubicHermiteSpline.

## A simple example¶

Suppose we want to integrate the Mackey–Glass delay differential equation: $$\dot{y} = f(y)$$ with $$y∈ℝ$$,

$f(y) = β \frac{y(t-τ)}{1+y(t-τ)^n} - γ·y(t),$

and with $$β = 0.25$$, $$γ = 0.1$$, $$n = 10$$, and $$τ = 15$$. First we do some importing and define the constants:

from jitcdde import jitcdde, y, t
import numpy

τ = 15
n = 10
β = 0.25
γ = 0.1


Amongst our imports were the symbols for the state (y) and time (t), which have to be used to write down the differential equation such that JiTCDDE can process it. Using them, we can write down the right-hand side of the differential equation as a list of expressions. As it is one-dimensional, the list contains only one element. We can then initiate JiTCDDE:

f = [ β * y(0,t-τ) / (1 + y(0,t-τ)**n) - γ*y(0) ]
DDE = jitcdde(f)


We want the initial condition and past to be $$y(t<0) = 1$$. Hence we can use constant_past. This automatically results in the integration starting at $$t=0$$.

DDE.constant_past([1.0])


If we calculate the derivative from our initial conditions, we obtain $$f(t=0) = 0.025$$, which does not agree with the $$\dot{y}(t=0) = 0$$ as explicitly defined in the initial conditions. Practically, this would result in an error if we started integrating without further precautions. step_on_discontinuities makes some tailored integration steps to avoid this problem and to allow for the discontinuity to be smoothed out by temporal evolution. (See Dealing with initial discontinuities for alternatives and more details on this).
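For illustration, this mismatch can be verified with plain arithmetic, using the constants from above:

```python
# Mackey-Glass parameters as above
β, γ, n = 0.25, 0.1, 10

# With a constant past, y(t-τ) = y(t) = 1 at the start of the integration:
y_past = 1.0
f_0 = β*y_past/(1 + y_past**n) - γ*y_past  # 0.25/2 - 0.1 = 0.025
```

This differs from the derivative of the constant past, which is 0.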

DDE.step_on_discontinuities()


Finally, we can perform the actual integration. In our case, we integrate for 10000 time units with a sampling rate of 10 time units. We query the current time of the integrator (DDE.t) to start wherever step_on_discontinuities ended. integrate returns the state after integration, which we collect in the list data. Finally, we use numpy.savetxt to store this to the file timeseries.dat.

data = []
for time in numpy.arange(DDE.t, DDE.t+10000, 10):
	data.append( DDE.integrate(time) )
numpy.savetxt("timeseries.dat", data)


Taking everything together, our code is:

from jitcdde import jitcdde, y, t
import numpy

τ = 15
n = 10
β = 0.25
γ = 0.1

f = [ β * y(0,t-τ) / (1 + y(0,t-τ)**n) - γ*y(0) ]
DDE = jitcdde(f)

DDE.constant_past([1.0])

DDE.step_on_discontinuities()

data = []
for time in numpy.arange(DDE.t, DDE.t+10000, 10):
	data.append( DDE.integrate(time) )
numpy.savetxt("timeseries.dat", data)


## Dealing with initial discontinuities¶

As already exemplified in A simple example, $$\dot{y}$$ will usually be discontinuous at the start of the integration: Before that time, it is directly defined by a Hermite interpolation of the anchors that you supply; afterwards, it is determined by evaluating $$f$$. As future values of $$f$$ depend on the past via the delay terms, the derivative is also non-smooth at later times, namely $$τ_1, τ_2, …, 2τ_1, τ_1 + τ_2, 2τ_2, …$$. If an integration step contains one of these points, this may violate the smoothness assumptions of Runge–Kutta integration (for a low-order discontinuity) and makes the error estimate very high, no matter the step size. Fortunately, the discontinuities are quickly “smoothed out” (i.e., reduced in order) by the time evolution and can then be ignored. To make this happen, you have four options:

• step_on_discontinuities – this chooses the integration steps such that they fall on the discontinuities. In most cases, this is the easiest and fastest solution to this problem.
• integrate_blindly – this integrates the system for some time with a fixed step size, ignoring the error estimate. You have to take care that all parameters are reasonable. This is a good choice if you have a lot of different delays or time- or state-dependent delays. The time you integrate with this should be larger than your largest delay.
• adjust_diff – this smoothens out the derivative on a small time interval, effectively causing a dent in the history. The disadvantage of this is that the derivative may assume extreme values, causing problems later on.
• Carefully chosen initial conditions – of course, you can define the past such that the derivative at the last anchor is identical to the value of $$f$$ as determined from the anchors. To find such initial conditions, you usually have to solve a non-linear equation system. If you are not interested in the general dynamics of the system but in the evolution of one very particular initial condition, this may already be given (otherwise your model is probably worthless).
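For constant delays, the positions of these discontinuities are simply all sums of up to a few delays. A hypothetical helper (not part of JiTCDDE’s API) that enumerates them could look like this:

```python
from itertools import combinations_with_replacement

def discontinuity_times(delays, max_order):
	"""All times after the start of the integration at which a
	discontinuity of order up to max_order occurs, i.e., all sums
	of at most max_order delays."""
	return sorted({
			sum(combo)
			for order in range(1, max_order+1)
			for combo in combinations_with_replacement(delays, order)
		})
```

For delays 15 and 4, for example, the discontinuities up to second order lie at 4, 8, 15, 19, and 30, which is roughly where step_on_discontinuities would place its steps.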

## Delays within the step (overlap)¶

If the delay becomes shorter than the step size, we need a delayed state to evaluate $$f$$ before we have a final result for the required interpolation anchors. In other words, the integration step depends on its own result and thus becomes implicit.

JiTCDDE addresses this problem mainly in the same manner as Shampine and Thompson [ST01]:

• If reducing the step size by a small factor (pws_factor) makes it smaller than the delay, this is done.

• Otherwise, the result of an integration step is calculated iteratively as follows:

1. Attempt an integration step and extrapolate the required delayed states from the existing results.
2. Attempt the same step again and interpolate the required delayed states using the result of the previous attempt.
3. If the results of the last two attempts are identical within an absolute tolerance of pws_atol and relative tolerance of pws_rtol, accept the result of the last attempt. Otherwise go to step 2. If no such convergence has happened within pws_max_iterations, reduce the step size by pws_factor.
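The steps above amount to a damped fixed-point iteration. A schematic scalar version (not JiTCDDE’s actual implementation, which operates on full integration steps) could look like this:

```python
def iterate_step(attempt_step, first_guess,
                 pws_atol=0.0, pws_rtol=1e-5, pws_max_iterations=10):
	"""attempt_step maps a guess for the step's result to a refined
	result (re-attempting the step with the delayed states interpolated
	from the guess). Returns the result and whether it converged."""
	old = first_guess
	for _ in range(pws_max_iterations):
		new = attempt_step(old)
		if abs(new-old) <= pws_atol + pws_rtol*abs(new):
			return new, True
		old = new
	return old, False  # the caller would now reduce the step size by pws_factor
```

For a contracting map (here with a hypothetical attempt_step halving the distance to the fixed point in every iteration), this converges; for an expanding one it gives up after pws_max_iterations.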

A problem of this approach is that as soon as it reduces the step size, the error estimates from the adaptive Runge–Kutta routines are not directly useful anymore since they almost always insist on increasing the step size. Ignoring this may lead to useless integration steps (and thus wasted time) due to the step size being adapted back and forth. Moreover, throttling step-size increases (which is generally reasonable) may result in the step size being “trapped” at an unnecessarily low value. As far as I can tell, Shampine and Thompson [ST01] offer no solution to this.

To address this issue, JiTCDDE employs the following criteria for increasing the step size when the recommended step size (from the adaptive Runge–Kutta method) is larger than the current one:

• If the shortest delay is larger than the recommended step size, the step size is increased.
• If calculating the next step took fewer than pws_factor iterations and the recommended step size is bigger than pws_factor times the shortest delay, the step size is increased.
• In all other cases, the step size is increased with a chance of pws_base_increase_chance.

To be precise, the above sharp criteria are intentionally blurred such that the probability to increase the step size depends continuously on the mentioned factors. Finally, the parameter pws_fuzzy_increase determines whether the increase actually depends on chance or is deterministic (which may be useful for some applications). This parameter and the others mentioned above can be controlled with set_integration_parameters.
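Stripped of the blurring, the sharp criteria can be sketched as follows (a rough reading of the rules above, not JiTCDDE’s actual implementation; parameter names as in set_integration_parameters):

```python
import random

def shall_increase_step(recommended, shortest_delay, iterations,
                        pws_factor=3, pws_base_increase_chance=0.1,
                        pws_fuzzy_increase=False):
	"""Decide whether to follow a recommendation to increase the
	step size when delays may lie within the integration step."""
	if shortest_delay > recommended:
		return True  # the delay will not lie within the increased step
	if iterations < pws_factor and recommended > pws_factor*shortest_delay:
		return True
	if pws_fuzzy_increase:
		return random.random() < pws_base_increase_chance
	return False
```

With pws_fuzzy_increase disabled, the last criterion simply rejects the increase instead of leaving it to chance.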

## Time- and state-dependent delays¶

There is nothing in JiTCDDE’s implementation that keeps you from making delays time- or state-dependent. However, the error estimate is not accurate anymore as it does not take into account the inaccuracy caused by the changing delay. This should not be a problem if your delays change sufficiently slowly in comparison to your step size.

Note that if you want to make use of this, you must provide the maximum delay manually. See this file for an example.

## Multi-dimensional equations and networks¶

While this particular documentation contains no example for a small, multi-dimensional equation, their implementation is analogous to JiTCODE. An example for a two-dimensional DDE (the sunflower equation) can also be found in the accompanying paper.

JiTCDDE is specifically designed to be able to handle large delay differential equations, as they arise, e.g., in networks. We give an example for a network of one-dimensional oscillators below. The caveats, tools, and tricks when doing this are the same as for JiTCODE, so its documentation may be helpful to you.

### Example: Kuramoto oscillators with distributed delays¶

Suppose, we want to implement a network of $$n=100$$ delay-coupled Kuramoto oscillators, governed by the following differential equations:

$\dot{y}_i = ω + \frac{c}{n-1} \sum_{j=0}^{n-1} A_{ji} \sin\Big(y_j(t-τ_{ij})-y_i(t)\Big),$

where $$ω=1$$, $$c=42$$, $$τ_{ij} \sim \mathcal{U}([\frac{π}{5},2π])$$, and $$A$$ is the adjacency matrix of a random unweighted directed network with self-connections where each possible link exists with a probability $$q=0.05$$.

Without further ado, here is the example code; highlighted lines will be commented upon below:

from jitcdde import jitcdde, y, t
from numpy import pi, arange, random, max
from symengine import sin

n = 100
ω = 1
c = 42
q = 0.05
A = random.choice( [1,0], size=(n,n), p=[q,1-q] )
τ = random.uniform( pi/5, 2*pi, size=(n,n) )

def kuramotos():
	for i in range(n):
		yield ω + c/(n-1)*sum(
				sin(y(j,t-τ[i,j])-y(i))
				for j in range(n)
				if A[j,i]
			)

I = jitcdde(kuramotos,n=n,verbose=False)
I.set_integration_parameters(rtol=0,atol=1e-5)

I.constant_past( random.uniform(0,2*pi,n), time=0.0 )
I.integrate_blindly( max(τ) , 0.1 )

for time in I.t + arange(0,400,0.2):
	print(*I.integrate(time) % (2*pi))

Explanation of selected features and choices:

• Line 9 is just a quick way to generate the network described above. For more complex networks, you will either have to write a more complex function or use dedicated modules. (In fact, this example was chosen such that the network creation is very simple.)
• The values of $$τ$$ are initialised globally (line 10). We should not generate them within the generator function, because if we were trying to calculate Lyapunov exponents or the Jacobian, the generator function would be called multiple times, and thus the values of the parameters would not be consistent (which would be disastrous).
• In line 15, we use symengine.sin – in contrast to math.sin or numpy.sin.
• In line 21, we explicitly use absolute instead of relative errors, as the latter make no sense for Kuramoto oscillators.
• In line 24, we integrate blindly with a maximum time step of 0.1 up to the maximal delay to ensure that initial discontinuities have smoothened out.

## Calculating Lyapunov exponents with jitcdde_lyap¶

jitcdde_lyap is a simple extension of jitcdde that almost automatically handles calculating Lyapunov exponents by evolving separation functions. It works just like jitcdde, except that it generates and integrates additional differential equations for the separation functions. After every call of integrate, the separation functions are orthonormalised, and the “local” Lyapunov exponents for this integration step are returned alongside the system’s state. These can then be further processed to obtain the Lyapunov exponents. The separation functions are initialised with random data, and you have to take care of the preiterations that the separation functions require to align themselves.
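The orthonormalisation step can be illustrated in finite dimensions with the Euclidean scalar product (a sketch assuming NumPy; for actual separation functions, JiTCDDE uses the functional scalar product described below):

```python
import numpy as np

def orthonormalise(vectors, dt):
	"""Gram-Schmidt orthonormalisation of separation vectors.
	Returns the new vectors and the local Lyapunov exponents for a
	time interval of length dt (logarithms of the growth factors,
	assuming the vectors started out orthonormal)."""
	basis = []
	local_lyaps = []
	for v in vectors:
		for b in basis:  # remove components along earlier vectors
			v = v - np.dot(v, b)*b
		norm = np.linalg.norm(v)
		local_lyaps.append(np.log(norm)/dt)
		basis.append(v/norm)
	return np.array(basis), np.array(local_lyaps)
```

Averaging such local exponents over a long integration (weighted by the respective interval lengths) yields the Lyapunov exponents.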

The method employed here is similar to Farmer’s [F82], which in turn is an adaption of the method described by Benettin et al. [BGGS80] to delayed systems. As the state of delayed systems is also defined by their recent past, one has to consider the past of the tangent vectors (as used by Benettin et al.) as well; these are called separation functions. Farmer approximates these separation functions by fine, equidistantly sampled recordings of the past, on which he applies the standard scalar product for purposes of computing norms and orthonormalisation. This approach does not translate well to adaptive step sizes as JiTCDDE employs them. Instead, JiTCDDE employs as a scalar product between two separation functions $$g$$ and $$h$$:

$\int_{t-τ_\text{max}}^t \mathcal{H}_g(\mathfrak{t}) \; \mathcal{H}_h(\mathfrak{t}) \; \mathrm{d} \mathfrak{t},$

where $$\mathcal{H}$$ denotes the piecewise cubic Hermite interpolant (which is also used for obtaining past states). The matrix induced by this scalar product can largely be calculated beforehand and thus the scalar product itself can be evaluated efficiently. Note that for the limit of an infinitely fine sampling, this yields the same result as Farmer’s approach.
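That this scalar product is the fine-sampling limit of Farmer’s approach can be checked numerically by approximating the integral with a dense trapezoidal sampling; here with two toy “separation functions” in place of the Hermite interpolants (assumes NumPy):

```python
import numpy as np

τ_max = 15.0
ts = np.linspace(-τ_max, 0, 100001)  # dense sampling of the recent past

g = np.sin(ts)  # toy separation functions standing in for the interpolants
h = np.cos(ts)

# trapezoidal approximation of the integral of g·h over [-τ_max, 0]
product = g*h
scalar_product = float(np.sum((product[:-1]+product[1:])/2*np.diff(ts)))
```

For these toy functions, the integral can be computed analytically as $$-\sin^2(15)/2 ≈ -0.2114$$, which the dense sampling reproduces.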

For instance, we can calculate and print the Lyapunov exponents for the Mackey–Glass system as follows (central changes to the example from A simple example highlighted):

from jitcdde import jitcdde_lyap, y, t
import numpy
from scipy.stats import sem

τ = 15
n = 10
β = 0.25
γ = 0.1

f = [ β * y(0,t-τ) / (1 + y(0,t-τ)**n) - γ*y(0) ]
n_lyap = 4
DDE = jitcdde_lyap(f, n_lyap=n_lyap)

DDE.constant_past([1.0])

DDE.step_on_discontinuities(max_step=1.0)

data = []
lyaps = []
weights = []
for time in numpy.arange(DDE.t, DDE.t+10000, 10):
	state, lyap, weight = DDE.integrate(time)
	data.append(state)
	lyaps.append(lyap)
	weights.append(weight)

numpy.savetxt("timeseries.dat", data)

lyaps = numpy.vstack(lyaps)
for i in range(n_lyap):
	Lyap = numpy.average(lyaps[400:,i], weights=weights[400:])
	stderr = sem(lyaps[400:,i]) # Note that this is only an estimate
	print("%i. Lyapunov exponent: % .4f +/- %.4f" % (i+1,Lyap,stderr))

Note that integrate returns not only the local Lyapunov exponents but also the length of the time interval to which they apply (which differs from the time spanned by the integrate command and may even be zero). This length should be used to weight the local Lyapunov exponents for statistical processing, like in line 31.

Be aware that the estimation of Lyapunov exponents may be numerically considerably more sensitive. Before you give up or report an issue, please follow this protocol:

1. Run your scenario with regular jitcdde (i.e., without Lyapunov exponents).
2. Run your scenario just estimating the largest Lyapunov exponent (n_lyap=1).
3. If the integration is not successful, try to locate the point where things go awry. This may be before you actually get an error message. Look for infinite values or not-a-numbers. Reduce the respective parameters that control the frequency of rescaling, namely the max_step parameter of step_on_discontinuities, the step parameter of integrate_blindly, or the sampling rate of the regular integration.
4. Increase the number of computed Lyapunov exponents one at a time. Repeat Step 3 as needed. Be aware that once you are in the negative Lyapunov exponents, it may happen that the amplitude of the next exponent is orders of magnitude higher than that of the preceding one – thus requiring a much more frequent rescaling to avoid numerical underflows.

As the Lyapunov vectors (separation functions) are quite difficult to interpret, they are not returned as of now (if you need them, please make a feature request). There are also two classes (jitcdde_transversal_lyap and jitcdde_restricted_lyap) that allow calculating the largest Lyapunov exponents transversal to the synchronisation manifold and to arbitrary hyperplanes, respectively. See the JiTCODE documentation for an example of how to use the former and the accompanying paper for a mathematical background (and another example).

## Command reference¶

t

the symbol for time for defining the differential equation. You may just as well define an analogous symbol directly with SymEngine or SymPy, but using this symbol is the best way to get the most out of future versions of JiTCDDE, in particular avoiding incompatibilities. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).

y(index, time=t)

the function representing the DDE’s past and present states, used for defining the differential equation. The first integer argument denotes the component. The second, optional argument is a symbolic expression denoting the time. This automatically expands to using current_y, past_y, and anchors; so do not be surprised when you look at the output and it is different from what you entered or expected. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).

current_y(index)

the symbol for the current state for defining the differential equation. It is a function and the integer argument denotes the component. This is only needed for specific optimisations of large DDEs; in all other cases use y instead. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).

past_y(index, anchors)

the symbol for the DDE’s past state for defining the differential equation. It is a function with the first integer argument denoting the component and the second argument being a pair of past points (as returned by anchors) from which the past state is interpolated (or, in rare cases, extrapolated). This is only needed for specific optimisations of large DDEs; in all other cases use y instead. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).

anchors(time)

the symbol representing two anchors for defining the differential equation. It is a function and the float argument denotes the time point to which the anchors pertain. This is only needed for specific optimisations of large DDEs; in all other cases use y instead. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).

quadrature(integrand, variable, lower, upper, nsteps=20, method='gauss')

If your DDE contains an integral over past points, this utility function helps you to implement it. It returns an estimator of the integral from evaluations of the past at discrete points, employing a numerical quadrature. You probably want to disable automatic simplifications when using this.

Parameters:

• integrand : symbolic expression
• variable : symbol – variable of integration
• lower, upper : expressions – lower and upper limit of integration
• nsteps : integer – number of sampling steps. This should be chosen sufficiently high to capture all relevant aspects of your dynamics.
• method : "midpoint" or "gauss" – which method to use for numerical integration. So far, Gauß–Legendre quadrature ("gauss"; needs SymPy) and the midpoint rule ("midpoint") are available. Use the midpoint rule if you expect your integrand to exhibit structure on a time scale smaller than (upper − lower)/nsteps. Otherwise or when in doubt, use "gauss".
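As an illustration of the underlying rule, here is a purely numerical sketch of the composite midpoint method (JiTCDDE’s quadrature instead returns a symbolic expression in terms of past states):

```python
def midpoint_quadrature(integrand, lower, upper, nsteps=20):
	"""Composite midpoint rule: sample the integrand at the centre
	of each of nsteps equal subintervals and sum up."""
	h = (upper-lower)/nsteps
	return h*sum(integrand(lower + (i+0.5)*h) for i in range(nsteps))
```

With the default of twenty steps, smooth integrands like $$x^2$$ on $$[0,1]$$ are already captured to a few decimal places; integrands with structure finer than one subinterval need a higher nsteps.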
exception UnsuccessfulIntegration

This exception is raised when the integrator cannot meet the accuracy and step-size requirements. If you want to know the exact state of your system before the integration fails or similar, catch this exception.

test(omp=True, sympy=True)

Runs a quick simulation to test whether:

• a compiler is available and can be interfaced by Setuptools,
• OMP libraries are available and can be accessed,
• SymPy is available.

The latter two tests can be deactivated with the respective argument. This is not a full software test but rather a quick sanity check of your installation. If successful, this function just finishes without any message.

### The main class¶

class jitcdde(f_sym=(), helpers=None, n=None, delays=None, max_delay=None, control_pars=(), verbose=True, module_location=None)
Parameters:

• f_sym : iterable of symbolic expressions or generator function yielding symbolic expressions or dictionary – If an iterable or generator function, the i-th element is the i-th component of the value of the DDE’s derivative $$f(t,y)$$. If a dictionary, it has to map the dynamical variables to their derivatives, and the dynamical variables must be y(0), y(1), ….
• helpers : list of length-two iterables, each containing a symbol and an expression – Each helper is a variable that will be calculated before evaluating the derivative and can be used in the latter’s computation. The first component of the tuple is the helper’s symbol as referenced in the derivative or other helpers, the second component describes how to compute it from t, y and other helpers. This is for example useful to realise a mean-field coupling, where the helper could look like (mean, sum(y(i) for i in range(100))/100). (See the JiTCODE documentation for an example.)
• n : integer – Length of f_sym. While JiTCDDE can easily determine this itself (and will, if necessary), this may take some time if f_sym is a generator function and n is large. Take care that this value is correct – if it isn’t, you will not get a helpful error message.
• delays : iterable of expressions or floats – The delays of the dynamics. If not given, JiTCDDE will determine these itself if needed. However, this may take some time if f_sym is large. Take care that these are correct – if they aren’t, you won’t get a helpful error message.
• max_delay : number – Maximum delay. In case of constant delays and if not given, JiTCDDE will determine this itself. However, this may take some time if f_sym is large and delays is not given. Take care that this value is not too small – if it is, you will not get a helpful error message. If this value is too large, you may run into memory issues for long integration times and calculating Lyapunov exponents (with jitcdde_lyap) may take forever.
• control_pars : list of symbols – Each symbol corresponds to a control parameter that can be used when defining the equations and set after compilation with set_parameters. Using this makes sense if you need to do a parameter scan with short integrations for each parameter and you are spending a considerable amount of time compiling.
• verbose : boolean – Whether JiTCDDE shall give progress reports on the processing steps.
• module_location : string – Location of a module file from which functions are to be loaded (see save_compiled). If you use this, you need not give f_sym as an argument, but in this case you must give n and max_delay. Also note that the integrator may lack some functionalities, depending on the arguments you provide.
add_past_point(self, time, state, derivative)

adds an anchor from which the initial past of the DDE is interpolated.

Parameters:

• time : float – the temporal position of the anchor.
• state : iterable of floats – the state of the anchor. The dimension of the array must match the dimension of the differential equation (n).
• derivative : iterable of floats – the derivative at the anchor. The dimension of the array must match the dimension of the differential equation (n).
add_past_points(self, anchors)

adds multiple anchors from which the past of the DDE is interpolated.

Parameters: anchors : iterable of tuples Each tuple must have components corresponding to the arguments of add_past_point.
constant_past(self, state, time=0)

initialises the past with a constant state.

Parameters:

• state : iterable of floats – The length must match the dimension of the differential equation (n).
• time : float – The time at which the integration starts.
past_from_function(self, function, times_of_interest=None, max_anchors=100, tol=5)

automatically determines anchors describing the past of the DDE from a given function, i.e., a piecewise cubic Hermite interpolation of the function at automatically selected time points will be the initial past. As this process involves heuristics, it is not perfect. For a better control of the initial conditions, use add_past_point.

Parameters:

• function : callable or iterable of symbolic expressions – If callable, this takes the time as an argument and returns an iterable of floats that is the initial state of the past at that time. If an iterable of expressions, each expression represents how the initial past of the respective component depends on t (requires SymPy). In both cases, the length of the iterable must match the dimension of the differential equation (n).
• times_of_interest : iterable of numbers or None – Initial set of time points considered for the interpolation. The highest value will be the starting point of the integration. Further interpolation anchors may be added in between the given anchors depending on heuristic criteria. If None, these will be automatically chosen depending on the maximum delay, and the integration will start at $$t=0$$.
• max_anchors : positive integer – The maximum number of anchors that this routine will create (including those for the times_of_interest).
• tol : integer – This is a parameter for the heuristics, more precisely the number of digits considered for precision in several places. The higher this value, the more likely it is that the heuristic adds anchors.
get_state(self)

Returns an object that represents all anchors currently used by the integrator, which completely define the current state. The object is a CubicHermiteSpline instance (with a few special extensions for JiTCDDE), which allows you to extract all sorts of information from it if you want.

The format can also be used as an argument for add_past_points. An example where this is useful is when you want to switch between plain integration and one that also obtains Lyapunov exponents.

If you reinitialise this integrator after calling this, this past will be used.

purge_past(self)

Clears the past and resets the integrator. You need to define a new past (using add_past_point) after this.

reset_integrator(self)

Resets the integrator, forgetting all integration progress and forcing re-initiation when it is needed next.

generate_lambdas(self)

Explicitly initiates a purely Python-based integrator.

compile_C(self, simplify=None, do_cse=False, chunk_size=100, extra_compile_args=None, extra_link_args=None, verbose=False, modulename=None, omp=False)

translates the derivative to C code using SymEngine’s C-code printer. For detailed information on many of the arguments and other ways to tweak the compilation, read these notes.

Parameters:

• simplify : boolean – Whether the derivative should be simplified (with ratio=1.0) before translating to C code. The main reason why you could want to disable this is if your derivative is already optimised and so large that simplifying takes a considerable amount of time. If None, this will be automatically disabled for n>10.
• do_cse : boolean – Whether SymPy’s common-subexpression detection should be applied before translating to C code. This is worthwhile if your DDE contains the same delay more than once. Otherwise it is almost always better to let the compiler do this (unless you want to set the compiler optimisation to -O2 or lower). As this requires all entries of f at once, it may void advantages gained from using generator functions as an input. Also, this feature uses SymPy and not SymEngine.
• chunk_size : integer – If the number of instructions in the final C code exceeds this number, it will be split into chunks of this size. See Handling very large differential equations on why this is useful and how to best choose this value. If smaller than 1, no chunking will happen.
• extra_compile_args, extra_link_args : iterables of strings – Arguments to be handed to the C compiler or linker, respectively.
• verbose : boolean – Whether the compiler commands shall be shown. This is the same as Setuptools’ verbose setting.
• modulename : string or None – The name used for the compiled module.
• omp : pair of iterables of strings or boolean – What compiler arguments shall be used for multiprocessing (using OpenMP). If True, they will be selected automatically. If empty or False, no compilation for multiprocessing will happen (unless you supply the relevant compiler arguments otherwise).
set_parameters(self, *parameters)

Set the control parameters defined by the control_pars argument of the jitcdde. Note that you probably want to use purge_past and address initial discontinuities every time after you do this.

Parameters:

• parameters : floats - Values of the control parameters. The order must be the same as in the control_pars argument of the jitcdde.
set_integration_parameters(self, atol=1e-10, rtol=1e-05, first_step=1.0, min_step=1e-10, max_step=10.0, decrease_threshold=1.1, increase_threshold=0.5, safety_factor=0.9, max_factor=5.0, min_factor=0.2, pws_factor=3, pws_atol=0.0, pws_rtol=1e-05, pws_max_iterations=10, pws_base_increase_chance=0.1, pws_fuzzy_increase=False)

Sets the parameters for the step-size adaption. Arguments starting with pws (past within step) are only relevant if the delay is shorter than the step size.

Parameters:

• atol : float
• rtol : float - The tolerance of the estimated integration error is determined as $$\texttt{atol} + \texttt{rtol}·|y|$$. The step-size adaption algorithm is the same as for the GSL. For details see its documentation.
• first_step : float - The step-size adaption starts with this value.
• min_step : float - Should the step size have to be adapted below this value, the integration is aborted and UnsuccessfulIntegration is raised.
• max_step : float - The step size will be capped at this value.
• decrease_threshold : float - If the estimated error divided by the tolerance exceeds this, the step size is decreased.
• increase_threshold : float - If the estimated error divided by the tolerance is smaller than this, the step size is increased.
• safety_factor : float - To avoid frequent adaption, all freshly adapted step sizes are multiplied with this factor.
• max_factor : float
• min_factor : float - The maximum and minimum factor by which the step size can be adapted in one adaption step.
• pws_factor : float - Factor for step-size adaptions due to a delay shorter than the time step. If dividing the step size by pws_factor moves the delay out of the time step, this is done. Otherwise, the step size is decreased by this factor if the iterative algorithm does not converge within pws_max_iterations, and increased by this factor if it converges within fewer iterations than pws_factor.
• pws_atol : float
• pws_rtol : float - If the difference between two successive iterations is below the tolerance determined with these parameters, the iterations are considered to have converged.
• pws_max_iterations : integer - The maximum number of iterations before the step size is decreased.
• pws_base_increase_chance : float - If the normal step-size adaption calls for an increase and the step size was adapted due to the past lying within the step, there is at least this chance that the step size is increased.
• pws_fuzzy_increase : boolean - Whether the decision to try to increase the step size shall depend on chance. The upside of this is that it is less likely that the step size gets locked at an unnecessarily low value. The downside is that the integration is no longer deterministic. If False, increase probabilities are added up until they exceed 0.9, in which case an increase happens.
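For illustration, the deterministic variant (pws_fuzzy_increase=False) amounts to accumulating the increase chances until they exceed 0.9. The following is a stand-alone sketch of that bookkeeping, not JiTCDDE’s actual code:

```python
def deterministic_increase(chances, threshold=0.9):
    # accumulate increase probabilities; report at which attempts
    # an increase actually happens (and reset afterwards)
    accumulated = 0.0
    increases = []
    for i, chance in enumerate(chances):
        accumulated += chance
        if accumulated > threshold:
            increases.append(i)
            accumulated = 0.0
    return increases

# with the default pws_base_increase_chance of 0.1, an increase
# happens roughly on every tenth eligible step
attempts = deterministic_increase([0.1]*20)
```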
t
Returns:

• time : float - The current time of the integrator.
integrate(self, target_time)

Tries to evolve the dynamics.

Parameters:

• target_time : float - Time until which the dynamics is evolved.

Returns:

• state : NumPy array - The computed state of the system at target_time.
adjust_diff(self, shift_ratio=0.0001)

Performs a zero-amplitude (backwards) jump whose width is shift_ratio times the distance to the previous anchor into the past. This may help with addressing initial discontinuities. See the documentation of jump for the caveats of this.

Returns:

• minima : NumPy array of floats
• maxima : NumPy array of floats - The minima or maxima, respectively, of each component during the jump interval. See the documentation of jump on why you may want these.
integrate_blindly(self, target_time, step=None)

Evolves the dynamics with a fixed step size ignoring any accuracy concerns. See Dealing with initial discontinuities as to why you may want to use this. If a delay is smaller than the time step, the state is extrapolated from the previous step.

If the target time equals the current time, adjust_diff is called automatically.

Parameters:

• target_time : float - Time until which the dynamics is evolved. In most cases, this should be larger than the maximum delay.
• step : float - Aspired step size. The actual step size may be slightly adapted to make it divide the integration time. If None, 0, or otherwise falsy, the maximum step size as set with max_step of set_integration_parameters is used.

Returns:

• state : NumPy array - The computed state of the system at target_time.
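The adjustment of the aspired step size (“slightly adapted to make it divide the integration time”) can be illustrated with a small stand-alone sketch; this shows the rounding, not JiTCDDE’s actual code:

```python
import math

def blind_step_size(current_time, target_time, step):
    # choose the number of steps so that they exactly cover the
    # integration time, rounding the aspired step size down if needed
    total = target_time - current_time
    number = math.ceil(total / step)
    return total / number

# e.g., covering 10 time units with an aspired step of 0.3
# yields 34 equal steps of size 10/34
actual = blind_step_size(0, 10, 0.3)
```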
step_on_discontinuities(self, propagations=1, max_step=None, min_distance=1e-05)

Assumes that the derivative is discontinuous at the start of the integration and chooses steps such that propagations of this point via the delays always fall on integration steps (or very close). If the discontinuity was propagated sufficiently often, it is considered to be smoothed and the integration is stopped. See Dealing with initial discontinuities as to why you may want to use this.

This only makes sense if you just defined the past (via add_past_point) and start integrating, just reset the integrator, or changed control parameters.

In case of an ODE, adjust_diff is used automatically.

Parameters:

• propagations : integer - How often the discontinuity has to propagate before it is considered smoothed.
• max_step : float - Maximum step size. If None, 0, or otherwise falsy, the max_step as set with set_integration_parameters is used.
• min_distance : float - If two required steps are closer than this, they will be treated as one.

Returns:

• state : NumPy array - The computed state of the system after the integration.
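For intuition, a discontinuity at the start of the integration (say, t=0) reappears at every sum of up to propagations delays. The following stand-alone sketch (a hypothetical helper, not JiTCDDE’s implementation) computes these times and merges those closer than min_distance:

```python
from itertools import combinations_with_replacement

def discontinuity_times(delays, propagations=1, min_distance=1e-5):
    # all sums of at most `propagations` delays (with repetition)
    times = sorted(
        sum(combo)
        for k in range(1, propagations+1)
        for combo in combinations_with_replacement(delays, k)
    )
    # merge required steps that are closer than min_distance
    merged = []
    for T in times:
        if not merged or T - merged[-1] > min_distance:
            merged.append(T)
    return merged

# with delays 1.0 and 1.5 and two propagations:
# steps at 1.0, 1.5, 2.0, 2.5, and 3.0
steps = discontinuity_times([1.0, 1.5], propagations=2)
```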
jump(self, amplitude, time, width=1e-05, forward=True)

Applies a jump to the state. Since this naturally introduces a discontinuity to the state, it can only be approximated. This is done by adding two anchors a short temporal distance width apart. In other words, the jump is not instantaneous but a strong change of the state over a small interval of length width. The slope after the jump is computed using the derivative f.

A potential source of numerical issues is that the Hermite interpolant becomes extreme during the jump (due to Runge’s phenomenon). Whether this is a problem primarily depends on how these extrema influence delayed dependencies of your derivative. To allow you to estimate the magnitude of this, this function returns the extrema during the jump interval. There are two ways to address this:

• Integrate with steps such that past values from the jump interval are avoided. You can use integrate_blindly to do this.
• Increase the width.

Note that due to the adapted derivative, there are no initial discontinuities after this.

Parameters:

• amplitude : NumPy array of floats - The amplitude of the jump.
• time : float - The time at which the jump shall happen. Usually this would be the last time to which you integrated.
• width : float - The size of the jump interval (see above). The smaller you choose this, the sharper the jump, but the more likely are numerical problems in its wake. This should be smaller than all delays in the system.
• forward : boolean - Whether the jump interval should begin after time. Otherwise it will end at time (equivalent to a forward jump starting at time−width).

Returns:

• minima : NumPy array of floats
• maxima : NumPy array of floats - The minima or maxima, respectively, of each component during the jump interval. See above on why you may want these.
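The overshooting of the Hermite interpolant during the jump interval can be reproduced in isolation: interpolating between two anchors whose slopes are large compared to amplitude/width yields extrema outside the range spanned by the anchors. A stand-alone sketch with arbitrary example numbers:

```python
import numpy as np

def hermite(s, y0, y1, m0, m1):
    # cubic Hermite interpolation on [0,1]; the slopes m0, m1 are
    # assumed to be already scaled by the interval width
    h00 = 2*s**3 - 3*s**2 + 1
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return h00*y0 + h10*m0 + h01*y1 + h11*m1

# a jump from 0 to 1 where the slope after the jump is steep:
s = np.linspace(0, 1, 1001)
values = hermite(s, y0=0.0, y1=1.0, m0=0.0, m1=10.0)
# the interpolant undershoots noticeably below both anchor values
print(values.min())
```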
check(self, fail_fast=True)

Performs a series of checks that may not be feasible at runtime (usually due to their length). Whenever you run into an error that you cannot make sense of, try running this. It checks for several common mistakes in the definition of the derivative.

Parameters:

• fail_fast : boolean - Whether to abort on the first failure. If False, an error is raised only after all problems are printed.
render_and_write_code(self, expressions, name, chunk_size=100, arguments=(), omp=True)

Writes expressions to code.

Parameters:

• expressions : iterator - Expressions to be written.
• name : string - Unique name of what is computed.
• chunk_size : integer - Size of the chunks. If smaller than 1, no chunking happens.
• arguments : list of tuples - Each tuple contains the name, type, and (optionally, for arrays) size of an argument needed by the code. This is so the arguments can be passed to chunked functions.
• omp : boolean - Whether OpenMP pragmas should be included.
save_compiled(self, destination='', overwrite=False)

saves the module file with the compiled functions for later use (see the module_location argument). If no compiled derivative exists, it tries to compile it first using compile_C. In most circumstances, you should not rename this file, as the filename is needed to determine the module name.

Parameters:

• destination : string specifying a path - If this specifies only a directory (don’t forget the trailing / or similar), the module will be saved to that directory. If empty (default), the module will be saved to the current working directory. Otherwise, the functions will be (re)compiled to match that filename. A file ending will be appended if needed.
• overwrite : boolean - Whether to overwrite the specified target if it already exists.

Returns:

• filename : string - The destination that was actually used.

### Lyapunov exponents¶

class jitcdde_lyap(f_sym=(), n_lyap=1, simplify=None, **kwargs)

Calculates the Lyapunov exponents of the dynamics (see the documentation for more details). The handling is the same as that for jitcdde except for:

Parameters:

• n_lyap : integer - Number of Lyapunov exponents to calculate.
• simplify : boolean - Whether the differential equations for the separation function shall be simplified (with ratio=1.0). Doing so may speed up the time evolution but may slow down the generation of the code (considerably for large differential equations). If None, this will be automatically disabled for n>10.
integrate(self, target_time)

Like jitcdde’s integrate, except for orthonormalising the separation functions and:

Returns:

• y : one-dimensional NumPy array - The state of the system. Same as the output of jitcdde’s integrate.
• lyaps : one-dimensional NumPy array - The “local” Lyapunov exponents as estimated from the growth or shrinking of the separation functions during the integration time of this very integrate command.
• integration time : float - The actual integration time to which the local Lyapunov exponents apply. Note that this is not necessarily the difference between target_time and the previous target_time, as JiTCDDE usually integrates a bit ahead and estimates the output via interpolation. When averaging the Lyapunov exponents, you almost always want to weigh them with the integration time. If the size of the advance by integrate (the sampling step) is smaller than the actual integration step, it may also happen that integrate does not integrate at all and the integration time is zero. In this case, the local Lyapunov exponents are returned as 0, which is as nonsensical as any other result (except perhaps nan) but should not matter with a proper weighted averaging. It is essential that you choose target_time properly such that orthonormalisation does not happen too rarely. If you want to control the maximum step size, use the parameter max_step of set_integration_parameters instead.
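The weighted averaging recommended above can be done with NumPy’s average: weigh each batch of local Lyapunov exponents with its integration time, so that batches with zero integration time drop out automatically. The collected numbers here are hypothetical:

```python
import numpy as np

# hypothetical results collected from repeated calls of integrate:
# one row of local Lyapunov exponents and one weight per call
local_lyaps = np.array([[0.11, -0.52], [0.09, -0.48], [0.0, 0.0]])
weights     = np.array([ 0.8,          1.2,           0.0       ])

# the zero-weight batch (zero integration time, nonsensical zero
# exponents) does not influence the average
lyaps = np.average(local_lyaps, axis=0, weights=weights)
```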
set_integration_parameters(self, **kwargs)

Sets the parameters for the step-size adaption. Arguments starting with pws (past within step) are only relevant if the delay is shorter than the step size.

Parameters:

• atol : float
• rtol : float - The tolerance of the estimated integration error is determined as $$\texttt{atol} + \texttt{rtol}·|y|$$. The step-size adaption algorithm is the same as for the GSL. For details see its documentation.
• first_step : float - The step-size adaption starts with this value.
• min_step : float - Should the step size have to be adapted below this value, the integration is aborted and UnsuccessfulIntegration is raised.
• max_step : float - The step size will be capped at this value.
• decrease_threshold : float - If the estimated error divided by the tolerance exceeds this, the step size is decreased.
• increase_threshold : float - If the estimated error divided by the tolerance is smaller than this, the step size is increased.
• safety_factor : float - To avoid frequent adaption, all freshly adapted step sizes are multiplied with this factor.
• max_factor : float
• min_factor : float - The maximum and minimum factor by which the step size can be adapted in one adaption step.
• pws_factor : float - Factor for step-size adaptions due to a delay shorter than the time step. If dividing the step size by pws_factor moves the delay out of the time step, this is done. Otherwise, the step size is decreased by this factor if the iterative algorithm does not converge within pws_max_iterations, and increased by this factor if it converges within fewer iterations than pws_factor.
• pws_atol : float
• pws_rtol : float - If the difference between two successive iterations is below the tolerance determined with these parameters, the iterations are considered to have converged.
• pws_max_iterations : integer - The maximum number of iterations before the step size is decreased.
• pws_base_increase_chance : float - If the normal step-size adaption calls for an increase and the step size was adapted due to the past lying within the step, there is at least this chance that the step size is increased.
• pws_fuzzy_increase : boolean - Whether the decision to try to increase the step size shall depend on chance. The upside of this is that it is less likely that the step size gets locked at an unnecessarily low value. The downside is that the integration is no longer deterministic. If False, increase probabilities are added up until they exceed 0.9, in which case an increase happens.
integrate_blindly(self, target_time, step=None)

Like jitcdde’s integrate_blindly, except for orthonormalising the separation functions after each step and the output being analogous to jitcdde_lyap’s integrate.

class jitcdde_transversal_lyap(f_sym=(), groups=(), simplify=None, **kwargs)

Calculates the largest Lyapunov exponent in orthogonal direction to a predefined synchronisation manifold, i.e. the projection of the tangent vector onto that manifold vanishes. In contrast to jitcdde_restricted_lyap, this performs some transformations tailored to this specific application that may strongly reduce the number of differential equations and ensure a dynamics on the synchronisation manifold.

Note that all functions for defining the past differ from their analogues in jitcdde by having the dimensions of the arguments reduced to the number of groups. This means that only one initial value (of the state or derivative) per group of synchronised components has to be provided (in the same order as the groups argument of the constructor).

The handling is the same as that for jitcdde except for:

Parameters:

• groups : iterable of iterables of integers - Each group is an iterable of indices that identify dynamical variables that are synchronised on the synchronisation manifold.
• simplify : boolean - Whether the transformed differential equations shall be subjected to SymEngine’s simplify. Doing so may speed up the time evolution but may slow down the generation of the code (considerably for large differential equations). If None, this will be automatically disabled for n>10.
integrate(self, target_time)

Like jitcdde’s integrate, except for normalising and aligning the separation function and:

Returns:

• y : one-dimensional NumPy array - The state of the system. Only one value per group of synchronised components is returned (in the same order as the groups argument of the constructor).
• lyap : float - The “local” largest transversal Lyapunov exponent as estimated from the growth or shrinking of the separation function during the integration time of this very integrate command.
• integration time : float - The actual integration time to which the local Lyapunov exponent applies. Note that this is not necessarily the difference between target_time and the previous target_time, as JiTCDDE usually integrates a bit ahead and estimates the output via interpolation. When averaging the Lyapunov exponents, you almost always want to weigh them with the integration time. If the size of the advance by integrate (the sampling step) is smaller than the actual integration step, it may also happen that integrate does not integrate at all and the integration time is zero. In this case, the local Lyapunov exponent is returned as 0, which is as nonsensical as any other result (except perhaps nan) but should not matter with a proper weighted averaging. It is essential that you choose target_time properly such that orthonormalisation does not happen too rarely. If you want to control the maximum step size, use the parameter max_step of set_integration_parameters instead.
integrate_blindly(self, target_time, step=None)

Like jitcdde’s integrate_blindly, except for normalising and aligning the separation function after each step and the output being analogous to jitcdde_transversal_lyap’s integrate.

class jitcdde_restricted_lyap(f_sym=(), vectors=(), **kwargs)

Calculates the largest Lyapunov exponent in orthogonal direction to a predefined plane, i.e. the projection of the separation function onto that plane vanishes. See this test for an example of usage. Note that coordinate planes (i.e., planes orthogonal to vectors with only one non-zero component) are handled considerably faster. Consider transforming your differential equation to achieve this.

The handling is the same as that for jitcdde_lyap except for:

Parameters:

• vectors : iterable of pairs of NumPy arrays - A basis of the plane whose projection shall be removed. The first vector in each pair corresponds to the state, the second to the derivative.
set_integration_parameters(self, *args, **kwargs)

Sets the parameters for the step-size adaption. Arguments starting with pws (past within step) are only relevant if the delay is shorter than the step size.

Parameters:

• atol : float
• rtol : float - The tolerance of the estimated integration error is determined as $$\texttt{atol} + \texttt{rtol}·|y|$$. The step-size adaption algorithm is the same as for the GSL. For details see its documentation.
• first_step : float - The step-size adaption starts with this value.
• min_step : float - Should the step size have to be adapted below this value, the integration is aborted and UnsuccessfulIntegration is raised.
• max_step : float - The step size will be capped at this value.
• decrease_threshold : float - If the estimated error divided by the tolerance exceeds this, the step size is decreased.
• increase_threshold : float - If the estimated error divided by the tolerance is smaller than this, the step size is increased.
• safety_factor : float - To avoid frequent adaption, all freshly adapted step sizes are multiplied with this factor.
• max_factor : float
• min_factor : float - The maximum and minimum factor by which the step size can be adapted in one adaption step.
• pws_factor : float - Factor for step-size adaptions due to a delay shorter than the time step. If dividing the step size by pws_factor moves the delay out of the time step, this is done. Otherwise, the step size is decreased by this factor if the iterative algorithm does not converge within pws_max_iterations, and increased by this factor if it converges within fewer iterations than pws_factor.
• pws_atol : float
• pws_rtol : float - If the difference between two successive iterations is below the tolerance determined with these parameters, the iterations are considered to have converged.
• pws_max_iterations : integer - The maximum number of iterations before the step size is decreased.
• pws_base_increase_chance : float - If the normal step-size adaption calls for an increase and the step size was adapted due to the past lying within the step, there is at least this chance that the step size is increased.
• pws_fuzzy_increase : boolean - Whether the decision to try to increase the step size shall depend on chance. The upside of this is that it is less likely that the step size gets locked at an unnecessarily low value. The downside is that the integration is no longer deterministic. If False, increase probabilities are added up until they exceed 0.9, in which case an increase happens.
integrate(self, target_time)

Like jitcdde’s integrate, except for normalising and aligning the separation function and:

Returns:

• y : one-dimensional NumPy array - The state of the system. Same as the output of jitcdde’s integrate.
• lyap : float - The “local” largest transversal Lyapunov exponent as estimated from the growth or shrinking of the separation function during the integration time of this very integrate command.
• integration time : float - The actual integration time to which the local Lyapunov exponent applies. Note that this is not necessarily the difference between target_time and the previous target_time, as JiTCDDE usually integrates a bit ahead and estimates the output via interpolation. When averaging the Lyapunov exponents, you almost always want to weigh them with the integration time. If the size of the advance by integrate (the sampling step) is smaller than the actual integration step, it may also happen that integrate does not integrate at all and the integration time is zero. In this case, the local Lyapunov exponent is returned as 0, which is as nonsensical as any other result (except perhaps nan) but should not matter with a proper weighted averaging. It is essential that you choose target_time properly such that orthonormalisation does not happen too rarely. If you want to control the maximum step size, use the parameter max_step of set_integration_parameters instead.
integrate_blindly(self, target_time, step=None)

Like jitcdde’s integrate_blindly, except for normalising and aligning the separation function after each step and the output being analogous to jitcdde_restricted_lyap’s integrate.