The JiTCDDE module¶
Note that some relevant information can be found in the common JiTC*DE documentation. This includes installation instructions, compiler issues and optimisation, general performance considerations, how to implement network dynamics and conditional statements, and a small FAQ.
Introduction¶
JiTCDDE (just-in-time compilation for delay differential equations) is a standalone Python implementation of the DDE integration method proposed by Shampine and Thompson [ST01], which in turn employs the Bogacki–Shampine Runge–Kutta pair [BS89].
JiTCDDE is designed in analogy to JiTCODE (which is handled very similarly to SciPy’s ODE (scipy.integrate.ode)):
It takes an iterable (or generator function or dictionary) of symbolic expressions, translates them to C code, compiles them and an integrator wrapped around them on the fly, and allows you to operate this integrator from Python.
Symbolic expressions are mostly handled by SymEngine, SymPy’s compiled-backend-to-be (see SymPy vs. SymEngine for details).
This approach has the following advantages:
- Speed boost through compilation: Evaluating the derivative and the core operations of the Runge–Kutta integration happen in compiled C code and thus very efficiently.
- Speed boost through symbolic optimisation: If your derivative is automatically generated by some routine, you can simplify it symbolically to boost the speed. In fact, blatant optimisations such as \(y·(x-x)=0\) are done on the fly by SymEngine. This is for example interesting if you want to simulate dynamics on a sparse network, as non-existing links are not taken into account when calculating the derivative. Moreover, multiple delay terms with the same delay can be handled efficiently, requiring only one lookup (see below).
- Automatically calculated Lyapunov exponents: As the derivative is provided symbolically, SymEngine’s automatic differentiation routines can be employed to obtain the DDEs for the tangent vectors, which in turn are required for calculating the Lyapunov exponents (see Calculating Lyapunov exponents with jitcdde_lyap).
- Symbolic interface: You can enter your differential equations almost like you would on paper. Also, if you are working with SymPy or SymEngine anyway – e.g., to calculate fixed points – you do not need to bother much with translating your equations.
If compilation fails to work for whatever reason, pure Python functions can be employed as a fallback (which is much slower, however).
A brief mathematical background¶
This documentation assumes that the delay differential equation (DDE) you want to solve is:

\(\dot{y}(t) = f\big(t, y(t), y(t-τ_1), y(t-τ_2), …\big)\)
The gist of Shampine’s and Thompson’s method [ST01] is this: The differential equation is integrated adaptively with the Bogacki–Shampine pair [BS89], like an ODE. After every successful integration step, the state and derivative of the integration (which is an automatic byproduct) are stored. Whenever the derivative \(f\) is evaluated, the required past states \(\left( y(t-τ_1), y(t-τ_2), … \right)\) are obtained through piecewise cubic Hermite interpolation, using previously stored pairs of state and derivative (“anchors”). In some extreme cases, they may also be extrapolated.
Note that unlike most other DDE software, JiTCDDE stores and accesses the initial past in exactly this way, i.e., as anchor points.
Thus, if you want to have maximum control, you have to initiate the past in exactly this way, i.e., you have to give at least two such anchor points (via add_past_point). If you do not want or need this, there are utility functions available that automatically determine the anchors from a given function (past_from_function) or just set the past to a constant value (constant_past). You can also use the get_state method to obtain a representation of the past that you can dissect and modify using CubicHermiteSpline.
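For illustration, the interpolation between two such anchors can be sketched in plain Python (a minimal stand-alone cubic Hermite interpolation for a scalar state; this is not JiTCDDE’s internal code, and the anchor values are made up):

```python
def hermite_interpolate(anchor_a, anchor_b, t):
    """Cubic Hermite interpolation between two anchors,
    each a tuple (time, state, derivative) for a scalar state."""
    t_a, y_a, d_a = anchor_a
    t_b, y_b, d_b = anchor_b
    h = t_b - t_a
    s = (t - t_a) / h                  # normalised position in [0, 1]
    h00 = 2*s**3 - 3*s**2 + 1          # Hermite basis polynomials
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return h00*y_a + h*h10*d_a + h01*y_b + h*h11*d_b

# the interpolant passes through both anchors:
past = [ (-1.0, 2.0, 0.5), (0.0, 1.0, -0.3) ]
print(hermite_interpolate(*past, t=-1.0))  # → 2.0
print(hermite_interpolate(*past, t= 0.0))  # → 1.0
```

Between the anchors, the interpolant also matches both derivatives, which is exactly the property that makes pairs of state and derivative a complete representation of the past.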
A simple example¶
Suppose we want to integrate the Mackey–Glass delay differential equation

\(\dot{y}(t) = \frac{β·y(t-τ)}{1 + y(t-τ)^n} - γ·y(t)\)

with \(y∈ℝ\), \(τ = 15\), \(β = 0.25\), \(γ = 0.1\), and \(n = 10\). First we do some importing and define the constants:
from jitcdde import jitcdde, y, t
import numpy
τ = 15
n = 10
β = 0.25
γ = 0.1
Amongst our imports were the symbols for the state (y) and time (t), which have to be used to write down the differential equation such that JiTCDDE can process it. Using them, we can write down the right-hand side of the differential equation as a list of expressions. As it is one-dimensional, the list contains only one element.
We can then initiate JiTCDDE:
f = [ β * y(0,t-τ) / (1 + y(0,t-τ)**n) - γ*y(0) ]
DDE = jitcdde(f)
We want the initial condition and past to be \(y(t<0) = 1\). Hence we can use constant_past. This automatically results in the integration starting at \(t=0\).
DDE.constant_past([1.0])
If we calculate the derivative from our initial conditions, we obtain \(f(t=0) = 0.025\), which does not agree with the \(\dot{y}(t=0) = 0\) as explicitly defined in the initial conditions. Practically, this would result in an error if we started integrating without further precautions.
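The claimed value can be checked by hand; the following plain-Python snippet just evaluates \(f\) at \(t=0\) using the constant past:

```python
# Evaluate the Mackey–Glass right-hand side at t = 0 for the constant past y ≡ 1.
τ, n, β, γ = 15, 10, 0.25, 0.1
y0 = 1.0                              # y(t) = 1 for all t ≤ 0
f0 = β * y0 / (1 + y0**n) - γ * y0    # = 0.25/2 − 0.1
print(f0)  # ≈ 0.025, whereas the derivative of the constant past is 0
```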
step_on_discontinuities makes some tailored integration steps to avoid this problem and to allow for the discontinuity to be smoothed out by temporal evolution. (See Dealing with initial discontinuities for alternatives and more details on this.)
DDE.step_on_discontinuities()
Finally, we can perform the actual integration.
In our case, we integrate for 10000 time units with a sampling rate of 10 time units. We query the current time of the integrator (DDE.t) to start wherever step_on_discontinuities ended. integrate returns the state after integration, which we collect in the list data. Finally, we use numpy.savetxt to store this to the file timeseries.dat.
data = []
for time in numpy.arange(DDE.t, DDE.t+10000, 10):
    data.append( DDE.integrate(time) )
numpy.savetxt("timeseries.dat", data)
Taking everything together, our code is:
from jitcdde import jitcdde, y, t
import numpy
τ = 15
n = 10
β = 0.25
γ = 0.1
f = [ β * y(0,t-τ) / (1 + y(0,t-τ)**n) - γ*y(0) ]
DDE = jitcdde(f)
DDE.constant_past([1.0])
DDE.step_on_discontinuities()
data = []
for time in numpy.arange(DDE.t, DDE.t+10000, 10):
    data.append( DDE.integrate(time) )
numpy.savetxt("timeseries.dat", data)
Dealing with initial discontinuities¶
As already exemplified in A simple example, \(\dot{y}\) will usually be discontinuous at the start of the integration: Before that time, it is directly defined by a Hermite interpolation of the anchors that you supply; afterwards, it is determined by evaluating \(f\). As future values of \(f\) depend on the past via the delay terms, it is also non-smooth at other times, namely \(τ_1, τ_2, …, 2τ_1, τ_1 + τ_2, 2τ_2, …\). If an integration step contains one of these points, this may violate the conditions of Runge–Kutta integration (for a low-order discontinuity) and makes the error estimate very high, no matter the step size. Fortunately, the discontinuities are quickly “smoothed out” (i.e., reduced in order) by the time evolution and can then be ignored. To make this happen, you have four options:
- step_on_discontinuities – this chooses the integration steps such that they fall on the discontinuities. In most cases, this is the easiest and fastest solution to this problem.
- adjust_diff – this smoothes out the derivative on a small time interval, effectively causing a dent in the history. The disadvantage of this is that the derivative may assume extreme values, causing problems later on.
- integrate_blindly – this integrates the system for some time with a fixed step size, ignoring the error estimate. You have to take care that all parameters are reasonable. This is a good choice if you have a lot of different delays or time- or state-dependent delays. The time you integrate with this should be larger than your largest delay.
- Carefully chosen initial conditions – of course, you can define the past such that the derivative for the last anchor is identical to the value of \(f\) as determined with the anchors. To find such initial conditions, you usually have to solve a nonlinear equation system. If you are not interested in the general dynamics of the system, but in the evolution of a very particular initial condition, this may be given by default (otherwise your model is probably worthless).
Delays within the step (overlap)¶
If the delay becomes shorter than the step size, we need a delayed state to evaluate f before we have a final result for the required interpolation anchors. In other words, the integration step depends on its own result and thus becomes implicit.
JiTCDDE addresses this problem mainly in the same manner as Shampine and Thompson [ST01]:
- If reducing the step size by a small factor (pws_factor) makes it smaller than the delay, this is done.
- Otherwise, the result of an integration step is calculated iteratively as follows:
  1. Attempt an integration step and extrapolate the required delayed states from the existing results.
  2. Attempt the same step again and interpolate the required delayed states using the result of the previous attempt.
  3. If the results of the last two attempts are identical within an absolute tolerance of pws_atol and a relative tolerance of pws_rtol, accept the result of the last attempt. Otherwise, go to step 2. If no such convergence has happened within pws_max_iterations iterations, reduce the step size by pws_factor.
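The acceptance criterion of this loop can be illustrated on a generic fixed-point problem (a conceptual sketch only, not JiTCDDE’s implementation; the toy map and the parameter values are made up, with pws_atol, pws_rtol, and pws_max_iterations named after the parameters above):

```python
from math import cos

pws_atol, pws_rtol, pws_max_iterations = 1e-10, 1e-5, 30

def iterate_step(attempt_step, initial_guess):
    """Repeat an implicit step until two consecutive attempts agree."""
    old = attempt_step(initial_guess)          # first attempt (extrapolation)
    for _ in range(pws_max_iterations):
        new = attempt_step(old)                # redo the step using the previous result
        if abs(new - old) <= pws_atol + pws_rtol*abs(new):
            return new                         # converged: accept the last attempt
        old = new
    return None                                # no convergence: reduce the step size

# toy implicit equation x = cos(x)/2, standing in for the integration step
result = iterate_step(lambda x: cos(x)/2, 1.0)
```

If None is returned, the real integrator would reduce the step size by pws_factor and try again.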
A problem of this approach is that as soon as it reduces the step size, the error estimates from the adaptive Runge–Kutta routines are not directly useful anymore, since they almost always insist on increasing the step size. Ignoring this may lead to useless integration steps (and thus wasted time) due to the step size being adapted back and forth. Moreover, throttling step-size increases (which is generally reasonable) may result in the step size being “trapped” at an unnecessarily low value. As far as I can tell, Shampine and Thompson [ST01] offer no solution to this.
To address this issue, JiTCDDE employs the following criteria for increasing the step size when the recommended step size (from the adaptive Runge–Kutta method) is larger than the current one:
- If the shortest delay is larger than the recommended step size, the step size is increased.
- If calculating the last step took fewer than pws_factor iterations and the recommended step size is bigger than pws_factor times the shortest delay, the step size is increased.
- In all other cases, the step size is increased with a chance of pws_base_increase_chance.
To be precise, the above sharp criteria are intentionally blurred such that the probability of increasing the step size depends continuously on the mentioned factors. Finally, the parameter pws_fuzzy_increase determines whether the increase actually depends on chance or is deterministic (which may be useful for some applications). This parameter and the others mentioned above can be controlled with set_integration_parameters.
Time- and state-dependent delays¶
There is nothing in JiTCDDE’s implementation that keeps you from making delays time- or state-dependent. However, the error estimate is no longer accurate, as it does not take into account the inaccuracy caused by the changing delay. This should not be a problem if your delays change sufficiently slowly in comparison to your step size.
Note that if you want to make use of this, you must provide the maximum delay manually. See this file for an example.
Multidimensional equations and networks¶
While this particular documentation contains no example for a small, multidimensional equation, their implementation is analogous to JiTCODE. An example for a two-dimensional DDE (the sunflower equation) can also be found in the accompanying paper.
JiTCDDE is specifically designed to be able to handle large delay differential equations, as they arise, e.g., in networks. We give an example for a network of one-dimensional oscillators below. As the caveats, tools, and tricks when doing this are the same as for JiTCODE, its documentation may be helpful to you, in particular the sections:
 Handling very large differential equations
 A more complicated example (featuring multidimensional oscillators)
Example: Kuramoto oscillators with distributed delays¶
Suppose we want to implement a network of \(n=100\) delay-coupled Kuramoto oscillators, governed by the following differential equations:

\(\dot{θ}_i = ω + \frac{c}{n-1} \sum_{j=1}^{n} A_{ji} \sin\big(θ_j(t-τ_{ij}) - θ_i(t)\big)\)

where \(ω=1\), \(c=42\), \(τ_{ij} \sim \mathcal{U}\left(\left[\frac{π}{5},2π\right]\right)\), and \(A\) is the adjacency matrix of a random unweighted directed network with self-connections, where each link exists with a probability \(q=0.05\).
Without further ado, here is the example code; highlighted lines will be commented upon below:
 1  from jitcdde import jitcdde, y, t
 2  from numpy import pi, arange, random, max
 3  from symengine import sin
 4
 5  n = 100
 6  ω = 1
 7  c = 42
 8  q = 0.05
 9  A = random.choice( [1,0], size=(n,n), p=[q,1-q] )
10  τ = random.uniform( pi/5, 2*pi, size=(n,n) )
11
12  def kuramotos():
13      for i in range(n):
14          yield ω + c/(n-1)*sum(
15              sin(y(j,t-τ[i,j])-y(i))
16              for j in range(n)
17              if A[j,i]
18          )
19
20  I = jitcdde(kuramotos,n=n,verbose=False)
21  I.set_integration_parameters(rtol=0,atol=1e-5)
22
23  I.constant_past( random.uniform(0,2*pi,n), time=0.0 )
24  I.integrate_blindly( max(τ) , 0.1 )
25
26  for time in I.t + arange(0,400,0.2):
27      print(*I.integrate(time) % (2*pi))
Explanation of selected features and choices:
- Line 9 is just a quick way to generate the network described above. For more complex networks, you will either have to write a more complex function or use dedicated modules. (In fact, this example was chosen such that the network creation is very simple.)
- The values of \(τ\) are initialised globally (line 10). We should not just define a function here, because if we were trying to calculate Lyapunov exponents or the Jacobian, the generator function would be called multiple times, and thus the values of the parameter would not be consistent (which would be disastrous).
- In line 15, we use symengine.sin – in contrast to math.sin or numpy.sin.
- In line 21, we explicitly use absolute instead of relative errors, as the latter make no sense for Kuramoto oscillators.
- In line 24, we integrate blindly with a maximum time step of 0.1 up to the maximal delay to ensure that initial discontinuities have been smoothed out.
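As a quick sanity check of the network-generation line (a hedged aside, not part of the original example; the seed is arbitrary), the empirical link density of A should be close to \(q\):

```python
import numpy as np

n, q = 100, 0.05
rng = np.random.default_rng(42)                  # seeded for reproducibility
A = rng.choice([1, 0], size=(n, n), p=[q, 1-q])  # same construction as line 9
print(A.mean())  # empirical link density, close to q = 0.05
```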
Calculating Lyapunov exponents with jitcdde_lyap¶
jitcdde_lyap is a simple extension of jitcdde that almost automatically handles calculating Lyapunov exponents by evolving separation functions. It works just like jitcdde, except that it generates and integrates additional differential equations for the separation functions. After every call of integrate, the separation functions are orthonormalised, and the “local” Lyapunov exponents for this integration step are returned alongside the system’s state. These can then be further processed to obtain the Lyapunov exponents. The separation functions are initialised with random data, and you have to take care of the pre-iterations that the separation functions require to align themselves.
The method employed here is similar to Farmer’s [F82], which in turn is an adaptation of the method described by Benettin et al. [BGGS80] to delayed systems. As the state of delayed systems is also defined by their recent past, one has to consider the past of the tangent vectors (as used in Benettin et al.) as well; these are called separation functions. Farmer approximates these separation functions by fine, equidistantly sampled recordings of the past, on which he applies the standard scalar product for purposes of computing norms and orthonormalisation. This approach does not translate well to adaptive step sizes as employed by JiTCDDE. Instead, JiTCDDE employs as a scalar product between two separation functions \(g\) and \(h\):

\(\langle g, h \rangle := \int \mathcal{H}(g)(s) \, \mathcal{H}(h)(s) \,\mathrm{d}s,\)

where \(\mathcal{H}\) denotes the piecewise cubic Hermite interpolant (which is also used for obtaining past states) and the integral runs over the stored past. The matrix induced by this scalar product can largely be calculated beforehand, and thus the scalar product itself can be evaluated efficiently. Note that in the limit of an infinitely fine sampling, this yields the same result as Farmer’s approach.
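The precomputable part is essentially the Gram matrix of the cubic Hermite basis functions on a single interval. The following numerical sketch (illustrative only; not JiTCDDE’s internal code) computes it with a fine midpoint rule:

```python
import numpy as np

# Cubic Hermite basis polynomials on [0, 1]; their pairwise integrals
# form a Gram matrix that can be computed once and reused for every interval.
basis = [
    lambda s: 2*s**3 - 3*s**2 + 1,   # weight of the left state
    lambda s: s**3 - 2*s**2 + s,     # weight of the left derivative
    lambda s: -2*s**3 + 3*s**2,      # weight of the right state
    lambda s: s**3 - s**2,           # weight of the right derivative
]
s = (np.arange(10000) + 0.5) / 10000   # quadrature midpoints on [0, 1]
G = np.array([[np.mean(bi(s)*bj(s)) for bj in basis] for bi in basis])
print(np.allclose(G, G.T))  # True: the Gram matrix is symmetric
```

For example, the first diagonal entry approximates \(\int_0^1 (2s^3-3s^2+1)^2 \,\mathrm{d}s = 13/35\).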
For instance, we can calculate and print the Lyapunov exponents for the Mackey–Glass system as follows (central changes to the example from A simple example highlighted):
 1  from jitcdde import jitcdde_lyap, y, t
 2  import numpy
 3  from scipy.stats import sem
 4
 5  τ = 15
 6  n = 10
 7  β = 0.25
 8  γ = 0.1
 9
10  f = [ β * y(0,t-τ) / (1 + y(0,t-τ)**n) - γ*y(0) ]
11
12  n_lyap = 4
13  DDE = jitcdde_lyap(f, n_lyap=n_lyap)
14
15  DDE.constant_past([1.0])
16  DDE.step_on_discontinuities()
17
18  data = []
19  lyaps = []
20  weights = []
21  for time in numpy.arange(DDE.t, DDE.t+10000, 10):
22      state, lyap, weight = DDE.integrate(time)
23      data.append(state)
24      lyaps.append(lyap)
25      weights.append(weight)
26
27  numpy.savetxt("timeseries.dat", data)
28
29  lyaps = numpy.vstack(lyaps)
30  for i in range(n_lyap):
31      Lyap = numpy.average(lyaps[400:,i], weights=weights[400:])
32      stderr = sem(lyaps[400:,i]) # Note that this is only an estimate
33      print("%i. Lyapunov exponent: % .4f +/- %.4f" % (i+1,Lyap,stderr))
Note that integrate does not only return local Lyapunov exponents, but also the length of the time interval to which they apply (which differs from the time spanned by the integrate command and may even be zero). This length should be used to weigh the local Lyapunov exponents for statistical processing, as in line 31.
Be aware that the estimation of Lyapunov exponents may be numerically considerably more sensitive than a plain integration. Before you give up or report an issue, please follow this protocol:
1. Run your scenario with regular jitcdde (i.e., without Lyapunov exponents).
2. Run your scenario just estimating the largest Lyapunov exponent (n_lyap=1).
3. If the integration is not successful, try to locate the point where things go awry. This may be before you actually get an error message. Look for infinite values or not-a-numbers. Reduce the respective parameters that control the frequency of rescaling, namely the max_step parameter of step_on_discontinuities, the step parameter of integrate_blindly, or the sampling rate of the regular integration.
4. Increase the number of computed Lyapunov exponents one at a time. Repeat step 3 as needed. Be aware that once you are in the negative Lyapunov exponents, it may happen that the amplitude of the next exponent is orders of magnitude higher than that of the preceding one – thus requiring much more frequent rescaling to avoid numerical underflows.
As the Lyapunov vectors (separation functions) are quite difficult to interpret, they are not returned as of now (if you need them, please make a feature request).
There are also two classes (jitcdde_transversal_lyap and jitcdde_restricted_lyap) that allow calculating the largest Lyapunov exponents transversal to the synchronisation manifold and to arbitrary hyperplanes, respectively. See the JiTCODE documentation for an example of how to use the former, and the accompanying paper for the mathematical background (and another example).
More Features and Examples¶
JiTCDDE has several more features for which there are no extensively documented examples, but that are pretty self-explanatory. The following is a list of example scripts that may help you with specific problems:
- Laminar Chaos and State Dependent are examples employing state-dependent delays.
- Mackey–Glass with Jumps shows how to use the jump method.
- Simple Neutral and Neutral show how to implement neutral DDEs. The latter additionally shows how to optimise a DDE with many redundant delay requests, making it considerably faster.
- If you want to have input that cannot be expressed in a simple function, you can use jitcdde_input or use a callback (see the next point). This example demonstrates how to use this.
- If you want to call a Python function within the derivative, use the callback_functions argument. This example demonstrates how to use this.
- If you want to do some sort of event detection, the best way is to use get_state and then use the features of CHSPy to determine the time of events as precisely as you need. As the interpolant has the same error as the integration, you won’t gain a much better event location by integrating again with a finer step size. Particularly note the solve, extrema, and truncate methods of CubicHermiteSpline.
Command reference¶

t¶ the symbol for time for defining the differential equation. You may just as well define an analogous symbol directly with SymEngine or SymPy, but using this symbol is the best way to get the most out of future versions of JiTCDDE, in particular avoiding incompatibilities. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).

y(index, time=t)¶ the function representing the DDE’s past and present states, used for defining the differential equation. The first integer argument denotes the component. The second, optional argument is a symbolic expression denoting the time; it defaults to the present time t. This automatically expands to using current_y, past_y, and anchors; so do not be surprised when you look at the output and it is different from what you entered or expected. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).

dy(index, time)¶ the function representing the DDE’s past derivative, used for defining the differential equation. The first integer argument denotes the component. The second argument denotes the time. If you use this, you get a neutral DDE, which may make addressing initial discontinuities more difficult. Do not use this to get the current derivative; instead, compute it using your dynamical equations. This will not work with tangential Lyapunov exponents.
This feature is experimental.
This automatically expands to using current_y, past_y, and anchors; so do not be surprised when you look at the output and it is different from what you entered or expected. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).

current_y¶ the symbol for the current state for defining the differential equation. It is a function, and the integer argument denotes the component. This is only needed for specific optimisations of large DDEs; in all other cases, use y instead. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).

past_y¶ the symbol for the DDE’s past state for defining the differential equation. It is a function with the first integer argument denoting the component and the second argument being a pair of anchors (as returned by anchors), from which the past state is interpolated (or, in rare cases, extrapolated). This is only needed for specific optimisations of large DDEs; in all other cases, use y instead. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).

past_dy¶ the symbol for the DDE’s past derivative for defining the differential equation. It is a function with the first integer argument denoting the component and the second argument being a pair of anchors (as returned by anchors), from which the past derivative is interpolated (or, in rare cases, extrapolated). This is only needed for specific optimisations of large DDEs; in all other cases, use dy instead. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).

anchors¶ the symbol representing two anchors for defining the differential equation. It is a function, and the float argument denotes the time point to which the anchors pertain. This is only needed for specific optimisations of large DDEs; in all other cases, use y instead. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).

quadrature(integrand, variable, lower, upper, nsteps=20, method='gauss')¶ If your DDE contains an integral over past points, this utility function helps you to implement it. It returns an estimator of the integral from evaluations of the past at discrete points, employing a numerical quadrature. You probably want to disable automatic simplifications when using this.
Parameters:
- integrand : symbolic expression
- variable : symbol – variable of integration
- lower, upper : expressions – lower and upper limits of integration
- nsteps : integer – number of sampling steps. This should be chosen sufficiently high to capture all relevant aspects of your dynamics.
- method : "midpoint" or "gauss" – which method to use for the numerical integration. So far, Gauß–Legendre quadrature ("gauss"; needs SymPy) and the midpoint rule ("midpoint") are available. Use the midpoint rule if you expect your integrand to exhibit structure on a time scale smaller than (upper−lower)/nsteps. Otherwise, or when in doubt, use "gauss".
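For intuition, the midpoint rule underlying the "midpoint" option can be sketched as a plain-Python helper (a hypothetical stand-alone function, not the library’s implementation, which operates on symbolic expressions):

```python
def midpoint_quadrature(integrand, lower, upper, nsteps=20):
    """Approximate the integral of `integrand` over [lower, upper]
    by sampling it at nsteps equidistant midpoints."""
    width = (upper - lower) / nsteps
    midpoints = (lower + (i + 0.5)*width for i in range(nsteps))
    return width * sum(integrand(s) for s in midpoints)

# the midpoint rule is exact for linear integrands: ∫₀¹ s ds = 1/2
estimate = midpoint_quadrature(lambda s: s, 0.0, 1.0)
```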

exception UnsuccessfulIntegration¶ This exception is raised when the integrator cannot meet the accuracy and step-size requirements. If you want to know the exact state of your system before the integration fails (or similar), catch this exception.

test(omp=True, sympy=True)¶ Runs a quick simulation to test whether:
- a compiler is available and can be interfaced by Setuptools,
- OMP libraries are available and can be accessed,
- SymPy is available.
The latter two tests can be deactivated with the respective argument. This is not a full software test, but rather a quick sanity check of your installation. If successful, this function just finishes without any message.
The main class¶

class jitcdde(f_sym=(), *, helpers=None, n=None, delays=None, max_delay=None, control_pars=(), callback_functions=(), verbose=True, module_location=None)¶
Parameters:
- f_sym : iterable of symbolic expressions, generator function yielding symbolic expressions, or dictionary – If an iterable or generator function, the i-th element is the i-th component of the value of the DDE’s derivative \(f(t,y)\). If a dictionary, it has to map the dynamical variables to their derivatives, and the dynamical variables must be y(0), y(1), ….
- helpers : list of length-two iterables, each containing a symbol and an expression – Each helper is a variable that will be calculated before evaluating the derivative and can be used in the latter’s computation. The first component of the tuple is the helper’s symbol as referenced in the derivative or other helpers; the second component describes how to compute it from t, y, and other helpers. This is for example useful to realise a mean-field coupling, where the helper could look like (mean, sum(y(i) for i in range(100))/100). (See the JiTCODE documentation for an example.)
- n : integer – Length of f_sym. While JiTCDDE can easily determine this itself (and will, if necessary), this may take some time if f_sym is a generator function and n is large. Take care that this value is correct – if it isn’t, you will not get a helpful error message.
- delays : iterable of expressions or floats – The delays of the dynamics. If not given, JiTCDDE will determine these itself if needed. However, this may take some time if f_sym is large. Take care that these are correct – if they aren’t, you won’t get a helpful error message.
- max_delay : number – Maximum delay. In case of constant delays and if not given, JiTCDDE will determine this itself. However, this may take some time if f_sym is large and delays is not given. Take care that this value is not too small – if it is, you will not get a helpful error message. If this value is too large, you may run into memory issues for long integration times, and calculating Lyapunov exponents (with jitcdde_lyap) may take forever.
- control_pars : list of symbols – Each symbol corresponds to a control parameter that can be used when defining the equations and set after compilation with set_parameters. Using this makes sense if you need to do a parameter scan with short integrations for each parameter and you are spending a considerable amount of time compiling.
- callback_functions : iterable – Python functions that should be called at integration time (callback) when evaluating the derivative. Each element of the iterable represents one callback function as a tuple containing (in that order):
  - A SymEngine function object used in f_sym to represent the function call. If you want to use any JiTCDDE features that need the derivative, this must have a properly defined f_diff method, with the derivative being another callback function (or constant).
  - The Python function to be called. This function will receive the state array (y) as the first argument. All further arguments are whatever you use as arguments of the SymEngine function in f_sym. These can be any expressions that you might use in the definition of the derivative and contain, e.g., dynamical variables (current or delayed), time, control parameters, and helpers. The only restriction is that the arguments are floats (and not vectors, anchors, or similar). The return value must also be a float (or something castable to a float). It is your responsibility to ensure that this function adheres to these criteria, is deterministic, and is sufficiently smooth with respect to its arguments; expect nasty errors otherwise.
  - The number of arguments, excluding the state array as the mandatory first argument. This means that if you have a variadic Python function, you cannot just call it with different numbers of arguments in f_sym; you have to define a separate callback for each number of arguments.
  See this example for how to use this.
- verbose : boolean – Whether JiTCDDE shall give progress reports on the processing steps.
- module_location : string – Location of a module file from which functions are to be loaded (see save_compiled). If you use this, you need not give f_sym as an argument, but then you must give n and max_delay. Also note that the integrator may lack some functionalities, depending on the arguments you provide.

add_past_point(time, state, derivative)¶ adds an anchor from which the initial past of the DDE is interpolated.
Parameters:
- time : float – the temporal position of the anchor.
- state : iterable of floats – the state of the anchor. The dimension of the array must match the dimension of the differential equation (n).
- derivative : iterable of floats – the derivative at the anchor. The dimension of the array must match the dimension of the differential equation (n).

add_past_points(anchors)¶ adds multiple anchors from which the past of the DDE is interpolated.
Parameters:
- anchors : iterable of tuples – Each tuple must have components corresponding to the arguments of add_past_point.

constant_past(state, time=0)¶ initialises the past with a constant state.
Parameters:
- state : iterable of floats – The length must match the dimension of the differential equation (n).
- time : float – The time at which the integration starts.

past_from_function(function, times_of_interest=None, max_anchors=100, tol=5)¶ automatically determines anchors describing the past of the DDE from a given function, i.e., a piecewise cubic Hermite interpolation of the function at automatically selected time points will be the initial past. As this process involves heuristics, it is not perfect. For better control of the initial conditions, use add_past_point.
Parameters:
- function : callable or iterable of symbolic expressions – If callable, this takes the time as an argument and returns an iterable of floats that is the initial state of the past at that time. If an iterable of expressions, each expression represents how the initial past of the respective component depends on t (requires SymPy). In both cases, the length of the iterable must match the dimension of the differential equation (n).
- times_of_interest : iterable of numbers or None – Initial set of time points considered for the interpolation. The highest value will be the starting point of the integration. Further interpolation anchors may be added in between the given anchors depending on heuristic criteria. If None, these will be automatically chosen depending on the maximum delay, and the integration will start at \(t=0\).
- max_anchors : positive integer – The maximum number of anchors that this routine will create (including those for the times_of_interest).
- tol : integer – This is a parameter for the heuristics, more precisely the number of digits considered for precision in several places. The higher this value, the more likely the heuristic is to add anchors.

get_state()¶
Returns an object that represents all anchors currently used by the integrator, which completely define its current state. The object is a CubicHermiteSpline instance (with a few special extensions for JiTCDDE), which allows you to extract all sorts of information from it if you want.
The format can also be used as an argument for add_past_points. This is useful, for example, when you want to switch between plain integration and one that also obtains Lyapunov exponents.
If you reinitialise the integrator after calling this, this past will be used.

purge_past()¶
Clears the past and resets the integrator. You need to define a new past (using add_past_point) after this.

reset_integrator(idc_unhandled=False)¶
Resets the integrator, forgetting all integration progress and forcing reinitialisation when it is next needed.

generate_lambdas(simplify=None)¶
Explicitly initiates a purely Python-based integrator.

compile_C(simplify=None, do_cse=False, chunk_size=100, extra_compile_args=None, extra_link_args=None, verbose=False, modulename=None, omp=False)¶
Translates the derivative to C code using SymEngine’s C-code printer. For detailed information on many of the arguments and other ways to tweak the compilation, read these notes.

Parameters:
  simplify : boolean
    Whether the derivative should be simplified (with ratio=1.0) before translating to C code. The main reason to disable this is if your derivative is already optimised and so large that simplifying takes a considerable amount of time. If None, this will be automatically disabled for n>10.
  do_cse : boolean
    Whether SymPy’s common-subexpression detection should be applied before translating to C code. This is worthwhile if your DDE contains the same delay more than once. Otherwise, it is almost always better to let the compiler do this (unless you want to set the compiler optimisation to -O2 or lower). As this requires all entries of f at once, it may void advantages gained from using generator functions as an input. Also, this feature uses SymPy, not SymEngine.
  chunk_size : integer
    If the number of instructions in the final C code exceeds this number, it will be split into chunks of this size. See Handling very large differential equations on why this is useful and how to best choose this value. If smaller than 1, no chunking will happen.
  extra_compile_args : iterable of strings
  extra_link_args : iterable of strings
    Arguments to be handed to the C compiler or linker, respectively.
  verbose : boolean
    Whether the compiler commands shall be shown. This is the same as Setuptools’ verbose setting.
  modulename : string or None
    The name used for the compiled module.
  omp : pair of iterables of strings or boolean
    The compiler arguments to be used for multiprocessing (using OpenMP). If True, they will be selected automatically. If empty or False, no compilation for multiprocessing will happen (unless you supply the relevant compiler arguments otherwise).

set_parameters(*parameters)¶
Sets the control parameters defined by the control_pars argument of jitcdde. Note that you probably want to use purge_past and address initial discontinuities every time after you do this.

Parameters:
  parameters : floats
    Values of the control parameters. You can also use a single iterable containing these. Either way, the order must be the same as in the control_pars argument of jitcdde.

set_integration_parameters(atol=1e-10, rtol=1e-05, first_step=1.0, min_step=1e-10, max_step=10.0, decrease_threshold=1.1, increase_threshold=0.5, safety_factor=0.9, max_factor=5.0, min_factor=0.2, pws_factor=3, pws_atol=0.0, pws_rtol=1e-05, pws_max_iterations=10, pws_base_increase_chance=0.1, pws_fuzzy_increase=False)¶
Sets the parameters for the step-size adaption. Arguments starting with pws (past within step) are only relevant if the delay is shorter than the step size.

Parameters:
  atol : float
  rtol : float
    The tolerance of the estimated integration error is determined as \(\texttt{atol} + \texttt{rtol}·|y|\). The step-size adaption algorithm is the same as for the GSL; for details see its documentation.
  first_step : float
    The step-size adaption starts with this value.
  min_step : float
    Should the step size have to be adapted below this value, the integration is aborted and UnsuccessfulIntegration is raised.
  max_step : float
    The step size will be capped at this value.
  decrease_threshold : float
    If the estimated error divided by the tolerance exceeds this, the step size is decreased.
  increase_threshold : float
    If the estimated error divided by the tolerance is smaller than this, the step size is increased.
  safety_factor : float
    To avoid frequent adaptions, all freshly adapted step sizes are multiplied by this factor.
  max_factor : float
  min_factor : float
    The maximum and minimum factor by which the step size can be adapted in one adaption step.
  pws_factor : float
    Factor for step-size adaptions due to a delay shorter than the time step. If dividing the step size by pws_factor moves the delay out of the time step, this is done. Otherwise, if the iterative algorithm does not converge within pws_max_iterations, or if it converges within fewer iterations than pws_factor, the step size is decreased or increased, respectively, by this factor.
  pws_atol : float
  pws_rtol : float
    If the difference between two successive iterations is below the tolerance determined by these parameters, the iterations are considered to have converged.
  pws_max_iterations : integer
    The maximum number of iterations before the step size is decreased.
  pws_base_increase_chance : float
    If the normal step-size adaption calls for an increase and the step size was adapted due to the past lying within the step, there is at least this chance that the step size is increased.
  pws_fuzzy_increase : boolean
    Whether the decision to try to increase the step size shall depend on chance. The upside of this is that the step size is less likely to get locked at an unnecessarily low value. The downside is that the integration is no longer deterministic. If False, increase probabilities are added up until they exceed 0.9, in which case an increase happens.
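The interplay of these thresholds and factors can be sketched as follows. This is a rough illustration of GSL-style step-size control, not JiTCDDE’s actual implementation; the error exponent 1/3 assumes a third-order method:

```python
def adapt_step_size(
        dt, error_ratio,
        decrease_threshold=1.1, increase_threshold=0.5,
        safety_factor=0.9, max_factor=5.0, min_factor=0.2,
        max_step=10.0, min_step=1e-10,
    ):
    """Adapt the step size dt, given error_ratio = estimated error/tolerance.
    Illustrative sketch only."""
    if error_ratio > decrease_threshold:
        # error too large: shrink the step, but at most by min_factor
        factor = max(min_factor, safety_factor*error_ratio**(-1/3))
        dt *= factor
        if dt < min_step:
            raise RuntimeError("step size fell below min_step")
    elif 0 < error_ratio < increase_threshold:
        # error comfortably small: grow the step, but at most by max_factor
        factor = min(max_factor, safety_factor*error_ratio**(-1/3))
        dt = min(max_step, dt*factor)
    return dt
```

Note how safety_factor damps both directions of the adaption and how max_step caps the result even when the error would allow a much larger step.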

t¶
Returns:
  time : float
    The current time of the integrator.

integrate(target_time)¶
Tries to evolve the dynamics.

Parameters:
  target_time : float
    Time until which the dynamics is evolved.
Returns:
  state : NumPy array
    The computed state of the system at target_time.

adjust_diff(shift_ratio=0.0001)¶
Performs a zero-amplitude (backwards) jump whose width is shift_ratio times the distance to the previous anchor into the past. See the documentation of jump for the caveats of this, and see Dealing with initial discontinuities for more information on why you almost certainly need to use this or an alternative way to address initial discontinuities.

Returns:
  minima : NumPy array of floats
  maxima : NumPy array of floats
    The minima or maxima, respectively, of each component during the jump interval. See the documentation of jump on why you may want these.

integrate_blindly(target_time, step=None)¶
Evolves the dynamics with a fixed step size, ignoring any accuracy concerns. If a delay is smaller than the time step, the state is extrapolated from the previous step. See Dealing with initial discontinuities for more information on why you almost certainly need to use this or an alternative way to address initial discontinuities.
If the target time equals the current time, adjust_diff is called automatically.

Parameters:
  target_time : float
    Time until which the dynamics is evolved. In most cases, this should be larger than the maximum delay.
  step : float
    Aspired step size. The actual step size may be slightly adapted to make it divide the integration time. If None, 0, or otherwise falsy, the maximum step size as set with max_step of set_integration_parameters is used.
Returns:
  state : NumPy array
    The computed state of the system at target_time.
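The adaption of the aspired step size to evenly divide the integration time can be illustrated like this (a sketch of the idea, not the actual implementation):

```python
import math

def blind_steps(current_time, target_time, aspired_step):
    """Return the number of steps and the actual step size for
    integrate_blindly-style stepping: the aspired step is shrunk
    just enough to divide the time span evenly. Sketch only."""
    span = target_time - current_time
    number = max(1, math.ceil(span/aspired_step))
    return number, span/number
```

For example, covering 10 time units with an aspired step of 3 results in four steps of size 2.5.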

step_on_discontinuities(*, propagations=1, min_distance=1e-05, max_step=None)¶
Assumes that the derivative is discontinuous at the start of the integration and chooses steps such that propagations of this point via the delays always fall on integration steps (or very close to them). Between these, adaptive steps are taken if necessary to keep the error within the bounds set by set_integration_parameters. Once the discontinuity has been propagated sufficiently often, it is considered smoothed and the integration is stopped. See Dealing with initial discontinuities for more information on why you almost certainly need to use this or an alternative way to address initial discontinuities.
This only makes sense if you just defined the past (via add_past_point) and are starting to integrate, just reset the integrator, or changed control parameters.
In case of an ODE, adjust_diff is used automatically.

Parameters:
  propagations : integer
    How often the discontinuity has to propagate before it is considered smoothed.
  min_distance : float
    If two required steps are closer than this, they will be treated as one.
  max_step : float
    Retired parameter. Steps are now automatically adapted.
Returns:
  state : NumPy array
    The computed state of the system after the integration.
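The step points targeted by this method are the propagations of the initial discontinuity via the delays. For a discontinuity at \(t=0\) they can be sketched like this (illustration only; discontinuity_times is a hypothetical helper, not part of JiTCDDE’s API):

```python
import itertools

def discontinuity_times(delays, propagations):
    """Times at which a discontinuity at t=0 reappears after being
    propagated via the given delays, up to `propagations` times."""
    times = set()
    for k in range(1, propagations + 1):
        # every sum of up to k delays is a time where the
        # discontinuity resurfaces in the derivative
        for combo in itertools.combinations_with_replacement(delays, k):
            times.add(sum(combo))
    return sorted(times)
```

For delays 1.0 and 1.5 and two propagations, this yields steps at 1.0, 1.5, 2.0, 2.5, and 3.0.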

jump(amplitude, time, width=1e-05, forward=True)¶
Applies a jump to the state. Since this naturally introduces a discontinuity to the state, it can only be approximated. This is done by adding two anchors in a short temporal distance width. In other words, the jump is not instantaneous but a strong change of the state over a small interval of length width. The slope after the jump is computed using the derivative f.
A potential source of numerical issues is that the Hermite interpolant becomes extreme during the jump (due to Runge’s phenomenon). Whether this is a problem primarily depends on how these extrema influence delayed dependencies of your derivative. To allow you to estimate the magnitude of this, this function returns the extrema during the jump interval. There are two ways to address this:
  - Integrate with steps such that past values from the jump interval are avoided. You can use integrate_blindly to do this.
  - Increase the width.
Note that due to the adapted derivative, there are no initial discontinuities after this.

Parameters:
  amplitude : NumPy array of floats
    The amplitude of the jump.
  time : float
    The time at which the jump shall happen. Usually this would be the last time to which you integrated.
  width : float
    The size of the jump interval (see above). The smaller you choose this, the sharper the jump, but the more likely are numerical problems in its wake. This should be smaller than all delays in the system.
  forward : boolean
    Whether the jump interval should begin after time. Otherwise it will end at time (equivalent to a forward jump starting at time−width).
Returns:
  minima : NumPy array of floats
  maxima : NumPy array of floats
    The minima or maxima, respectively, of each component during the jump interval. See above on why you may want these.
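The returned extrema matter because a cubic Hermite interpolant bridging a narrow jump can swing well beyond the two states it connects. A self-contained illustration with made-up numbers (independent of JiTCDDE):

```python
def hermite(t_a, y_a, dy_a, t_b, y_b, dy_b, time):
    # cubic Hermite interpolant between two (time, value, derivative) anchors
    h = t_b - t_a
    s = (time - t_a)/h
    return ((2*s**3 - 3*s**2 + 1)*y_a + (s**3 - 2*s**2 + s)*h*dy_a
            + (-2*s**3 + 3*s**2)*y_b + (s**3 - s**2)*h*dy_b)

width = 1e-2
# a jump from 0 to 1 over `width`, with a steep slope of 1000 on both sides:
samples = [hermite(0.0, 0.0, 1000.0, width, 1.0, 1000.0, i*width/100)
           for i in range(101)]
# within the jump interval, the interpolant overshoots above 1
# and undershoots below 0 — exactly what the returned extrema report:
extrema = (min(samples), max(samples))
```

Whether such excursions are harmless depends on whether delayed terms of your derivative ever sample the jump interval.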

check(fail_fast=True)¶
Performs a series of checks that may not be feasible at runtime (usually due to their length). Whenever you run into an error that you cannot make sense of, try running this. It checks for the following mistakes:
  - negative arguments of y
  - arguments of y that are higher than the system’s dimension n
  - unused variables

Parameters:
  fail_fast : boolean
    Whether to abort on the first failure. If false, an error is raised only after all problems are printed.

render_and_write_code(expressions, name, chunk_size=100, arguments=(), omp=True)¶
Writes expressions to code.

Parameters:
  expressions : iterator
    The expressions to be written.
  name : string
    A unique name for what is computed.
  chunk_size : integer
    The size of chunks. If smaller than 1, no chunking happens.
  arguments : list of tuples
    Each tuple contains the name, type, and size (optional, for arrays) of an argument needed by the code. This is so the arguments can be passed to chunked functions.
  omp : boolean
    Whether OMP pragmas should be included.

save_compiled(destination='', overwrite=False)¶
Saves the module file with the compiled functions for later use (see the module_location argument). If no compiled derivative exists, it tries to compile it first using compile_C. In most circumstances, you should not rename this file, as the filename is needed to determine the module name.

Parameters:
  destination : string specifying a path
    If this specifies only a directory (do not forget the trailing / or similar), the module will be saved to that directory. If empty (the default), the module will be saved to the current working directory. Otherwise, the functions will be (re)compiled to match that filename. A file ending will be appended if needed.
  overwrite : boolean
    Whether to overwrite the specified target if it already exists.
Returns:
  filename : string
    The destination that was actually used.
Lyapunov exponents¶

class jitcdde_lyap(f_sym=(), n_lyap=1, simplify=None, **kwargs)¶
Calculates the Lyapunov exponents of the dynamics (see the documentation for more details). The handling is the same as that for jitcdde except for:

Parameters:
  n_lyap : integer
    The number of Lyapunov exponents to calculate.
  simplify : boolean
    Whether the differential equations for the separation function shall be simplified (with ratio=1.0). Doing so may speed up the time evolution but may slow down the generation of the code (considerably for large differential equations). If None, this will be automatically disabled for n>10.

integrate(target_time)¶
Like jitcdde’s integrate, except for orthonormalising the separation functions and:

Returns:
  y : one-dimensional NumPy array
    The state of the system. Same as the output of jitcdde’s integrate.
  lyaps : one-dimensional NumPy array
    The “local” Lyapunov exponents as estimated from the growth or shrinking of the separation functions during the integration time of this very integrate command.
  integration time : float
    The actual integration time to which the local Lyapunov exponents apply. Note that this is not necessarily the difference between target_time and the previous target_time, as JiTCDDE usually integrates a bit ahead and estimates the output via interpolation. When averaging the Lyapunov exponents, you almost always want to weigh them with the integration time.
If the advance per integrate call (the sampling step) is smaller than the actual integration step, it may also happen that integrate does not integrate at all and the integration time is zero. In this case, the local Lyapunov exponents are returned as 0, which is as nonsensical as any other result (except perhaps nan) but should not matter with a proper weighted averaging.
It is essential that you choose target_time properly such that orthonormalisation does not happen too rarely. If you want to control the maximum step size, use the parameter max_step of set_integration_parameters instead.
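Putting this together, a weighted average over several integrate calls could look like this, with made-up sample values standing in for actual JiTCDDE output:

```python
import numpy as np

# hypothetical results of three integrate() calls: each row holds the
# local Lyapunov exponents, `weights` the matching integration times
local_lyaps = np.array([[ 0.10, -0.50],
                        [ 0.00,  0.00],   # zero integration time: exponents reported as 0
                        [ 0.14, -0.46]])
weights = np.array([0.5, 0.0, 0.5])       # the returned integration times

# weighted average; samples with zero integration time drop out automatically
lyap_estimates = np.average(local_lyaps, axis=0, weights=weights)
```

Note how the nonsensical zero exponents from the zero-length integration carry no weight in the estimate.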

integrate_blindly(target_time, step=None)¶
Like jitcdde’s integrate_blindly, except for orthonormalising the separation functions after each step, and with output analogous to jitcdde_lyap’s integrate.

class jitcdde_transversal_lyap(f_sym=(), groups=(), simplify=None, **kwargs)¶
Calculates the largest Lyapunov exponent in the direction orthogonal to a predefined synchronisation manifold, i.e., the projection of the tangent vector onto that manifold vanishes. In contrast to jitcdde_restricted_lyap, this performs some transformations tailored to this specific application that may strongly reduce the number of differential equations and ensure a dynamics on the synchronisation manifold.
Note that all functions for defining the past differ from their analogues in jitcdde by having the dimensions of their arguments reduced to the number of groups. This means that only one initial value (of the state or derivative) per group of synchronised components has to be provided (in the same order as the groups argument of the constructor).
The handling is the same as that for jitcdde except for:

Parameters:
  groups : iterable of iterables of integers
    Each group is an iterable of indices that identify dynamical variables that are synchronised on the synchronisation manifold.
  simplify : boolean
    Whether the transformed differential equations shall be subjected to SymEngine’s simplify. Doing so may speed up the time evolution but may slow down the generation of the code (considerably for large differential equations). If None, this will be automatically disabled for n>10.

integrate(target_time)¶
Like jitcdde’s integrate, except for normalising and aligning the separation function and:

Returns:
  y : one-dimensional NumPy array
    The state of the system. Only one value per group of synchronised components is returned (in the same order as the groups argument of the constructor).
  lyap : float
    The “local” largest transversal Lyapunov exponent as estimated from the growth or shrinking of the separation function during the integration time of this very integrate command.
  integration time : float
    The actual integration time to which the local Lyapunov exponent applies. Note that this is not necessarily the difference between target_time and the previous target_time, as JiTCDDE usually integrates a bit ahead and estimates the output via interpolation. When averaging the Lyapunov exponents, you almost always want to weigh them with the integration time.
If the advance per integrate call (the sampling step) is smaller than the actual integration step, it may also happen that integrate does not integrate at all and the integration time is zero. In this case, the local Lyapunov exponent is returned as 0, which is as nonsensical as any other result (except perhaps nan) but should not matter with a proper weighted averaging.
It is essential that you choose target_time properly such that orthonormalisation does not happen too rarely. If you want to control the maximum step size, use the parameter max_step of set_integration_parameters instead.

integrate_blindly(target_time, step=None)¶
Like jitcdde’s integrate_blindly, except for normalising and aligning the separation function after each step, and with output analogous to jitcdde_transversal_lyap’s integrate.

class jitcdde_restricted_lyap(f_sym=(), vectors=(), **kwargs)¶
Calculates the largest Lyapunov exponent in the direction orthogonal to a predefined plane, i.e., the projection of the separation function onto that plane vanishes. See this test for an example of usage. Note that coordinate planes (i.e., planes orthogonal to vectors with only one non-zero component) are handled considerably faster; consider transforming your differential equation to achieve this.
The handling is the same as that for jitcdde_lyap except for:

Parameters:
  vectors : iterable of pairs of NumPy arrays
    A basis of the plane whose projection shall be removed. The first vector in each pair corresponds to the state, the second to the derivative.

integrate(target_time)¶
Like jitcdde’s integrate, except for normalising and aligning the separation function and:

Returns:
  y : one-dimensional NumPy array
    The state of the system. Same as the output of jitcdde’s integrate.
  lyap : float
    The “local” largest transversal Lyapunov exponent as estimated from the growth or shrinking of the separation function during the integration time of this very integrate command.
  integration time : float
    The actual integration time to which the local Lyapunov exponent applies. Note that this is not necessarily the difference between target_time and the previous target_time, as JiTCDDE usually integrates a bit ahead and estimates the output via interpolation. When averaging the Lyapunov exponents, you almost always want to weigh them with the integration time.
If the advance per integrate call (the sampling step) is smaller than the actual integration step, it may also happen that integrate does not integrate at all and the integration time is zero. In this case, the local Lyapunov exponent is returned as 0, which is as nonsensical as any other result (except perhaps nan) but should not matter with a proper weighted averaging.
It is essential that you choose target_time properly such that orthonormalisation does not happen too rarely. If you want to control the maximum step size, use the parameter max_step of set_integration_parameters instead.

integrate_blindly(target_time, step=None)¶
Like jitcdde’s integrate_blindly, except for normalising and aligning the separation function after each step, and with output analogous to jitcdde_restricted_lyap’s integrate.
Input¶
Please also note this example for using this subclass.

input(index, time=t)¶
Function representing an external input (for jitcdde_input). The first integer argument denotes the component. The second, optional argument is a symbolic expression denoting the time; it defaults to the time symbol t. This automatically expands to using current_y, past_y, anchors, input_base_n, and input_shift; so do not be surprised when you look at the output and it is different from what you entered or expected. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).

class jitcdde_input(f_sym=(), input=None, **kwargs)¶
Allows you to integrate the DDE (or an ODE) with input. Under the hood, this is handled by adding dummy dynamical variables, in whose past state the input is stored. In contrast to other variants of JiTCDDE, the integration must start at \(t=0\).
All parameters and methods are as for jitcdde, except:

Parameters:
  input : CHSPy CubicHermiteSpline
    The input. This has to be a CubicHermiteSpline (specifying the values and derivatives at certain anchor points). Be sure to plot the input to check whether it conforms with your expectations.
References¶
[ST01] L.F. Shampine, S. Thompson: Solving DDEs in Matlab, Applied Numerical Mathematics 37, pp. 441–458 (2001), 10.1016/S0168-9274(00)00055-6.
[BS89] P. Bogacki, L.F. Shampine: A 3(2) pair of Runge–Kutta formulas, Applied Mathematics Letters 2, pp. 321–325 (1989), 10.1016/0893-9659(89)90079-7.
[F82] J.D. Farmer: Chaotic attractors of an infinite-dimensional dynamical system, Physica D 4, pp. 366–393 (1982), 10.1016/0167-2789(82)90042-2.
[BGGS80] G. Benettin, L. Galgani, A. Giorgilli, and J.-M. Strelcyn: Lyapunov Characteristic Exponents for smooth dynamical systems and for Hamiltonian systems; A method for computing all of them. Meccanica 15, pp. 9–30 (1980), 10.1007/BF02128236.