In parallelism across the problem, the components of the system of ODEs are distributed among the available processors. This is especially effective for explicit methods, since they frequently require evaluations of the right-hand side function. The simplest example is obtained by assigning each equation of the ODE system to a different processor.
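As a minimal sketch of this idea, the following Python fragment splits the components of the right-hand side into blocks and evaluates the blocks concurrently inside each explicit Euler step. The linear test system, the block size, and the worker count are illustrative assumptions, not prescribed by the text.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

N = 1000                                             # number of equations
A = -np.eye(N) + 0.01 * np.random.randn(N, N)        # hypothetical linear system y' = A y
chunks = [(i, min(i + 250, N)) for i in range(0, N, 250)]   # 4 blocks of components

def f_chunk(t, y, lo, hi):
    # Evaluate only components lo..hi of the right-hand side.
    return A[lo:hi] @ y

def euler_step(t, y, h, pool):
    # Each worker evaluates its block of f(t, y); the results are concatenated.
    parts = pool.map(lambda c: f_chunk(t, y, *c), chunks)
    return y + h * np.concatenate(list(parts))

y, t, h = np.ones(N), 0.0, 1e-3
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(100):
        y = euler_step(t, y, h, pool)
        t += h
```

The decomposition pays off when N is large and a single evaluation of f dominates the cost of a step.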
Parallelism across the method, in this context, is the exploitation of the parallelism inherently available within the method. Concurrent evaluations of the entire function f for different values of its argument and the simultaneous solution of several (nonlinear) systems of equations are examples of parallelism across the method. Note that this form of parallelism is also effective for a scalar ODE, whereas parallelism across the problem targets large values of N. Parallelism across the method can be achieved, for example, by computing the stages of a Runge-Kutta method on different processors.
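One way to realize this, sketched below under illustrative assumptions, is to solve the stages of an implicit Runge-Kutta method by fixed-point iteration: within each iteration the stage evaluations f(Y_i) are mutually independent and can be assigned to different processors, even for a scalar problem. The 2-stage Gauss coefficients, the test problem, and the fixed iteration count are choices made here for the example only.

```python
import numpy as np
from math import sqrt
from concurrent.futures import ThreadPoolExecutor

# 2-stage Gauss method (order 4)
A = np.array([[1/4, 1/4 - sqrt(3)/6],
              [1/4 + sqrt(3)/6, 1/4]])
b = np.array([1/2, 1/2])
c = np.array([1/2 - sqrt(3)/6, 1/2 + sqrt(3)/6])

def f(t, y):
    return -y                                  # scalar test problem y' = -y

def irk_step(t, y, h, pool, iters=10):
    s = len(b)
    Y = np.full(s, y)                          # stage values, initial guess y_n
    for _ in range(iters):
        # The s stage evaluations below are independent -> run them in parallel.
        F = np.array(list(pool.map(lambda i: f(t + c[i] * h, Y[i]), range(s))))
        Y = y + h * (A @ F)                    # fixed-point update of all stages
    F = np.array([f(t + c[i] * h, Y[i]) for i in range(s)])
    return y + h * (b @ F)

y, t, h = 1.0, 0.0, 0.1
with ThreadPoolExecutor(max_workers=2) as pool:
    for _ in range(10):
        y = irk_step(t, y, h, pool)
        t += h
print(y, np.exp(-t))                           # compare with the exact solution
```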
In parallelism across time, contrary to the step-by-step idea, a number of steps are performed simultaneously, yielding numerical approximations at many points on the t-axis in parallel.
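A well-known realization of this idea is the Parareal algorithm, of which the following is a minimal sketch: the time interval is split into slices, a cheap coarse propagator sweeps serially, and the expensive fine propagator is applied to all slices simultaneously. The propagators, slice count, and iteration count here are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def f(t, y):
    return -y                                      # test problem y' = -y

def propagate(t0, t1, y0, n):
    # Explicit Euler with n substeps; small n = coarse, large n = fine propagator.
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y = y + h * f(t, y)
        t += h
    return y

def parareal(t0, t1, y0, slices=8, iters=3):
    T = np.linspace(t0, t1, slices + 1)
    # Initial serial sweep with the coarse propagator.
    U = [y0]
    for j in range(slices):
        U.append(propagate(T[j], T[j + 1], U[j], 1))
    for _ in range(iters):
        # Fine propagation of all slices in parallel.
        with ProcessPoolExecutor() as pool:
            Ufine = list(pool.map(propagate, T[:-1], T[1:], U[:-1],
                                  [100] * slices))
        # Serial coarse sweep with the Parareal correction.
        Gold = [propagate(T[j], T[j + 1], U[j], 1) for j in range(slices)]
        Unew = [y0]
        for j in range(slices):
            Gnew = propagate(T[j], T[j + 1], Unew[j], 1)
            Unew.append(Gnew + Ufine[j] - Gold[j])
        U = Unew
    return T, np.array(U)

if __name__ == "__main__":
    T, U = parareal(0.0, 1.0, 1.0)
    print(U[-1], np.exp(-1.0))                     # compare with the exact solution
```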