Next Reaction Method

Gibson and Bruck's next reaction method is an adaptation of the first reaction method [Gibson 2000]. Instead of computing the time until each reaction would fire, one works with the times at which the reactions will occur. These times are not computed anew at each time step, but are reused. The reaction times are stored in an indexed priority queue (indexed because the reaction indices are stored with the reaction times). Also, propensities are recomputed only when they have changed. Below is the algorithm for a single step; a sketch in code follows the list.

  1. Get the reaction index μ and the reaction time τ by removing the minimum element from the priority queue.
  2. t = τ
  3. X = X + Vμ
  4. For each reaction m (other than μ) whose propensity am is affected by firing reaction μ:
    1. α = the updated propensity.
    2. τm = (am / α) (τm - t) + t
    3. am = α
    4. Update the priority queue with the new value of τm.
  5. Update aμ and generate an exponential random variable r with mean 1/aμ.
  6. τμ = t + r
  7. Push τμ into the priority queue.
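
To make the bookkeeping concrete, below is a minimal sketch of a single step in C++. It is not the implementation in the source code: the container names (x, v, a, tau, dependents), the queue interface (popMin, update, push), and the propensity callable are assumptions chosen for illustration, and the special handling of zero propensities is omitted.

    #include <cstddef>
    #include <random>
    #include <vector>

    // One step of the next reaction method.  Queue is assumed to provide
    // popMin() -> index of the minimum time, update(i, tau), and push(i, tau).
    // Zero propensities require special handling that is omitted here.
    template <class Queue, class Propensity>
    void nextReactionStep(std::vector<double>& x,  // species populations
                          const std::vector<std::vector<double>>& v,  // v[m] = state change of reaction m
                          std::vector<double>& a,    // propensities
                          std::vector<double>& tau,  // putative reaction times
                          const std::vector<std::vector<std::size_t>>& dependents,
                          Queue& queue, Propensity&& propensity,
                          double& t, std::mt19937& rng) {
      // 1-2. Remove the minimum element and advance the time.
      const std::size_t mu = queue.popMin();
      t = tau[mu];
      // 3. Apply the state change of reaction mu.
      for (std::size_t i = 0; i != x.size(); ++i) {
        x[i] += v[mu][i];
      }
      // 4. Rescale the times of the affected reactions (excluding mu).
      for (const std::size_t m : dependents[mu]) {
        const double alpha = propensity(m, x);       // the updated propensity
        tau[m] = (a[m] / alpha) * (tau[m] - t) + t;  // reuse the old time
        a[m] = alpha;
        queue.update(m, tau[m]);
      }
      // 5-7. Update a[mu], draw a new waiting time with mean 1 / a[mu], and
      // reinsert reaction mu.
      a[mu] = propensity(mu, x);
      tau[mu] = t + std::exponential_distribution<double>(a[mu])(rng);
      queue.push(mu, tau[mu]);
    }

With an indexed binary heap for the queue (see Binary Heaps below), the update and push calls each cost O(log M).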

Consider the computational complexity of the next reaction method. We assume that the reactions are loosely coupled and hence computing a propensity am is O(1). Let D be an upper bound on the number of propensities that are affected by firing a single reaction. Then the cost of updating the propensities and the reaction times is O(D). Since the cost of inserting or changing a value in the priority queue is O(log M), the cost of updating the priority queue is O(D log M). Therefore the computational complexity of a step with the next reaction method is O(D log M).
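
As a rough, hypothetical illustration: with M = 10,000 reactions and D = 4 affected propensities per firing, a step requires about 4 propensity updates and 4 priority queue updates, each queue update costing about log2(10,000) ≈ 13 comparisons, so roughly 50 comparisons in total; doubling M adds only about one comparison per queue update.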

One can reformulate the next reaction method to obtain a more efficient algorithm. The most expensive parts of the algorithm are maintaining the binary heap, updating the state, and generating exponential deviates. Improving the generation of exponential deviates is a minimally invasive change: instead of using the inversion method, one can use the ziggurat method [Marsaglia 2000] or the acceptance complement method [Rubin 2006]. Reducing the cost of the binary heap operations is a more complicated affair. We present several approaches below.
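
For comparison, the inversion method that these faster generators replace is only a few lines. The sketch below is illustrative; the function name is made up and the generator choice (std::mt19937) is an assumption.

    #include <cmath>
    #include <random>

    // Exponential deviate by inversion: if u is uniform on (0, 1], then
    // -log(u) / rate is exponentially distributed with the given rate
    // (mean 1 / rate).
    double exponentialByInversion(const double rate, std::mt19937& rng) {
      std::uniform_real_distribution<double> uniform(0.0, 1.0);
      double u;
      do {
        u = uniform(rng);
      } while (u == 0.0);  // the distribution may return 0; avoid log(0)
      return -std::log(u) / rate;
    }

The ziggurat method, for example, avoids evaluating the logarithm on most calls, which is the main source of its speed advantage.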

Indexed Priority Queues
The term priority queue has almost become synonymous with binary heap. For most applications, a binary heap is an efficient way of implementing a priority queue. For a heap with M elements, one can access the minimum element in constant time. The cost to insert or extract an element or to change the value of an element is O(log M). Also, the storage requirements are linear in the number of elements. While a binary heap is rarely the most efficient data structure for a particular application, it is usually efficient enough. If performance is important and the heap operations constitute a significant portion of the computational cost in an application, then it may be profitable to consider other data structures.

Linear Search
The simplest method of implementing a priority queue is to store the elements in an array and use a linear search to find the minimum element. The computational complexity of finding the minimum element is O(M). Inserting, deleting, and modifying elements can be done in constant time. For the next reaction method, linear search is the most efficient algorithm when the number of reactions is small.
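
A minimal sketch of this approach, with illustrative names rather than those used in the source code:

    #include <cstddef>
    #include <limits>
    #include <vector>

    // A linear-search priority queue: the reaction times are stored by
    // reaction index, and the minimum is found by scanning the whole array.
    class LinearSearchQueue {
    public:
      explicit LinearSearchQueue(const std::size_t size)
          : _tau(size, std::numeric_limits<double>::infinity()) {}

      // O(1): set or change the time of reaction i.
      void set(const std::size_t i, const double tau) { _tau[i] = tau; }

      // O(1): remove reaction i by marking its time as infinite.
      void erase(const std::size_t i) {
        _tau[i] = std::numeric_limits<double>::infinity();
      }

      // O(M): linear search for the reaction with the minimum time.
      std::size_t findMin() const {
        std::size_t best = 0;
        for (std::size_t i = 1; i != _tau.size(); ++i) {
          if (_tau[i] < _tau[best]) best = i;
        }
        return best;
      }

    private:
      std::vector<double> _tau;
    };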

Partitioning
For larger problem sizes, one can utilize the under-appreciated method of partitioning. One stores the elements in an array, but classifies the elements into two categories: lower and upper. One uses a splitting value to discriminate; the elements in the lower partition are less than the splitting value. Then one can determine the minimum value in the queue with a linear search on the elements in the lower partition. Inserting, erasing, and modifying values can all be done in constant time. However, there is the overhead of determining in which partition an element belongs. When the lower partition becomes empty, one must choose a new splitting value and re-partition the elements (at cost O(M)). By choosing the splitting value so that there are O(M^(1/2)) elements in the lower partition, one can attain an average cost of O(M^(1/2)) for determining the minimum element. This choice balances the costs of searching and re-partitioning. The cost of a search, O(M^(1/2)), times the number of searches before one needs to re-partition, O(M^(1/2)), has the same complexity as the cost of re-partitioning. There are several strategies for choosing the splitting value and partitioning the elements. Partitioning with a linear search is an efficient method for problems of moderate size.
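
The sketch below shows one way to realize the idea. The names and the choice of std::nth_element to pick the splitting value are assumptions for illustration, not necessarily the strategy used in the source code.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <limits>
    #include <vector>

    // A partitioned priority queue: indices whose times are below the
    // splitting value form the lower partition.  The minimum is found with a
    // linear search over the lower partition, which is rebuilt with about
    // sqrt(M) elements whenever it becomes empty.
    class PartitionQueue {
    public:
      explicit PartitionQueue(const std::size_t size)
          : _tau(size, Inf), _position(size, Npos), _split(-Inf) {}

      // O(1): set or change the time of reaction i and fix its membership.
      void set(const std::size_t i, const double tau) {
        _tau[i] = tau;
        const bool inLower = _position[i] != Npos;
        if (tau < _split && !inLower) {         // entered the lower partition
          _position[i] = _lower.size();
          _lower.push_back(i);
        } else if (tau >= _split && inLower) {  // left the lower partition
          removeFromLower(i);
        }
      }

      // Average O(sqrt(M)): find the reaction with the minimum time.
      // Assumes at least one reaction time is finite.
      std::size_t findMin() {
        if (_lower.empty()) repartition();
        std::size_t best = _lower.front();
        for (const std::size_t i : _lower) {
          if (_tau[i] < _tau[best]) best = i;
        }
        return best;
      }

    private:
      static constexpr double Inf = std::numeric_limits<double>::infinity();
      static constexpr std::size_t Npos = static_cast<std::size_t>(-1);

      // O(1): remove index i from the lower partition by swapping with the back.
      void removeFromLower(const std::size_t i) {
        const std::size_t p = _position[i];
        _lower[p] = _lower.back();
        _position[_lower[p]] = p;
        _lower.pop_back();
        _position[i] = Npos;
      }

      // O(M): choose a splitting value so that the lower partition holds about
      // sqrt(M) elements, then rebuild the lower partition.
      void repartition() {
        const std::size_t m = _tau.size();
        const std::size_t count =
            std::min(m, static_cast<std::size_t>(std::sqrt(double(m))) + 1);
        std::vector<double> copy(_tau);
        std::nth_element(copy.begin(), copy.begin() + (count - 1), copy.end());
        _split = std::nextafter(copy[count - 1], Inf);  // include ties
        _lower.clear();
        std::fill(_position.begin(), _position.end(), Npos);
        for (std::size_t i = 0; i != m; ++i) {
          if (_tau[i] < _split) {
            _position[i] = _lower.size();
            _lower.push_back(i);
          }
        }
      }

      std::vector<double> _tau;            // reaction times
      std::vector<std::size_t> _lower;     // indices in the lower partition
      std::vector<std::size_t> _position;  // slot in _lower, or Npos
      double _split;                       // the splitting value
    };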

Binary Heaps
When using indexed binary heaps, there are a few implementation details that have a significant impact on performance. See the documentation of the source code for details. Binary heaps have decent performance for a wide range of problem sizes. Because the algorithms are fairly simple, they perform well for small problems. Because of the logarithmic complexity, they are suitable for fairly large problems.
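
For concreteness, here is a minimal indexed binary heap. It shows the position array that lets update() locate an element in constant time; the performance-related implementation details mentioned above are omitted, and the names are illustrative rather than those of the source code.

    #include <cstddef>
    #include <utility>
    #include <vector>

    // An indexed binary heap: _heap stores reaction indices ordered by their
    // times, and _position maps each reaction index to its slot in _heap so
    // that update() can find an element in O(1) and restore order in O(log M).
    class IndexedBinaryHeap {
    public:
      explicit IndexedBinaryHeap(const std::size_t size)
          : _tau(size), _position(size, Npos) {}

      // O(log M): insert reaction i with the given time.
      void push(const std::size_t i, const double tau) {
        _tau[i] = tau;
        _position[i] = _heap.size();
        _heap.push_back(i);
        siftUp(_heap.size() - 1);
      }

      // O(log M): remove and return the reaction with the minimum time.
      // Assumes the heap is not empty.
      std::size_t popMin() {
        const std::size_t top = _heap.front();
        swapSlots(0, _heap.size() - 1);
        _heap.pop_back();
        _position[top] = Npos;
        if (!_heap.empty()) siftDown(0);
        return top;
      }

      // O(log M): change the time of reaction i and restore the heap order.
      void update(const std::size_t i, const double tau) {
        _tau[i] = tau;
        siftUp(_position[i]);
        siftDown(_position[i]);
      }

    private:
      static constexpr std::size_t Npos = static_cast<std::size_t>(-1);

      // Exchange two heap slots and keep the position array consistent.
      void swapSlots(const std::size_t a, const std::size_t b) {
        std::swap(_heap[a], _heap[b]);
        _position[_heap[a]] = a;
        _position[_heap[b]] = b;
      }

      void siftUp(std::size_t p) {
        while (p != 0) {
          const std::size_t parent = (p - 1) / 2;
          if (_tau[_heap[p]] >= _tau[_heap[parent]]) break;
          swapSlots(p, parent);
          p = parent;
        }
      }

      void siftDown(std::size_t p) {
        for (;;) {
          std::size_t child = 2 * p + 1;
          if (child >= _heap.size()) break;
          if (child + 1 < _heap.size() &&
              _tau[_heap[child + 1]] < _tau[_heap[child]]) {
            ++child;
          }
          if (_tau[_heap[child]] >= _tau[_heap[p]]) break;
          swapSlots(p, child);
          p = child;
        }
      }

      std::vector<double> _tau;            // reaction times
      std::vector<std::size_t> _heap;      // heap of reaction indices
      std::vector<std::size_t> _position;  // slot of each index in _heap
    };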

Hashing
There is a data structure that can perform each of the operations (finding the minimum element, inserting, removing, and modifying) in constant time on average. This is accomplished with hashing. (One could also refer to the method as bucketing.) The reaction times are stored in a hash table [Cormen 2001]. The hashing function is a linear function of the reaction time (with a truncation to convert from a floating-point value to an integer index). The constant in the linear function is chosen to give the desired load. For hashing with chaining, if the load (the average number of elements per bucket) is O(1), then all operations can be done in constant time on average. As with binary heaps, the implementation is important.
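
The sketch below shows one way to realize the idea. It exploits the fact that in the next reaction method newly computed times never precede the current time, so the search for the minimum bucket only moves forward. The names are illustrative, finite reaction times are assumed, and a real implementation would bound the table size (for example with wrap-around) instead of letting the bucket array grow.

    #include <cstddef>
    #include <vector>

    // A hashing (bucketing) queue.  The hashing function is linear in the
    // reaction time: bucket = trunc((tau - origin) * scale).  The constant
    // `scale` is chosen so that the load (elements per bucket) is O(1).
    // Assumes, as in the next reaction method, that newly set times never
    // precede the current minimum, so the minimum search only moves forward.
    class HashingQueue {
    public:
      HashingQueue(const std::size_t size, const double origin,
                   const double scale)
          : _tau(size), _origin(origin), _scale(scale) {}

      // Average O(1): insert reaction i with the given (finite) time.
      void set(const std::size_t i, const double tau) {
        _tau[i] = tau;
        const std::size_t b = bucket(tau);
        if (b >= _buckets.size()) _buckets.resize(b + 1);
        _buckets[b].push_back(i);
      }

      // Average O(1): remove reaction i from its chain.
      void erase(const std::size_t i) {
        std::vector<std::size_t>& chain = _buckets[bucket(_tau[i])];
        for (std::size_t& j : chain) {
          if (j == i) {
            j = chain.back();
            chain.pop_back();
            break;
          }
        }
      }

      // Average O(1): change the time of reaction i.
      void update(const std::size_t i, const double tau) {
        erase(i);
        set(i, tau);
      }

      // Amortized O(1) with O(1) load: the minimum of the first nonempty
      // bucket is the global minimum.  Assumes the queue is not empty.
      std::size_t findMin() {
        while (_buckets[_first].empty()) ++_first;
        const std::vector<std::size_t>& chain = _buckets[_first];
        std::size_t best = chain.front();
        for (const std::size_t i : chain) {
          if (_tau[i] < _tau[best]) best = i;
        }
        return best;
      }

    private:
      // The linear hashing function, truncated to an integer bucket index.
      std::size_t bucket(const double tau) const {
        return static_cast<std::size_t>((tau - _origin) * _scale);
      }

      std::vector<double> _tau;                        // reaction times
      std::vector<std::vector<std::size_t>> _buckets;  // chaining
      double _origin, _scale;
      std::size_t _first = 0;  // buckets before this one are empty
    };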

The following options are available with the next reaction method.