Weighted next reaction method and parameter selection for efficient simulation of rare events in biochemical reaction systems

The weighted stochastic simulation algorithm (wSSA) recently developed by Kuwahara and Mura and the refined wSSA proposed by Gillespie et al. based on the importance sampling technique open the door for efficient estimation of the probability of rare events in biochemical reaction systems. In this paper, we first apply the importance sampling technique to the next reaction method (NRM) of the stochastic simulation algorithm and develop a weighted NRM (wNRM). We then develop a systematic method for selecting the values of importance sampling parameters, which can be applied to both the wSSA and the wNRM. Numerical results demonstrate that our parameter selection method can substantially improve the performance of the wSSA and the wNRM in terms of simulation efficiency and accuracy.


Introduction
Biochemical reaction systems in living cells exhibit significant stochastic fluctuations due to the small numbers of molecules involved in processes such as the transcription and translation of genes [1]. A number of exact [2][3][4][5][6][7] or approximate [8][9][10][11][12][13][14][15][16][17][18][19] simulation algorithms have been developed for simulating the stochastic dynamics of such systems. Recent research shows that some rare events, which occur in biochemical reaction systems with an extremely small probability within a specified limited time, can have profound and sometimes devastating effects [20,21]. Hence, it is important that computational simulation and analysis of systems with critical rare events can efficiently capture such events. However, existing exact simulation methods such as Gillespie's exact SSA [2,3] often require prohibitive computation to estimate the probability of a rare event, while the approximate methods may not be able to estimate such a probability accurately.
The weighted stochastic simulation algorithm (wSSA) recently developed by Kuwahara and Mura [22] based on the importance sampling technique enables one to efficiently estimate the probability of a rare event.
However, the wSSA does not provide any method for selecting optimal values for the importance sampling parameters. More recently, Gillespie et al. [23] analyzed the accuracy of the results yielded by the wSSA and proposed a refined wSSA that employs a try-and-test method for selecting optimal values for the importance sampling parameters. It was shown that the refined wSSA could further improve the performance of the wSSA. However, the try-and-test method requires some initial guessing of the sets of candidate values from which the parameters are chosen. If the guessed values do not include the optimal values, then one cannot get appropriate values for the parameters. Moreover, if the number of parameters is greater than one, a very large set of values needs to be guessed and tested, which may increase the likelihood of missing the optimal values and also increase the computational overhead.
In this paper, we first apply the importance sampling technique to the next reaction method (NRM) of the SSA [4] and develop a weighted NRM (wNRM) as an alternative to the wSSA. We then develop a systematic method for selecting optimal values for the importance sampling parameters that can be incorporated into both the wSSA and the wNRM. Our method does not need an initial guess and thus can guarantee near-optimal values for the parameters. Our numerical results in Section 5 demonstrate that the variance of the estimated probability of the rare event provided by the wSSA and the wNRM with our parameter selection method can be more than one order of magnitude lower than that provided by the wSSA or the refined wSSA for a given number of simulation runs. Moreover, the wSSA and the wNRM with our parameter selection method require less simulation time than the refined wSSA for the same number of simulation runs. While this paper was under review, a method named the doubly weighted SSA (dwSSA) was developed to automatically choose parameter values for the wSSA [24]. The dwSSA reduces the computational overhead required by the wSSA and the refined wSSA to select parameter values, but it produces a variance for the estimated probability similar to that of the refined wSSA.
The remainder of this paper is organized as follows. In Section 2, we first describe the system setup and then briefly review Gillespie's exact SSA [2,3], the wSSA [22] and the refined wSSA [23]. In Section 3, we develop the wNRM. In Section 4, we develop a systematic method for selecting optimal values for the importance sampling parameters and incorporate the parameter selection procedure into both the wSSA and the wNRM. In Section 5, we give several numerical examples that illustrate the advantages of our parameter selection method. Finally, in Section 6, we draw several conclusions.

System Description
Suppose a chemical reaction system involves a well-stirred mixture of N ≥ 1 molecular species {S_1, ..., S_N} that chemically interact through M ≥ 1 reaction channels {R_1, ..., R_M}. The dynamic state of this chemical system is described by the state vector X(t) = [X_1(t), ..., X_N(t)]^T, where X_n(t), n = 1, ..., N, is the number of S_n molecules at time t, and [·]^T denotes the transpose of the vector in the brackets. Following Gillespie [8], we define the dynamics of reaction R_m by a state-change vector ν_m = [ν_{1m}, ..., ν_{Nm}]^T, where ν_{nm} gives the change in the S_n molecular population produced by one R_m reaction, and a propensity function a_m(x), together with the fundamental premise of stochastic chemical kinetics:

a_m(x)dt ≜ the probability, given X(t) = x, that one R_m reaction will occur in the next infinitesimal time interval [t, t + dt). (1)

Gillespie's exact SSA
Based on the fundamental premise (1), Gillespie developed an exact SSA to simulate the occurrence of every reaction as time evolves [3]. Specifically, Gillespie's SSA simulates the following event in each step:

E: no reaction occurs in the time interval [t, t + τ), and one reaction R_μ occurs in the infinitesimal time interval [t + τ, t + τ + dτ). (2)

It has been shown by Gillespie [2,3] that τ and μ are two independent random variables with the following probability density function (PDF) and probability mass function (PMF), respectively:

p(τ|x) = a_0(x) exp(−a_0(x)τ), τ ≥ 0, (3)

P(μ|x) = a_μ(x)/a_0(x), μ = 1, ..., M, (4)

where a_0(x) = Σ_{m=1}^{M} a_m(x). Therefore, Gillespie's direct method (DM) for the SSA generates a realization of τ and μ according to the PDF (3) and the PMF (4), respectively, in each step of the simulation, and then updates the system state as X(t + τ) = x + ν_μ.
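As a concrete illustration, the DM just described can be sketched in Python; the function name, arguments and test system below are our own illustrative choices, not part of the original algorithm specification.

```python
import math
import random

def ssa_direct(x0, nu, propensities, t_final, seed=0):
    """One trajectory of Gillespie's direct method (DM).

    x0: initial copy numbers; nu[m]: state-change vector of reaction R_m;
    propensities[m]: the propensity function a_m(x).  Returns the final state."""
    rng = random.Random(seed)
    x = list(x0)
    t = 0.0
    while True:
        a = [f(x) for f in propensities]
        a0 = sum(a)
        if a0 == 0.0:                       # no reaction can fire any more
            return x
        # tau from the exponential PDF (3); 1 - random() lies in (0, 1]
        tau = math.log(1.0 / (1.0 - rng.random())) / a0
        if t + tau > t_final:
            return x
        t += tau
        # mu from the PMF (4): P(mu|x) = a_mu(x)/a0(x)
        r = rng.random() * a0
        mu, acc = 0, a[0]
        while acc < r:
            mu += 1
            acc += a[mu]
        x = [xi + d for xi, d in zip(x, nu[mu])]   # X(t + tau) = x + nu_mu
```

For instance, for the isomerization S_1 → S_2 with rate constant c, one would use `propensities = [lambda x: c * x[0]]` and `nu = [[-1, 1]]`; running to a large `t_final` converts all molecules.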

Weighted SSA
In order to estimate the probability of a rare event that occurs with extremely low probability in a given time period, Gillespie's SSA may require a huge amount of computation. Recently, the wSSA [22] and the refined wSSA [23] were developed to estimate the probability of a rare event with a substantial reduction of computation. Following Kuwahara and Mura [22], and Gillespie et al.
[23], we define the rare event E_R as follows: E_R is the event that, starting at time 0 in state x_0, the system will first reach some state in a specified set Ω at some time ≤ T, and the probability of E_R is very small, i.e.,

P(E_R) ≪ 1. (5)
If we employed the SSA to estimate P(E_R), we would have to make a large number n of simulation runs, each starting at time 0 in state x_0 and terminating either when some state x ∈ Ω is first reached or when the system time reaches T. If k is the number of those n runs that terminate for the first reason, then P(E_R) is estimated as P̂(E_R) = k/n. Since P(E_R) ≪ 1, n should be very large to obtain a reasonably accurate estimate of P(E_R). The wSSA employs the importance sampling technique to reduce the number of runs needed to estimate P(E_R).
Specifically, the wSSA generates τ from its PDF (3) in the same way as Gillespie's DM, but generates the reaction index μ from the following PMF:

q(μ|x) = b_μ(x)/b_0(x), μ = 1, ..., M, (6)

where b_μ(x) = g_μ a_μ(x), μ = 1, ..., M, b_0(x) = Σ_{μ=1}^{M} b_μ(x), and g_μ, μ = 1, ..., M, are positive constants that need to be chosen carefully before simulations are run. Suppose a trajectory J generated in a simulation run contains h reactions and the ith reaction occurs at time t_i, where t_0 = 0; the wSSA thus changes the probability of selecting each reaction but not the PDF of the inter-reaction times t_i − t_{i−1}. By choosing appropriate g_μ, μ = 1, ..., M, one can increase the probability of the trajectories that lead to the rare event. If k trajectories out of n simulation runs lead to the rare event, then the importance sampling technique tells us that an unbiased estimate of P(E_R) is given by

P̂(E_R) = (1/n) Σ_{j=1}^{k} Π_i w_{ij}, (7)

where j and i are indices of the trajectories and of the reactions in a trajectory, respectively, and the per-step weight

w_{ij} = p_{μ_{ij}}(x)/q_{μ_{ij}}(x) = [a_{μ_{ij}}(x)/a_0(x)]/[b_{μ_{ij}}(x)/b_0(x)] (8)

can be obtained in each simulation step. Kuwahara and Mura [22] did not provide any method for choosing g_μ, although their numerical results with some pre-specified g_μ for several reaction systems demonstrated that the wSSA could reduce computation substantially. Gillespie et al. [23] analyzed the variance of P̂(E_R) obtained from the wSSA and refined the wSSA by proposing a try-and-test method for choosing g_μ. In the try-and-test method, several sets of values are pre-specified for g_μ, μ = 1, ..., M. A relatively small number of simulation runs is made for each set of values to obtain an estimate of the variance of P̂(E_R), and then the set of values that yields the smallest variance is chosen. Although the try-and-test method provides a way of choosing g_μ, it requires some guessing to obtain the several sets of pre-specified values for all g_μ and also some computational overhead to estimate the variance of P̂(E_R) for each set of values.
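A minimal sketch of the wSSA estimator (7)-(8) in Python may make the weighting concrete; the event predicate `in_omega`, the function name and the test system are our own illustrative assumptions, not the authors' code.

```python
import math
import random

def wssa_estimate(x0, nu, propensities, g, t_final, in_omega, n_runs, seed=0):
    """wSSA estimate of P(E_R): tau is drawn from PDF (3) as in the DM, the
    reaction index mu from the biased PMF (6), and the trajectory weight
    accumulates [a_mu/a0] / [b_mu/b0] per step.  g holds the constants g_mu."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        x, t, w = list(x0), 0.0, 1.0
        while t <= t_final:
            if in_omega(x):                  # rare event reached by time T
                total += w
                break
            a = [f(x) for f in propensities]
            a0 = sum(a)
            if a0 == 0.0:                    # system frozen; event impossible
                break
            b = [gm * am for gm, am in zip(g, a)]
            b0 = sum(b)
            t += math.log(1.0 / (1.0 - rng.random())) / a0
            r = rng.random() * b0            # mu from PMF (6)
            mu, acc = 0, b[0]
            while acc < r:
                mu += 1
                acc += b[mu]
            w *= (a[mu] / a0) / (b[mu] / b0)  # per-step weight (8)
            x = [xi + d for xi, d in zip(x, nu[mu])]
    return total / n_runs
```

With all g_μ = 1 the weights are identically 1 and the estimator reduces to the plain SSA frequency k/n.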
More recently, the dwSSA was developed in [24] to automatically choose parameter values for the wSSA by applying the cross-entropy method originally proposed in [25] for optimizing the importance sampling method.

Weighted NRM
The wSSA is based on the DM for the SSA, which needs to generate two random variables in each simulation step. However, the NRM of Gibson and Bruck [4] requires only one random variable in each simulation step. In this section, we apply the importance sampling technique to the NRM and develop the wNRM.
The key to making the wSSA more efficient than the standard SSA is to change the probability of each reaction appropriately without changing the distribution of the time τ between any two consecutive reactions. Since the NRM determines the reaction occurring in a simulation step by choosing the reaction that requires the smallest waiting time, it seems difficult to change the probability of each reaction without changing the distribution of τ. However, we notice that the PDF of τ in (3) depends only on a_0(x), not on the individual a_μ(x). Hence, we can change the probability of each reaction by changing the corresponding propensity function without changing the distribution of τ, so long as we keep the sum of the propensity functions equal to a_0(x). To this end, we define

d_m(x) = [b_m(x)/b_0(x)] a_0(x), m = 1, ..., M,

where b_m(x) = g_m a_m(x) is defined in the same way as in the wSSA. It is easy to verify that d_0(x) = Σ_{m=1}^{M} d_m(x) = a_0(x). If we generate τ_m from the exponential distribution p(τ_m) = d_m(x) exp(−d_m(x)τ_m), τ_m > 0, as the waiting time of reaction channel m, and choose μ = arg min_m {τ_m, m = 1, ..., M} as the index of the channel that fires, then it can easily be shown that the PDF of τ = min{τ_m, m = 1, ..., M} follows the exponential distribution in (3) and that the probability of reaction μ is q_μ = d_μ(x)/d_0(x) = b_μ(x)/b_0(x). If we repeated this procedure in each simulation step, we would have modified the first reaction method (FRM) [3] for the standard SSA and obtained a weighted FRM (wFRM). Clearly, the wFRM is not efficient, since it generates M random variables in each step. However, following Gibson and Bruck [4], we can convert the wFRM into a more efficient wNRM by reusing the τ_m s.
In the wFRM, we used τ_m to denote the putative waiting, or relative, time until the mth reaction channel fires. Following Gibson and Bruck [4], we will now use τ_m to denote the putative absolute time at which the mth reaction channel will fire. Suppose that the μth reaction channel fires at time t in the current step. After updating the state vector and the propensity functions, we calculate new d_m(x), m = 1, ..., M, which we denote as d_m^new(x). Then, we generate a random variable τ̃_μ from an exponential distribution with parameter d_μ^new(x) and set τ_μ = t + τ̃_μ. For the other channels, with index m ≠ μ, we update τ_m as follows:

τ_m = d_m(x)(τ_m − t)/d_m^new(x) + t, m ≠ μ.

Following Gibson and Bruck [4], we can show that the new τ_m − t, m = 1, ..., M, are independent exponential random variables with parameters d_m^new(x), m = 1, ..., M, respectively. Therefore, in the next step, we can choose μ = arg min_m {τ_m, m = 1, ..., M} as the index of the channel that fires, as done in the NRM, update t as t = τ_μ, and then repeat the process just described. Clearly, the wNRM only needs to generate one random variable in each step. We can further improve the efficiency of the wNRM by using the dependency graph G and the indexed priority queue P defined by Gibson and Bruck [4]. The dependency graph G tells precisely which propensity functions need to be updated after a reaction occurs. The indexed priority queue P can be exploited to find the minimum τ_m and the corresponding reaction index in each step more efficiently than finding the reaction index from the PMF (4) directly, as done in the DM. However, some computational overhead is needed to maintain the data structure of P.
Essentially, our wNRM runs the simulation in the same way as the NRM, except that the wNRM generates τ_m using the parameter d_m(x) instead of a_m(x). To estimate the probability of the rare event, P̂(E_R), we calculate the weight w_μ = a_μ(x)/d_μ(x) in each step and obtain P̂(E_R) using (7). The wNRM is summarized in the following algorithm:

Algorithm 1 (wNRM)
1. k_1 ← 0, k_2 ← 0; set values for all g_m; generate a dependency graph G.
2. for i = 1 to n, do
3. t ← 0, x ← x_0, w ← 1.
4. evaluate a_m(x) and b_m(x) for all m; calculate all d_m(x).
5. for each m, generate a unit-interval uniform random variable r_m; τ_m = ln(1/r_m)/d_m(x).
6. store the τ_m in an indexed priority queue P.
7. while 1, do
8. find the smallest τ_μ and the corresponding index μ from P; t ← τ_μ.
9. if x ∈ Ω, then k_1 ← k_1 + w; k_2 ← k_2 + w^2;
10. break out of the while loop
11. end if
12. if t > T, then break out of the while loop end if
13. w ← w · a_μ(x)/d_μ(x).
14. x ← x + ν_μ.
15. find the a_m(x) that need to be updated from G; evaluate these a_m(x) and the corresponding b_m(x); calculate all d_m^new(x).
16. generate a unit-interval uniform random variable r_μ; set τ_μ = t + ln(1/r_μ)/d_μ^new(x); for all m ≠ μ, set τ_m = d_m(x)(τ_m − t)/d_m^new(x) + t; update P.
17. end while
18. end for
19. P̂(E_R) = k_1/n.

Note that Gibson and Bruck [4] argued that the NRM is more efficient than the DM of Gillespie's SSA for loosely coupled chemical reaction systems. On the other hand, Cao et al. [5] optimized the DM and argued that the optimized DM is more efficient for most practical reaction systems. Regardless of this debate about efficiency, we propose the wNRM here as an alternative to the wSSA, which is based on the DM. While our simulation results in Section 5 demonstrate that the wNRM is more efficient than the refined wSSA for the three reaction systems tested, the wSSA may be more efficient in simulating some other systems.
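The inner loop of the wNRM can be sketched in Python as follows; for brevity a linear minimum search stands in for the indexed priority queue P, and the function signature and test system are our own assumptions rather than the authors' implementation.

```python
import math
import random

def wnrm_run(x0, nu, propensities, g, t_final, in_omega, rng):
    """One weighted-NRM trajectory.  Returns the trajectory weight w if the
    rare event is reached by t_final, and 0.0 otherwise.  Firing times tau_m
    are absolute; only the fired channel draws a fresh random number, the
    others are rescaled as tau_m <- d_m(x)(tau_m - t)/d_m_new(x) + t."""
    M = len(propensities)
    x, t, w = list(x0), 0.0, 1.0

    def weighted_propensities(x):
        a = [f(x) for f in propensities]
        a0 = sum(a)
        b = [g[m] * a[m] for m in range(M)]
        b0 = sum(b)
        # d_m(x) = a0(x) b_m(x)/b0(x); the d_m sum to a0(x)
        return a, [a0 * b[m] / b0 if b0 > 0 else 0.0 for m in range(M)]

    def exp_time(rate):
        return math.log(1.0 / (1.0 - rng.random())) / rate if rate > 0 else math.inf

    a, d = weighted_propensities(x)
    tau = [t + exp_time(d[m]) for m in range(M)]
    while True:
        mu = min(range(M), key=lambda m: tau[m])
        if tau[mu] > t_final:
            return 0.0                       # rare event not reached in [0, T]
        t = tau[mu]
        w *= a[mu] / d[mu]                   # per-step weight p_mu/q_mu
        x = [xi + delta for xi, delta in zip(x, nu[mu])]
        if in_omega(x):
            return w
        a, d_new = weighted_propensities(x)
        for m in range(M):
            if d_new[m] <= 0.0:
                tau[m] = math.inf
            elif m == mu or tau[m] == math.inf:
                tau[m] = t + exp_time(d_new[m])   # fresh draw for the fired channel
            else:
                tau[m] = d[m] * (tau[m] - t) / d_new[m] + t
        d = d_new
```

Averaging the returned weights over n runs yields the estimate (7); with all g_m = 1 the run reduces to the plain NRM.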
Like the wSSA, Algorithm 1 does not provide a method for selecting the values of the parameters g_m, m = 1, ..., M. Although we could incorporate the try-and-test method of the refined wSSA into Algorithm 1, we will develop a more systematic method for selecting the parameters in the next section. This parameter selection method is applicable to both the wSSA and the wNRM and can significantly improve the performance of both algorithms, as will be demonstrated in Section 5.

Parameter selection for wSSA and wNRM
Let us denote the set of all possible state trajectories in the time interval [0, T] as J and the set of trajectories that first reach any state in Ω during [0, T] as J_E. Let the probability of a trajectory J be P_J. Then, we have P(E_R) = Σ_{J∈J_E} P_J. The importance sampling used in the wSSA and the wNRM arises from the fact that we can write P(E_R) as

P(E_R) = Σ_{J∈J_E} (P_J/Q_J) Q_J, (11)

where Q_J is the probability used in the simulation to generate trajectory J, which differs from the true probability P_J with which the original system evolves naturally. If we make n simulation runs with altered trajectory probabilities, (11) implies that we can estimate P(E_R) as

P̂(E_R) = (1/n) Σ_{j: J_j ∈ J_E} P_{J_j}/Q_{J_j},

which is essentially (7). The variance of P̂(E_R) depends on the Q_J s. Appropriate Q_J s yield a small variance, thereby improving the accuracy of the estimate or, equivalently, reducing the number of runs needed for a given variance. The "rule of thumb" [23][26][27][28] for choosing good Q_J s is that Q_J should be roughly proportional to P_J 1(J ∈ J_E). However, at least two difficulties arise if we apply the rule of thumb based on (11). First, the number of all possible trajectories is very large, and we know neither the trajectories that lead to the rare event nor their probabilities. Second, since we can only adjust the probability of each reaction in each step, it is not clear how this adjustment affects the probability of a trajectory. To overcome these difficulties, we next use an alternative expression for P(E_R), based on which we apply the importance sampling technique. Let us denote the number of reactions occurring in the time interval [0, t] as K_t and the maximum value of K_T as K_T^max. Let E_K be the event that the rare event occurs at the Kth (K ≤ K_T^max) reaction at some t ≤ T, and let P(E_K) be the probability of E_K in the original system, which evolves naturally with the original probability rate constants.
Then, we have

P(E_R) = Σ_{K=1}^{K_T^max} P(E_K). (12)

If Q(E_K) is the probability of the event E_K in the weighted system, which evolves with adjusted probability rate constants, the rule of thumb for choosing good Q(E_K) is that we should make Q(E_K) approximately proportional to P(E_K). However, it is still difficult to apply the rule of thumb, because it is difficult to control every Q(E_K) simultaneously. Hence, we relax the rule of thumb and maximize the Q(E_K) corresponding to the maximum P(E_K), or to one near the maximum if the exact maximum P(E_K) cannot be determined precisely. The rationale of this heuristic rule is based on the following argument. If P(E_{K_E}) is the maximum among all P(E_K), K = 1, ..., K_T^max, the sum of P(E_{K_E}) and its closely related terms, such as P(E_{K_E−1}), P(E_{K_E+1}), P(E_{K_E−2}) and P(E_{K_E+2}), very likely dominates the sum on the right-hand side of (12). Maximizing Q(E_{K_E}) not only proportionally increases Q(E_{K_E}) and its closely related terms, such as Q(E_{K_E±1}) and Q(E_{K_E±2}), but also significantly increases the probability of the occurrence of the rare event. Note that a similar heuristic rule relying on the event with maximum probability was proposed in [29] for estimating the probability of rare events in highly reliable Markovian systems. Before proceeding with our derivations, we need to specify Ω. In the rest of the paper, we assume that Ω contains the single state X defined by X_i = X_i(0) + h, where h is a constant and i ∈ {1, 2, ..., N}. Let us denote the number of firings of the mth reaction channel in the trajectory leading to the rare event as K_m. Then, we have

Σ_{m=1}^{M} ν_{im} K_m = h. (13)

We first divide all reactions into three groups using the following general rule: the G_1 group consists of reactions with ν_{im} h > 0, the G_2 group consists of reactions with ν_{im} h < 0, and the G_3 group consists of reactions with ν_{im} = 0.
The rationale for this partition rule is that the reactions in the G_1 (G_2) group increase (decrease) the probability of the rare event, while the reactions in the G_3 group do not affect X_i(t) directly. We further refine the partition rule as follows. If a reaction R_m is in the G_1 group according to the general rule but a_m(x) = 0 after one R_m reaction occurs, we move R_m into the G_3 group. Similarly, if a reaction R_m is in the G_2 group according to the general rule but a_m(x) = 0 after one R_m reaction occurs, we move R_m into the G_3 group. For most cases, we only need the general partition rule. The refining rule described here deals with the situation where one or several of the X_i(t) always take the value 1 or 0, as in the system considered in Section 5.3. More refining rules may be added, following the rationale just described, as more real-world reaction systems are encountered.
We typically only need to consider elementary reactions, including bimolecular and monomolecular reactions [30]. Hence, the possible values of ν_{im} are 0, ±1, ±2. For simplicity of the derivations, we first consider only the case ν_{im} = 0, ±1, i.e., we assume that the system does not have any bimolecular reactions with two identical reactant molecules (dimerization reactions). We will later generalize our method to systems with dimerization reactions. Let us define K_{G_1} = Σ_{m∈G_1} K_m, K_{G_2} = Σ_{m∈G_2} K_m, K_{G_3} = Σ_{m∈G_3} K_m and η = |h|. Let us denote the expected value of K_t as K̄_t. Since the number of reactions occurring in any small time interval is approximately a Poisson random variable [8], K_t is the sum of a large number of independent Poisson random variables when t is relatively large. Then, by the central limit theorem, K_t can be approximated as a Gaussian random variable with mean K̄_t. Indeed, in all the chemical reaction systems [6,19,31] we have tested so far, we observed that K_t follows a unimodal distribution with a peak at K̄_t and that its standard deviation is small relative to K̄_t. Since the mean first passage time of the rare event is much larger than T [23], the rare event most likely occurs at a time near T. Based on these two observations, we argue that P(E_{K̄_T}) > P(E_K) for all K < K̄_T. Therefore, we should have K_E ≥ K̄_T. When E_{K_E} occurs, we have

K_{G_1} − K_{G_2} = η (14)

and

K_{G_1} + K_{G_2} + K_{G_3} = K_E. (15)

Since both (14) and (15) need to be satisfied in order for the event E_{K_E} to occur, and since K_{G_1} ≥ 0, K_{G_2} ≥ 0 and K_{G_3} ≥ 0, we get a second requirement for K_E: K_E ≥ η = |h|. Combining the two requirements on K_E, we obtain K_E ≥ max{K̄_T, η}.
The probability P(E_K) can be expressed as P(E_K) = P(X(t) ∈ Ω | K_t = K) ∫_0^T P(K_t = K)dt. Since K_t is approximately Gaussian with mean K̄_t, the factor ∫_0^T P(K_t = K)dt is approximately a constant for K ≤ K̄_T and quickly decreases to zero when K increases beyond K̄_T. Now let us consider the event E_K with K = η in the case η > K̄_T. In this case, ∫_0^T P(K_t = K)dt is already small and decreases quickly as K increases beyond η. Consequently, we suggest choosing K_E = η + σ_{K_T}, where σ_{K_T} is the standard deviation of K_T, which can be estimated by making hundreds of runs of the standard SSA. In the case K̄_T > η, we choose K_E = K̄_T, based on the same argument that ∫_0^T P(K_t = K_E)dt decreases quickly if we further increase K_E.
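The choice of K_E can be coded directly from the pre-run statistics of K_T; the helper name and sample data below are illustrative assumptions.

```python
import statistics

def choose_KE(kT_samples, eta):
    """Select K_E from pre-run samples of K_T (the reaction count over [0, T]):
    K_E = eta + sigma_{K_T} when eta exceeds the mean of K_T, and the mean of
    K_T otherwise, following the heuristic described in the text."""
    mean_kT = statistics.mean(kT_samples)
    if eta > mean_kT:
        return eta + statistics.stdev(kT_samples)
    return mean_kT
```

In practice `kT_samples` would come from a few hundred runs of the exact SSA.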
Applying the relaxed rule of thumb, we need to adjust the probability rate constants in the simulation to maximize Q(E_{K_E}) = Q(X(t) ∈ Ω | K_t = K_E) ∫_0^T Q(K_t = K_E)dt. Since we do not change the distribution of τ, we do not change the distribution of K_t and thus do not change ∫_0^T Q(K_t = K_E)dt. Hence, maximizing Q(E_{K_E}) is equivalent to maximizing Q(X(t) ∈ Ω | K_t = K_E). We are now in a position to summarize our strategy for applying the importance sampling technique in simulation as follows: we choose the probability of each reaction in each simulation step so that Q(X(t) ∈ Ω | K_t = K_E) is maximized. We next consider systems with only G_1 and G_2 reaction groups and then consider more general systems with all three reaction groups.

Systems with G 1 and G 2 reaction groups
Since there is no G_3 group, (15) becomes

K_{G_1} + K_{G_2} = K_E. (17)

Combining (14) and (17), we get K_{G_1} = (K_E + η)/2 and K_{G_2} = (K_E − η)/2 if the final state after the last reaction occurs is in Ω. The last reaction must be a reaction from the G_1 group; otherwise, the state would already have reached Ω before the last reaction occurred. Suppose that in the simulation the total probability of the occurrence of reactions in the G_1 group is a constant Q_{G_1}; then the total probability of the occurrence of reactions in the G_2 group is Q_{G_2} = 1 − Q_{G_1}. Then Q(X(t) ∈ Ω | K_t = K_E) can be found from a binomial distribution as follows:

Q(X(t) ∈ Ω | K_t = K_E) = C(K_E − 1, K_{G_2}) Q_{G_1}^{K_{G_1}} Q_{G_2}^{K_{G_2}}, (18)

where K_{G_1} = (K_E + η)/2 and K_{G_2} = (K_E − η)/2, as determined earlier, and C(·,·) denotes the binomial coefficient. Setting the derivative of Q(X(t) ∈ Ω | K_t = K_E) with respect to Q_{G_1} to zero, we get the Q_{G_1} and Q_{G_2} that maximize Q(X(t) ∈ Ω | K_t = K_E) as follows:

Q_{G_1} = K_{G_1}/K_E = (K_E + η)/(2K_E), Q_{G_2} = K_{G_2}/K_E = (K_E − η)/(2K_E). (19)

To ensure that the reactions in the G_1 (G_2) group occur with total probability Q_{G_1} (Q_{G_2}) in each step of the simulation, we adjust the probability of each reaction as follows:

q_m = Q_{G_1} a_m(x)/a_{G_1}(x), m ∈ G_1, and q_m = Q_{G_2} a_m(x)/a_{G_2}(x), m ∈ G_2, (20)

where a_{G_1}(x) = Σ_{m∈G_1} a_m(x) and a_{G_2}(x) = Σ_{m∈G_2} a_m(x). It is easy to verify that Σ_{m∈G_1} q_m = Q_{G_1} and Σ_{m∈G_2} q_m = Q_{G_2}. As defined in (8), the weight for estimating the probability of the rare event is w_μ = p_μ/q_μ if the μth reaction channel fires.
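Equations (19) and (20) translate directly into code; the function names below are our own illustrative choices.

```python
def optimal_group_probs(KE, eta):
    """Q_G1 and Q_G2 from (19), which maximize the binomial expression (18)."""
    return (KE + eta) / (2.0 * KE), (KE - eta) / (2.0 * KE)

def reaction_probs(a, group_of, Q):
    """q_m from (20): within each group, q_m is proportional to a_m(x) and
    the group totals equal the prescribed Q values.

    a[m]: propensity a_m(x); group_of[m]: group index of reaction m;
    Q[gi]: total probability assigned to group gi."""
    totals = {}
    for am, gi in zip(a, group_of):
        totals[gi] = totals.get(gi, 0.0) + am
    return [Q[gi] * am / totals[gi] for am, gi in zip(a, group_of)]
```

For example, with K_E = 100 and η = 20, (19) gives Q_G1 = 0.6 and Q_G2 = 0.4, and the q_m sum to one over all reactions.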

Systems with G 1 , G 2 and G 3 reaction groups
Combining (14) and (15), we get K_{G_1} = K_{G_2} + η and K_{G_3} = K_E − η − 2K_{G_2}. Suppose that in the simulation the total probabilities of the occurrence of reactions in the G_1, G_2 and G_3 groups are the constants Q_{G_1}, Q_{G_2} and Q_{G_3}, respectively. Then, Q(X(t) ∈ Ω | K_t = K_E) can be found from a multinomial distribution as follows:

Q(X(t) ∈ Ω | K_t = K_E) = Σ_{K_{G_2}=0}^{(K_E−η)/2} C(K_{G_1}, K_{G_2}, K_{G_3}) Q_{G_1}^{K_{G_1}} Q_{G_2}^{K_{G_2}} Q_{G_3}^{K_{G_3}}, (21)

where K_{G_1} = K_{G_2} + η and K_{G_3} = K_E − η − 2K_{G_2}, as determined earlier, and C(K_{G_1}, K_{G_2}, K_{G_3}) denotes the corresponding multinomial coefficient. Since there are (K_E − η)/2 + 1 terms in the sum in (21), it is difficult to find the Q_{G_1}, Q_{G_2} and Q_{G_3} that maximize Q(X(t) ∈ Ω | K_t = K_E). We will therefore use a different approach to find Q_{G_1}, Q_{G_2} and Q_{G_3}, as described in the following.
Let K̄_{G_1}, K̄_{G_2} and K̄_{G_3} be the average numbers of reactions of the G_1, G_2 and G_3 groups that occur in the time interval [0, T] in the original system. Since K̄_{G_1} + K̄_{G_2} + K̄_{G_3} = K̄_T, we define P_{G_i} = K̄_{G_i}/K̄_T, i = 1, 2, 3. Then, we can approximate P(X(t) ∈ Ω | K_t = K_E) in the original system, which is the counterpart of Q(X(t) ∈ Ω | K_t = K_E) in the weighted system, using the right-hand side of (21) but with Q_{G_i}, i = 1, 2, 3, replaced by P_{G_i}, i = 1, 2, 3, respectively. This gives

P(X(t) ∈ Ω | K_t = K_E) ≈ Σ_{K_{G_2}=0}^{(K_E−η)/2} C(K_{G_1}, K_{G_2}, K_{G_3}) P_{G_1}^{K_{G_1}} P_{G_2}^{K_{G_2}} P_{G_3}^{K_{G_3}}. (22)

Suppose that the (K*_{G_2} + 1)th term of the sum in (22) is the largest. We further relax the rule of thumb and maximize the (K*_{G_2} + 1)th term of the sum in (21) to find Q_{G_1}, Q_{G_2} and Q_{G_3}.
It is not difficult to find the (K*_{G_2} + 1)th term of the sum in (22). Let us denote the (K_{G_2} + 1)th term of the sum in (22) as f(K_{G_2}). We could exhaustively search over all K_{G_2} ∈ {0, 1, ..., (K_E − η)/2} to find the maximum of f(K_{G_2}). However, this may require relatively large computation because of the factorials involved in f(K_{G_2}). We can reduce the computation by searching over the ratio

g(K_{G_2}) = f(K_{G_2})/f(K_{G_2} − 1), (23)

which contains no factorials, since consecutive multinomial terms differ only by simple factors. Specifically, we calculate all g(K_{G_2}) from (23). If g(K_{G_2}) > 1 but g(K_{G_2} + 1) < 1, then f(K_{G_2}) is a local maximum. After obtaining all local maxima, we can find the global maximum f(K*_{G_2}) among the local maxima.
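The ratio search can be sketched as follows; because g(K_G2) = f(K_G2)/f(K_G2 − 1) telescopes, accumulating log g compares all terms without evaluating any factorial. The function name and the handling of ties are our own choices.

```python
import math

def best_KG2(KE, eta, P1, P2, P3):
    """Return the K_G2 maximizing f(K_G2), the (K_G2 + 1)th multinomial term
    of (22), with group probabilities P1, P2, P3 (assumed positive) and
    counts K_G1 = K_G2 + eta, K_G3 = KE - eta - 2 K_G2."""
    def log_ratio(k2):
        # g(k2) = f(k2)/f(k2 - 1): consecutive multinomial terms differ
        # only by simple factors, so no factorials are needed.
        k3_prev = KE - eta - 2 * (k2 - 1)       # K_G3 of the previous term
        return math.log(k3_prev * (k3_prev - 1) * P1 * P2
                        / (k2 * (k2 + eta) * P3 * P3))
    best_k, best_logf, logf = 0, 0.0, 0.0       # log f relative to f(0)
    for k2 in range(1, (KE - eta) // 2 + 1):
        logf += log_ratio(k2)
        if logf > best_logf:
            best_k, best_logf = k2, logf
    return best_k
```

Accumulating the log-ratios finds the global maximum directly, which also covers the case of several local maxima.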
After we find K*_{G_2}, we set the partial derivatives of the (K*_{G_2} + 1)th term of the sum in (21) with respect to Q_{G_1} and Q_{G_2} to zero. This gives the following optimal Q_{G_1}, Q_{G_2} and Q_{G_3}:

Q_{G_1} = (K*_{G_2} + η)/K_E, Q_{G_2} = K*_{G_2}/K_E, Q_{G_3} = (K_E − η − 2K*_{G_2})/K_E. (24)

Substituting the Q_{G_1} and Q_{G_2} of (24) into (20), we get the probability q_m, m ∈ G_1 or G_2, that is used to generate the mth reaction in each step of the simulation. For the G_3 group, we get the probability of each reaction as follows:

q_m = Q_{G_3} a_m(x)/a_{G_3}(x), m ∈ G_3, (25)

where a_{G_3}(x) = Σ_{m∈G_3} a_m(x). While we can use the q_m in (25) to generate reactions in the G_3 group, we next develop an optional method for fine-tuning q_m, m ∈ G_3, which can further reduce the variance of P̂(E_R). We divide the G_3 group into three subgroups: G_31, G_32 and G_33. The occurrence of reactions in the G_31 group increases the probability of the occurrence of reactions in the G_1 group or reduces the probability of the occurrence of reactions in the G_2 group, which in turn increases the probability of the rare event. The occurrence of reactions in the G_32 group reduces the probability of the occurrence of reactions in the G_1 group or increases the probability of the occurrence of reactions in the G_2 group, which reduces the probability of the rare event. The occurrence of reactions in the G_33 group does not change the probability of the occurrence of reactions in the G_1 and G_2 groups, and hence does not change the probability of the rare event.
Let K̄_{G_31}, K̄_{G_32} and K̄_{G_33} be the average numbers of reactions from G_31, G_32 and G_33 that occur in the time interval [0, T] in the original system. We define P_{G_31} = K̄_{G_31}/K̄_T, P_{G_32} = K̄_{G_32}/K̄_T and P_{G_33} = K̄_{G_33}/K̄_T. Our goal is to make Q_{G_31} greater than P_{G_31} and Q_{G_32} less than P_{G_32} to increase the probability of the rare event. However, this may not be feasible when Q_{G_3} < P_{G_3}. Hence, we can fine-tune Q_{G_31}, Q_{G_32} and Q_{G_33} only when Q_{G_3} ≥ P_{G_3}, and we propose the formula (26), involving two pre-specified constants a, b ∈ (0, 1), to determine Q_{G_31}, Q_{G_32} and Q_{G_33}. It is not difficult to verify from (26) that P_{G_31} ≤ Q_{G_31} < Q_{G_3} − Q_{G_33} and 0 < Q_{G_32} ≤ P_{G_32} hold provided that a and b satisfy 0 ≤ b < 1 and a corresponding condition on a. Finally, we obtain q_m for m ∈ G_3 as follows:

q_m = Q_{G_3i} a_m(x)/a_{G_3i}(x), m ∈ G_3i, i = 1, 2, 3, (27)

where a_{G_3i}(x) = Σ_{m∈G_3i} a_m(x), i = 1, 2, 3.

Systems with dimerization reactions
So far, we have assumed that the system does not have any dimerization reactions, i.e., that the system consists of reactions with |ν_{im}| = 0 or 1. We now generalize the methods developed earlier to systems with dimerization reactions. If there are dimerization reactions in the G_1 and G_2 groups, we further divide the G_1 group into G_11 and G_12 subgroups and the G_2 group into G_21 and G_22 subgroups. The G_11 group contains reactions with ν_{im} sign(h) = 1, where sign(h) = 1 when h > 0 and sign(h) = −1 when h < 0. The G_12 group contains reactions with ν_{im} sign(h) = 2. The G_21 group contains reactions with ν_{im} sign(h) = −1, while the G_22 group contains reactions with ν_{im} sign(h) = −2.
Let us define K_{G_11} = Σ_{m∈G_11} K_m, K_{G_12} = Σ_{m∈G_12} K_m, K_{G_21} = Σ_{m∈G_21} K_m and K_{G_22} = Σ_{m∈G_22} K_m. Clearly, we have K_{G_1} = K_{G_11} + K_{G_12} and K_{G_2} = K_{G_21} + K_{G_22}. Then, (13) becomes

K_{G_11} + 2K_{G_12} − K_{G_21} − 2K_{G_22} = η. (28)

Let us first consider systems with G_1 and G_2 groups but without a G_3 group. Although we still have K_{G_1} + K_{G_2} = K_E, or equivalently K_{G_11} + K_{G_12} + K_{G_21} + K_{G_22} = K_E, we cannot obtain the four unknowns K_{G_11}, K_{G_12}, K_{G_21} and K_{G_22} from only two equations.
Suppose that K̄_{G_11}, K̄_{G_12}, K̄_{G_21} and K̄_{G_22} are the average numbers of reactions from the G_11, G_12, G_21 and G_22 groups that occur in the time interval [0, T] in the original system. We notice from (20) that we do not change the ratio of the probabilities of two reactions in the same group, i.e., q_{m_1}/q_{m_2} = p_{m_1}/p_{m_2} if m_1 and m_2 are in the same G_1 or G_2 group. Therefore, we would expect that K_{G_12}/K_{G_11} ≈ K̄_{G_12}/K̄_{G_11} and K_{G_22}/K_{G_21} ≈ K̄_{G_22}/K̄_{G_21}. Using these two relationships, we can write (28) as

λ_1 K_{G_1} − λ_2 K_{G_2} = η, (29)

where λ_1 = (K̄_{G_11} + 2K̄_{G_12})/(K̄_{G_11} + K̄_{G_12}) and λ_2 = (K̄_{G_21} + 2K̄_{G_22})/(K̄_{G_21} + K̄_{G_22}). From (17) and (29), we obtain K_{G_1} = (λ_2 K_E + η)/(λ_1 + λ_2) and K_{G_2} = (λ_1 K_E − η)/(λ_1 + λ_2). Substituting K_{G_1} and K_{G_2} into (18) and maximizing Q(X(t) ∈ Ω | K_t = K_E), we obtain

Q_{G_1} = (λ_2 K_E + η)/((λ_1 + λ_2)K_E), Q_{G_2} = (λ_1 K_E − η)/((λ_1 + λ_2)K_E). (30)

We then substitute Q_{G_1} and Q_{G_2} into (20) to get q_m. Now let us consider systems with G_1, G_2 and G_3 reactions. From (29), we have K_{G_1} = (λ_2 K_{G_2} + η)/λ_1, and from (15) and (29), we obtain K_{G_3} = K_E − K_{G_1} − K_{G_2} = K_E − [(λ_1 + λ_2)K_{G_2} + η]/λ_1. Following the derivations in Section 4.2, we can get q_m for any reaction. More specifically, substituting K_{G_1}, K_{G_3} and the upper limit of K_{G_2} into (21), we obtain Q(X(t) ∈ Ω | K_t = K_E). We can also get P(X(t) ∈ Ω | K_t = K_E), similar to (22), by replacing the Q_{G_i} in Q(X(t) ∈ Ω | K_t = K_E) with P_{G_i}. Then, we determine the maximum term of the sum in P(X(t) ∈ Ω | K_t = K_E) and denote its index as K*_{G_2} + 1. We find Q_{G_1}, Q_{G_2} and Q_{G_3} by maximizing the (K*_{G_2} + 1)th term of the sum in Q(X(t) ∈ Ω | K_t = K_E). Finally, we substitute Q_{G_1} and Q_{G_2} into (20) to get q_m, m ∈ G_1 or G_2. For the reactions in the G_3 group, we can either substitute Q_{G_3} into (25) to obtain q_m or, if we want to fine-tune q_m, use (26) and (27) to get q_m.
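The dimerization correction can be sketched as follows; `mean_counts` would come from SSA pre-runs, and the function name is our own illustrative choice.

```python
def dimer_group_probs(KE, eta, mean_counts):
    """lambda_1, lambda_2 and the optimal Q_G1, Q_G2 of (30) for systems with
    dimerization reactions but no G3 group.

    mean_counts holds the average firings (K_G11, K_G12, K_G21, K_G22)
    over [0, T] in the original system."""
    K11, K12, K21, K22 = mean_counts
    lam1 = (K11 + 2.0 * K12) / (K11 + K12)   # average |nu_i| jump per G1 firing
    lam2 = (K21 + 2.0 * K22) / (K21 + K22)
    QG1 = (lam2 * KE + eta) / ((lam1 + lam2) * KE)
    QG2 = (lam1 * KE - eta) / ((lam1 + lam2) * KE)
    return lam1, lam2, QG1, QG2
```

Without dimerization reactions (K_G12 = K_G22 = 0), lambda_1 = lambda_2 = 1 and the result reduces to (19), as expected.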

wSSA and wNRM with parameter selection
The key to determining the probability q_m of each reaction is to find the total probability of each group: Q_{G_1}, Q_{G_2}, Q_{G_31}, Q_{G_32} and Q_{G_33}. This requires the average numbers of reactions of each group occurring during the interval [0, T] in the original system: K̄_T, K̄_{G_11}, K̄_{G_12}, K̄_{G_21}, K̄_{G_22}, K̄_{G_31}, K̄_{G_32} and K̄_{G_33}. If the system is relatively simple, we may obtain these numbers analytically. If we cannot obtain them analytically, we can estimate them by running Gillespie's exact SSA. Since the number of runs needed to estimate these numbers is much smaller than the number of runs needed to estimate the probability of the rare event, the computational overhead is negligible.
We next summarize the wSSA incorporating the parameter selection method in the following algorithm. We will not include the procedure for fine-tuning the probability rate constants of reactions in the G 3 group, but will describe how to add this optional procedure to the algorithm. We will also describe how to modify Algorithm 1 to incorporate the parameter selection procedure into the wNRM.
Algorithm 2 (wSSA with parameter selection): in each simulation step, generate two unit-interval uniform random variables $r_1$ and $r_2$.
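The core of each wSSA step is to pick the next reaction with the biased probabilities $q_m$ while correcting the trajectory weight by the likelihood ratio $p_m/q_m$. A minimal sketch of that step (the function name and signature are our own, not from the paper):

```python
import random

def wssa_step(a, q, w, rng):
    """One importance-sampled reaction selection (illustrative sketch).

    a   -- propensities a_m(x) of all reactions at the current state
    q   -- biased selection probabilities q_m (must sum to 1)
    w   -- current trajectory weight
    rng -- random.Random instance
    Returns (index of fired reaction, time step tau, updated weight)."""
    a0 = sum(a)
    tau = rng.expovariate(a0)        # the firing time still uses the true a0
    r, acc, m = rng.random(), 0.0, 0
    for m, qm in enumerate(q):       # select reaction m with probability q_m
        acc += qm
        if r < acc:
            break
    p_m = a[m] / a0                  # natural selection probability
    return m, tau, w * (p_m / q[m])  # likelihood-ratio weight update

# Example with the production-degradation values q1 = 0.5625, q2 = 0.4375:
m, tau, w = wssa_step([1.0, 1.0], [0.5625, 0.4375], 1.0, random.Random(0))
```

Note that only the reaction selection is biased; the time advance is drawn from the unmodified total propensity, so the weight product over a trajectory is exactly the density ratio needed for an unbiased estimate.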
If $Q_{G_3} \ge P_{G_3}$ and we want to fine-tune the probability rate constants of the reactions in the $G_3$ group, we modify Algorithm 2 as follows. In step 1, we also estimate $K_{G_{31}}$, $K_{G_{32}}$ and $K_{G_{33}}$ and choose the values of $a$ and $b$ in (26). In step 2, we also calculate $Q_{G_{31}}$, $Q_{G_{32}}$ and $Q_{G_{33}}$ from (26). In step 14, we calculate $q_m$ for $G_3$ reactions from (27) instead of (25). Compared with the refined wSSA [23], the wSSA with our parameter selection procedure does not need to guess the parameters for adjusting the probability $q_m$ of each reaction, but directly calculates $q_m$ using a systematically developed method. This has two main advantages. First, our method always adjusts $q_m$ appropriately to reduce the variance of $\hat{P}(E_R)$, whereas the refined wSSA may not adjust $q_m$ as well, especially if the initial guessed values are far from the optimal values. Second, as mentioned earlier, the computational overhead of our method is negligible, whereas the refined wSSA requires non-negligible computational overhead for determining its parameters. Indeed, as we will show in Section 5, the variance of $\hat{P}(E_R)$ provided by the wSSA with our parameter selection method can be more than one order of magnitude lower than that provided by the refined wSSA for a given number of runs $n$. Moreover, the wSSA with our parameter selection method is faster than the refined wSSA, since it requires less computational overhead to adjust $q_m$.
We can also incorporate our parameter selection method, without the fine-tuning procedure, into the wNRM as follows. We replace the first step of Algorithm 1 with the first three steps of Algorithm 2. We then modify the fourth step of Algorithm 1 as follows: evaluate all $a_m(x)$, calculate all $q_m$ from (20) and (25), and calculate all $d_m(x)$ as $d_m(x) = q_m a_0(x)$. Finally, we change the fifth step of Algorithm 1 to the following: find the $a_m(x)$ that need to be updated from the dependency graph $G$ and evaluate them; calculate all $q_m$ from (20) and (25), and calculate all $d_m^{\mathrm{new}}(x)$ as $d_m^{\mathrm{new}}(x) = q_m a_0(x)$. We can also fine-tune the probability rate constants of $G_3$ reactions in the wNRM in the same way as described in the previous paragraph for the wSSA. Note that since our parameter selection method employs a systematic procedure for partitioning reactions into three groups, as discussed earlier, it can be applied to any real chemical reaction system.
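Selecting the next reaction through the modified rates $d_m(x) = q_m a_0(x)$ can be sketched as below. This is a deliberately naive version that regenerates every putative time each step; the actual wNRM reuses putative times through the dependency graph, which is why it needs only one fresh random number per step. The function name and signature are our own:

```python
import random

def wnrm_step(a, q, t, w, rng):
    """One selection step using modified rates d_m(x) = q_m * a0(x)
    (naive sketch: all putative times are regenerated each call)."""
    a0 = sum(a)
    # Putative firing times drawn from the modified rates d_m = q_m * a0.
    times = [(t + rng.expovariate(qm * a0), m)
             for m, qm in enumerate(q) if qm > 0.0]
    t_next, m = min(times)              # the earliest putative reaction fires
    p_m = a[m] / a0                     # natural probability of reaction m
    return m, t_next, w * (p_m / q[m])  # same likelihood-ratio correction

m, t_next, w = wnrm_step([1.0, 1.0], [0.5625, 0.4375], 0.0, 1.0,
                         random.Random(1))
```

Because the minimum of independent exponentials with rates $q_m a_0$ occurs at rate $a_0$ and selects index $m$ with probability $q_m$, this step is statistically equivalent to the biased selection in the wSSA, so the same weight update applies.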

Numerical examples
In this section, we present simulation results for several chemical reaction systems to demonstrate the accuracy and efficiency of the wSSA and the wNRM with our parameter selection method, which we refer to as wSSAps and wNRMps, respectively, in the rest of the paper. All simulations were run in Matlab on a PC with an Intel dual-core 2.67-GHz CPU and 3 GB of memory running Windows XP.

Single species production-degradation model
This simple system was originally used by Kuwahara and Mura [22] and then by Gillespie et al. [23] to test the wSSA and the refined wSSA. It consists of the following two chemical reactions:

$$R_1: S_1 \xrightarrow{c_1} S_1 + S_2, \qquad R_2: S_2 \xrightarrow{c_2} \emptyset. \qquad (31)$$

In reaction $R_1$, species $S_1$ synthesizes species $S_2$ with probability rate constant $c_1$, while in reaction $R_2$, species $S_2$ is degraded with probability rate constant $c_2$. We used the same initial state and probability rate constants as in [22,23]: $X_1(0) = 1$, $X_2(0) = 40$, $c_1 = 1$ and $c_2 = 0.025$.
It is observed that the system is at equilibrium, since $a_1(x_0) = c_1 X_1(0) = c_2 X_2(0) = a_2(x_0)$. It can be shown [22] that $X_2(t)$ is a Poisson random variable with mean 40. References [22,23] sought to estimate $P(E_R) = P_{t \le 100}(X_2 \to \theta \mid x_0)$, the probability that $X_2(t)$ reaches $\theta$ for some $t \le 100$, for several values of $\theta$ between 65 and 80. Since $\theta$ is about four to six standard deviations above the mean value 40, $P_{t \le 100}(X_2 \to \theta \mid x_0)$ is very small. Kuwahara and Mura [22] employed the wSSA to estimate $P(E_R)$ and used $b_1(x) = \delta a_1(x)$ and $b_2(x) = (1/\delta) a_2(x)$ with $\delta = 1.2$ for four different values of $\theta$: 65, 70, 75 and 80. Gillespie et al. [23] applied the refined wSSA to estimate $P(E_R)$ and determined $b_1(x)$ and $b_2(x)$ in the same way, but found that $\delta = 1.2$ is near optimal for $\theta = 65$ and that $\delta = 1.3$ is near optimal for $\theta = 80$. We repeated the simulations of Gillespie et al. [23] for $\theta = 65$, 70, 75 and 80 with $\delta = 1.2$, 1.25, 1.25 and 1.3, respectively. We then applied the wSSAps and the wNRMps to estimate $P(E_R)$ for $\theta = 65$, 70, 75 and 80. This system has only two types of reaction: $R_1$ is a $G_1$ reaction and $R_2$ is a $G_2$ reaction. Since the system is at equilibrium with $a_0(x_0) = 2$, $K_T$ with $T = 100$ is estimated to be 200, and thus $K_E = K_T = 200$.
Using (19), we get $q_1 = Q_{G_1} = (K_E + \theta - 40)/(2K_E)$ and $q_2 = 1 - q_1$. Table 1 gives the estimated probability $\hat{P}(E_R)$ and the sample variance $s^2$ for the wNRMps, the wSSAps and the refined wSSA, obtained from $10^7$ simulation runs with $\theta = 65$, 70, 75 and 80. It is seen that $\hat{P}(E_R)$ is almost identical for all three methods. However, the wNRMps and the wSSAps provide variance almost two orders of magnitude lower than the refined wSSA for $\theta = 80$, and up to almost one order of magnitude lower for $\theta = 75$, 70 and 65. Moreover, the wNRMps and the wSSAps need only about 60% and 70%, respectively, of the CPU time of the refined wSSA. Note that the CPU time for the refined wSSA in Table 1 does not include the time needed to search for the optimal value of $\delta$ for each $\theta$. The lower CPU time of the wNRMps is expected, since it requires generating only one random variable in each step, whereas the wSSAps and the refined wSSA need to generate two. It is also reasonable that the wSSAps requires less CPU time than the refined wSSA, because the wSSAps needs less computation to calculate the probability of each reaction in each step. Figure 1 compares the standard deviation ($\sigma/\sqrt{n}$) of $\hat{P}(E_R)$ for the wSSAps and the refined wSSA for different numbers of runs $n$. Since the wNRMps provides almost the same standard deviation as the wSSAps, we do not plot it in the figure. It is seen that the wSSAps consistently yields a much smaller standard deviation than the refined wSSA for all values of $n$. It was shown in [24] that the dwSSA yields variance similar to the refined wSSA. Therefore, our parameter selection method also substantially outperforms the dwSSA in this example.
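A complete wSSA estimator for this model fits in a few lines, since $q_1$ is a constant here. The sketch below is our own runnable illustration (the run count, seed and loop structure are illustrative choices, not the paper's Matlab implementation):

```python
import random

def wssa_estimate(theta=65, n=5000, T=100.0, c1=1.0, c2=0.025, seed=7):
    """wSSA estimate of P(X2 reaches theta before time T) for model (31),
    using the constant selection probability q1 = (K_E + theta - 40)/(2 K_E)
    derived in the text (n and seed here are illustrative choices)."""
    rng = random.Random(seed)
    K_E = 200.0
    q1 = (K_E + theta - 40) / (2 * K_E)
    acc = 0.0
    for _ in range(n):
        t, x2, w = 0.0, 40, 1.0
        while True:
            a1, a2 = c1 * 1, c2 * x2          # propensities (X1 stays 1)
            a0 = a1 + a2
            t += rng.expovariate(a0)
            if t > T:
                break                         # event not reached in [0, T]
            if rng.random() < q1:             # biased choice of R1
                w *= (a1 / a0) / q1
                x2 += 1
            else:                             # biased choice of R2
                w *= (a2 / a0) / (1.0 - q1)
                x2 -= 1
            if w == 0.0:                      # a zero-propensity move was drawn
                break
            if x2 >= theta:
                acc += w                      # rare event reached: add weight
                break
    return acc / n

p_hat = wssa_estimate()
```

With $\theta = 65$ and $K_E = 200$, the bias $q_1 = 0.5625$ pushes trajectories upward so that a large fraction of runs reach the threshold, while the accumulated weights keep the estimator unbiased.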

A reaction system with $G_1$, $G_2$ and $G_3$ reactions
The previous system contains only a $G_1$ reaction and a $G_2$ reaction. We used the following system with $G_1$, $G_2$ and $G_3$ reactions to test the wNRMps and the wSSAps:

$$R_1: S_1 \xrightarrow{c_1} S_2, \quad R_2: S_2 \xrightarrow{c_2} \emptyset, \quad R_3: S_3 \xrightarrow{c_3} S_3 + S_1, \quad R_4: S_1 \xrightarrow{c_4} \emptyset. \qquad (32)$$

In this system, a monomer $S_1$ converts to $S_2$ with probability rate constant $c_1$, while $S_2$ is degraded with probability rate constant $c_2$. Meanwhile, another species $S_3$ synthesizes $S_1$ with probability rate constant $c_3$, and $S_1$ degrades with probability rate constant $c_4$. In our simulations, we used the following values for the probability rate constants and the initial state: $c_1 = 0.1$, $c_2 = 0.1$, $c_3 = 8$, $c_4 = 0.1$, and $X_1(0) = 40$, $X_2(0) = 40$, $X_3(0) = 1$.
This system is at equilibrium and the mean value of $X_2(t)$ is 40. We are interested in $P(E_R) = P_{t \le 10}(X_2 \to \theta \mid x_0)$, the probability that $X_2(t)$ reaches $\theta$ for some $t \le 10$. We chose $\theta = 65$ and 68 in our simulations. To apply the wSSAps and the wNRMps to estimate $P(E_R)$, we divide the reactions into three groups. The $G_1$ group contains reaction $R_1$; the $G_2$ group includes reaction $R_2$; the $G_3$ group consists of reactions $R_3$ and $R_4$. When fine-tuning the parameters, we further divided $G_3$ into a $G_{31}$ group containing reaction $R_3$ and a $G_{32}$ group containing reaction $R_4$. Since the system is at equilibrium and we have $a_0(x_0) = 20$, $a_1(x_0) = 4$, $a_2(x_0) = 4$, $a_3(x_0) = 8$ and $a_4(x_0) = 4$, we get $K_T = 200$, $K_1 = 40$, $K_2 = 40$, $K_3 = 80$ and $K_4 = 40$. Therefore, we get $K_E = K_T = 200$ and the following probabilities: $P_{G_1} = 0.2$, $P_{G_2} = 0.2$ and $P_{G_3} = 0.6$.
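The group probabilities and average counts quoted above follow directly from the propensities at the equilibrium state, as this short check shows (the dictionary layout is our own illustrative choice):

```python
# Natural group probabilities for system (32), computed from the
# propensities at the initial (equilibrium) state quoted in the text.
a = {'R1': 4.0, 'R2': 4.0, 'R3': 8.0, 'R4': 4.0}    # a_m(x0)
a0 = sum(a.values())                                 # total propensity = 20
T = 10.0
K_T = a0 * T                                         # expected reactions in [0, T]
K = {m: am / a0 * K_T for m, am in a.items()}        # per-reaction averages
P_G1 = a['R1'] / a0                                  # G1 = {R1}
P_G2 = a['R2'] / a0                                  # G2 = {R2}
P_G3 = (a['R3'] + a['R4']) / a0                      # G3 = {R3, R4}
```

This reproduces $K_T = 200$, $K_1 = K_2 = K_4 = 40$, $K_3 = 80$ and $(P_{G_1}, P_{G_2}, P_{G_3}) = (0.2, 0.2, 0.6)$.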
If $\theta = 65$, we have $h = 25$. Using (23), we obtained the value 29 for the index of the maximum term; substituting it into (24), we got $Q_{G_1} = 0.27$, $Q_{G_2} = 0.145$ and $Q_{G_3} = 0.585$. We then chose $a = 0.85$ and $b = 0.80$ and calculated $Q_{G_{31}}$ and $Q_{G_{32}}$ from (26) as $Q_{G_{31}} = 0.4172$ and $Q_{G_{32}} = 0.1678$. Similarly, if $\theta = 68$, we got the value 26, which resulted in $Q_{G_1} = 0.27$ and $Q_{G_2} = 0.13$. Again selecting $a = 0.85$ and $b = 0.80$, we got $Q_{G_{31}} = 0.430$ and $Q_{G_{32}} = 0.170$. To test whether the wNRMps and the wSSAps are sensitive to the parameters $a$ and $b$, we also used another set of parameters, $a = 0.80$ and $b = 0.75$. In order to compare the performance of the wNRMps and the wSSAps with that of the refined wSSA, we also ran simulations with the refined wSSA, choosing the parameters $g_1 = \delta$, $g_2 = 1/\delta$ and $g_m = 1$, $m = 3, 4$, to adjust the propensity functions. Since the optimal value of $\delta$ is unknown, we ran the refined wSSA for several values of $\delta$.

Table 1: Estimated probability of the rare event $\hat{P}(E_R)$ and the sample variance $s^2$, as well as the CPU time (in s), for $10^7$ runs of the wNRMps, the wSSAps and the refined wSSA for the single species production-degradation model (31).

Figure 2 shows the variance of $\hat{P}(E_R)$ obtained from the simulations with the refined wSSA and the wSSAps. Since the wNRMps yielded almost the same variance as the wSSAps, we only plotted the variance obtained from the wSSAps. It is seen that the wSSAps provides variance more than one order of magnitude lower than that provided by the refined wSSA with the best $\delta$. It is also observed that the wSSAps is not very sensitive to the parameters $a$ and $b$, since the variance obtained with the two different sets of values for $a$ and $b$ is almost the same. Table 2 lists $\hat{P}(E_R)$ and its variance obtained from $n = 10^7$ runs of the refined wSSA, the wNRMps and the wSSAps. We first ran the wNRMps and the wSSAps without fine-tuning the probabilities of the reactions in the $G_3$ group, calculating $q_m$ using (25).
We then ran the wNRMps and the wSSAps with fine-tuning of the probabilities of the reactions in the $G_3$ group, using the two sets of parameters ($a = 0.85$, $b = 0.80$ and $a = 0.80$, $b = 0.75$) and (26) to calculate $q_m$ for the $G_3$ reactions. We also made $10^{11}$ runs of the exact SSA to estimate $\hat{P}(E_R)$. It is seen that the wNRMps, the wSSAps and the refined wSSA all yield the same $\hat{P}(E_R)$ as the exact SSA. However, the wNRMps and the wSSAps with fine-tuning of the $G_3$ probabilities offer variance more than one order of magnitude lower than that provided by the refined wSSA. Without fine-tuning, the wNRMps and the wSSAps provided slightly larger variance, but still almost one order of magnitude lower than that of the refined wSSA. Table 2 also shows that the wNRMps and the wSSAps needed only 60-70% of the CPU time of the refined wSSA. Again, the CPU time of the refined wSSA in Table 2 does not include the time needed to search for the optimal value of $\delta$ for each $\theta$; if we included this time, the CPU time of the refined wSSA would almost double.

Enzymatic futile cycle model
The enzymatic futile cycle model used in [22,23] consists of two instances of the elementary single-substrate enzymatic reaction, described by the following six reactions:

$$R_1: S_1 + S_2 \xrightarrow{c_1} S_3, \quad R_2: S_3 \xrightarrow{c_2} S_1 + S_2, \quad R_3: S_3 \xrightarrow{c_3} S_1 + S_5,$$
$$R_4: S_4 + S_5 \xrightarrow{c_4} S_6, \quad R_5: S_6 \xrightarrow{c_5} S_4 + S_5, \quad R_6: S_6 \xrightarrow{c_6} S_4 + S_2.$$

This system essentially consists of a forward-reverse pair of enzyme-substrate reactions, with the conversion of $S_2$ into $S_5$ catalyzed by $S_1$ in the first three reactions and the conversion of $S_5$ into $S_2$ catalyzed by $S_4$ in the last three reactions. We used the same probability rate constants and initial state as in [22,23]: $c_1 = c_2 = c_4 = c_5 = 1$, $c_3 = c_6 = 0.1$, and $X_1(0) = X_4(0) = 1$, $X_2(0) = X_5(0) = 50$, $X_3(0) = X_6(0) = 0$. With these rate constants and this initial state, $X_2(t)$ and $X_5(t)$ tend to equilibrate about their initial value 50. References [22,23] sought to estimate $P(E_R) = P_{t \le 100}(X_5 \to \theta \mid x_0)$, the probability that $X_5(t)$ reaches $\theta$ for some $t \le 100$, for several values of $\theta$ between 25 and 40. We repeated the simulations with the refined wSSA in [23] for $\theta = 25$ and 40. The refined wSSA employed the parameters $g_3 = \delta$, $g_6 = 1/\delta$ and $g_m = 1$, $m = 1, 2, 4, 5$, and we used the best values of $\delta$ determined in [23]: $\delta = 0.35$ for $\theta = 25$ and $\delta = 0.60$ for $\theta = 40$.

Table 2: Estimated probability of the rare event $\hat{P}(E_R)$ and the sample variance $s^2$, as well as the CPU time (in s), for $10^7$ runs of the wNRMps, the wSSAps and the refined wSSA for the system given in (32). The probability of the rare event estimated from $10^{11}$ runs of the exact SSA is $1.14 \times 10^{-4}$ for $\theta = 65$ and $1.49 \times 10^{-5}$ for $\theta = 68$.
In this system, we always have $X_2(t) + X_5(t) = 100$. So when the rare event occurs at time $t$, we have $X_5(t) = \theta$ and $X_2(t) = 100 - \theta$. The rare event is therefore defined as $X_5 = 50 + h$ with $h = \theta - 50$, or equivalently $X_2 = 50 - h$. According to the partition rule defined in Section 4, $R_3$ is a $G_2$ reaction; $R_6$ is a $G_1$ reaction; and $R_1$, $R_2$, $R_4$ and $R_5$ are $G_3$ reactions.
We ran Gillespie's SSA $10^3$ times and obtained the estimate $K_T = 432$, and thus $K_E = K_T = 432$. When $\theta = 40$, we have $h = -10$. Using (23) and $K_E = 432$, we obtained the value 6; substituting it into (24), we got $Q_{G_1} = 16/432$, $Q_{G_2} = 6/432$ and $Q_{G_3} = 410/432$. In this example, there are always certain reactions whose propensity functions are zero, since we always have $X_1(t) + X_3(t) = 1$ and $X_4(t) + X_6(t) = 1$. Due to this special property, we calculate the probability of each reaction as follows. The system has only four states in terms of $(X_3(t), X_6(t))$: 11, 01, 10 or 00. From the $10^3$ runs of Gillespie's exact SSA, we estimated the probability of reactions occurring in each state as $P_{11} \approx 1/2$, $P_{01} = P_{10} \approx 1/4$ and $P_{00} \approx 0$. Note that reaction $R_6$ only occurs in states 11 and 01; we denote its probability in these two states used in the wSSAps as $q_6^{11}$ and $q_6^{01}$ and its natural probability as $p_6^{11}$ and $p_6^{01}$. The probability $p_6^{11}$ can be calculated as $p_6^{11} = 1/22$, and $p_6^{01}$ can be approximated as $p_6^{01} = 1/511$, assuming $X_2(t) = 50$ since the system is in equilibrium. Then, using the relationships $Q_{G_1} = q_6^{11} P_{11} + q_6^{01} P_{01}$ and $q_6^{11}/q_6^{01} = p_6^{11}/p_6^{01}$, we get $q_6^{11} = 0.0725$ and $q_6^{01} = 0.0031$. Reaction $R_3$ only occurs in states 11 and 10, and its probabilities can be obtained similarly as $q_3^{11} = 0.0272$ and $q_3^{10} = 0.0012$. In a state $s$ ($s = 11$, 01, 10 or 00), we calculate $Q_{G_3} = 1 - q_6^s - q_3^s$ and then calculate $q_m^s$, $m = 1, 2, 4, 5$, from (25). Remarkably, the values of $q_6^{11}$, $q_6^{01}$, $q_3^{11}$ and $q_3^{10}$ we calculated are very close to the values used in the refined wSSA, which were obtained by making $10^5$ runs of the refined wSSA for each of seven guessed values of $g$. In contrast, we do not need to guess the values of the parameters but calculate them analytically, and all the information needed in our calculation was obtained from $10^3$ runs of Gillespie's exact SSA, which incurs negligible computational overhead.
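The two relationships above form a linear system that can be solved directly. In the sketch below, $Q_{G_1} = 16/432$ is our reading of the (garbled) printed value; we use it because it reproduces the quoted results $q_6^{11} = 0.0725$ and $q_6^{01} = 0.0031$:

```python
# Recovering the state-dependent selection probabilities of R6 from
#   Q_G1 = q6_11 * P11 + q6_01 * P01   and   q6_11 / q6_01 = p6_11 / p6_01.
# Q_G1 = 16/432 is an assumption (our reading of the printed value), chosen
# because it is consistent with the quoted q6_11 and q6_01.
Q_G1 = 16.0 / 432.0
P11, P01 = 0.5, 0.25                     # fractions of reactions per state
p6_11, p6_01 = 1.0 / 22.0, 1.0 / 511.0   # natural probabilities of R6
ratio = p6_11 / p6_01                    # preserved by the biasing
q6_11 = Q_G1 / (P11 + P01 / ratio)       # substitute q6_01 = q6_11 / ratio
q6_01 = q6_11 / ratio
```

Solving the same pair of equations for $R_3$ over states 11 and 10 yields the quoted $q_3^{11}$ and $q_3^{10}$ in the same way.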
When $\theta = 25$, we have $h = -25$. Using (23) and $K_T = 432$, we obtained the value 3; substituting it into (24), we got $Q_{G_1} = 28/432$, $Q_{G_2} = 3/432$ and $Q_{G_3} = 401/432$. Similarly to the previous calculation, we got $q_6^{11} = 0.1269$, $q_6^{01} = 0.0055$, $q_3^{11} = 0.0136$ and $q_3^{10} = 0.0006$, and then calculated the probabilities of the other reactions from (25). Again, the values of $q_6^{11}$, $q_6^{01}$, $q_3^{11}$ and $q_3^{10}$ we obtained are very close to the values used in the refined wSSA. Table 3 lists the simulation results obtained from $10^6$ runs of the wNRMps, the wSSAps and the refined wSSA for $\theta = 40$ and 25. It is seen that the estimated probability $\hat{P}(E_R)$ and the variance $s^2$ are almost identical for all three methods, which is expected because the probability of each reaction is almost the same in the three methods. This implies that all three methods may have used near-optimal values for the importance sampling parameters. In the previous two systems, by contrast, the parameters used by the refined wSSA were far from their optimal values, because the wSSAps and the wNRMps provided much lower variance than the refined wSSA. It is also seen from Table 3 that the wSSAps used almost the same CPU time as the refined wSSA and that the wNRMps used about 80% of the CPU time of the refined wSSA. Again, the CPU time of the refined wSSA does not include the time needed to find the optimal value of $\delta$.

Table 3: Estimated probability of the rare event $\hat{P}(E_R)$ and the sample variance $s^2$, as well as the CPU time (in s), for $10^6$ runs of the wNRMps, the wSSAps and the refined wSSA for the enzymatic futile cycle model.

Figure 3 depicts the standard deviation of the estimated probability versus the number of simulation runs $n$ for the wSSAps and the refined wSSA. Since the wNRMps provides almost the same standard deviation as the wSSAps, we did not plot it in the figure. It is again seen that the wSSAps and the refined wSSA yield almost the same standard deviation for all values of $n$ in this case.
It was demonstrated in [24] that the dwSSA yields variance comparable to the refined wSSA. Therefore, our parameter selection method offers performance similar to that of the dwSSA in this example.

Conclusion
The wSSA and the refined wSSA are innovative variations of Gillespie's standard SSA. They provide an efficient way to estimate the probability of rare events that occur in chemical reaction systems with an extremely low probability in a given time period. The wSSA was developed based on the direct method of the SSA. In this paper, we developed an alternative wNRM for estimating the probability of a rare event. We also devised a systematic method for selecting the values of the importance sampling parameters, which is absent in the wSSA and the refined wSSA. This parameter selection method was then incorporated into the wSSA and the wNRM. Numerical examples demonstrated that, compared with the refined wSSA and the dwSSA, the wSSA and the wNRM with our parameter selection procedure can substantially reduce the variance of the estimated probability of the rare event and speed up simulation.