Gene regulatory network inference by point-based Gaussian approximation filters incorporating the prior information
- Bin Jia^{1} and
- Xiaodong Wang^{2}
https://doi.org/10.1186/1687-4153-2013-16
© Jia and Wang; licensee Springer. 2013
- Received: 25 July 2013
- Accepted: 11 November 2013
- Published: 17 December 2013
Abstract
The extended Kalman filter (EKF) has been applied to inferring gene regulatory networks. However, it is well known that the EKF becomes less accurate when the system exhibits high nonlinearity. In addition, certain prior information about the gene regulatory network exists in practice, and no systematic approach has been developed to incorporate such prior information into the Kalman-type filter for inferring the structure of the gene regulatory network. In this paper, an inference framework based on point-based Gaussian approximation filters that can exploit the prior information is developed to solve the gene regulatory network inference problem. Different point-based Gaussian approximation filters, including the unscented Kalman filter (UKF), the third-degree cubature Kalman filter (CKF_{3}), and the fifth-degree cubature Kalman filter (CKF_{5}) are employed. Several types of network prior information, including the existing network structure information, sparsity assumption, and the range constraint of parameters, are considered, and the corresponding filters incorporating the prior information are developed. Experiments on a synthetic network of eight genes and the yeast protein synthesis network of five genes are carried out to demonstrate the performance of the proposed framework. The results show that the proposed methods provide more accurate inference results than existing methods, such as the EKF and the traditional UKF.
Keywords
- Gene regulatory network
- Point-based Gaussian approximation filters
- Network prior information
- Sparsity
- Iterative thresholding
1 Introduction
Inferring gene regulatory networks (GRNs) has become one of the most important tasks in systems biology. Genome-wide expression data are widely available thanks to the development of several high-throughput experimental technologies, and a gene regulatory network can be inferred from gene expression samples taken over a period of time. Modeling of the GRN is required before its structure can be inferred. Common dynamical modeling methods for GRNs include Bayesian networks [1], Boolean networks [2], ordinary differential equations [3], state-space models [4, 5], and so on. Various approaches based on different models have been used to infer the network from observed gene expression data, such as Markov chain Monte Carlo (MCMC) methods for the dynamic Bayesian network model [6] and the ordinary differential equation model [7], as well as Kalman filtering methods for the state-space model [4, 8] and the ordinary differential equation model [3]. Survey papers can be found in [9–12].
Due to the stochastic nature of gene expression, the Kalman filtering approach based on the state-space model is one of the most competitive methods for inferring the GRN. The Kalman filter is optimal for linear Gaussian systems; the GRN, however, is generally highly nonlinear, so advanced filtering methods for nonlinear dynamic systems should be considered. The extended Kalman filter (EKF) is probably the most widely used nonlinear filter; it linearizes the nonlinear model using a first-order Taylor series expansion. However, the accuracy of the EKF is low when the system is highly nonlinear or contains large uncertainty. Point-based Gaussian approximation filters have recently been proposed to improve on the EKF; they employ various quadrature rules to compute the integrals involved in exact Bayesian estimation. Many filters fall into this category, such as the unscented Kalman filter (UKF) [13], the Gauss-Hermite quadrature filter [14], the cubature Kalman filter (CKF) [15], and the sparse-grid quadrature filter [16]. Besides the point-based Gaussian approximation filters, the particle filter has drawn much attention recently [17]. The particle filter uses weighted random particles to represent the probability density function (pdf) in Bayesian estimation and provides better estimation results than the EKF. Its main drawback is high computational complexity, which makes it hard to use for high-dimensional problems such as the one considered in this paper.
The EKF and the particle filter have been used for the inference of GRNs [4, 8, 18]. In this paper, we consider the point-based Gaussian approximation filters. Our main objective is to provide a framework for incorporating network prior information into the filters. For example, some gene regulations may be known from the literature [19], and the inference accuracy of the GRN can be improved by incorporating the known regulations [20]. Integration of prior knowledge or constraints with the GRN inference algorithm has been introduced to improve the inference result: the DNA motif sequence in gene promoter regions is incorporated in [21], while modeling of transcription factor interactions is incorporated in [22]. As mentioned in [20], experimentally determined physical interactions can also be obtained. In addition, the sparsity constraint is frequently used in the inference of the GRN. To the best of the authors’ knowledge, the work most closely related to incorporating prior information in Bayesian filters is [8]. In that work, rather than obtaining the inference results directly from the filter, an optimization method is used; in particular, a cost function is minimized in which the sparsity constraint is enforced. However, the cost function in [8] does not account for the uncertainty of the state in the filtering and is, in fact, not well coupled with the filtering algorithm. In addition, it does not consider other kinds of prior information. In this paper, we propose a new framework that incorporates the prior information effectively into the filtering algorithm by solving a constrained optimization problem. Efficient recursive algorithms are provided to solve the associated optimization problem.
The remainder of this paper is organized as follows. In Section 2, the modeling of gene regulatory network is introduced. The point-based Gaussian approximation filters are briefly introduced in Section 3. The proposed new filtering framework is described in Section 4. In Section 5, experimental results are provided. Finally, concluding remarks are given in Section 6.
2 State-space modeling of gene regulatory network
The GRN can be described by a graph in which genes are viewed as nodes and edges depict causal relations between genes. The structure of the GRN reveals the mechanisms of biological cells, and analyzing it will pave the way for curing various diseases [23]. The learning of GRNs has drawn much attention recently due to the availability of microarray data: by analyzing gene expression levels collected over a period of time, one can identify various regulatory relations between genes. To facilitate the analysis of the GRN, modeling is required. Different models can be used, such as Bayesian networks [1], Boolean networks [2], ordinary differential equations [3], and state-space models [4, 5]. The state-space model has been widely used because it incorporates noise and can make use of computationally efficient filtering algorithms [5]. Thus, we also use the state-space modeling of the GRN in this paper.
The gene expression dynamics are modeled by the state equation ${\mathit{x}}_{k}=\mathit{f}({\mathit{x}}_{k-1})+{\mathit{v}}_{k}$ (1), where x_{ k }= [ x_{1,k},…,x_{n,k}]^{ T } is the state vector and x_{i,k} denotes the gene expression level of the i-th gene at time k. f is a nonlinear function that characterizes the regulatory relationship among the genes. v_{ k } is the state noise, assumed to follow a Gaussian distribution with mean 0 and covariance matrix Q_{ k }, i.e., ${\mathit{v}}_{k}\sim \mathcal{N}(0,{\mathit{Q}}_{k})$.
In (2), A is the regulatory coefficient matrix, with a_{ ij } denoting the regulation coefficient from gene j to gene i: a positive a_{ ij } indicates that gene j activates gene i, and a negative a_{ ij } indicates that gene j represses gene i. In (4), μ_{ i } is a parameter. Note that A and the μ_{ i } are unknown. The discrete-time nonlinear stochastic dynamic system [24] shown in Eqs. (1)-(3) has been successfully used to describe the GRN [4, 8]. Equation (4) is the sigmoid function, which is frequently used since it is consistent with the fact that all concentrations saturate at some point in time [25]. The sigmoid function has been used in GRN modeling to verify various methods, such as artificial neural networks [26], simulated annealing and clustering algorithms [27], the extended Kalman filter [4], the particle filter [8], and genetic programming combined with Kalman filtering [25].
The measurements are modeled as ${\mathit{y}}_{k}=\mathit{h}({\mathit{x}}_{k})+{\mathit{n}}_{k}$ (5), where h(·) is some nonlinear function and n_{ k } is the measurement noise, assumed to follow a Gaussian distribution with mean 0 and covariance matrix R_{ k }, i.e., ${\mathit{n}}_{k}\sim \mathcal{N}(0,{\mathit{R}}_{k})$. For example, if noise-corrupted expression levels are observed directly, then h(x) = x.
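To make the model concrete, the dynamics and measurements above can be sketched as follows. This is a minimal illustration, assuming the common form f(x) = A g(x) with the sigmoid g_{ i }(x_{ i }) = 1/(1 + e^{-μ_{ i } x_{ i }}) and the identity measurement h(x) = x; the function names and noise levels are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(x, mu):
    # g_i(x_i) = 1 / (1 + exp(-mu_i * x_i)): saturating regulation function
    return 1.0 / (1.0 + np.exp(-mu * x))

def simulate_grn(A, mu, x0, steps, q_std=0.05, r_std=0.05, seed=0):
    """Simulate x_k = A g(x_{k-1}) + v_k and y_k = x_k + n_k."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    states, measurements = [], []
    for _ in range(steps):
        x = A @ sigmoid(x, mu) + q_std * rng.standard_normal(n)
        states.append(x)
        measurements.append(x + r_std * rng.standard_normal(n))
    return np.array(states), np.array(measurements)
```

The measurement sequence produced this way is the input to the filters discussed next; the regulatory matrix A and the parameters μ are what the inference algorithm must recover.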
3 Network inference using point-based Gaussian approximation filters
3.1 Gaussian approximation filters
In this section, the framework of point-based Gaussian approximation filters for the state-space dynamic model is briefly reviewed. We consider the state-space model consisting of the state Equation (1) and the measurement Equation (5). We denote ${\mathit{y}}^{k}\triangleq \phantom{\rule{0.3em}{0ex}}[\phantom{\rule{0.3em}{0ex}}{\mathit{y}}_{1},\dots ,{\mathit{y}}_{k}]$.
The exact Bayesian filter propagates the posterior pdf through the prediction and update recursions $p({\mathit{x}}_{k}|{\mathit{y}}^{k-1})=\int p({\mathit{x}}_{k}|{\mathit{x}}_{k-1})\,p({\mathit{x}}_{k-1}|{\mathit{y}}^{k-1})\,\mathrm{d}{\mathit{x}}_{k-1}$ (6) and $p({\mathit{x}}_{k}|{\mathit{y}}^{k})\propto p({\mathit{y}}_{k}|{\mathit{x}}_{k})\,p({\mathit{x}}_{k}|{\mathit{y}}^{k-1})$ (7). The recursions in (6) and (7) are in general computationally intractable unless the system is linear and Gaussian. The Gaussian approximation filters approximate (6) and (7) by invoking Gaussian assumptions. Specifically, the first assumption is that given y^{k-1}, x_{k-1} has a Gaussian distribution, i.e., ${\mathit{x}}_{k-1}|{\mathit{y}}^{k-1}\sim \mathcal{N}({\widehat{\mathit{x}}}_{k-1|k-1},{\mathit{P}}_{k-1|k-1})$. The second assumption is that (x_{ k },y_{ k }) are jointly Gaussian given y^{k-1}.
where $\varphi \left(\mathit{x};\phantom{\rule{0.3em}{0ex}}\widehat{\mathit{x}},\mathit{P}\right)$ denotes the multivariate Gaussian pdf with mean $\widehat{\mathit{x}}$ and covariance P.
3.2 Point-based Gaussian approximation filters
where P= S S^{ T } and S can be obtained by Cholesky decomposition or singular value decomposition (SVD).
Various numerical rules can be used to form the approximation in (16), which lead to different Gaussian approximation filters. In particular, the unscented transformation, the Gauss-Hermite quadrature rule, and the sparse-grid quadrature rules are used in the unscented Kalman filter (UKF), the Gauss-Hermite quadrature Kalman filter (GHQF), and the sparse-grid quadrature filter (SGQF), respectively.
Recently, the fifth-degree quadrature filter has been proposed and shown to be more accurate than third-degree quadrature filters, such as the UKF and the third-degree cubature Kalman filter (CKF_{3}), when the system is highly nonlinear or contains large uncertainty [16]. In this paper, we consider the UKF, CKF_{3}, and the fifth-degree cubature Kalman filter (CKF_{5}). Other filters such as the central difference filter [14] and the divided difference filter [28] can also be used. The CKF_{5} is based on Mysovskikh’s method, which uses fewer points than the fifth-degree quadrature filter in [16]. In the following, the different numerical rules used in (16) are briefly summarized.
3.2.1 Unscented transform
where κ is a tunable parameter, and e_{ i } is the i-th n-dimensional unit vector in which the i-th element is 1 and other elements are 0.
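As a sketch (function name ours), the 2n + 1 sigma points $\widehat{\mathit{x}}$ and $\widehat{\mathit{x}}\pm \sqrt{n+\kappa}\,\mathit{S}{\mathit{e}}_{i}$, with weight κ/(n + κ) for the center point and 1/(2(n + κ)) for the others, can be generated as follows; by construction they reproduce the mean and covariance exactly (κ must satisfy n + κ > 0):

```python
import numpy as np

def unscented_points(x_hat, P, kappa=0.0):
    """Sigma points and weights of the unscented transform:
    x_hat and x_hat +/- sqrt(n + kappa) * S e_i, with P = S S^T."""
    n = len(x_hat)
    S = np.linalg.cholesky(P)      # an SVD-based square root also works
    scale = np.sqrt(n + kappa)
    pts = [np.asarray(x_hat, dtype=float)]
    wts = [kappa / (n + kappa)]
    for i in range(n):
        col = scale * S[:, i]      # S e_i is the i-th column of S
        pts += [x_hat + col, x_hat - col]
        wts += [1.0 / (2 * (n + kappa))] * 2
    return np.array(pts), np.array(wts)
```

Propagating these points through the nonlinear functions f and h and reweighting them yields the predicted moments used in the filter updates.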
3.2.2 Cubature rules
where ${U}_{n}=\left\{\mathit{s}\in {\mathit{R}}^{n}:\parallel \mathit{s}\parallel =1\right\}$, and $\sigma \left(\xb7\right)$ is the spherical surface measure or the area element on U_{ n }.
Note that (31) contains two types of integrals: the radial integral $\underset{0}{\overset{\infty}{\int}}{h}_{r}\left(r\right){r}^{n-1}\text{exp}\left(-{r}^{2}\right)\mathrm{d}r$ and the spherical integral $\underset{{U}_{n}}{\int}{\mathit{h}}_{s}\left(\mathit{s}\right)\mathrm{d}\sigma \left(\mathit{s}\right)$.
Remark: The third-degree cubature rule is identical to the unscented transformation with κ = 0.
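A sketch of the third-degree cubature point set (function name ours): 2n equally weighted points $\widehat{\mathit{x}}\pm \sqrt{n}\,\mathit{S}{\mathit{e}}_{i}$, which coincide with the unscented points for κ = 0 (whose center weight vanishes) and likewise match the mean and covariance exactly:

```python
import numpy as np

def cubature_points(x_hat, P):
    """Third-degree spherical-radial cubature points:
    x_hat +/- sqrt(n) * S e_i, each with weight 1 / (2n)."""
    n = len(x_hat)
    S = np.linalg.cholesky(P)
    pts = []
    for i in range(n):
        col = np.sqrt(n) * S[:, i]
        pts += [x_hat + col, x_hat - col]
    return np.array(pts), np.full(2 * n, 1.0 / (2 * n))
```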
3.3 Augmented state-space model for network inference
In the state-space model for gene regulatory networks described in Section 3.2, the underlying network structure is characterized by the n × n regulatory coefficient matrix A in (2) and the parameters μ = [ μ_{1},…,μ_{ n }] in (4). The problem of network inference then becomes estimating A and μ. To do that, we incorporate the unknown parameters A and μ into the state vector to obtain an augmented state-space model, and then apply the point-based Gaussian approximation filters to estimate the augmented state vector, thereby obtaining the estimates of A and μ.
Note that A_{k-1} and g_{k-1} can be obtained from θ_{k-1}, and ${\stackrel{\u0304}{\mathit{v}}}_{k}\sim \mathcal{N}(0,{\stackrel{\u0304}{\mathit{Q}}}_{k})$ with ${\stackrel{\u0304}{\mathit{Q}}}_{k}=\text{diag}\phantom{\rule{0.3em}{0ex}}\left(\left[\phantom{\rule{0.3em}{0ex}}{\mathit{Q}}_{k}\phantom{\rule{1em}{0ex}}{\mathit{O}}_{{n}^{2}+n}\right]\right)$, where O_{ m } denotes an m × m all-zero matrix.
where $\mathit{B}=\phantom{\rule{2.77626pt}{0ex}}[\phantom{\rule{0.3em}{0ex}}{\mathit{I}}_{n},{\mathit{O}}_{n\times ({n}^{2}+n)}]$, ${\mathit{O}}_{n\times ({n}^{2}+n)}$ denotes an n × (n^{2} + n) all zeros matrix.
The point-based Gaussian approximation filters can then be used to obtain the estimate of the augmented state, ${\widehat{\stackrel{\u0304}{\mathit{x}}}}_{k}$, from which the estimates of the unknown network parameters, i.e., $\widehat{\mathit{A}}$ and $\widehat{\mathit{\mu}}$ can then be obtained.
which are the same as the filtering updates for Kalman filters.
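The construction of the augmented state $\stackrel{\u0304}{\mathit{x}}=[\mathit{x};\text{vec}(\mathit{A});\mathit{\mu}]$ and the padded process-noise covariance ${\stackrel{\u0304}{\mathit{Q}}}_{k}=\text{diag}({\mathit{Q}}_{k},{\mathit{O}}_{{n}^{2}+n})$ can be sketched as follows (function name ours; the parameter blocks evolve as constants, so their process noise is zero):

```python
import numpy as np

def augment(x, A, mu, Q):
    """Stack expression levels with the unknown parameters and pad the
    process-noise covariance with zeros for the parameter blocks."""
    n = len(x)
    x_bar = np.concatenate([x, A.ravel(), mu])   # length n + n^2 + n
    Q_bar = np.zeros((len(x_bar), len(x_bar)))
    Q_bar[:n, :n] = Q                            # noise acts on x only
    return x_bar, Q_bar
```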
4 Incorporating prior information
In practice, some prior knowledge of the underlying GRN is typically available. In this section, we outline approaches to incorporating such prior knowledge into the point-based Gaussian approximation filters for network inference. In particular, we consider two classes of prior information: sparsity-type constraints (including known network structure) and range constraints on the parameters. For sparsity constraints, we incorporate an iterative thresholding procedure into the Gaussian approximation filters; to accommodate range constraints, we employ pdf truncation in the Gaussian approximation filters.
4.1 Optimization-based approach for sparsity constraints
4.1.1 The optimization formulations
where ${J}_{p}\left(\stackrel{\u0304}{\mathit{x}}\right)$ is a penalty function associated with the prior information and λ is a tunable parameter that regulates the tightness of the penalty.
Note that (49) can also be interpreted as applying the least absolute shrinkage and selection operator (LASSO) to (47). The LASSO adds an L_{1}-norm penalty so that the regulatory coefficient matrix A tends to be sparse, with many zero elements.
Note that, as in [20], we do not force a_{ ij } = 0 when e_{ ij } = 1 but rather use an L_{1}-norm penalty. The advantage of this approach is that it allows the algorithm to select different structures while favoring the edges without penalties. Here ‘∘’ denotes the entry-wise (Hadamard) product of two matrices.
4.1.2 Iterative thresholding algorithm
Solving the optimization problems in (49) and (50) is not straightforward since |a| is non-differentiable at a = 0. In the following, an efficient solver called the iterative thresholding algorithm is introduced.
where $\mathit{\text{\lambda}}={[\phantom{\rule{0.3em}{0ex}}{\lambda}_{1},{\lambda}_{2},\cdots \phantom{\rule{0.3em}{0ex}},{\lambda}_{{n}^{2}+2n}]}^{T}$ and $L\left(\stackrel{\u0304}{\mathit{x}}\right)$ is a smooth function. Note that if $\mathit{\text{\lambda}}=\phantom{\rule{0.3em}{0ex}}{[\phantom{\rule{0.3em}{0ex}}{0}_{1\times n},\lambda \times {1}_{1\times {n}^{2}},{0}_{1\times n}]}^{T}$, then (51) becomes (49); and if $\mathit{\text{\lambda}}=\phantom{\rule{0.3em}{0ex}}{[\phantom{\rule{0.3em}{0ex}}{0}_{1\times n},\lambda \times \widehat{\underline{\mathit{\theta}}},{0}_{1\times n}]}^{T}$, then (51) becomes (50). Note that $\widehat{\underline{\mathit{\theta}}}=\phantom{\rule{0.3em}{0ex}}{\left[{e}_{11},{e}_{12},\cdots \phantom{\rule{0.3em}{0ex}},{e}_{1n},\cdots \phantom{\rule{0.3em}{0ex}},{e}_{\mathit{\text{nn}}}\right]}^{T}$.
is the soft thresholding function with sign(u) and $\text{max}\left\{\left|\mathit{u}\right|-\mathit{a},0\right\}$ being component-wise operators.
where ε is a given small number.
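The soft thresholding step and the resulting iterative solver can be sketched as an ISTA-style proximal-gradient loop; the step size, function names, and stopping rule below are illustrative assumptions rather than the paper's exact algorithm:

```python
import numpy as np

def soft_threshold(u, a):
    """Component-wise soft thresholding: sign(u) * max(|u| - a, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - a, 0.0)

def iterative_thresholding(grad_L, x0, lam, step=0.1, eps=1e-6, max_iter=1000):
    """Minimize L(x) + sum_i lam_i |x_i|: take a gradient step on the
    smooth part L, then soft-threshold; stop when the update is < eps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = soft_threshold(x - step * grad_L(x), step * lam)
        if np.linalg.norm(x_new - x) < eps:
            return x_new
        x = x_new
    return x
```

For the quadratic L(x) = ½‖x − b‖², the minimizer is the soft-thresholded b, which the loop recovers; in the filter, L is the uncertainty-weighted data-fit term and λ carries the structure penalties.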
4.2 PDF truncation method for range constraints
If the range constraints on the regulatory coefficients are available, the inference accuracy can be improved by enforcing the constraints in the Gaussian approximation filters.
The PDF truncation method [31] can be employed to incorporate the above range constraint into the Gaussian approximation filters, by converting the updated mean ${\widehat{\stackrel{\u0304}{\mathit{x}}}}_{k|k}$ and covariance P_{k|k} to the pseudo mean ${\widehat{\stackrel{\u0304}{\mathit{x}}}}_{k|k}^{t}$ and covariance ${\mathit{P}}_{k|k}^{t}$ which are then used in the next prediction and filtering steps.
After imposing all n constraints, the final constrained state estimate and covariance at time k are given respectively by ${\widehat{\stackrel{\u0304}{\mathit{x}}}}_{k|k}^{t}\triangleq {\widehat{\stackrel{\u0304}{\mathit{x}}}}_{k|k,n}^{t}$ and ${\mathit{P}}_{k|k}^{t}\triangleq {\mathit{P}}_{k|k,n}^{t}$.
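The scalar building block of the pdf truncation method — the mean and variance of a Gaussian density restricted to an interval — can be sketched as follows. In [31] such a correction is applied one (transformed) constraint at a time; this sketch shows only the single-constraint moment computation, with function names ours:

```python
from math import erf, exp, pi, sqrt

def phi(z):
    # standard normal pdf
    return exp(-0.5 * z * z) / sqrt(2 * pi)

def Phi(z):
    # standard normal cdf
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def truncate_gaussian(m, var, lo, hi):
    """Mean and variance of N(m, var) truncated to [lo, hi]."""
    s = sqrt(var)
    a, b = (lo - m) / s, (hi - m) / s
    Z = Phi(b) - Phi(a)                       # mass inside the interval
    mean_t = m + s * (phi(a) - phi(b)) / Z
    var_t = var * (1.0 + (a * phi(a) - b * phi(b)) / Z
                   - ((phi(a) - phi(b)) / Z) ** 2)
    return mean_t, var_t
```

Truncation shrinks the variance (the state is known to lie inside the interval), which then tightens the next prediction step of the filter.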
5 Numerical results
5.1 Synthetic network
and μ_{ i } = 2, i = 1,⋯,8. For the filter, each coefficient in $\widehat{\mathit{A}}$ is initialized from a Gaussian distribution with mean 0 and variance 0.2. Moreover, the coefficient μ_{ i } is initialized from a Gaussian distribution with mean 1.5 and variance 0.2. The system state is initialized using the first measurement.
where the number of true positives (TP #) denotes the number of links correctly predicted by the inference algorithm; the number of false positives (FP #) denotes the number of incorrectly predicted links; the number of true negatives (TN #) denotes the number of correctly predicted nonlinks; and the number of false negatives (FN #) denotes the number of links missed by the inference algorithm [8].
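These counts map to the reported rates as follows (a trivial helper; the function name is ours):

```python
def link_metrics(tp, fp, tn, fn):
    """True positive rate, false positive rate, and precision
    (positive predictive rate) from link counts."""
    tpr = tp / (tp + fn)   # fraction of true links recovered
    fpr = fp / (fp + tn)   # fraction of nonlinks wrongly predicted
    ppv = tp / (tp + fp)   # fraction of predicted links that are true
    return tpr, fpr, ppv
```

For example, the yeast-network counts for UKF_{p 2} in Section 5.2 (TP # = 2, FP # = 3, TN # = 18, FN # = 2) give TPR = 0.5, FPR ≈ 0.1429, and precision = 0.4, matching the values reported there.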
5.1.1 Comparison of the EKF with point-based Gaussian approximation filters
Comparison of UKF with different κ
| Filters | TPR min | TPR max | TPR avg | FPR min | FPR max | FPR avg | PPR min | PPR max | PPR avg |
|---|---|---|---|---|---|---|---|---|---|
| UKF (κ = -5) | 0.7576 | 0.9355 | 0.8472 | 0.5000 | 0.7647 | 0.5955 | 0.5094 | 0.6279 | 0.5824 |
| UKF (κ = -2) | 0.7576 | 0.9355 | 0.8406 | 0.5161 | 0.7647 | 0.5933 | 0.5094 | 0.6279 | 0.5825 |
| UKF (κ = 0) | 0.7576 | 0.9375 | 0.8426 | 0.5161 | 0.7647 | 0.5918 | 0.5094 | 0.6364 | 0.5840 |
| UKF (κ = 2) | 0.7576 | 0.9375 | 0.8407 | 0.5152 | 0.7353 | 0.5895 | 0.5098 | 0.6279 | 0.5841 |
| UKF (κ = 5) | 0.7576 | 0.9063 | 0.8394 | 0.5161 | 0.7353 | 0.5933 | 0.5192 | 0.6279 | 0.5821 |

TPR, true positive rate; FPR, false positive rate; PPR, positive predictive rate.
Comparison of different filters
| Filters | TP# min | TP# max | TP# avg | FP# min | FP# max | FP# avg | TN# min | TN# max | TN# avg | FN# min | FN# max | FN# avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| EKF | 2 | 17 | 10.60 | 23 | 44 | 36.40 | 2 | 15 | 7.08 | 2 | 24 | 9.92 |
| UKF | 25 | 29 | 26.80 | 16 | 26 | 19.28 | 8 | 16 | 13.06 | 2 | 8 | 4.86 |
| CKF_{3} | 25 | 30 | 26.74 | 16 | 26 | 19.10 | 8 | 15 | 13.14 | 2 | 8 | 5.02 |
| CKF_{5} | 25 | 29 | 26.64 | 16 | 26 | 19.24 | 8 | 16 | 13.08 | 1 | 8 | 5.04 |

| Filters | TPR min | TPR max | TPR avg | FPR min | FPR max | FPR avg | PPR min | PPR max | PPR avg |
|---|---|---|---|---|---|---|---|---|---|
| EKF | 0.0769 | 0.8667 | 0.5224 | 0.6053 | 0.9545 | 0.8358 | 0.0800 | 0.3208 | 0.2231 |
| UKF | 0.7576 | 0.9355 | 0.8472 | 0.5000 | 0.7576 | 0.5955 | 0.5094 | 0.6279 | 0.5824 |
| CKF_{3} | 0.7576 | 0.9375 | 0.8426 | 0.5161 | 0.7647 | 0.5918 | 0.5094 | 0.6364 | 0.5840 |
| CKF_{5} | 0.7576 | 0.9667 | 0.8417 | 0.5000 | 0.7647 | 0.5946 | 0.5094 | 0.6279 | 0.5814 |
Based on the above tests, in the rest of the paper, only the UKF is used.
5.1.2 Comparison of the UKF and the UKF incorporating the prior information
As mentioned above, the UKF is used as a typical filter to compare the performance with and without the prior information.
Inferred results of the conventional filter and filters incorporating the prior information
| Filters | TP# min | TP# max | TP# avg | FP# min | FP# max | FP# avg | TN# min | TN# max | TN# avg | FN# min | FN# max | FN# avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| UKF | 25 | 29 | 26.80 | 16 | 26 | 19.28 | 8 | 16 | 13.06 | 2 | 8 | 4.86 |
| UKF_{p 1} | 25 | 29 | 27.34 | 14 | 19 | 16.52 | 13 | 18 | 15.72 | 2 | 8 | 4.42 |
| UKF_{p 2} | 23 | 26 | 24.16 | 13 | 16 | 13.86 | 16 | 18 | 17.20 | 7 | 10 | 8.78 |
| UKF_{p 3} | 25 | 29 | 26.70 | 12 | 24 | 17.50 | 9 | 19 | 14.50 | 3 | 8 | 5.30 |

| Filters | TPR min | TPR max | TPR avg | FPR min | FPR max | FPR avg | PPR min | PPR max | PPR avg |
|---|---|---|---|---|---|---|---|---|---|
| UKF | 0.7576 | 0.9355 | 0.8472 | 0.5000 | 0.7647 | 0.5955 | 0.5094 | 0.6279 | 0.5824 |
| UKF_{p 1} | 0.7576 | 0.9355 | 0.8614 | 0.4375 | 0.5935 | 0.5121 | 0.5778 | 0.6744 | 0.6239 |
| UKF_{p 2} | 0.6970 | 0.7879 | 0.7335 | 0.4194 | 0.5000 | 0.4462 | 0.5897 | 0.6667 | 0.6355 |
| UKF_{p 3} | 0.7576 | 0.9063 | 0.8348 | 0.3871 | 0.7273 | 0.5463 | 0.5294 | 0.6923 | 0.6049 |
Comparison of UKF_{p 1} using different λ

| Filters | TPR min | TPR max | TPR avg | FPR min | FPR max | FPR avg | PPR min | PPR max | PPR avg |
|---|---|---|---|---|---|---|---|---|---|
| UKF_{p 1} (λ = 0.1) | 0.7576 | 0.9355 | 0.8484 | 0.5000 | 0.7647 | 0.5900 | 0.5094 | 0.6279 | 0.5850 |
| UKF_{p 1} (λ = 0.5) | 0.7576 | 0.9677 | 0.8535 | 0.4688 | 0.7647 | 0.5696 | 0.5094 | 0.6512 | 0.5948 |
| UKF_{p 1} (λ = 1) | 0.7576 | 0.9355 | 0.8614 | 0.4375 | 0.5935 | 0.5121 | 0.5778 | 0.6744 | 0.6239 |
| UKF_{p 1} (λ = 5) | 0.7500 | 0.9355 | 0.8439 | 0.3548 | 0.5455 | 0.4672 | 0.5814 | 0.7105 | 0.6456 |
| UKF_{p 1} (λ = 10) | 0.7273 | 0.9063 | 0.8217 | 0.3226 | 0.4848 | 0.4156 | 0.6190 | 0.7368 | 0.6695 |
Effect of strength of the links using different λ
| Filters | TPR min | TPR max | TPR avg | FPR min | FPR max | FPR avg | PPR min | PPR max | PPR avg |
|---|---|---|---|---|---|---|---|---|---|
| ${\text{UKF}}_{\stackrel{~}{p}1}$ (λ = 0.1) | 0.7576 | 0.9677 | 0.8484 | 0.4688 | 0.7647 | 0.5713 | 0.5094 | 0.6512 | 0.5929 |
| ${\text{UKF}}_{\stackrel{~}{p}1}$ (λ = 0.5) | 0.7576 | 0.9333 | 0.8468 | 0.4516 | 0.7059 | 0.5422 | 0.5385 | 0.6512 | 0.6057 |
| ${\text{UKF}}_{\stackrel{~}{p}1}$ (λ = 1) | 0.7500 | 0.9032 | 0.8221 | 0.3750 | 0.5758 | 0.4953 | 0.5814 | 0.6842 | 0.6257 |
| ${\text{UKF}}_{\stackrel{~}{p}1}$ (λ = 5) | 0.7273 | 0.8750 | 0.8220 | 0.3548 | 0.5000 | 0.4169 | 0.6098 | 0.7179 | 0.6684 |
| ${\text{UKF}}_{\stackrel{~}{p}1}$ (λ = 10) | 0.7500 | 0.8750 | 0.8214 | 0.3226 | 0.5000 | 0.4143 | 0.6098 | 0.7368 | 0.6696 |
Effect of false prior information using different λ
| Filters | TPR min | TPR max | TPR avg | FPR min | FPR max | FPR avg | PPR min | PPR max | PPR avg |
|---|---|---|---|---|---|---|---|---|---|
| ${\text{UKF}}_{\stackrel{\u0304}{p}1}$ (λ = 0.1) | 0.7576 | 0.9667 | 0.8491 | 0.5000 | 0.7647 | 0.5933 | 0.5094 | 0.6279 | 0.5835 |
| ${\text{UKF}}_{\stackrel{\u0304}{p}1}$ (λ = 0.5) | 0.7576 | 0.9355 | 0.8535 | 0.4839 | 0.7647 | 0.5962 | 0.5094 | 0.6279 | 0.5836 |
| ${\text{UKF}}_{\stackrel{\u0304}{p}1}$ (λ = 1) | 0.7576 | 0.9333 | 0.8572 | 0.4839 | 0.7059 | 0.6001 | 0.5200 | 0.6279 | 0.5830 |
| ${\text{UKF}}_{\stackrel{\u0304}{p}1}$ (λ = 5) | 0.6970 | 0.8125 | 0.7546 | 0.4194 | 0.5938 | 0.5000 | 0.5682 | 0.6486 | 0.6062 |
| ${\text{UKF}}_{\stackrel{\u0304}{p}1}$ (λ = 10) | 0.5758 | 0.7576 | 0.6810 | 0.3226 | 0.5000 | 0.4066 | 0.5676 | 0.7059 | 0.6369 |
Incorporating LASSO The problem setup is the same as before except that the LASSO penalty, rather than the existing network information, is used. The UKF incorporating LASSO is denoted as UKF_{p 2}.
As shown in Table 3, the average TP # and FP # of UKF_{p 2} are lower than those of UKF, while the average TN # and FN # of UKF_{p 2} are higher than those of UKF. Hence, UKF_{p 2} predicts fewer links, both correct and incorrect, and produces more nonlinks and missed links. This is consistent with the fact that the LASSO tends to provide a sparse solution. It can also be seen from Table 3 that the average FPR of UKF_{p 2} is lower than that of UKF and the average precision of UKF_{p 2} is higher than that of UKF. Hence, by incorporating the LASSO, the inference accuracy is improved.
Comparison of UKF _{ p2 } using different λ
| Filters | TPR min | TPR max | TPR avg | FPR min | FPR max | FPR avg | PPR min | PPR max | PPR avg |
|---|---|---|---|---|---|---|---|---|---|
| UKF_{p 2} (λ = 0.1) | 0.7576 | 0.9355 | 0.8304 | 0.4839 | 0.6970 | 0.5699 | 0.5306 | 0.6512 | 0.5914 |
| UKF_{p 2} (λ = 0.5) | 0.6970 | 0.8710 | 0.7750 | 0.4194 | 0.5758 | 0.4902 | 0.5682 | 0.6585 | 0.6198 |
| UKF_{p 2} (λ = 1) | 0.6970 | 0.7879 | 0.7335 | 0.4194 | 0.5000 | 0.4462 | 0.5897 | 0.6667 | 0.6355 |
| UKF_{p 2} (λ = 5) | 0.4545 | 0.6667 | 0.5501 | 0.3226 | 0.4516 | 0.3791 | 0.5714 | 0.6471 | 0.6064 |
| UKF_{p 2} (λ = 10) | 0.4545 | 0.5455 | 0.4800 | 0.2903 | 0.3871 | 0.3523 | 0.5556 | 0.6538 | 0.5920 |
Incorporating the range constraint The existing network information can be used to provide a rough range constraint on $\stackrel{\u0304}{\mathit{x}}$. A tight constraint is imposed on the regulation coefficient a_{ ij } when there is a small possibility of regulation from gene j to gene i, and a loose constraint is imposed on coefficients with no prior information. In the simulation, for the coefficients corresponding to the zero elements in (79), the lower bound and the upper bound are set as -10 and 10, respectively. For the coefficients corresponding to the nonzero elements in (79), the lower bound and the upper bound are set as -0.1 and 0.1, respectively. The UKF incorporating the range constraint is denoted as UKF_{p 3}. As shown in Table 3, the average FPR of UKF_{p 3} is lower than that of UKF and the average precision of UKF_{p 3} is higher than that of UKF.
5.2 Yeast protein synthesis network
Inferred results of the UKF and UKF_{p 2}

| Filters | TP# | FP# | TN# | FN# |
|---|---|---|---|---|
| UKF | 1 | 7 | 14 | 3 |
| UKF_{p 2} | 2 | 3 | 18 | 2 |

| Filters | TPR | FPR | Precision |
|---|---|---|---|
| UKF | 0.2500 | 0.3333 | 0.1250 |
| UKF_{p 2} | 0.5000 | 0.1429 | 0.4000 |
By incorporating the sparsity constraint, UKF_{p 2} provides much better inference results than UKF. As shown in Table 8, the TP # and TN # of UKF_{p 2} are higher than those of UKF, and the FP # and FN # are lower than those of UKF. In addition, it can be seen from Table 8 that the FPR of UKF_{p 2} is lower than that of UKF, while the TPR and the precision of UKF_{p 2} are higher than those of UKF.
6 Conclusions
In this paper, we have proposed a framework for inferring the gene regulatory network (GRN) from gene expression data using point-based Gaussian approximation filters that incorporate prior knowledge. The performance of the proposed framework is tested on a synthetic network and the yeast protein synthesis network. Numerical results show that the point-based Gaussian approximation filters incorporating prior information achieve higher inference accuracy than traditional filters that do not use prior knowledge. Due to computational complexity considerations, the proposed method is suitable for small- and medium-size GRNs; adapting the proposed inference framework to handle large GRNs at reasonable computational cost remains a topic for future research.
References
- Zou M, Conzen SD: A new dynamic Bayesian network (DBN) approach for identifying gene regulatory networks from time course microarray data. Bioinformatics. 2005, 21 (1): 71-79.
- Zhou X, Wang X, Pal R, Ivanov I, Bittner M, Dougherty ER: A Bayesian connectivity-based approach to constructing probabilistic gene regulatory networks. Bioinformatics. 2004, 20 (17): 2918-2927.
- Quach M, Brunel N, d’Alché-Buc F: Estimating parameters and hidden variables in non-linear state-space models based on ODEs for biological networks inference. Bioinformatics. 2007, 23 (23): 3209-3216.
- Wang Z, Liu X, Liu Y, Liang J, Vinciotti V: An extended Kalman filtering approach to modeling nonlinear dynamic gene regulatory networks via short gene expression time series. IEEE/ACM Trans. Comput. Biol. Bioinformatics. 2009, 6 (3): 410-419.
- Wu X, Li P, Wang N, Gong P, Perkins EJ, Deng Y, Zhang C: State space model with hidden variables for reconstruction of gene regulatory networks. BMC Syst. Biol. 2011, 5 (Suppl 3): S3.
- Werhli AV, Husmeier D: Reconstructing gene regulatory networks with Bayesian networks by combining expression data with multiple sources of prior knowledge. Stat. Appl. Genet. Mol. Biol. 2007, 6: Article 15.
- Mazur J, Ritter D, Reinelt G, Kaderali L: Reconstructing nonlinear dynamic models of gene regulation using stochastic sampling. BMC Bioinformatics. 2009, 10: 448.
- Noor A, Serpedin E, Nounou M, Nounou H: Inferring gene regulatory networks via nonlinear state-space models and exploiting sparsity. IEEE/ACM Trans. Comput. Biol. Bioinformatics. 2012, 9 (4): 1203-1211.
- Hecker M, Lambeck S, Toepfer S, van Someren E, Guthke R: Gene regulatory network inference: data integration in dynamic models - a review. Biosystems. 2009, 96 (1): 86-103.
- Markowetz F, Spang R: Inferring cellular networks - a review. BMC Bioinformatics. 2007, 8 (Suppl 6): S5.
- Huang Y, Tienda-Luna I, Wang Y: Reverse engineering gene regulatory networks. IEEE Signal Process. Mag. 2009, 26 (1): 76-97.
- de Jong H: Modeling and simulation of genetic regulatory systems: a literature review. J. Comput. Biol. 2002, 9: 67-103.
- Julier SJ, Uhlmann JK: Unscented filtering and nonlinear estimation. Proc. IEEE. 2004, 92 (3): 401-422.
- Ito K, Xiong K: Gaussian filters for nonlinear filtering problems. IEEE Trans. Automat. Control. 2000, 45 (5): 910-927.
- Arasaratnam I, Haykin S: Cubature Kalman filters. IEEE Trans. Automat. Control. 2009, 54 (6): 1254-1269.
- Jia B, Xin M, Cheng Y: Sparse-grid quadrature nonlinear filtering. Automatica. 2012, 48 (2): 327-341.
- Arulampalam M, Maskell S, Gordon N, Clapp T: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50 (2): 174-188.
- Shen X, Vikalo H: Inferring parameters of gene regulatory networks via particle filtering. EURASIP J. Adv. Signal Process. 2010, 2010: 204612.
- Steele E, Tucker A, ‘t Hoen PA, Schuemie M: Literature-based priors for gene regulatory networks. Bioinformatics. 2009, 25 (14): 1768-1774.
- Christley S, Nie Q, Xie X: Incorporating existing network information into gene network inference. PLoS ONE. 2009, 4 (8): e6799.
- Tamada Y, Kim S, Bannai H, Imoto S, Tashiro K, Kuhara S, Miyano S: Estimating gene networks from gene expression data by combining Bayesian network model with promoter element detection. Bioinformatics. 2003, 19 (Suppl 2): 227-236.
- Li H, Zhan M: Unraveling transcriptional regulatory programs by integrative analysis of microarray and transcription factor binding data. Bioinformatics. 2008, 24 (17): 1874-1880.
- Bouaynaya N, Shterenberg R, Schonfeld D: Methods for optimal intervention in gene regulatory networks [applications corner]. IEEE Signal Process. Mag. 2012, 29 (1): 158-163.
- Chen L, Aihara K: Chaos and asymptotical stability in discrete-time neural networks. Physica D: Nonlinear Phenomena. 1997, 104 (3): 286-325.
- Qian L, Wang H, Dougherty ER: Inference of noisy nonlinear differential equation models for gene regulatory networks using genetic programming and Kalman filtering. IEEE Trans. Signal Process. 2008, 56 (7): 3327-3339.
- Vohradsky J: Neural model of the genetic network. J. Biol. Chem. 2001, 276 (39): 36168-36173.
- Mjolsness E, Mann T, Castano R, Wold B: From coexpression to coregulation: an approach to inferring transcriptional regulation among gene classes from large-scale expression data. Advances in Neural Information Processing Systems. 1999, 12: 928-934.
- Nørgaard M, Poulsen NK, Ravn O: New developments in state estimation for nonlinear systems. Automatica. 2000, 36 (11): 1627-1638.
- Mysovskikh IP: The approximation of multiple integrals by using interpolatory cubature formulae. Quantitative Approximation, ed. by R DeVore, K Scherer. 1980, Academic Press, New York.
- Jazwinski AH: Stochastic Processes and Filtering Theory. 2007, Academic Press, Waltham, MA.
- Teixeira BO, Tôrres LA, Aguirre LA, Bernstein DS: On unscented Kalman filtering with state interval constraints. J. Process Control. 2010, 20 (1): 45-57.
- Wright S, Nowak R, Figueiredo M: Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 2009, 57 (7): 2479-2493.
- Simon D, Simon DL: Constrained Kalman filtering via density function truncation for turbofan engine health estimation. Int. J. Syst. Sci. 2010, 41 (2): 159-171.
- Emmert-Streib F, Dehmer M: Analysis of Microarray Data. 2008, Wiley-Blackwell, Hoboken, NJ.
- Wang H, Qian L, Dougherty E: Inference of gene regulatory networks using S-system: a unified approach. IET Syst. Biol. 2010, 4 (2): 145-156.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.