
Regularized EM algorithm for sparse parameter estimation in nonlinear dynamic systems with application to gene regulatory network inference

Abstract

Parameter estimation in dynamic systems finds applications in various disciplines, including systems biology. The well-known expectation-maximization (EM) algorithm is a popular method that has been widely used to solve system identification and parameter estimation problems. However, the conventional EM algorithm cannot exploit sparsity. On the other hand, in gene regulatory network inference problems, the parameters to be estimated often exhibit a sparse structure. In this paper, a regularized expectation-maximization (rEM) algorithm for sparse parameter estimation in nonlinear dynamic systems is proposed that is based on maximum a posteriori (MAP) estimation and can incorporate a sparse prior. The expectation step involves forward Gaussian approximation filtering and backward Gaussian approximation smoothing. The maximization step employs a re-weighted iterative thresholding method. The proposed algorithm is then applied to gene regulatory network inference. Results on both synthetic and real data show the effectiveness of the proposed algorithm.

1 Introduction

The dynamic system is a widely used modeling tool that finds applications in many engineering disciplines, and techniques for state estimation in dynamic systems are well established. Recently, the problem of sparse state estimation has received significant interest. For example, various approaches to static sparse state estimation have been developed in [1–4], where the problem is essentially an underdetermined inverse problem, i.e., the number of measurements is small compared to the number of states. Extensions of these methods to dynamic sparse state estimation have been addressed in [5–7].

The expectation-maximization (EM) algorithm has also been applied to the sparse state estimation problem in dynamic systems [8–12]. In particular, in [8–10], the EM algorithm is employed to update the parameters of the Bernoulli-Gaussian prior and of the measurement noise; these parameters are then used in the generalized approximate message passing algorithm [8–10]. In [12, 13], the EM algorithm is used to iteratively estimate the parameters that describe the prior distribution and the noise variances. Moreover, in [14], the EM algorithm is used for blind identification, where the sparsity of the state is exploited. Note that all of the above works consider only linear dynamic systems.

In this paper, we focus on the sparse parameter estimation problem instead of the sparse state estimation problem. We consider a general nonlinear dynamic system, where both the state equation and the measurement equation are parameterized by some unknown parameters that are assumed to be sparse. One particular application is the inference of gene regulatory networks. A gene regulatory network can be modeled by a state-space model [15], in which the gene regulations are represented by the unknown parameters. Gene regulatory networks are known to be sparse, because a gene directly regulates, or is regulated by, only a small number of genes [16–19]. The EM algorithm has been applied to parameter estimation in dynamic systems [20]; however, it cannot exploit the sparsity of the parameters. Here, we propose a regularized expectation-maximization (rEM) algorithm for sparse parameter estimation in nonlinear dynamic systems. Specifically, the sparsity of the parameters is imposed by a Laplace prior, and we consider the approximate maximum a posteriori (MAP) estimate of the parameters. It should be emphasized that the proposed method is an approximate MAP-EM algorithm based on various Gaussian assumptions and quadrature procedures for approximating Gaussian integrals. Note that the MAP-EM algorithm may get stuck at local minima or saddle points. Similar to the conventional EM algorithm, the rEM algorithm consists of an expectation step and a maximization step. The expectation step involves forward Gaussian approximation filtering and backward Gaussian approximation smoothing. The maximization step involves solving an ℓ1 minimization problem, for which a re-weighted iterative thresholding algorithm is employed. To illustrate the proposed sparse parameter estimation method, we consider gene regulatory network inference based on gene expression data.

The unscented Kalman filter has been used for the inference of gene regulatory networks [15, 21, 22]. However, the methods proposed in [15, 21, 22] are fundamentally different from the method proposed in this paper. Firstly, the unscented Kalman filter is applied only once in [15, 21, 22], whereas it is run in each iteration of the rEM algorithm in this paper. Secondly, our proposed rEM algorithm uses not only the unscented Kalman filter but also the unscented Kalman smoother. In essence, the unscented Kalman filter uses only the observations up to time k for the estimate at time k, whereas in our rEM algorithm, all observation data are used for the estimate at time k (through the unscented Kalman smoother). The fundamental difference between the proposed work and that of [9] is that the proposed work addresses the sparse parameter estimation problem of a dynamic system, whereas [9] addresses sparse parameter estimation only for a static problem. In addition, our work deals with a general nonlinear dynamic system, whereas [9] deals only with a linear system. The main difference between the proposed work and that of [23] is that a sparsity constraint is enforced. The main contribution of this paper is to use the sparsity-enforced EM algorithm to solve the sparse parameter estimation problem. In addition, a re-weighted iterative thresholding algorithm is proposed to solve the ℓ1 optimization problem. To the best of the authors' knowledge, the proposed rEM with the re-weighted iterative thresholding optimization is novel. Furthermore, we systematically investigate the performance of the proposed algorithm and compare it with other conventional algorithms.

The remainder of this paper is organized as follows. In Section 2, the problem of sparse parameter estimation in dynamic systems is introduced and the regularized EM algorithm is formulated. In Section 3, the E-step of the rEM, which involves forward-backward recursions and Gaussian approximations, is discussed. Section 4 discusses the ℓ1 optimization problem involved in the maximization step. Application of the proposed rEM algorithm to gene regulatory network inference is discussed in Section 5. Concluding remarks are given in Section 6.

2 Problem statement and the MAP-EM algorithm

We consider a general discrete-time nonlinear dynamic system with unknown parameters, given by the following state and measurement equations:

$$ x_k = f(x_{k-1}, \theta) + u_k, \tag{1} $$

and

$$ y_k = h(x_k, \theta) + v_k, \tag{2} $$

where $x_k$ and $y_k$ are the state vector and the observation vector at time $k$, respectively; $\theta$ is the unknown parameter vector; $f(\cdot)$ and $h(\cdot)$ are two nonlinear functions; $u_k \sim \mathcal{N}(0, U_k)$ is the process noise; and $v_k \sim \mathcal{N}(0, R_k)$ is the measurement noise. It is assumed that $\{u_k\}$ and $\{v_k\}$ are independent noise processes and that they are mutually independent. Note that the nonlinear functions $f$ and $h$ are assumed to be differentiable.

Define the notation $Y_k \triangleq [y_1, \ldots, y_k]$. The problem considered in this paper is to estimate the unknown system parameter vector $\theta$ from the length-$K$ measurement data $Y_K$. We assume that $\theta$ is sparse; in particular, it has a Laplacian prior distribution, which is commonly used as a sparse prior:

$$ p(\theta) = \prod_{i=1}^{m} \frac{\lambda_i}{2}\, e^{-\lambda_i |\theta_i|}. \tag{3} $$

In the EM algorithm and the MAP-EM algorithm [23], given an estimate $\theta'$, a new estimate $\theta''$ is given by

$$ \theta'' = \arg\max_{\theta}\; Q(\theta, \theta'), \tag{4} $$

and

$$ \theta'' = \arg\max_{\theta}\; \left\{ Q(\theta, \theta') + \log p(\theta) \right\}, \tag{5} $$

respectively.

Note that the regularized EM algorithm can be viewed as a special case of MAP-EM; to distinguish the sparsity-enforced EM algorithm from the general MAP-EM algorithm, we use the name rEM. In this paper, the following assumptions are made. (1) The probability density function of the state is assumed to be Gaussian. The Bayesian filter is optimal; however, exact finite-dimensional solutions do not exist, so numerical approximations have to be made. The Gaussian approximation is frequently adopted due to its relatively low complexity and high accuracy [24–26]. (2) The integrals are approximated by quadrature methods. Many numerical rules, such as the Gauss-Hermite quadrature [25], the unscented transformation [27], the cubature rule [24], and the sparse-grid quadrature [26], as well as the Monte Carlo method [28], can be used to approximate the integrals; however, quadrature rules are preferable when computational complexity and accuracy are both considered [29].

We next consider the expression of the Q-function in (5). Due to the Markovian structure of the state-space model (1) to (2), we have

$$ p(X_K, Y_K \mid \theta) = p(x_1 \mid \theta) \prod_{k=2}^{K} p(x_k \mid x_{k-1}, \theta) \prod_{k=1}^{K} p(y_k \mid x_k, \theta). \tag{6} $$

Therefore,

$$
\begin{aligned}
Q(\theta, \theta') &= \int \log p(X_K, Y_K \mid \theta)\, p(X_K \mid Y_K, \theta')\, dX_K \\
&= \int \log p(x_1 \mid \theta)\, p(x_1 \mid Y_K, \theta')\, dx_1 \\
&\quad + \sum_{k=2}^{K} \int \underbrace{\log p(x_k \mid x_{k-1}, \theta)}_{-\frac{1}{2}\left(x_k - f(x_{k-1},\theta)\right)^T U_k^{-1} \left(x_k - f(x_{k-1},\theta)\right) - c_k}\; p(x_k, x_{k-1} \mid Y_K, \theta')\, dx_{k-1}\, dx_k \\
&\quad + \sum_{k=1}^{K} \int \underbrace{\log p(y_k \mid x_k, \theta)}_{-\frac{1}{2}\left(y_k - h(x_k,\theta)\right)^T R_k^{-1} \left(y_k - h(x_k,\theta)\right) - d_k}\; p(x_k \mid Y_K, \theta')\, dx_k,
\end{aligned} \tag{7}
$$

where $c_k \triangleq \frac{1}{2}\left[\log|U_k| + \dim(x_k)\log(2\pi)\right]$ and $d_k \triangleq \frac{1}{2}\left[\log|R_k| + \dim(y_k)\log(2\pi)\right]$. We assume that the initial state $x_1$ is independent of the parameter $\theta$. Hence, with the prior given in (3), the optimization in (5) can be rewritten as

$$
\begin{aligned}
\theta'' &= \arg\max_{\theta}\; \left\{ Q(\theta, \theta') + \log p(\theta) \right\} \\
&= \arg\min_{\theta}\; \Bigg\{ \sum_{k=2}^{K} \int \left[ 2c_k + \left(x_k - f(x_{k-1},\theta)\right)^T U_k^{-1} \left(x_k - f(x_{k-1},\theta)\right) \right] p(x_k, x_{k-1} \mid Y_K, \theta')\, dx_{k-1}\, dx_k \\
&\qquad\quad + \sum_{k=1}^{K} \int \left[ 2d_k + \left(y_k - h(x_k,\theta)\right)^T R_k^{-1} \left(y_k - h(x_k,\theta)\right) \right] p(x_k \mid Y_K, \theta')\, dx_k + 2\left\|\lambda \circ \theta\right\|_1 \Bigg\},
\end{aligned} \tag{8}
$$

where $\lambda = [\lambda_1, \lambda_2, \ldots, \lambda_m]^T$, and '$\circ$' denotes point-wise multiplication.

Note that in many applications, the unknown parameters $\theta$ enter only the state equation and not the measurement equation; in that case, the second term in (8) can be removed. In the next section, we discuss the procedures for computing the densities $p(x_k, x_{k-1} \mid Y_K, \theta')$ and $p(x_k \mid Y_K, \theta')$, the integrals, and the minimization in (8).
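Before presenting the two steps in detail, the following minimal Python sketch shows the outer loop of the rEM iteration in (5). The helpers `e_step` and `m_step` are hypothetical placeholders for the forward-backward recursions of Section 3 and the re-weighted thresholding solver of Section 4, and the stopping rule on the parameter change is an illustrative choice, not the paper's.

```python
import numpy as np

def rem(y, theta0, e_step, m_step, n_iter=50, tol=1e-4):
    """Skeleton of the rEM iteration (5): alternate an E-step and a
    regularized M-step until the parameter estimate stabilizes.
    e_step and m_step are user-supplied callables (placeholders here)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        # E-step: forward Gaussian approximation filter + backward smoother,
        # returning the smoothed moments that define Q(theta, theta').
        moments = e_step(y, theta)
        # M-step: minimize Q~(theta, theta') + 2 * ||lambda o theta||_1.
        theta_new = m_step(moments, theta)
        if np.linalg.norm(theta_new - theta) <= tol * (np.linalg.norm(theta) + tol):
            return theta_new
        theta = theta_new
    return theta
```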

3 The E-step: computing the Q-function

We first discuss the calculation of the probability density functions of the states, $p(x_k, x_{k-1} \mid Y_K, \theta')$ and $p(x_k \mid Y_K, \theta')$, in (8). This involves a forward recursion of a point-based Gaussian approximation filter to compute $p(x_k \mid Y_k, \theta')$ and $p(x_{k+1} \mid Y_k, \theta')$ for $k = 1, 2, \ldots, K$, and a backward recursion of a point-based Gaussian approximation smoother to compute $p(x_k, x_{k-1} \mid Y_K, \theta')$ and $p(x_k \mid Y_K, \theta')$ for $k = K, K-1, \ldots, 1$. For notational simplicity, we drop the parameter $\theta'$ in the remainder of this section.

3.1 Forward recursion

The forward recursion is composed of two steps: prediction and filtering. Specifically, given the prior probability density function (PDF) $p(x_{k-1} \mid Y_{k-1})$ at time $k-1$, we first compute the predicted conditional PDF $p(x_k \mid Y_{k-1})$; then, given the measurement $y_k$ at time $k$, we update the filtered PDF $p(x_k \mid Y_k)$. These PDF recursions are in general computationally intractable unless the system is linear and Gaussian. The Gaussian approximation filters are based on the following two assumptions: (1) given $Y_{k-1}$, $x_{k-1}$ has a Gaussian distribution, i.e., $x_{k-1} \mid Y_{k-1} \sim \mathcal{N}(\hat{x}_{k-1|k-1}, P_{k-1|k-1})$; and (2) given $Y_{k-1}$, $(x_k, y_k)$ are jointly Gaussian.

It then follows that the predictive PDF is Gaussian, i.e., $x_k \mid Y_{k-1} \sim \mathcal{N}(\hat{x}_{k|k-1}, P_{k|k-1})$, with [24, 26, 27]

$$ \hat{x}_{k|k-1} \triangleq E\{x_k \mid Y_{k-1}\} = E_{x_{k-1}|Y_{k-1}}\left\{ f(x_{k-1}) \right\}, \tag{9} $$

$$ P_{k|k-1} \triangleq \operatorname{Cov}\{x_k \mid Y_{k-1}\} = E_{x_{k-1}|Y_{k-1}}\left\{ \left(f(x_{k-1}) - \hat{x}_{k|k-1}\right)\left(f(x_{k-1}) - \hat{x}_{k|k-1}\right)^T \right\} + U_k, \tag{10} $$

where $E_{x_{k-1}|Y_{k-1}}\left\{ g(x_{k-1}) \right\} = \int g(x)\, \phi(x; \hat{x}_{k-1|k-1}, P_{k-1|k-1})\, dx$, and $\phi(x; \hat{x}, P)$ denotes the multivariate Gaussian PDF with mean $\hat{x}$ and covariance $P$.

Moreover, the filtered PDF is also Gaussian, i.e., $x_k \mid Y_k \sim \mathcal{N}(\hat{x}_{k|k}, P_{k|k})$ [24, 26, 27], where

$$ \hat{x}_{k|k} \triangleq E\{x_k \mid Y_k\} = \hat{x}_{k|k-1} + L_k \left(y_k - \hat{y}_{k|k-1}\right), \tag{11} $$

and

$$ P_{k|k} \triangleq \operatorname{Cov}\{x_k \mid Y_k\} = P_{k|k-1} - L_k \left(P_k^{xy}\right)^T, \tag{12} $$

with

$$ \hat{y}_{k|k-1} = E_{x_k|Y_{k-1}}\left\{ h(x_k) \right\}, \tag{13} $$

$$ L_k = P_k^{xy} \left(R_k + P_k^{yy}\right)^{-1}, \tag{14} $$

$$ P_k^{xy} = E_{x_k|Y_{k-1}}\left\{ \left(x_k - \hat{x}_{k|k-1}\right)\left(h(x_k) - \hat{y}_{k|k-1}\right)^T \right\}, \tag{15} $$

$$ P_k^{yy} = E_{x_k|Y_{k-1}}\left\{ \left(h(x_k) - \hat{y}_{k|k-1}\right)\left(h(x_k) - \hat{y}_{k|k-1}\right)^T \right\}. \tag{16} $$
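For concreteness, the sketch below implements one prediction/filtering cycle of (9) to (16), with the Gaussian expectations replaced by a generic unit point set (e.g., the unscented set of Section 3.3) via Eq. (26). The function signature, the suppressed dependence on θ, and the use of a Cholesky factor are illustrative choices rather than the authors' code.

```python
import numpy as np

def ga_filter_step(x_hat, P, y, f, h, U, R, gamma, w):
    """One step of the point-based Gaussian approximation filter,
    Eqs. (9)-(16); gamma (N, n) and w (N,) are unit quadrature points
    and weights satisfying Eq. (25).  P, U, R must be positive definite."""
    def propagate(mean, cov, func):
        # Transform the unit points to N(mean, cov) and push them through func.
        X = mean + gamma @ np.linalg.cholesky(cov).T
        F = np.array([func(x) for x in X])
        return X, F, w @ F
    # Prediction, Eqs. (9)-(10).
    _, F, x_pred = propagate(x_hat, P, f)
    P_pred = (F - x_pred).T @ ((F - x_pred) * w[:, None]) + U
    # Update, Eqs. (11)-(16), with points redrawn from the predictive PDF.
    X, H, y_pred = propagate(x_pred, P_pred, h)
    Pxy = (X - x_pred).T @ ((H - y_pred) * w[:, None])    # Eq. (15)
    Pyy = (H - y_pred).T @ ((H - y_pred) * w[:, None])    # Eq. (16)
    L = Pxy @ np.linalg.inv(R + Pyy)                      # Eq. (14)
    x_filt = x_pred + L @ (y - y_pred)                    # Eq. (11)
    P_filt = P_pred - L @ Pxy.T                           # Eq. (12)
    return x_pred, P_pred, x_filt, P_filt
```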

3.2 Backward recursion

In the backward recursion, we compute the smoothed PDFs $p(x_k, x_{k+1} \mid Y_K)$ and $p(x_k \mid Y_K)$. Here, the approximating assumption is that, conditioned on $Y_k$, $x_k$ and $x_{k+1}$ are jointly Gaussian [30], i.e.,

$$ \begin{bmatrix} x_k \\ x_{k+1} \end{bmatrix} \Bigg|\, Y_k \sim \mathcal{N}\!\left( \begin{bmatrix} \hat{x}_{k|k} \\ \hat{x}_{k+1|k} \end{bmatrix}, \begin{bmatrix} P_{k|k} & C_k \\ C_k^T & P_{k+1|k} \end{bmatrix} \right), \tag{17} $$

with

$$ C_k \triangleq \operatorname{Cov}\{x_k, x_{k+1} \mid Y_k\} = E_{x_k|Y_k}\left\{ \left(x_k - \hat{x}_{k|k}\right)\left(f(x_k) - \hat{x}_{k+1|k}\right)^T \right\}. \tag{18} $$

Due to the Markov property of the state-space model, we have $p(x_k \mid x_{k+1}, Y_K) = p(x_k \mid x_{k+1}, Y_k)$. Therefore, we can write [30]

$$ p(x_k, x_{k+1} \mid Y_K) = p(x_k \mid x_{k+1}, Y_K)\, p(x_{k+1} \mid Y_K) = p(x_k \mid x_{k+1}, Y_k)\, p(x_{k+1} \mid Y_K). \tag{19} $$

Now, assume that

$$ x_{k+1} \mid Y_K \sim \mathcal{N}(\tilde{x}_{k+1}, \tilde{P}_{k+1}), \quad \text{with } \tilde{x}_K = \hat{x}_{K|K},\; \tilde{P}_K = P_{K|K}. \tag{20} $$

It then follows from (17) and (19) that [30]

$$ \begin{bmatrix} x_k \\ x_{k+1} \end{bmatrix} \Bigg|\, Y_K \sim \mathcal{N}\!\left( \begin{bmatrix} \tilde{x}_k \\ \tilde{x}_{k+1} \end{bmatrix}, \begin{bmatrix} \tilde{P}_k & D_k \tilde{P}_{k+1} \\ \tilde{P}_{k+1} D_k^T & \tilde{P}_{k+1} \end{bmatrix} \right), \tag{21} $$

where

$$ \tilde{x}_k = \hat{x}_{k|k} + D_k \left(\tilde{x}_{k+1} - \hat{x}_{k+1|k}\right), \tag{22} $$

$$ \tilde{P}_k = P_{k|k} + D_k \left(\tilde{P}_{k+1} - P_{k+1|k}\right) D_k^T, \tag{23} $$

$$ D_k = C_k P_{k+1|k}^{-1}. \tag{24} $$

3.3 Approximating the integrals

The integrals associated with the expectations in the forward-backward recursions for computing the approximate state PDFs, i.e., (9), (10), (13), (15), (16), and (18), as well as the integrals involved in computing the function $Q(\theta, \theta')$ in (8), are Gaussian-type integrals that can be efficiently approximated by various quadrature methods. Specifically, if a set of weighted points $\{(\gamma_i, w_i),\; i = 1, \ldots, N\}$ can be used to approximate the integral

$$ E_{\mathcal{N}(0, I)}\{ g(x) \} = \int g(x)\, \phi(x; 0, I)\, dx \approx \sum_{i=1}^{N} w_i\, g(\gamma_i), \tag{25} $$

then the general Gaussian-type integral can be approximated by

$$ E_{\mathcal{N}(\hat{x}, P)}\{ g(x) \} = \int g(x)\, \phi(x; \hat{x}, P)\, dx \approx \sum_{i=1}^{N} w_i\, g(S \gamma_i + \hat{x}), \tag{26} $$

where $P = S S^T$ and $S$ can be obtained by Cholesky decomposition or singular value decomposition (SVD).
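As a minimal illustration of (26), the helper below approximates a Gaussian expectation given any unit point set; the name `gauss_expect` and its signature are our own.

```python
import numpy as np

def gauss_expect(g, x_hat, P, gamma, w):
    """Approximate E[g(x)] for x ~ N(x_hat, P) via Eq. (26), given unit
    points gamma (N, n) and weights w (N,) satisfying Eq. (25)."""
    S = np.linalg.cholesky(P)   # any factor with P = S S^T works; SVD is an alternative
    vals = np.array([g(S @ gi + x_hat) for gi in gamma])
    return np.tensordot(w, vals, axes=1)   # sum_i w_i g(S gamma_i + x_hat)
```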

By using different point sets, different Gaussian approximation filters and smoothers can be obtained, such as those based on the Gauss-Hermite quadrature (GHQ) [25], the unscented transform (UT) [27], the spherical-radial cubature rule (CR) [24], the sparse-grid quadrature rule (SGQ) [26], and the quasi-Monte Carlo method (QMC) [28]. Both the UT and the CR are third-degree numerical rules, which means that the integral is computed exactly when g(x) is a polynomial of degree up to three. In addition, the form of the CR is identical to that of the UT with a specific parameter. The main advantage of the UT and the CR is that the number of points they require increases only linearly with the dimension. However, one problem with the UT and the CR is that high-order information of the nonlinear function is difficult to capture, so the accuracy may be low when g(x) is highly nonlinear. The GHQ rule, in contrast, can capture arbitrary-degree information of g(x) by using more points; it has been shown that the GHQ can provide more accurate results than the UT or the CR [25, 26]. Similarly, the QMC method can also obtain more accurate results than the UT. However, both the GHQ rule and the QMC method require a large number of points for high-dimensional problems; in particular, the number of points required by the GHQ rule increases exponentially with the dimension. To achieve performance similar to that of the GHQ with a smaller number of points, the SGQ was proposed [26], in which the number of points increases only polynomially with the dimension.

For the numerical results in this paper, the UT is used in the Gaussian approximation filter and smoother, with $N = 2n + 1$ points, where $n$ is the dimension of the state vector $x_k$. The quadrature points and the corresponding weights are given, respectively, by

$$ \gamma_i = \begin{cases} 0, & i = 1, \\ \sqrt{n+\kappa}\; e_{i-1}, & i = 2, \ldots, n+1, \\ -\sqrt{n+\kappa}\; e_{i-n-1}, & i = n+2, \ldots, 2n+1, \end{cases} \tag{27} $$

and

$$ w_i = \begin{cases} \dfrac{\kappa}{n+\kappa}, & i = 1, \\[4pt] \dfrac{1}{2(n+\kappa)}, & i = 2, \ldots, 2n+1, \end{cases} \tag{28} $$

where $\kappa$ is a tunable parameter and $e_i$ is the $i$th $n$-dimensional unit vector. Note that $\kappa = 0$ is used as the default value in this paper, as in the cubature Kalman filter [24]. In addition, $\kappa = 3 - n$ can also be used, as in the unscented Kalman filter [27].
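The following sketch generates the point set of (27) and (28) and checks that it reproduces the first two moments of $\mathcal{N}(0, I)$ exactly; with $\kappa = 0$, the center point receives zero weight and the rule coincides with the cubature rule of [24].

```python
import numpy as np

def ut_points(n, kappa=0.0):
    """Unscented-transform points and weights of Eqs. (27)-(28) for the
    n-dimensional unit Gaussian."""
    gamma = np.vstack([np.zeros((1, n)),
                       np.sqrt(n + kappa) * np.eye(n),
                       -np.sqrt(n + kappa) * np.eye(n)])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return gamma, w

# Sanity check: the rule matches the mean and covariance of N(0, I).
g, w = ut_points(3)
assert np.allclose(w @ g, 0.0)
assert np.allclose((g * w[:, None]).T @ g, np.eye(3))
```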

4 The M-step: solving the ℓ1 optimization problem

Solving the ℓ1 optimization problem in (8) is not trivial, since $|\theta_i|$ is nondifferentiable at $\theta_i = 0$. The ℓ1 optimization is a useful tool for obtaining sparse solutions. Methods for solving linear inverse problems with sparsity constraints are reviewed in [1]. Some more recent developments include the projected scaled subgradient method [31], the gradient support pursuit method [32], and the greedy sparse-simplex method [33]. In this paper, for the maximization step of the proposed rEM algorithm, we employ a modified version of the iterative thresholding algorithm due to its simplicity of implementation.

4.1 Iterative thresholding algorithm

Denote by $\tilde{Q}(\theta, \theta')$ the sum of the two summation terms in (8). We consider the optimization problem in (8),

$$ \arg\min_{\theta}\; J(\theta) = \tilde{Q}(\theta, \theta') + 2\left\|\lambda \circ \theta\right\|_1. \tag{29} $$

The solution to (29) can be obtained iteratively by solving a sequence of simpler optimization problems [34]. As in Newton's method, the Taylor series expansion of $\tilde{Q}(\theta, \theta')$ around the solution $\theta^t$ at the $t$th iteration is given by

$$ \tilde{Q}(\theta^t + \Delta\theta, \theta') \approx \tilde{Q}(\theta^t, \theta') + \Delta\theta^T \nabla\tilde{Q}(\theta^t, \theta') + \frac{\alpha_t}{2} \left\|\Delta\theta\right\|_2^2, \tag{30} $$

where $\nabla\tilde{Q}$ is the gradient of the negative Q-function and $\alpha_t$ is chosen such that $\alpha_t I$ mimics the Hessian $\nabla^2 \tilde{Q}$. Then, $\theta^{t+1}$ is given by

$$ \theta^{t+1} = \arg\min_{z}\; (z - \theta^t)^T \nabla\tilde{Q}(\theta^t, \theta') + \frac{\alpha_t}{2} \left\|z - \theta^t\right\|_2^2 + 2\left\|\lambda \circ z\right\|_1, \tag{31} $$

where z denotes the variable to be optimized in the objective function.

The equivalent form of (31) is given by

$$ \theta^{t+1} = \arg\min_{z}\; \frac{1}{2} \left\|z - u^t\right\|_2^2 + \frac{2}{\alpha_t} \left\|\lambda \circ z\right\|_1, \tag{32} $$

with

$$ u^t = \theta^t - \frac{1}{\alpha_t} \nabla\tilde{Q}(\theta^t, \theta'), \tag{33} $$

$$ \alpha_t \triangleq \frac{(s^t)^T r^t}{\left\|s^t\right\|^2}, \tag{34} $$

$$ s^t = \theta^t - \theta^{t-1}, \tag{35} $$

$$ r^t = \nabla\tilde{Q}(\theta^t, \theta') - \nabla\tilde{Q}(\theta^{t-1}, \theta'). \tag{36} $$

Note that Equation (34) is derived as follows. Since we require that $\alpha_t I$ mimic the Hessian $\nabla^2 \tilde{Q}$, i.e., $\alpha_t s^t \approx r^t$, solving for $\alpha_t$ in the least-squares sense yields

$$ \alpha_t \triangleq \arg\min_{\alpha}\; \left\|\alpha s^t - r^t\right\|_2^2 = \frac{(s^t)^T r^t}{(s^t)^T s^t}. \tag{37} $$

The solution to (32) is given by $\theta^{t+1} = \eta_S\!\left(u^t, \frac{2\lambda}{\alpha_t}\right)$, where

$$ \eta_S(u^t, a) = \operatorname{sign}(u^t) \max\left\{ |u^t| - a,\; 0 \right\} \tag{38} $$

is the soft-thresholding function, with $\operatorname{sign}(u^t)$ and $\max\{|u^t| - a, 0\}$ applied component-wise.

Finally, the iterative procedure for solving (29) is given by

$$ \theta^{t+1} = \operatorname{sign}\!\left( \theta^t - \frac{1}{\alpha_t} \nabla\tilde{Q}(\theta^t, \theta') \right) \max\left\{ \left| \theta^t - \frac{1}{\alpha_t} \nabla\tilde{Q}(\theta^t, \theta') \right| - \frac{2\lambda}{\alpha_t},\; 0 \right\}. \tag{39} $$

The iteration stops when the following condition is met:

$$ \frac{\left| J(\theta^{t+1}) - J(\theta^t) \right|}{\left| J(\theta^t) \right|} \leq \varepsilon, \tag{40} $$

where ε is a given small number.
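Putting (33) to (40) together, a minimal Python sketch of the solver might look as follows. The gradient of $\tilde{Q}$ is supplied as a callable; as simplifications of our own, the stopping test monitors the change in θ rather than the objective ratio (40), and the step of Eq. (34) is accepted only when it is positive.

```python
import numpy as np

def soft(u, a):
    """Soft-thresholding operator of Eq. (38), applied component-wise."""
    return np.sign(u) * np.maximum(np.abs(u) - a, 0.0)

def ist(grad_Q, theta0, lam, alpha0=1.0, eps=1e-6, max_iter=200):
    """Iterative thresholding for Eq. (29): theta <- soft(u_t, 2 lam / alpha_t),
    Eqs. (32)-(39), with alpha_t updated by the secant rule (34)."""
    theta = np.asarray(theta0, dtype=float)
    alpha, g_old = alpha0, grad_Q(theta)
    for _ in range(max_iter):
        theta_new = soft(theta - g_old / alpha, 2.0 * lam / alpha)  # Eq. (39)
        g_new = grad_Q(theta_new)
        s, r = theta_new - theta, g_new - g_old      # Eqs. (35)-(36)
        if np.linalg.norm(s) <= eps * (np.linalg.norm(theta) + eps):
            return theta_new                         # proxy for Eq. (40)
        if s @ r > 0.0:
            alpha = (s @ r) / (s @ s)                # Eq. (34)
        theta, g_old = theta_new, g_new
    return theta
```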

4.2 Adaptive selection of λ

So far, the parameters λ_i in the Laplace prior have been fixed. Here, we propose to tune them adaptively based on the output of the iterative thresholding algorithm. The resulting algorithm consists of solving a sequence of weighted ℓ1-minimization problems, where the λ_i used in the next iteration are computed from the current solution. A good choice of λ_i is to make them counteract the influence of the magnitudes on the ℓ1 penalty function [35]. Following this idea, we propose an iterative re-weighted thresholding algorithm. At the beginning of the maximization step, we set λ_i = 1 for all i. We then run the iterative thresholding algorithm to obtain θ. Next, we update λ_i as λ_i = 1/(|θ_i| + ε) for all i, where ε is a small positive number, and run the iterative thresholding algorithm again using the new λ. The above process is repeated until convergence, at which point the maximization step completes. Note that under the iterative re-weighted thresholding algorithm, the assumption that θ has a Laplacian prior no longer holds.
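The re-weighting loop then simply wraps the solver above; the sketch below reuses the hypothetical `ist()` from the previous listing.

```python
import numpy as np

def reweighted_ist(grad_Q, theta0, n_outer=5, eps=1e-3):
    """Re-weighted iterative thresholding of Section 4.2: alternate the
    ist() solver with the weight update lambda_i = 1 / (|theta_i| + eps)."""
    theta = np.asarray(theta0, dtype=float)
    lam = np.ones_like(theta)               # start with lambda_i = 1 for all i
    for _ in range(n_outer):
        theta_new = ist(grad_Q, theta, lam)
        if np.allclose(theta_new, theta, atol=eps):
            return theta_new
        theta = theta_new
        lam = 1.0 / (np.abs(theta) + eps)   # counteract the magnitudes in the l1 penalty
    return theta
```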

5 Application to gene regulatory network inference

The gene regulatory network can be described by a graph in which genes are viewed as nodes and edges depict causal relations between genes. By analyzing gene expression levels collected over a period of time, one can find regulatory relations between different genes. Under the discrete-time state-space model, for a gene regulatory network with $n$ genes, the state vector is $x_k = [x_{1,k}, \ldots, x_{n,k}]^T$, where $x_{i,k}$ denotes the expression level of the $i$th gene at time $k$.

In this case, the nonlinear function $f(\cdot)$ in the general state equation (1) is given by [15]

$$ f(x_{k-1}, \theta) = A\, g(x_{k-1}), \tag{41} $$

with

$$ g(x) = \begin{bmatrix} g_1(x_1) \\ \vdots \\ g_n(x_n) \end{bmatrix}, \tag{42} $$

and

$$ g_i(x) = \frac{1}{1 + e^{-x}}, \quad i = 1, \ldots, n. \tag{43} $$

In (41), $A$ is an $n \times n$ regulatory coefficient matrix, with element $a_{ij}$ denoting the regulation coefficient from gene $j$ to gene $i$. A positive coefficient $a_{ij}$ indicates that gene $j$ activates gene $i$, and a negative $a_{ij}$ indicates that gene $j$ represses gene $i$. The parameter to be estimated is $\theta = A$, which is sparse.

For the measurement model, we have

$$ y_k = x_k + v_k. \tag{44} $$

5.1 Inference of gene regulatory network with four genes

In the simulations, we consider a network with four genes. The true gene regulatory coefficient matrix is given by

$$ A = \begin{bmatrix} 3 & 0 & 0 & 4.5 \\ 2.9 & 0 & 5 & 0 \\ 6 & 4 & 0 & 0 \\ 0 & 5 & 2 & 0 \end{bmatrix}. \tag{45} $$
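Synthetic data for such an experiment can be generated by simulating the model (41) to (44) directly; the sketch below uses the coefficient values of (45) as printed, with noise levels and an initial state that are illustrative choices only.

```python
import numpy as np

def simulate_grn(A, x1, K, u_std=0.1, r_std=0.1, seed=0):
    """Simulate the GRN state-space model: x_k = A g(x_{k-1}) + u_k (41),
    y_k = x_k + v_k (44), with the sigmoid g of Eq. (43)."""
    rng = np.random.default_rng(seed)
    g = lambda x: 1.0 / (1.0 + np.exp(-x))       # element-wise sigmoid, Eq. (43)
    n = A.shape[0]
    X = np.zeros((K, n))
    X[0] = x1
    for k in range(1, K):
        X[k] = A @ g(X[k - 1]) + u_std * rng.standard_normal(n)
    Y = X + r_std * rng.standard_normal((K, n))  # Eq. (44)
    return X, Y

# Example with the four-gene coefficient matrix of Eq. (45).
A = np.array([[3.0, 0.0, 0.0, 4.5],
              [2.9, 0.0, 5.0, 0.0],
              [6.0, 4.0, 0.0, 0.0],
              [0.0, 5.0, 2.0, 0.0]])
X, Y = simulate_grn(A, x1=np.zeros(4), K=20)
```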

To compare the EM algorithm with the proposed rEM algorithm, the simulation was conducted ten times. Each time, the initial value of A (i.e., of θ) is randomly generated from a Gaussian distribution with mean 0 and variance 2. The EM, rEM, and rEM_w algorithms, as well as the basis pursuit de-noising dynamic filtering (BPDN-DF) method and the ℓ1 optimization method, are tested. Here, rEM_w denotes the version of the rEM algorithm with the iterative re-weighted thresholding discussed in Section 4.2.

As a performance metric, the receiver operating characteristic (ROC) curve is frequently used. However, for this specific example, as the false-positive rate increases, the true-positive rate given by both the rEM and the EM remains high (close to 1), which makes it difficult to distinguish the performance of the rEM and EM algorithms. Hence, the root mean-squared error (RMSE) and the sparsity factor (SF) are used in this section. The RMSE is defined by

$$ \mathrm{RMSE} = \sqrt{ \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \left( A_{ij} - \bar{A}_{ij} \right)^2 }, \tag{46} $$

where $\bar{A}$ denotes the estimate of $A$. The SF is given by

$$ \mathrm{SF} = \frac{\phi_0}{\phi}, \tag{47} $$

where $\phi_0$ and $\phi$ are the numbers of zero entries of the estimated parameter matrix and of the true parameter matrix, respectively. The estimate is over-sparse if the sparsity factor is greater than 1.

In addition, to assess the effectiveness of the proposed method at finding the support of the unknown parameters, the number of matched elements is used, obtained by the following procedure: (1) Compute the support of the true A (denoted by $A_s$) and the support of the estimated A (denoted by $\bar{A}_s$), where $[A_s]_{ij} = 1$ if $A_{ij} \neq 0$ and $[A_s]_{ij} = 0$ if $A_{ij} = 0$, and similarly $[\bar{A}_s]_{ij} = 1$ if $\bar{A}_{ij} \neq 0$ and $[\bar{A}_s]_{ij} = 0$ if $\bar{A}_{ij} = 0$. (2) Count the number of zero elements of $A_s - \bar{A}_s$ as the number of matched elements. A method is effective at finding the support of the unknown parameters when the number of matched elements is large.
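All three metrics are straightforward to compute from the true and estimated coefficient matrices. In the sketch below, entries with magnitude below a small tolerance are treated as zeros, an implementation detail not specified in the paper.

```python
import numpy as np

def grn_metrics(A_true, A_est, tol=1e-8):
    """RMSE (46), sparsity factor (47), and the matched-elements count."""
    N = A_true.shape[0]
    rmse = np.sqrt(np.sum((A_true - A_est) ** 2) / N**2)   # Eq. (46)
    phi = np.sum(np.abs(A_true) <= tol)    # zeros in the true parameter
    phi0 = np.sum(np.abs(A_est) <= tol)    # zeros in the estimate
    sf = phi0 / phi                        # Eq. (47); SF > 1 means over-sparse
    S_true = np.abs(A_true) > tol          # support indicator A_s
    S_est = np.abs(A_est) > tol            # support indicator A_s_bar
    matched = np.sum(S_true == S_est)      # zeros of A_s - A_s_bar
    return rmse, sf, matched
```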

5.1.1 The effect of different λ

The performance of the rEM using different values of λ (10, 5, 1, 0.5, 0.1) is compared with the EM algorithm and the rEM_w. The RMSE and SF are shown in Figures 1 and 2, respectively. The RMSE does not change monotonically as λ decreases. It can be seen that the rEM with λ = 5 performs better than with the other values of λ. In addition, the rEM with every λ except λ = 10 outperforms the EM algorithm: it provides a smaller RMSE and a sparser result. The rEM_w provides the smallest RMSE and the sparsest parameter estimate. The number of matched elements of the tested algorithms with different λ is given in Figure 3; the rEM_w also provides more matched elements than the EM algorithm.

Figure 1. RMSE of the rEM with different λ and of the rEM_w.

Figure 2. SF of the rEM with different λ and of the rEM_w.

Figure 3. Number of matched elements of the rEM with different λ and of the rEM_w.

5.1.2 The effect of noise

Two different cases are tested. In the first case, the covariances of the process noise and the measurement noise are chosen to be 0.01I; in the second case, they are chosen to be 0.1I. The performance in the two test cases is shown in Figures 4, 5, and 6. It can be seen that the RMSE of the rEM_w with U, R = 0.01I is smaller than that with U, R = 0.1I. In addition, the rEM_w with U, R = 0.01I provides a larger number of matched elements than that with U, R = 0.1I, as shown in Figure 6. Hence, the estimation accuracy is better when the process noise and measurement noise are small.

Figure 4. RMSE of the rEM_w with different noise levels.

Figure 5. SF of the rEM_w with different noise levels.

Figure 6. Number of matched elements of the rEM_w with different noise levels.

5.1.3 The effect of the number of observations

To test the effect of the number of observations, the rEM_w algorithm is run with 10 and with 20 observations. The simulation results are shown in Figures 7, 8, and 9. It can be seen that the rEM_w with more observations gives a smaller RMSE. In addition, as shown in Figure 9, the rEM_w with more observations gives slightly better results in finding the support of the unknown parameters.

Figure 7. RMSE of the rEM_w with different lengths of observations.

Figure 8. SF of the rEM_w with different lengths of observations.

Figure 9. Number of matched elements of the rEM_w with different lengths of observations.

5.1.4 The effect of κ

To test the effect of κ, the rEM_w algorithm is run with different values of κ (0, −1, −3). The performance results are shown in Figures 10, 11, and 12. Note that the cubature rule corresponds to κ = 0, and the unscented transformation corresponds to κ = −1. Roughly speaking, the performance of the rEM_w with the different κ values is close. Specifically, the RMSE of the rEM_w with κ = −1 or κ = −3 is smaller than that with κ = 0. The sparsity factor of the rEM_w with κ = −1 is closer to 1 than with κ = −3 or κ = 0. Moreover, the number of matched elements with κ = −1 is larger than with κ = −3 or κ = 0. Hence, the performance of the rEM_w with κ = −1 is the best in this case.

Figure 10. RMSE of the rEM_w with different κ.

Figure 11. SF of the rEM_w with different κ.

Figure 12. Number of matched elements of the rEM_w with different κ.

5.1.5 Effect of sparsity level

The performance comparison of the rEM_w and the conventional EM with different sparsity levels of A is shown in Figures 13, 14, and 15. In this subsection, another A, which is denser than the previously used one, is given by

$$ A = \begin{bmatrix} 3 & 1 & 0 & 4.5 \\ 2.9 & 0 & 5 & 1 \\ 6 & 4 & 0 & 1 \\ 1 & 5 & 2 & 0 \end{bmatrix}. \tag{48} $$
Figure 13. RMSE of the EM and the rEM_w for the normal and the denser A.

Figure 14. SF of the EM and the rEM_w for the normal and the denser A.

Figure 15. Number of matched elements of the EM and the rEM_w for the normal and the denser A.

Note that '(Denser)' denotes the results obtained using the A given in Equation (48). It can be seen that the RMSE of the rEM_w (Denser) is comparable to that of the EM (Denser). However, the sparsity factor of the rEM_w (Denser) is closer to 1 than that of the EM (Denser), which means that the rEM_w (Denser) is better. In addition, the number of matched elements of the rEM_w (Denser) is larger than that of the EM (Denser), which means that the rEM_w (Denser) is better than the EM (Denser) at finding the support of the unknown parameters. The improvement in RMSE of the rEM_w (Denser) over the EM, however, is smaller than that of the rEM_w on the sparser network. Hence, the rEM algorithm may perform close to the EM algorithm when the parameters are not markedly sparse.

5.1.6 Comparison with ℓ1 optimization

We compare the proposed rEM algorithm with the ℓ1 optimization-based method, as well as with the conventional EM algorithm. The ℓ1 optimization is a popular approach to obtaining sparse solutions. For the problem under consideration, it obtains an estimate of θ by solving the following optimization problem:

$$ \hat{\theta} = \arg\min_{\theta}\; \sum_{k=2}^{K} \left[ y_k - A(\theta)\, g(\hat{x}_{k-1}) \right]^T \left[ y_k - A(\theta)\, g(\hat{x}_{k-1}) \right] + \lambda \left\|\theta\right\|_1, \tag{49} $$

where $\hat{x}_1 = x_1$ and $\hat{x}_{k+1} = g(\hat{x}_k)$.

We also compare the ℓ1 optimization method with the proposed rEM_w algorithm; the results are shown in Figures 16, 17, and 18. Seven different values of λ (5, 2, 1, 0.5, 0.1, 0.05, and 0.01) are used in the ℓ1 optimization method. The RMSE does not decrease monotonically with decreasing λ. Among all tested values, the ℓ1 optimization with λ = 0.1 gives the smallest RMSE. However, the sparsity factor of the ℓ1 optimization with λ = 0.1 is far from the ideal value of 1. The ℓ1 optimization with λ = 5 gives the best support detection, as shown in Figure 18. The re-weighted ℓ1 optimization algorithm is also included in the simulation. However, none of the ℓ1 optimization-based methods achieves better performance than the rEM_w.

Figure 16. RMSE of the ℓ1 optimization with different λ, the re-weighted ℓ1 optimization, the EM, and the rEM_w.

Figure 17. SF of the ℓ1 optimization with different λ, the re-weighted ℓ1 optimization, the EM, and the rEM_w.

Figure 18. Number of matched elements of the ℓ1 optimization with different λ, the re-weighted ℓ1 optimization, the EM, and the rEM_w.

5.1.7 Comparison with BPDN-DF

To solve the problem using the BPDN-DF, the models in (41) and (44) are modified as

$$ \tilde{x}_k = \tilde{f}(\tilde{x}_{k-1}) \triangleq \begin{bmatrix} A(\theta_{k-1})\, g(x_{k-1}) \\ \theta_{k-1} \end{bmatrix} + \begin{bmatrix} v_k \\ 0 \end{bmatrix} \tag{50} $$

and

$$ h(\tilde{x}_k) = \tilde{H}_k \tilde{x}_k + n_k = \begin{bmatrix} I_4 & 0_{4 \times 16} \end{bmatrix} \tilde{x}_k + n_k, \tag{51} $$

respectively, where $\tilde{x}_k = [x_k^T, \theta_k^T]^T$. Then, $\hat{\tilde{x}}_k$ is given by [36]

$$ \hat{\tilde{x}}_k = \arg\min_{\tilde{x}}\; \left\| y_k - \tilde{H}_k \tilde{x} \right\|_2^2 + \left\| \lambda \circ \tilde{x} \right\|_1 + \left\| \tilde{x} - \tilde{f}(\tilde{x}_{k-1}) \right\|_2^2, \tag{52} $$

where $\lambda = [\lambda_1, \ldots, \lambda_{20}]^T$ with $\lambda_i = 0$, $i = 1, 2, 3, 4$, since our objective is to exploit only the sparsity of the parameter θ. The exact same initial values used for testing the EM and the rEM are used to test the BPDN-DF. The simulation results are shown in Figures 19, 20, and 21. It can be seen that although the sparsity factor of the BPDN-DF is comparable to that of the rEM_w, the RMSE of the BPDN-DF is much larger than that of the rEM_w. In addition, as shown in Figure 21, the rEM_w is better than the BPDN-DF at finding the support of the unknown parameters. A possible reason is that the BPDN-DF does not account for the noise in the dynamic system, and the measurement matrix $\tilde{H}_k$ is ill-conditioned. In the simulation, $\lambda_j = 0.1$, $j = 5, \ldots, 20$; based on our tests with other values of λ, there is no obvious improvement.

Figure 19. RMSE of the BPDN-DF and the rEM_w.

Figure 20. SF of the BPDN-DF and the rEM_w.

Figure 21. Number of matched elements of the BPDN-DF and the rEM_w.

5.2 Inference of gene regulatory network with eight genes

In this section, we test the proposed algorithm on a larger gene regulatory network with eight genes; the performances of the EM, the rEM, the rEM_w, the ℓ1 optimization method, and the BPDN-DF are compared. Forty data points are collected to infer the structure of the network. The system noise and measurement noise are assumed to be Gaussian with zero means and covariances $U_k = 0.01 I_8$ and $R_k = 0.01 I_8$, respectively. The connection coefficient matrix is given by

$$ A = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 2.4 & 3.2 \\
0 & 0 & 0 & 4.1 & 0 & 2.4 & 0 & 4.1 \\
5.0 & 2.1 & 1.5 & 0 & 4.5 & 0 & 2.1 & 0 \\
0 & 1.3 & 2.5 & 3.7 & 1.8 & 0 & 0 & 3.1 \\
0 & 0 & 0 & 2.6 & 3.2 & 0 & 1 & 4 \\
1.5 & 1.8 & 0 & 3.4 & 1.4 & 1.1 & 0 & 1.7 \\
1.8 & 0 & 0 & 3 & 1.1 & 2.4 & 0 & 0 \\
1.3 & 0 & 1 & 0 & 2.1 & 0 & 0 & 2.2
\end{bmatrix}. \tag{53} $$

For testing, each coefficient in $\hat{A}$ is initialized from a Gaussian distribution with mean 0 and variance 1. The system state is initialized using the first measurement.

The metric used to evaluate the inferred GRN is the ROC curve, which involves the true-positive rate (TPR) and the false-positive rate (FPR), given by

$$ \mathrm{TPR} = \frac{\mathrm{TP\#}}{\mathrm{TP\#} + \mathrm{FN\#}}, \tag{54} $$

$$ \mathrm{FPR} = \frac{\mathrm{FP\#}}{\mathrm{FP\#} + \mathrm{TN\#}}, \tag{55} $$

where the number of true positives (TP#) is the number of links correctly predicted by the inference algorithm, the number of false positives (FP#) is the number of incorrectly predicted links, the number of true negatives (TN#) is the number of correctly predicted non-links, and the number of false negatives (FN#) is the number of links missed by the inference algorithm [15].
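Given the true and estimated coefficient matrices, these rates might be computed as follows; the magnitude threshold used to declare a link is an assumption of this sketch.

```python
import numpy as np

def tpr_fpr(A_true, A_est, tol=1e-8):
    """TPR and FPR of Eqs. (54)-(55), counting a link wherever a
    coefficient exceeds tol in magnitude."""
    true_link = np.abs(A_true) > tol
    pred_link = np.abs(A_est) > tol
    tp = np.sum(true_link & pred_link)     # correctly predicted links
    fn = np.sum(true_link & ~pred_link)    # missed links
    fp = np.sum(~true_link & pred_link)    # spurious links
    tn = np.sum(~true_link & ~pred_link)   # correctly predicted non-links
    return tp / (tp + fn), fp / (fp + tn)
```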

The ROC curves of the EM, the rEM, and the rEM_w are compared in Figure 22. The rEM is tested with different values of λ; the curves for four typical values of λ are shown in Figure 22, and other values of λ bring no obvious improvement. From the figure, it can be seen that the rEM_w performs better than both the rEM and the conventional EM algorithm.

Figure 22. ROCs of the EM, the rEM, and the rEM_w.

In addition, sparse solutions are obtained by the rEM and the rEM_w, whereas no sparse solution is obtained by the EM algorithm. The sparsity factors of the rEM and the rEM_w are shown in Figure 23; the sparsity of the solution given by the rEM_w is closer to the ground truth than that given by the EM algorithm.

Figure 23. Sparsity factor of the rEM and the rEM_w.

In Figure 24, the ROC curves of the rEM_w, the ℓ1 optimization method, and the BPDN-DF are compared. Similarly, the ℓ1 optimization method is tested with different values of λ, and only four curves are shown in the figure; other values bring no obvious improvement. The BPDN-DF shows no obvious difference across different λ in this test. From Figure 24, it can be seen that the rEM_w performs much better than both the ℓ1 optimization method and the BPDN-DF algorithm; hence, the sparsity factors of the ℓ1 optimization method and the BPDN-DF are not shown.

Figure 24. ROCs of the rEM_w, the ℓ1 optimization method, and the BPDN-DF.

5.3 Inference of gene regulatory network from malaria expression data

The dataset containing the expression data of the first six genes of malaria, given in reference [37], is used in this section. The initial covariance for the algorithm is $P_0 = 0.5 I$. The process noise and measurement noise are assumed to be Gaussian with zero means and covariances $0.3^2 I$ and $0.4^2 I$, respectively. In the following, we show the inferred parameters and the state estimates provided by the unscented Kalman filter (UKF) based on the model with the inferred parameters.

The A inferred by the EM algorithm is

$$ \bar{A} = \begin{bmatrix}
2.2120 & 7.9443 & 2.3843 & 6.1800 & 3.5269 & 2.8300 \\
0.6585 & 0.5319 & 0.5987 & 4.0023 & 2.8684 & 1.1167 \\
1.9022 & 9.1935 & 3.0504 & 7.9274 & 5.0037 & 3.4825 \\
1.8157 & 8.8003 & 3.4441 & 9.4813 & 7.1284 & 3.4345 \\
1.8413 & 8.3515 & 2.2789 & 5.3726 & 1.6722 & 2.5999 \\
2.1053 & 3.3850 & 3.4007 & 10.2753 & 12.3170 & 2.1100
\end{bmatrix}. \tag{56} $$

The A inferred by the rEM with λ = 1 is

$$ \bar{A} = \begin{bmatrix}
0.8448 & 6.3169 & 0.8943 & 3.9423 & 0 & 2.8387 \\
0 & 0.2422 & 0 & 0.5051 & 0 & 1.3407 \\
0.1424 & 7.1799 & 0.7461 & 5.2535 & 0.3607 & 2.9298 \\
0.0048 & 8.1010 & 0.7851 & 5.3300 & 0 & 4.2157 \\
0.4022 & 6.9358 & 2.0375 & 4.0332 & 0 & 2.7372 \\
0 & 5.6613 & 0 & 4.3934 & 2.9426 & 6.3350
\end{bmatrix}. \tag{57} $$

The A inferred by the rEM_w is

$$ \bar{A} = \begin{bmatrix}
0.3662 & 7.5033 & 0 & 9.6020 & 0 & 0 \\
2.0531 & 1.1905 & 0 & 5.1439 & 0.0011 & 0 \\
0 & 9.0526 & 0 & 11.6504 & 0 & 0 \\
0 & 9.3419 & 0 & 14.4056 & 3.5361 & 1.0739 \\
0.0034 & 8.5250 & 0 & 11.0732 & 0 & 0 \\
0 & 3.8773 & 0.0025 & 13.1848 & 8.3610 & 1.4877
\end{bmatrix}. \tag{58} $$

The state estimates provided by the UKF based on the models with the parameters inferred by the EM, the rEM, and the rEM_w, together with the true gene expression, are shown in Figure 25. The top-left and top-right panels show the expression of the first and second genes, respectively; the middle-left and middle-right panels show the third and fourth genes; and the bottom-left and bottom-right panels show the fifth and sixth genes. It can be seen that the gene expression estimated by the UKF with the parameters given by the EM, the rEM, and the rEM_w is close to the true gene expression data. In addition, the rEM_w algorithm provides a sparser solution than the rEM algorithm, and both the rEM and the rEM_w give sparser solutions than the EM algorithm, which validates the effectiveness of the proposed method.

Figure 25. True malaria gene expression and the gene expression estimated by the different algorithms.

6 Conclusions

In this paper, we have considered the problem of sparse parameter estimation in a general nonlinear dynamic system, and we have proposed an approximate MAP-EM solution, called the rEM algorithm. The expectation step involves forward Gaussian approximation filtering and backward Gaussian approximation smoothing. The maximization step employs a re-weighted iterative thresholding method. We have provided examples of gene regulatory network inference based on expression data. Comparisons with the traditional EM algorithm, as well as with existing approaches to sparse problems such as the ℓ1 optimization and the BPDN-DF, show that the proposed rEM algorithm provides both more accurate estimates and sparser solutions.

References

1. Tropp J, Wright S: Computational methods for sparse solution of linear inverse problems. Proc. IEEE 2010, 98(6):948-958.

2. Ji S, Xue Y, Carin L: Bayesian compressive sensing. IEEE Trans. Signal Process. 2008, 56(6):2346-2356.

3. Larsson EG, Selen Y: Linear regression with a sparse parameter vector. IEEE Trans. Signal Process. 2007, 55(2):451-460.

4. Figueiredo M: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1(4):586-597.

5. Zachariah D, Chatterjee S, Jansson M: Dynamic iterative pursuit. IEEE Trans. Signal Process. 2012, 60(9):4967-4972.

6. Qiu C, Lu W, Vaswani N: Real-time dynamic MR image reconstruction using Kalman filtered compressed sensing. Paper presented at the IEEE international conference on acoustics, speech and signal processing (ICASSP), Taipei, 19-24 April 2009, pp. 393-396.

7. Ziniel J, Schniter P: Efficient high-dimensional inference in the multiple measurement vector problem. IEEE Trans. Signal Process. 2013, 61(2):340-354.

8. Vila J, Schniter P: Expectation-maximization Bernoulli-Gaussian approximate message passing. Paper presented at the forty-fifth Asilomar conference on signals, systems and computers (ASILOMAR), Pacific Grove, CA, USA, 6-9 Nov 2011, pp. 799-803.

9. Vila J, Schniter P: Expectation-maximization Gaussian-mixture approximate message passing. Paper presented at the 46th annual conference on information sciences and systems (CISS), Princeton, NJ, USA, 21-23 March 2012, pp. 1-6.

10. Kamilov U, Rangan S, Fletcher A, Unser M: Approximate message passing with consistent parameter estimation and applications to sparse learning. Paper presented at the 26th annual conference on neural information processing systems, Lake Tahoe, NV, USA, 3-8 Dec 2012.

11. Gurbuz A, Pilanci M, Arikan O: Expectation maximization based matching pursuit. Paper presented at the IEEE international conference on acoustics, speech and signal processing (ICASSP), Kyoto, 25-30 March 2012, pp. 3313-3316.

12. Charles A, Rozell C: Re-weighted ℓ1 dynamic filtering for time-varying sparse signal estimation. Ithaca: Cornell University; 2012. arXiv:1208.0325.

13. Ziniel J, Schniter P: Efficient high-dimensional inference in the multiple measurement vector problem. IEEE Trans. Signal Process. 2013, 61(2):340-354.

14. Barembruch S, Moulines E, Scaglione A: A sparse EM algorithm for blind and semi-blind identification of doubly selective OFDM channels. Paper presented at the IEEE eleventh international workshop on signal processing advances in wireless communications (SPAWC), Marrakech, 20-23 June 2010, pp. 1-5.

15. Noor A, Serpedin E, Nounou M, Nounou H: Inferring gene regulatory networks via nonlinear state-space models and exploiting sparsity. IEEE/ACM Trans. Comput. Biol. Bioinform. 2012, 9(4):1203-1211.

16. Gardner TS, Di Bernardo D, Lorenz D, Collins JJ: Inferring genetic networks and identifying compound mode of action via expression profiling. Science 2003, 301(5629):102-105. doi:10.1126/science.1081900.

17. Tegner J, Yeung MS, Hasty J, Collins JJ: Reverse engineering gene networks: integrating genetic perturbations with dynamical modeling. Proc. Natl. Acad. Sci. 2003, 100(10):5944-5949. doi:10.1073/pnas.0933416100.

18. Cai X, Bazerque JA, Giannakis GB: Inference of gene regulatory networks with sparse structural equation models exploiting genetic perturbations. PLoS Comput. Biol. 2013, 9(5):e1003068. doi:10.1371/journal.pcbi.1003068.

19. Thieffry D, Huerta AM, Pérez-Rueda E, Collado-Vides J: From specific gene regulation to genomic networks: a global analysis of transcriptional regulation in Escherichia coli. Bioessays 1998, 20(5):433-440.

20. Wang Z, Yang F, Ho D, Swift S, Tucker A, Liu X: Stochastic dynamic modeling of short gene expression time-series data. IEEE Trans. Nanobioscience 2008, 7(1):44-55.

21. Noor A, Serpedin E, Nounou M, Nounou H: Reverse engineering sparse gene regulatory networks using cubature Kalman filter and compressed sensing. Adv. Bioinformatics 2013, 2013:205763.

22. Wang L, Wang X, Arkin AP, Samoilov MS: Inference of gene regulatory networks from genome-wide knockout fitness data. Bioinformatics 2013, 29(3):338-346. doi:10.1093/bioinformatics/bts634.

23. McLachlan G, Krishnan T: The EM Algorithm and Extensions. Hoboken: Wiley-Interscience; 2008.

24. Arasaratnam I, Haykin S: Cubature Kalman filters. IEEE Trans. Automat. Contr. 2009, 54(6):1254-1269.

25. Ito K, Xiong K: Gaussian filters for nonlinear filtering problems. IEEE Trans. Automat. Contr. 2000, 45(5):910-927. doi:10.1109/9.855552.

26. Jia B, Xin M, Cheng Y: Sparse-grid quadrature nonlinear filtering. Automatica 2012, 48(2):327-341. doi:10.1016/j.automatica.2011.08.057.

27. Julier SJ, Uhlmann JK: Unscented filtering and nonlinear estimation. Proc. IEEE 2004, 92(3):401-422. doi:10.1109/JPROC.2003.823141.

28. Guo D, Wang X: Quasi-Monte Carlo filtering in nonlinear dynamic systems. IEEE Trans. Signal Process. 2006, 54(6):2087-2098.

29. Jia B, Xin M, Cheng Y: High-degree cubature Kalman filter. Automatica 2013, 49(2):510-518. doi:10.1016/j.automatica.2012.11.014.

30. Sarkka S: Unscented Rauch-Tung-Striebel smoother. IEEE Trans. Automat. Contr. 2008, 53(3):845-849.

31. Schmidt M: Graphical model structure learning with L1-regularization. Ph.D. dissertation, University of British Columbia, 2010.

32. Bahmani S, Raj B, Boufounos P: Greedy sparsity-constrained optimization. J. Mach. Learn. Res. 2013, 14:807-841.

33. Beck A, Eldar YC: Sparsity constrained nonlinear optimization: optimality conditions and algorithms. SIAM J. Optim. 2013, 23(3):1480-1509. doi:10.1137/120869778.

34. Wright S, Nowak R, Figueiredo M: Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 2009, 57(7):2479-2493.

35. Candes EJ, Wakin MB, Boyd S: Enhancing sparsity by reweighted ℓ1 minimization. J. Fourier Anal. Appl. 2008, 14:877-905. doi:10.1007/s00041-008-9045-x.

36. Charles A, Asif MS, Romberg J, Rozell C: Sparsity penalties in dynamical system estimation. Paper presented at the 45th annual conference on information sciences and systems (CISS), Baltimore, 23-25 March 2011, pp. 1-6.

37. Wang Z, Liu X, Liu Y, Liang J, Vinciotti V: An extended Kalman filtering approach to modeling nonlinear dynamic gene regulatory networks via short gene expression time series. IEEE/ACM Trans. Comput. Biol. Bioinform. 2009, 6(3):410-419.


Author information

Correspondence to Xiaodong Wang.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Jia, B., Wang, X. Regularized EM algorithm for sparse parameter estimation in nonlinear dynamic systems with application to gene regulatory network inference. J Bioinform Sys Biology 2014, 5 (2014). https://doi.org/10.1186/1687-4153-2014-5

