Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models
© Baker et al; licensee Springer. 2011
Received: 30 November 2010
Accepted: 11 October 2011
Published: 11 October 2011
In systems biology, experimentally measured parameters are not always available, necessitating computational parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set; this is done with parameter identifiability analysis. A kinetic model of sucrose accumulation in sugar cane culm tissue, developed by Rohwer et al., was taken as the test case. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than the more common observability-based method, which has inherent limitations. It also introduces a variable step size, based on the system uncertainty of the UKF, into the sensitivity calculation. This method identified 10 of the 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions, the UKF proved to be more consistent than the estimation algorithms used for comparison.
The focus of systems biology is to study the dynamic, complex and interconnected functionality of living organisms. To gain a systems-level understanding of these organisms, it is necessary to integrate experimental and computational techniques into a dynamic model [1, 2]. One such approach to dynamic modelling is the description of metabolic fluxes by their underlying enzymatic reaction rates. These enzymatic reaction rates, or enzyme kinetics, are described by kinetic rate laws. Different rate laws may be used, matching the specific behaviour of the chemical reaction catalysed by the enzyme to the most appropriate rate law. These kinetic rate laws are formulated as mathematical functions of metabolite concentration(s) and one or more kinetic parameters. In combination with the stoichiometry of the metabolism, these kinetic rate laws define the function of the cell. To describe the dynamics properly, both an accurate and a complete set of parameter values implementing these rate laws is required. Owing to various limitations in wet lab experiments, it is not always possible to have a measured value for every required parameter. In these cases, it is necessary to apply computational approaches to estimate the unknown parameters.
In the past few years, there has been increasing research on the application of optimization techniques to parameter estimation in systems biology. These include nonlinear least squares (NLSQ) fitting, simulated annealing and evolutionary computation. More recently, kinetic modelling has been formulated as a nonlinear dynamic system in state-space form, where parameter estimation is addressed in the framework of control theory. One of the most widely used methods in control theory for parameter estimation is the Kalman filter. However, the Kalman filter is designed for inference in linear dynamic systems, and consequently gives inaccurate results when applied to nonlinear systems. Instead, a number of extensions to the Kalman filter have been proposed to deal with nonlinear systems. Amongst these extensions, the most widely used are the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) [6, 7]. At the core of the UKF is the unscented transformation (UT), which operates directly through a nonlinear transformation instead of relying on analytical linearization of the system (as performed by the EKF). This gives the UKF a distinct computational advantage over the EKF: unlike the linearization performed by the EKF, the UT does not require the calculation of partial derivatives. Furthermore, the UKF has the accuracy of a second-order Taylor approximation, while the EKF has only first-order accuracy. Overall, the UKF has been found to be more robust and to converge faster than the EKF, owing to increased time-update accuracy and improved covariance accuracy.
Nevertheless, parameter estimation is highly dependent on the availability and quality of the measurement data. Owing to the scarcity of measurement data collected from wet lab experiments, it is often difficult to obtain reliable estimates of unknown kinetic parameter values. As a result, it is crucial to be able to determine the estimability of the model parameters from the available experimental data. Parameter identifiability tests are carried out to determine which parameters of the model are estimable from the available experimental data and to rank these parameters by how sensitive the model is to a change in each of them. The rank is directly proportional to the impact that the corresponding parameter has on the system output and its ability to capture the important characteristics of the system. In this article, we investigate parameter identifiability using the sensitivity-based orthogonal identifiability algorithm proposed by Yao et al., with the UKF as the method for parameter estimation in a nonlinear biological model.
In the Kalman filter method, identifiability is addressed with the view of observability . A system is said to be observable if the initial state can be uniquely identified from the output data at any given point in time . However, most observability analysis methods work by first calculating an analytical solution of the system, which is not possible if the system is considerably large and nonlinear. The novelty of this study lies in the fact that we propose to embed a sensitivity-based method for identifiability analysis into the UKF during the estimation of the parameter. The central difference (CD) method was used to calculate the sensitivity coefficient, where the step size is taken as the square root of the variance generated by the UKF at each step of its iteration. For the implementation, testing and validation of these methods, we have taken the sucrose accumulation in the sugar cane culm model published by Rohwer et al. .
2.1. Problem statement
where f is the nonlinear function describing the reactions, each of which is made up of the sum or difference of individual rate laws (see Additional file 1, Supplementary data). The vector X is the state vector of the model, whose values are the metabolite concentrations, and X0 is the initial state vector at time t0. The vector θ contains the unknown rate coefficients, such as Michaelis-Menten parameters, which we want to estimate. As the parameters are constant, it is possible to construct an augmented state vector by treating θ as additional state variables with zero rate of change, dθ/dt = 0. The vector Y is the output signal vector, i.e. the vector of quantities that can be measured from biological experiments, Y = g(X).
This output signal is related to the state through a function g that encodes the relationship between the state of the system, X, and the measurement data at any given time. Having the measurement data, we try to estimate the parameter values by minimizing the distance between the measured data (actual) and the model data (estimated).
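As a concrete illustration of this state-space formulation, the sketch below augments the state of a single hypothetical Michaelis-Menten reaction with its two unknown parameters, which evolve with zero rate of change. This is an illustrative toy system in Python, not the Rohwer et al. model, and the names (`augmented_rhs`, `g`) are our own:

```python
import numpy as np
from scipy.integrate import solve_ivp

def augmented_rhs(t, z):
    """Augmented system: state X plus constant parameters theta.
    Toy kinetics dX/dt = -Vmax*X/(Km+X) stand in for f(X, theta)."""
    x, vmax, km = z
    dx = -vmax * x / (km + x)
    return [dx, 0.0, 0.0]      # parameters have zero rate of change

def g(z):
    """Output function Y = g(X): here the metabolite itself is measured."""
    return z[0]

z0 = [10.0, 1.0, 0.5]          # X0 plus initial parameter guesses
sol = solve_ivp(augmented_rhs, (0.0, 5.0), z0)
y_final = g(sol.y[:, -1])      # model output at the final time point
```

Estimating the augmented state then estimates the parameters as a by-product, which is exactly how constant parameters are handled in the filtering framework.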
Parameter identifiability attempts to answer the question of whether or not the parameters of a given model can be uniquely identified with the given level of experimental data. Only if identifiability can be assured for the combined set of model parameters and measurement data, is it then reasonable to continue the estimation process. In this article, we simulate the measurement data from the model. This synthetic data is derived by combining the simulated data with random noise to develop a realistic experimental dataset .
Several theories of identifiability analysis exist; the most widely applied are introduced here, and one is chosen for evaluation. A model is globally identifiable if a unique value can be found for each model parameter that reproduces the experimental data. On the other hand, if a finite number of sets of parameter values can be found that reproduce the experimental data, the model is called locally identifiable. Finally, the model is said to be unidentifiable if there exists an infinite number of possible parameter sets that can reproduce the experiment.
Two classes of identifiability analysis arise depending on the availability of prior information about the parameters: structural identifiability analysis and posterior identifiability analysis. For structural identifiability analysis, no prior information about the parameter values is required, whereas posterior identifiability analysis does require such prior information. On the other hand, structural identifiability analysis is largely restricted to linear models or, in the nonlinear case, to small models with fewer than ten states and parameters. For our analysis, we used a posterior identifiability approach, specifically local at-a-point identifiability (a specific form of local identifiability analysis).
For large nonlinear models, posterior identifiability methods are feasible. Yao et al.  developed an orthogonal-based parameter identifiability method using a scaled sensitivity matrix. Jacquez et al.  developed a method based on correlation, and Degenring et al.  developed a method based on principal component analysis. All of these methods are local at-a-point identifiability analysis methods and perform similarly with nonlinear biological models . For our approach, we have chosen the orthogonal-based method because of its ease of implementation and straightforward analysis. We applied this orthogonal method of parameter identifiability to determine the set of identifiable parameters and then applied the UKF to perform the estimation of these unknown parameters.
2.2. Unscented Kalman filter
The UKF is based on a statistical linearization technique: starting from a nonlinear function of random variables, a linear regression is performed between n points drawn from the prior distribution of those variables. This technique gives a more accurate result than analytical linearization techniques, such as Taylor series linearization, as it accounts for the spread of the random variables.
A Kalman filter is composed of a set of equations that estimate the state of a process by minimizing the covariance of the estimation error. Kalman filters work in a predictor-corrector style: they first predict the process state and covariance at some time using information from the model (prediction step), and then improve this estimate by incorporating the measurement data (correction step). The UKF builds on the UT, a deterministic sampling technique which implements a native nonlinear transformation to derive the mean and covariance of the estimates. This transformed mean and covariance are then supplied to the Kalman filter equations to estimate the state variables.
where W_i^(m) and W_i^(c) are the weights used to calculate, respectively, the mean and the covariance of the state. The transformed mean and covariance are then fed into the standard Kalman filter equations to perform the process estimation.
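A minimal sketch of the scaled unscented transformation, using the standard Julier-Uhlmann weights, is given below. This is an illustrative Python implementation under our own naming, not the authors' MATLAB code:

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) through a nonlinear function f via 2n+1
    sigma points, with the standard scaled-UT weights W^(m), W^(c)."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)            # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # sigma points
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))     # mean weights
    Wc = Wm.copy()                                     # covariance weights
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigma])                # transform each point
    mean_y = Wm @ Y
    dev = Y - mean_y
    cov_y = (Wc[:, None] * dev).T @ dev
    return mean_y, cov_y

# x ~ N(1, 0.04) pushed through x^2: the UT recovers the
# second-order mean E[x^2] = mu^2 + sigma^2 = 1.04
m, P = np.array([1.0]), np.array([[0.04]])
my, Py = unscented_transform(m, P, lambda x: x**2)
```

Because the transformation is applied to the sigma points directly, no Jacobian of f is ever formed, which is the computational advantage over the EKF noted above.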
2.3. Orthogonal-based method for parameter identifiability
1. Calculate the sensitivity coefficient matrix Z.
2. Calculate the sum of squared values for each column of Z and choose the column with the highest sum as the most estimable parameter.
3. Mark this column as X_L.
4. Calculate the orthogonal projection of the remaining columns of Z onto the vector space V spanned by X_L, to find the column that exhibits the highest independence.
5. Calculate the residual matrix, R_L, as a measure of independence.
6. Calculate the sum of squares for each column of the R_L matrix, resulting in the vector C_L, and choose the column corresponding to the largest sum of squares as the next estimable parameter.
7. Select the corresponding column in Z and augment the matrix X_L with this new column.
8. Iterate steps 4-7 until the cutoff value is reached or until all of the parameters have been selected as identifiable.
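The selection procedure above can be sketched as follows, assuming the sensitivity matrix Z has already been computed (here a synthetic Z with one deliberately dependent column stands in for the real model's sensitivities; `orthogonal_ranking` is our own illustrative name):

```python
import numpy as np

def orthogonal_ranking(Z, cutoff=0.004):
    """Rank estimable parameters from a sensitivity matrix Z (rows:
    observations, columns: parameters), following the orthogonal
    selection idea of Yao et al. Illustrative sketch only."""
    n_params = Z.shape[1]
    selected = [int(np.argmax((Z**2).sum(axis=0)))]  # largest column SSQ
    while len(selected) < n_params:
        XL = Z[:, selected]                          # chosen columns X_L
        # residual R_L after orthogonal projection onto span(X_L)
        proj = XL @ np.linalg.pinv(XL.T @ XL) @ XL.T
        RL = Z - proj @ Z
        CL = (RL**2).sum(axis=0)                     # per-column SSQ, C_L
        CL[selected] = -np.inf                       # exclude chosen columns
        best = int(np.argmax(CL))
        if CL[best] < cutoff:                        # stopping criterion
            break
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
Z = rng.normal(size=(50, 4))
Z[:, 3] = Z[:, 0] + Z[:, 1]       # column 3 is linearly dependent
ranked = orthogonal_ranking(Z)    # only 3 of 4 parameters are selected
```

A dependent column leaves an essentially zero residual once its partners are selected, so it falls below the cutoff and is reported as non-estimable, mirroring the behaviour observed for the test model.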
In this approach, the choice of step size, Δθ_j, is critical, as the numerical values obtained with this method depend strongly on the step size. The square root of the variance generated by the UKF at each step of its iteration was used as the step size, giving Δθ_j = √P_jj, where P_jj is the current UKF variance of parameter θ_j. This choice ensures that the step size varies with each recursive step and remains within the feasible parameter range of the perturbed system. It has been shown that the approximation error decreases linearly as the step size becomes smaller. Parameters are perturbed within one standard deviation and thus have a higher probability than parameters outside this range. Furthermore, with each recursion, the arrival of new information during the parameter estimation in the UKF generally decreases the uncertainty within the system, making the standard deviation a sensible choice for the step size.
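The central-difference sensitivity with this variable step size can be sketched as below. The quadratic toy output and the name `cd_sensitivity` are illustrative; in the actual method, `variances` would be the current parameter variances from the UKF:

```python
import numpy as np

def cd_sensitivity(model_output, theta, variances):
    """Central-difference sensitivity matrix Z, one column per parameter,
    with step size sqrt(P_jj) taken from the filter's parameter variances.
    `model_output` maps a parameter vector to an output vector."""
    cols = []
    for j, var in enumerate(variances):
        h = np.sqrt(var)                 # variable step from UKF variance
        up, dn = theta.copy(), theta.copy()
        up[j] += h
        dn[j] -= h
        cols.append((model_output(up) - model_output(dn)) / (2 * h))
    return np.column_stack(cols)

# toy output y(theta) = [theta0^2, theta0*theta1]; its Jacobian at
# theta = (2, 3) is [[4, 0], [3, 2]], which central differences recover
f = lambda th: np.array([th[0]**2, th[0]*th[1]])
Z = cd_sensitivity(f, np.array([2.0, 3.0]), variances=[0.01, 0.01])
```

For this quadratic output the central difference is exact, illustrating the second-order accuracy that makes the scheme tolerant of a step size that changes from one recursion to the next.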
3.1. Model setup
Parameters chosen to be unknown, and their corresponding rank, or position in the residual matrix
We start by integrating the ODEs over the time interval [0, T], where T = 5000, with all the parameters at their known values to generate the synthetic measurement time series. The final time point was chosen as the time at which the system reaches its steady state. The MATLAB function ode45 (an explicit Runge-Kutta method) was used for solving the ODEs. The synthetic measurement data were created by adding small uncorrelated white noise to the observations. During the simulation, the measurement data are sampled at a fixed interval of Δt = 0.2.
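The synthetic-data setup can be sketched as follows. A toy decay ODE stands in for the Rohwer et al. model, SciPy's `solve_ivp` for MATLAB's ode45, and the horizon is shortened from T = 5000 for the example; the sampling interval Δt = 0.2 and the additive white noise follow the procedure above:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)

def rhs(t, x):
    return -0.05 * x                     # placeholder kinetics

T, dt = 50.0, 0.2                        # shortened horizon for the toy
t_samples = np.linspace(0.0, T, int(round(T / dt)) + 1)  # fixed grid
sol = solve_ivp(rhs, (0.0, T), [1.0], t_eval=t_samples, rtol=1e-8)

noise_sd = 0.01                          # small uncorrelated white noise
measurements = sol.y[0] + rng.normal(0.0, noise_sd, size=sol.y[0].shape)
```

The resulting `measurements` array plays the role of the experimental data fed to the UKF and to the identifiability analysis.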
In order to make a fair comparison of the UKF to other methods of parameter estimation, the identifiability analysis was performed separately. This does not inflate the advantage of integrating identifiability with estimation; if anything, it detracts from it, as it gives the other estimation algorithms an effective head start.
Therefore, we first performed the identifiability analysis to determine which parameters could be estimated. The 12 parameters assumed to be 'unknown' were initialized as previously described. The identifiability analysis revealed that 10 of the 12 parameters were identifiable (see Table 1). In the method proposed by Yao et al., heuristics were used to determine the condition for stopping the selection of identifiable parameters. We followed the same procedure laid out in Yao et al., and found a reasonable stopping criterion to be Max(C_L) < 0.004.
The UKF parameter estimation algorithm was repeated for 97 runs to provide statistics of the estimation. In order to compare the parameter estimation methods, we keep the nonidentifiable parameters fixed at their known values, as these parameters have the least effect on the system. In general, however, these parameters would not be known a priori. In such cases, we would first try to resolve the parameter identifiability by restructuring the model and, only as a last resort, set them to fixed arbitrary values.
Though the method estimated most of the parameter values with low standard deviation, two parameters, Km6UDP and Km6Suc6P, show decidedly higher standard deviations. This high variation contradicts the conclusion of the identifiability analysis. One possible explanation is that these two parameters have some (nonlinear) functional relationship with other parameters; the orthogonal parameter identifiability approach proposed by Yao et al. can only deal with collinearity. A second possible explanation lies in the local identifiability approach applied in this study, which by definition only ensures that the system is identifiable within a finite (but not unique) set of points in the parameter space. Individual parameters within this set could have a very large domain, resulting in a large variation within the individual parameter, i.e. the parameter is identifiable but poorly resolved.
To better gauge the parameter estimation of the UKF, the ten estimable parameters were also determined using a genetic algorithm (GA) and NLSQ. Both alternatives were implemented in MATLAB, using the default implementations and settings. A third alternative, simulated annealing, was attempted using the implementation in Copasi; however, this method failed to produce usable parameters on its own and took more than an order of magnitude longer to run. As with the UKF, 97 repetitions were performed for each of these methods.
Comparison of actual parameter values and the parameter estimation results using UKF, GA and NLSQ
Neither the GA nor the NLSQ performed well when the parameter value fell below 1, which was the case for six of the ten parameters. With one exception (the NLSQ estimate of Ki3G6P), only the UKF was able to consistently estimate the smaller parameters. The GA, in fact, seemed to have difficulty with any parameter far from 1, with all mean estimates falling between 0.85 and 1.04 with very small standard deviations. Similarly, the NLSQ estimation shows very tight results for the parameters with value 1 (standard deviations < 0.01), but, with the exception of the parameter Ki3G6P, the standard deviations increase considerably as the parameter value departs from 1 (with five of the standard deviations exceeding 100% of the parameter value). The UKF is more consistent throughout, estimating both larger and smaller values with comparable standard deviations.
In order to develop dynamic models for systems biology, it is necessary to have knowledge of the underlying kinetic parameters for the system being modelled. Since it is not always possible to have this knowledge directly from experimental measurements, it is necessary to develop a method to estimate these parameter values. Furthermore, it is critical that we rely on the accuracy of these estimated values. One step towards this is the parameter identifiability which can be used to help determine if there are sufficient measurement data with which to identify the parameter(s).
In this article, we have proposed a method whereby a biological system is viewed as a state-space system, so that approaches from control theory, here the UKF, can be applied to parameter estimation. Before approaching the estimation problem, however, the identifiability method proposed by Yao et al. was applied to detect the parameters which cannot be uniquely estimated given the model structure and the measurement data. One of the benefits of integrating estimation and identifiability is the reuse of the variance generated by the UKF as the step size in the calculation of the sensitivity coefficients for identifiability.
The UKF offers many desirable traits for biological modelling, chief among them a native nonlinear transformation. Combined with parameter estimation, the UKF thus helps address one of the major bottlenecks in biological modelling: the lack of experimentally measured parameters. The UKF with identifiability analysis is particularly important in the study of kinetic networks, as a large number of parameters may become unidentifiable as these networks grow in size and complexity. Another aspect of the UKF that lends itself to kinetic models is that it is a time-evolution algorithm: the parameter estimate is refined with each additional set of measurements, making the UKF especially well suited to estimating biochemical pathways from time series data.
In future work, we intend to refine the methods to better identify the functional relationship(s) between parameters and to quantify them. By applying the identifiability analysis, we will estimate the independent parameters and determine the dependent ones from this quantification. Another thrust of research will be generalizing the stopping criterion for the identifiability analysis. For this test model, Max(C_L) < 0.004 provided the desired stopping criterion, but it is unknown whether this value is model- or data-specific.
(a) MATLAB source code for the implementation can be made available upon request.
EKF: extended Kalman filter
NLSQ: nonlinear least squares
UKF: unscented Kalman filter
This work was supported by the German Federal Ministry for Education and Research (BMBF 0315295).
- Sun X, Jin L, Xiong M: Extended Kalman filter for estimation of parameters in nonlinear state-space models of biochemical networks. PLoS ONE 2008, 3: e3758. 10.1371/journal.pone.0003758
- Lillacci G, Khammash M: Parameter estimation and model selection in computational biology. PLoS Comput Biol 2010, 6: e1000696. 10.1371/journal.pcbi.1000696
- Mendes P, Kell D: Non-linear optimization of biochemical pathways: applications to metabolic engineering and parameter estimation. Bioinformatics 1998, 14(10): 869-883. 10.1093/bioinformatics/14.10.869
- Kirkpatrick S, Gelatt CD, Vecchi MP: Optimization by simulated annealing. Science 1983, 220: 671-680. 10.1126/science.220.4598.671
- Moles CG, Mendes P, Banga JR: Parameter estimation in biochemical pathways: a comparison of global optimization methods. Genome Res 2003, 13: 2467-2474. 10.1101/gr.1262503
- Quach M, Brunel N, d'Alche Buc F: Estimating parameters and hidden variables in non-linear state-space models based on ODEs for biological networks inference. Bioinformatics 2007, 23: 3209-3216. 10.1093/bioinformatics/btm510
- Julier S, Uhlmann J: Unscented filtering and nonlinear estimation. Proc IEEE 2004, 92(3): 401-422. 10.1109/JPROC.2003.823141
- Kandepu R, Foss B, Imsland L: Applying the unscented Kalman filter for nonlinear state estimation. J Process Control 2008, 18(7-8): 753-768. 10.1016/j.jprocont.2007.11.004
- Yue H, Brown M, Knowles J, Wang H, Broomhead DS, Kell DB: Insights into the behaviour of systems biology models from dynamic sensitivity and identifiability analysis: a case study of NF-kB signaling pathway. Mol Biosyst 2006, 2: 640-649. 10.1039/b609442b
- Yao KZ, Shaw BM, Kou B, McAuley KB, Bacon DW: Modeling ethylene/butene copolymerization with multi-site catalysts: parameter estimability and experimental design. Polym React Eng 2003, 11(3): 563-588. 10.1081/PRE-120024426
- Geffen D: Parameter identifiability of biochemical reaction networks in systems biology. Masters Thesis, Department of Chemical Engineering, Queen's University, Kingston; 2008.
- Rohwer JM, Botha FC: Analysis of sucrose accumulation in the sugar cane culm on the basis of in vitro kinetic data. Biochem J 2001, 358(2): 437-445. 10.1042/0264-6021:3580437
- Chen WW, Niepel M, Sorger PK: Classic and contemporary approaches to modeling biochemical reactions. Genes Dev 2010, 24(17): 1861-1875. 10.1101/gad.1945410
- Quaiser T, Monnigmann M: Systematic identifiability testing for unambiguous mechanistic modeling--application to JAK-STAT, MAP kinase, and NK-kB signaling pathway models. BMC Syst Biol 2009, 3: 50. 10.1186/1752-0509-3-50
- Asprey SP, Macchietto S: Dynamic Model Development: Methods, Theory and Applications. Elsevier, Amsterdam; 2003.
- Jacquez JA, Greif P: Numerical parameter identifiability and estimability: integrating identifiability, estimability, and optimal sampling design. Math Biosci 1985, 77(1-2): 201-227. 10.1016/0025-5564(85)90098-7
- Degenring D, Froemel C, Dikta G, Takors R: Sensitivity analysis for the reduction of complex metabolism models. J Process Control 2004, 14(7): 729-745. 10.1016/j.jprocont.2003.12.008
- Terejanu GA: Unscented Kalman filter tutorial. [http://users.ices.utexas.edu/~terejanu/files/tutorialUKF.pdf]
- Baker RD: A methodology for sensitivity analysis of models fitted to data using statistical methods. IMA J Manag Math 2001, 12(1): 23-39. 10.1093/imaman/12.1.23
- Brennan C: Notes on numerical differentiation. School of Electronic Engineering, Dublin City University. [http://elm.eeng.dcu.ie/~ee317/Course_Notes/handout1.pdf]
- Kalman Intro, PSAS. [http://psas.pdx.edu/KalmanIntro/]
- Julier SJ, Uhlmann JK: A new extension of the Kalman filter to nonlinear systems. International Symposium on Aerospace/Defense Sensing, Simulation and Controls 1997, 3.