# Inferring Boolean network states from partial information

- Guy Karlebach

*EURASIP Journal on Bioinformatics and Systems Biology* **2013**:11

https://doi.org/10.1186/1687-4153-2013-11

© Karlebach; licensee Springer. 2013

**Received: **30 May 2013

**Accepted: **26 August 2013

**Published: **5 September 2013

## Abstract

Networks of molecular interactions regulate key processes in living cells. Therefore, understanding their functionality is a high priority in advancing biological knowledge. Boolean networks are often used to describe cellular networks mathematically and are fitted to experimental datasets. The fitting often results in ambiguities since the interpretation of the measurements is not straightforward and since the data contain noise. In order to facilitate a more reliable mapping between datasets and Boolean networks, we develop an algorithm that infers network trajectories from a dataset distorted by noise. We analyze our algorithm theoretically and demonstrate its accuracy using simulation and microarray expression data.

## Introduction

In a Boolean network [1], every node is assigned one of two Boolean values, and the value of a node at time *T* + 1 is determined by the Boolean values of other nodes at time *T*. More specifically, the Boolean value of a node is determined by a time-invariant Boolean function that takes as input the Boolean values of a set of network nodes at the preceding time point. The set of nodes that constitute the input to a Boolean function is called its regulators, and the output node is referred to as its target. The vector of the Boolean values of all the network nodes is called the network state. A sequence of states that evolves from some initial state according to the Boolean functions is called a trajectory. The trajectories of network states can be complex, displaying chaos or order depending on the network structure and the initial state [2]. When the outputs of all the Boolean functions at state S produce the state S itself, S is called a steady state. Since in every state every node is set to one of two Boolean values, the number of possible network states is exponential in the number of nodes. Figure 1 illustrates a simple Boolean network.

In recent years, new experimental technologies in molecular biology enabled a broader examination of gene activity in cells [3–5] and consequently, significant efforts have been invested in the application of gene regulatory networks modeling [6]. However, experimental procedures produce continuous values that do not determine conclusively the activity or inactivity of a gene. Hence, these values cannot be mapped into states of Boolean networks unambiguously, and the resulting picture of the cell state contains errors. Computational methods address this problem in various ways, for example, by using additional data such as the genomic sequences of gene promoters [7], by mapping the continuous measurements into discrete values and then optimally fitting the transformed dataset to a network model [8, 9], or by using a prior distribution on states [10]. It is well recognized that an improved ability to probe the state of a cell can lead to improvement in our understanding of a broad range of biological processes.

With this motivation in mind, we propose a novel algorithm for inferring the state of a Boolean network using a continuous noisy expression dataset and the network structure, i.e., the genes and their regulators. The algorithm is based on the following idea: High expression values are more likely to correspond to a Boolean 1, while low to a Boolean 0. By combining the network structure and the expression dataset, we can estimate the likelihood of each continuous value to correspond to a Boolean value of 1 or 0. We can then update the likelihood (equivalently the expression value) of each gene accordingly and repeat the process until any further change would either (a) change a gene towards a Boolean value that is less likely for it or (b) change a gene towards a Boolean value that is as likely as the opposite Boolean value (i.e., make an arbitrary guess). The update scheme should be such that if enough updates were possible, the final probability distribution will describe the states of a Boolean network.

The next section explains how to implement this idea using the conditional entropy [11] of the network. It will be shown that changing the gene probabilities in the opposite direction of the conditional entropy gradient is equivalent to executing the inference algorithm that we outlined above. The section begins by analyzing a simple network and then extends the results to general networks.

In the ‘Testing’ section, we use simulation and real data in order to test the performance of the algorithm. We generate noisy datasets for several Boolean network structures and use a microarray time-series dataset from a study of the *Saccharomyces cerevisiae* cell cycle. We find that using the simulated datasets, the algorithm infers a large proportion of the Boolean states correctly; and using the yeast dataset, it infers Boolean states that agree with the conclusion of the study. We conclude by summarizing our results and suggesting research directions that can lead to further progress in this domain.

## Main text

### Analysis

Consider the following simple network: gene X negatively regulates gene Y. In other words, when X is active Y is inactive, and vice versa. X is also said to be a repressor of Y or to repress Y. The Boolean function that controls the value of Y is called NOT.

An experimental device can measure the states of X and Y. If a gene is active, it measures a value from a normal distribution with a large positive average *μ* and small standard deviation *σ*. If a gene is inactive, the device measures a value from a normal distribution with a negative average − *μ* and standard deviation *σ*.

The input to our problem is a series of *N* i.i.d. measurements of the genes X, Y (for example, under different stimulations given to the cells). X can be active or inactive in every measurement with equal probabilities. We are also given the structure of the network. We do not know the logic with which X regulates Y, but the values in the dataset will reflect this logic.

Our goal is to find the states of X and Y in each measurement. Clearly, we cannot always recover the ‘true’ states from every measurement, since there is a nonzero probability that the device will measure a large value for the inactive gene and, at the same time, a small value for the active gene. Nevertheless, the best strategy is to identify X as a repressor and then predict that in each pair of values the larger one corresponds to an active gene and the smaller to an inactive - the larger the difference, the higher our confidence. The inference algorithm, which we will shortly describe, will apply a generalization of this strategy. We will show that in the case of the simple network, the algorithm predicts the network states in the optimal way. Then, we will explain how it generalizes to more complex networks. Before we describe the algorithm, we need to define several random variables.
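The measurement model just described can be sketched in a few lines. The concrete values *μ* = 2, *σ* = 0.5, and *N* = 100 are illustrative assumptions; the text only requires that *μ* be large relative to *σ*:

```python
import random

random.seed(0)
MU, SIGMA, N = 2.0, 0.5, 100  # illustrative values; the text only fixes mu >> sigma

measurements = []
for _ in range(N):
    x_state = random.random() < 0.5   # X active with probability 1/2
    y_state = not x_state             # X represses Y (Boolean NOT)
    x_val = random.gauss(MU if x_state else -MU, SIGMA)
    y_val = random.gauss(MU if y_state else -MU, SIGMA)
    measurements.append((x_val, y_val))
```

With this separation between the two Gaussians, almost every measurement pair has one clearly large and one clearly small value, which is the regularity the inference algorithm exploits.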

Denote the *N* measurements by C_{1}, C_{2},…,C_{N}, and the continuous values of X and Y in measurement C_{i} as *x*_{i} and *y*_{i}, respectively. As a convention, we will use uppercase and lowercase letters for variables that assume discrete values and continuous values, respectively. The terms measurement *i* and C_{i} are used interchangeably.

The logistic function $\lambda\left(x\right)=\frac{1}{1+{e}^{-x}}$, with $\overline{\lambda}\left(x\right)=1-\lambda\left(x\right)$, maps continuous values to probabilities. For example, if *x*_{i} is close to the average of its distribution *μ*, it will have a high probability of corresponding to a Boolean 1, because *μ* is a large positive number. The logistic function will also enable us to implement the update step of our algorithm, in which we update the probabilities of the previous iteration.
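As a concrete sketch, the standard logistic function can be written as follows; the paper may use a parameterized variant (steepness, offset), so treat this exact form as an assumption:

```python
import math

def lam(v):
    """Logistic function: maps a continuous measurement to P(Boolean value = 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def lam_bar(v):
    """Complementary probability P(Boolean value = 0)."""
    return 1.0 - lam(v)
```

A value near 0 maps to probability 0.5 (maximal uncertainty), while values near ±*μ* map close to 1 or 0.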

Using the logistic function, we define for every measurement a discrete random variable [*X*;*Y*]_{i} ∈ {00, 10, 01, 11}:

$$P\left(\left[X;Y\right]_{i}=ab\right)={\left[\lambda\left({x}_{i}\right)\right]}^{a}{\left[\overline{\lambda}\left({x}_{i}\right)\right]}^{1-a}\cdot {\left[\lambda\left({y}_{i}\right)\right]}^{b}{\left[\overline{\lambda}\left({y}_{i}\right)\right]}^{1-b},\qquad a,b\in \left\{0,1\right\}$$

The probability distribution of [*X*;*Y*]_{i} is well defined, since all probabilities are in (0,1) and sum to 1.

Since each of *x*_{i} and *y*_{i} is drawn from one of the normal distributions *N*(*μ*,*σ*^{2}), *N*(−*μ*,*σ*^{2}) with a small *σ*^{2}, the probabilities *P*([*X*;*Y*]_{i} = 11) and *P*([*X*;*Y*]_{i} = 00) will be small.

Using [*X*;*Y*]_{i}, we can define the discrete random variable *X*_{i} with probability function:

$$P\left({X}_{i}=1\right)=\lambda\left({x}_{i}\right),\qquad P\left({X}_{i}=0\right)=\overline{\lambda}\left({x}_{i}\right)$$

We define the discrete random variable *Y*_{i} by replacing $\lambda \left({x}_{i}\right),\overline{\lambda}\left({x}_{i}\right)$ with $\lambda \left({y}_{i}\right),\overline{\lambda}\left({y}_{i}\right)$ in the definition of *X*_{i}.

How can we estimate the probability that *X* = 1 and *Y* = 0 in the whole dataset? In order to do that, note that as *σ*^{2} becomes smaller and the number of measurements *N* larger, by the law of large numbers:

$$\frac{1}{N}\sum_{i=1}^{N}P\left(\left[X;Y\right]_{i}=00\right)\to 0,\qquad \frac{1}{N}\sum_{i=1}^{N}P\left(\left[X;Y\right]_{i}=11\right)\to 0$$

which is what one expects intuitively - either *X* is active and *Y* is inactive, or vice versa, but they cannot both be active or inactive in the same measurement, because *X* represses *Y*. Although it is possible to have a high probability *P*([*X*;*Y*]_{i} = 00) for some *i*, such deviations will have little effect on the average of the *N* samples. Hence, we define a variable [*X*;*Y*] ∈ {00, 01, 10, 11} with a distribution that is the average of the distributions of the variables [*X*;*Y*]_{i}.
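A minimal sketch of this averaging, assuming the per-measurement probabilities are products of the entry probabilities as defined above (the dataset `data` is invented for illustration):

```python
import math

def lam(v):
    return 1.0 / (1.0 + math.exp(-v))

def joint_distribution(measurements):
    """Estimate P([X;Y]) by averaging the per-measurement distributions.

    P([X;Y]_i = ab) is the product of the entry probabilities,
    e.g. P([X;Y]_i = 10) = lam(x_i) * (1 - lam(y_i)).
    """
    n = len(measurements)
    p = {"00": 0.0, "01": 0.0, "10": 0.0, "11": 0.0}
    for x, y in measurements:
        p["00"] += (1 - lam(x)) * (1 - lam(y)) / n
        p["01"] += (1 - lam(x)) * lam(y) / n
        p["10"] += lam(x) * (1 - lam(y)) / n
        p["11"] += lam(x) * lam(y) / n
    return p

# An invented dataset consistent with X repressing Y:
# large x paired with small y and vice versa
data = [(3.0, -3.0), (-3.0, 3.0), (2.8, -3.1), (-2.9, 3.2)]
p = joint_distribution(data)
```

On such data the averaged mass concentrates on the assignments 10 and 01, exactly as the law-of-large-numbers argument predicts.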

Since *X* can be inactive or active in any measurement with equal probabilities, similarly to [*X*;*Y*] we define the variable *X* using the distribution

$$P\left(X=1\right)=\frac{1}{N}\sum_{i=1}^{N}\lambda\left({x}_{i}\right),\qquad P\left(X=0\right)=\frac{1}{N}\sum_{i=1}^{N}\overline{\lambda}\left({x}_{i}\right)$$

and in a similar way a discrete random variable *Y*. Note that the probability of [*X*;*Y*] is an estimate of the joint probability of *X* and *Y*, *P*(*X*,*Y*).

How can we infer the probabilities of variables that do not conform to the *X* → *Y* network, for example, when *x*_{i} and *y*_{i} are both positive? We can use the average of all the samples, which is rather accurate, to estimate the probabilities of *X*_{i} = 1 and *Y*_{i} = 1, and then correct the values of *x*_{i} and *y*_{i} accordingly. This estimation and correction process is in fact equivalent to changing *x*_{i} and *y*_{i} in the opposite direction of the gradient of the conditional entropy *H*(*Y*|*X*). We have defined the probability distributions *P*(*X*), *P*(*Y*), *P*(*X*,*Y*) as functions of the continuous values *x*_{i}, *y*_{i}. We can therefore take the partial derivative of the conditional entropy *H*(*Y*|*X*) with respect to each continuous value and obtain the gradient ∇*H*(*Y*|*X*). This leads to the following algorithm:
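The algorithm listing itself did not survive extraction, so the following is only a hedged sketch of the loop as the text describes it: repeatedly step all continuous values against the gradient of *H*(*Y*|*X*) (computed here by finite differences rather than the paper's analytic expression) until the decrease in entropy falls below a threshold *τ*. The parameter names *δ* and *τ* come from the text; the example data are invented:

```python
import math

def lam(v):
    """Logistic mapping from a continuous value to P(Boolean value = 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def network_entropy(xs, ys):
    """H(Y|X) = H(X,Y) - H(X) for the simple X -> Y network,
    computed from the averaged per-measurement distributions."""
    n = len(xs)
    pxy = {(a, b): 0.0 for a in (0, 1) for b in (0, 1)}
    px = {0: 0.0, 1: 0.0}
    for x, y in zip(xs, ys):
        for a in (0, 1):
            pa = lam(x) if a else 1.0 - lam(x)
            px[a] += pa / n
            for b in (0, 1):
                pb = lam(y) if b else 1.0 - lam(y)
                pxy[(a, b)] += pa * pb / n
    def h(ps):
        return -sum(p * math.log(p) for p in ps if p > 0)
    return h(pxy.values()) - h(px.values())

def infer(xs, ys, delta=2.0, tau=1e-7, max_iter=200, eps=1e-6):
    """Move every continuous value against the numeric gradient of H(Y|X)
    until the entropy decrease per iteration falls below tau."""
    xs, ys = list(xs), list(ys)
    for _ in range(max_iter):
        base = network_entropy(xs, ys)
        grads = []
        for vals in (xs, ys):
            g = []
            for i in range(len(vals)):
                vals[i] += eps
                g.append((network_entropy(xs, ys) - base) / eps)
                vals[i] -= eps
            grads.append(g)
        for vals, g in zip((xs, ys), grads):
            for i in range(len(vals)):
                vals[i] -= delta * g[i]  # step against the gradient
        if base - network_entropy(xs, ys) < tau:
            break
    return xs, ys

# Invented dataset consistent with X repressing Y
xs0 = [2.0, -2.0, 1.5, -1.7]
ys0 = [-1.8, 2.1, -2.0, 1.9]
xs1, ys1 = infer(xs0, ys0)
```

On this data the loop pushes each pair toward opposite signs while the conditional entropy shrinks toward 0, which is the behavior proved for the simple network below.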

We now show that the algorithm obtains the desired solution for our simple network. More specifically, if *y*_{i} > *x*_{i}, then *λ*(*x*_{i}) will approach 0 and *λ*(*y*_{i}) will approach 1, and vice versa.

First, in order to compute the gradient, we use the chain rule for conditional entropy: *H*(*Y*|*X*) = *H*(*Y*,*X*) − *H*(*X*). Differentiating with respect to *x*_{i} (the term $\frac{\partial H\left(X\right)}{\partial {x}_{i}}$ vanishes, as noted below) yields

$$\left(\ast\right)\qquad \frac{\partial H\left(Y|X\right)}{\partial {x}_{i}}=\frac{1}{N}\,\lambda\left({x}_{i}\right)\overline{\lambda}\left({x}_{i}\right)\,\log \frac{{\displaystyle \prod_{Y\in \left\{0,1\right\}}}P{\left(X=0,Y\right)}^{P\left({Y}_{i}=Y\right)}}{{\displaystyle \prod_{Y\in \left\{0,1\right\}}}P{\left(X=1,Y\right)}^{P\left({Y}_{i}=Y\right)}}$$

The direction of change in *x*_{i} (positive or negative, i.e., towards Boolean 1 or Boolean 0) is determined by the ratio within the log. If this ratio is greater than 1, the direction of change will be negative (because the change is in the opposite direction of the gradient); if it is smaller than 1, the change will be positive. The magnitude of the change reflects three factors:

- 1.
How certain we are in *x*_{i}. If *x*_{i} is very high or very low, the whole expression, and the change it implies to *x*_{i}, will be small. This is a result of the factor [*λ*(*x*_{i}) · (1 − *λ*(*x*_{i}))], which has its maximum at *λ*(*x*_{i}) = 0.5 and approaches 0 when *λ*(*x*_{i}) approaches 1 or 0.
- 2.
The more likely Boolean value to assign to *y*_{i}. The exponent *P*(*Y*_{i} = *Y*) of *P*(*X*,*Y*) will decrease the weight of the probability *P*(*X*,*Y*) in the ratio if *P*(*Y*_{i} = *Y*) is low, and vice versa.
- 3.
The more likely Boolean (*X*,*Y*) vectors. For example, if *P*(*Y*_{i} = 0) ≈ 0, we will have within the log a ratio between *P*(*X* = 0,*Y* = 1) and *P*(*X* = 1,*Y* = 1). If *P*(*X* = 0,*Y* = 1) is more likely, the ratio will be greater than 1; and if *P*(*X* = 1,*Y* = 1) is more likely, it will be smaller than 1.

A symmetric expression can be developed for *y*_{i}. Note that since all regulator values are equally likely, the term $\frac{\partial H\left(X\right)}{\partial {x}_{i}}$ is 0 (otherwise it negates the bias).
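As a quick numeric check of the chain rule used above, the identity *H*(*Y*|*X*) = *H*(*X*,*Y*) − *H*(*X*) can be verified on a toy joint distribution close to the repression pattern:

```python
import math

def H(probs):
    """Shannon entropy (in bits) of a distribution given as probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy joint distribution P(X, Y) concentrated on the repression pattern
pxy = {(0, 0): 0.01, (0, 1): 0.49, (1, 0): 0.49, (1, 1): 0.01}
px = {a: pxy[(a, 0)] + pxy[(a, 1)] for a in (0, 1)}

h_y_given_x = H(pxy.values()) - H(px.values())  # chain rule: H(Y|X) = H(X,Y) - H(X)
```

The result matches the direct definition $H\left(Y|X\right)=\sum_{a}P\left(X=a\right)H\left(Y|X=a\right)$.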

As an example, assume *P*((*X*,*Y*) = (1,0)) = *P*((*X*,*Y*) = (0,1)) = 0.49 and *P*((*X*,*Y*) = (0,0)) = *P*((*X*,*Y*) = (1,1)) = 0.01. We look at a measurement *i* in which *x*_{i} = 2 and *y*_{i} = 1 and plot the changes to *x*_{i}, *y*_{i} in eight consecutive steps of the algorithm (Figure 2). We choose *δ* = *N*, and therefore the constant 1/*N* is canceled out. As can be seen in the figure, *x*_{i} does not change significantly, while *y*_{i} is reduced sharply to a negative value. This is in agreement with our optimal solution scheme for the simple *X* → *Y* network.
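The eight steps of this example can be replayed with a short script. It uses the log-ratio form of the derivative described above and, as a simplifying assumption, holds the joint probabilities fixed at the stated values throughout:

```python
import math

def lam(v):
    return 1.0 / (1.0 + math.exp(-v))

# Joint probabilities from the example, held fixed here for simplicity
P = {(1, 0): 0.49, (0, 1): 0.49, (0, 0): 0.01, (1, 1): 0.01}

x, y = 2.0, 1.0
for step in range(8):  # delta = N cancels the 1/N factor in the derivative
    # log-ratio for x: weights are P(Y_i = b), i.e. lam(y) or 1 - lam(y)
    ratio_x = sum((lam(y) if b else 1.0 - lam(y)) *
                  (math.log(P[(0, b)]) - math.log(P[(1, b)])) for b in (0, 1))
    # log-ratio for y: weights are P(X_i = a)
    ratio_y = sum((lam(x) if a else 1.0 - lam(x)) *
                  (math.log(P[(a, 0)]) - math.log(P[(a, 1)])) for a in (0, 1))
    dx = lam(x) * (1.0 - lam(x)) * ratio_x
    dy = lam(y) * (1.0 - lam(y)) * ratio_y
    x, y = x - dx, y - dy
```

After eight steps *y*_{i} has been driven well below 0 while *x*_{i} stays positive, reproducing the behavior described for Figure 2.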

We used a very simple network in order to explain the principles of our algorithm, and we now turn to more complex networks. Any network can be described by a directed graph G(V,E), where the set of nodes V contains a node for every gene, and the set of edges E contains an edge from every regulator to each of its targets. The entropy of every node *Y*_{i} is conditioned on its set of regulators **X**_{Y_i}. The conditional entropy of the network becomes $\sum_{i=1}^{\left|V\right|}H\left({Y}_{i}|{\mathbf{X}}_{{Y}_{i}}\right)$.

The dataset of more complex networks may contain steady states, as in the case of the simple network, but it may also include longer trajectories. In the latter case, if two measurements *i*, *i* + 1 correspond to two consecutive states in a trajectory, the value *y*_{i+1} of a target should be taken from C_{i+1} and the values *x*_{i} of its regulators from C_{i}.

In the simple network that we discussed so far, V contains two nodes, one for gene X and one for gene Y, and E contains one directed edge from the node of X to the node of Y. Each measurement is a vector of size 2, (*x*_{i}, *y*_{i}). For calculating (*), we needed to find the probability *P*(*X*,*Y*) of vectors of size 2.

In the general case, in order to differentiate the conditional entropy of the network with respect to the value *x*_{i} of one of the regulators *X* at measurement *i*, we need to find the probability of a Boolean assignment to vectors of arbitrary size. We can do this in the same way as we did for *P*([*X*;*Y*]_{i}) - by multiplying the individual probabilities of the vector entries. The probability of seeing a Boolean vector in the dataset as a whole is again an average of its probabilities in the *N* measurements.
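A sketch of this computation; the function names are invented for illustration:

```python
import math

def lam(v):
    return 1.0 / (1.0 + math.exp(-v))

def vector_probability(cont_values, bool_assignment):
    """Probability of one Boolean assignment in one measurement:
    the product of the individual entry probabilities."""
    p = 1.0
    for v, b in zip(cont_values, bool_assignment):
        p *= lam(v) if b else 1.0 - lam(v)
    return p

def dataset_probability(measurements, bool_assignment):
    """Probability of seeing the Boolean vector in the dataset as a whole:
    the average over the N measurements."""
    return sum(vector_probability(m, bool_assignment)
               for m in measurements) / len(measurements)
```

Because each entry contributes a factor that sums to 1 over its two Boolean values, the probabilities of all assignments to a vector of any size sum to 1.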

Denote by *M*_{x} the number of targets that *X* regulates. Denote by ${\overrightarrow{Z}}_{j}$ a Boolean assignment to **X**_{Y_j} ∪ {*Y*_{j}} \ {*X*} at the *i*th measurement, where *Y*_{j} is the *j*th target of *X* and **X**_{Y_j} is the set of regulators of *Y*_{j}. Denote by $\overrightarrow{Z}$ any Boolean vector of size $\left|{\overrightarrow{Z}}_{j}\right|$. We generalize the derivative with respect to *x*_{i} given by (*) as follows:

$$\left(\ast\ast\right)\qquad \frac{\partial}{\partial {x}_{i}}\sum_{k=1}^{\left|V\right|}H\left({Y}_{k}|{\mathbf{X}}_{{Y}_{k}}\right)=\frac{1}{N}\,\lambda\left({x}_{i}\right)\overline{\lambda}\left({x}_{i}\right)\,\log \frac{{\displaystyle \prod_{j=1}^{{M}_{x}}\prod_{\overrightarrow{Z}}}P{\left(X=0,{\overrightarrow{Z}}_{j}=\overrightarrow{Z}\right)}^{P\left({\overrightarrow{Z}}_{j}=\overrightarrow{Z}\right)}}{{\displaystyle \prod_{j=1}^{{M}_{x}}\prod_{\overrightarrow{Z}}}P{\left(X=1,{\overrightarrow{Z}}_{j}=\overrightarrow{Z}\right)}^{P\left({\overrightarrow{Z}}_{j}=\overrightarrow{Z}\right)}}$$

The expression (**) determines the change to *x*_{i} in the same way as (*), taking into account all the targets of gene X in the network. If X is itself a target of other regulators, then *M*_{x} increases by 1, and ${\overrightarrow{Z}}_{{M}_{x}+1}$ will correspond to a Boolean assignment to the regulators of X at measurement *i*.

Note that if we decrease the step size *δ* of the gradient descent by a factor C, the change in the *x*_{i} values, $-\delta \cdot \nabla \sum_{i=1}^{\left|V\right|}H\left({Y}_{i}|{\mathbf{X}}_{{Y}_{i}}\right)$, will decrease by a factor of C. However, since the logistic function maps the *x*_{i} values to the finite interval (0,1), the probabilities *λ*(*x*_{i}) = *P*(*X*_{i} = 1) need not change by the same factor. For a ratio within the log in (**) that is very large for some *x*_{i} and smaller for another *x*_{j}, the change in *P*(*X*_{j}) as a result of decreasing *δ* can remain large while the change in *P*(*X*_{i}) becomes small. In addition, if the change in the total entropy becomes very small as a result of decreasing *δ*, the algorithm will proceed to step 4.

It may be the case that the dataset is not sufficiently informative for inferring all the states. For example, if in the simple *X* → *Y* network *x*_{i} = *y*_{i}, the algorithm will change both values to 0. On the other hand, if all *x*_{i} and *y*_{i} are different, there are always parameters *τ*, *δ* for which the algorithm will change all *x*_{i} and *y*_{i} to have opposite signs, and *H*(*Y*|*X*) will approach 0. A situation like the former can also occur in more complex networks. We would like to prove that if it does not occur, i.e., if the dataset is informative enough, our algorithm will infer the states of a Boolean network. This is shown by the following theorem:

**Theorem 1:** *Let G =* (*V*,*E*) *be a graph that describes the structure of a Boolean network and D a dataset of N measurements. Let* **X**_{Y} *be the set of nodes that regulate some node Y, i.e., ∀ X′ ∈* **X**_{Y}*,* (*X′ → Y*) *∈ E. Denote by* ${\overrightarrow{X}}_{{Y}_{i}}$ *an assignment of Boolean values to the nodes in* **X**_{Y} *at measurement i. Similarly, Y*_{i} *is a Boolean assignment to Y at measurement i. If the algorithm converges to a global minimum and updates dataset D to become D′, then for any two measurements i,j in D′:*

$$P\left({\overrightarrow{X}}_{{Y}_{i}}={\overrightarrow{X}}_{{Y}_{j}}\wedge {Y}_{i}\ne {Y}_{j}\right)=0$$

#### Proof

The conditional entropy of the network is a sum of conditional entropies. Since conditional entropy is always nonnegative, the global minimum is reached when the conditional entropy of the network is 0, and every term in the sum is also 0.

The conditional entropy of a target *Y* given its regulators **X**_{Y} can be written as

$$H\left(Y|{\mathbf{X}}_{Y}\right)=-\sum_{{\overrightarrow{X}}_{Y}}\sum_{Y\in \left\{0,1\right\}}P\left({\overrightarrow{X}}_{Y}\right)P\left(Y|{\overrightarrow{X}}_{Y}\right)\log P\left(Y|{\overrightarrow{X}}_{Y}\right)$$

Since the log is non-positive and the probabilities are non-negative, *H*(*Y*|**X**_{Y}) reaches its minimum when for every pair $\left(Y,{\overrightarrow{X}}_{Y}\right)$ either $P\left({\overrightarrow{X}}_{Y}\right)=0$, $P\left(Y|{\overrightarrow{X}}_{Y}\right)=0$, or $P\left(Y|{\overrightarrow{X}}_{Y}\right)=1$.

If $P\left({\overrightarrow{X}}_{Y}\right)=0$, the value ${\overrightarrow{X}}_{Y}$ of the regulators never occurs in the data.

Otherwise, if $P\left(Y=1|{\overrightarrow{X}}_{Y}\right)=0$, then since $\sum_{Y\in \left\{0,1\right\}}P\left(Y|{\overrightarrow{X}}_{Y}\right)=1$ it must hold that $P\left(Y=0|{\overrightarrow{X}}_{Y}\right)=1$. Similarly, if $P\left(Y=1|{\overrightarrow{X}}_{Y}\right)=1$, then $P\left(Y=0|{\overrightarrow{X}}_{Y}\right)=0$.

Hence, for a specific assignment ${\overrightarrow{X}}_{Y}$ of the regulators, the target *Y* is either 0 or 1 but never both.□

To summarize the analysis section, we showed that the algorithm infers the states of a simple network optimally if the dataset is informative enough. We then generalized the inference process to general networks, and showed that if the algorithm converges it will infer the states of a Boolean network.

In the ‘Testing’ section, we test the algorithm using simulation and real microarray expression data.

### Testing

#### Boolean networks with two regulators per node

- 1.
Assign logic rules to all the nodes. We use the same logic function for all the nodes - XOR in the first experiment and NOR in the second experiment. XOR's output is 1 if and only if the values of the regulator nodes differ. NOR's output is 1 if and only if the values of the regulator nodes are both 0.

- 2.
Randomly choose an initial state.

- 3.
Generate a trajectory of length 400 states.

- 4.
Convert the Boolean trajectory to a continuous trajectory as follows:

- (a)
Replace every Boolean 1 by a value from a normal distribution with an average of 1 and a standard deviation of 1.1.

- (b)
Replace every Boolean 0 by a value from a normal distribution with an average of −1 and a standard deviation of 1.1.
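The steps above can be sketched as follows; the random 2-regulator wiring and the network size are assumptions, since the exact generation scheme is not shown in this excerpt:

```python
import random

def xor_rule(a, b):
    """Output 1 iff the two regulator values differ."""
    return 1 if a != b else 0

def nor_rule(a, b):
    """Output 1 iff both regulator values are 0."""
    return 1 if (a == 0 and b == 0) else 0

def make_noisy_trajectory(n_nodes=10, length=400, rule=xor_rule, seed=0):
    """Steps 1-4: assumed random 2-regulator wiring, one shared rule,
    a 400-state Boolean trajectory, then Gaussian noise N(+/-1, 1.1^2)."""
    rng = random.Random(seed)
    regulators = [(rng.randrange(n_nodes), rng.randrange(n_nodes))
                  for _ in range(n_nodes)]
    state = [rng.randint(0, 1) for _ in range(n_nodes)]
    boolean_traj, noisy_traj = [], []
    for _ in range(length):
        boolean_traj.append(state)
        noisy_traj.append([rng.gauss(1.0 if b else -1.0, 1.1) for b in state])
        state = [rule(state[r1], state[r2]) for r1, r2 in regulators]
    return boolean_traj, noisy_traj
```

With a standard deviation of 1.1 against averages of ±1, roughly a fifth of the continuous values fall on the wrong side of 0, which is what makes the inference problem nontrivial.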

Note that the noise level is high relative to the averages, and so if the algorithm misassigns a regulator at time *T*, it is more likely to make mistakes in its target at time *T* + 1.

#### Boolean networks with imperfect structure

#### The cell cycle network of Li et al

#### Conway's game of life


Additional file 1:**Movie 1.** Reconstruction of a trajectory of Conway’s Game of Life. The left frame is the real trajectory, the middle frame is a maximal probability reconstruction, and the right frame is the gradient descent reconstruction. Boolean 1 is represented by a black cell and Boolean 0 by a white cell. (MOV 2 MB)

The maximal probability reconstruction makes an error on 18.6% of the nodes. In the initial 50 states, it errs on 18.2% of the nodes, and in the last 50 states, on 19% of the nodes. The gradient descent reconstruction assigns the wrong values to 8.8% of the nodes. In the initial 50 states, its error rate is 12.8%, and in the last 50 states, 4.8%.

#### Microarray expression data

Orlando et al. [18] compared gene expression patterns in wild-type yeast with those in a cyclin mutant strain. They observed that many genes are expressed in a cyclic pattern in both strains. In order to explain this observation, they suggested a Boolean network of nine transcription factors and transcription complexes. They showed that for logic functions of their choice and most initial states, the network traverses the cell cycle stages and, therefore, can explain their observation. We will use the expression data of the transcription factors and the network structure from [18] and infer the network states in wild type and mutant cells. If the states represent the cell cycle in both strains, then our analysis will support the conclusion of the study.

For the MBF, SBF, and SFF complexes, we use the expression profiles of their members STB1, SWI4, and FKH1, respectively. The dataset of [18] contains four time series of 15 microarrays for time points from 30 to 260 min, two replicates for the wild type and two for the mutant. Since all expression values are positive, we need to map them to a symmetric range centered at 0, as in the simulations. However, different arrays typically contain biases; for example, a gene can have a higher value in an array that has a higher mean expression value. Therefore, mapping two identical values from two different arrays to the same value may result in a poor estimate of the initial probabilities.
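One way to remove such array biases is to standardize each array separately before the mapping; this is an assumption for illustration, as the exact normalization procedure is not shown in this excerpt:

```python
import math

def standardize_array(values):
    """Center one array at 0 and scale it to unit variance, so that values
    from different arrays are comparable before being mapped to probabilities."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    sd = math.sqrt(var) or 1.0  # guard against a constant array
    return [(v - mean) / sd for v in values]
```

After standardization, each array is centered at 0, so a value's sign, rather than its array-specific magnitude, drives the initial Boolean probabilities.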

## Conclusions

In this study we presented a problem that arises in molecular biology, namely, that of inferring the activity of cellular network components given noisy measurements, and defined it as mapping continuous measurements to Boolean network states. We developed an algorithm that given a network structure infers its Boolean states from a dataset of continuous measurements. Our results show that the algorithm can successfully reconstruct Boolean states from inaccurate continuous data. The algorithm performs reasonably well even if the relations between the nodes of the network contain errors. We also showed that it can be used to interpret real microarray data and examine experimental hypotheses.

Our approach is highly dependent on a network structure, and when that is not available, methods that rely solely on expression should be used [15, 19]. We did not define a concept of prior knowledge, which has been used in various works to integrate information into Bayesian models [20, 21]. While this makes our method arguably less flexible, it also exempts us from the need to define prior distributions. Finally, the algorithm is defined for deterministic Boolean networks, in contrast to probabilistic Boolean networks that may better express biological noise [22].

When one of these two events occurs, it is impossible to reconstruct the original states of *X* and *Y*. In more complex networks, information loss is more intricate. Determining an upper limit on the number of Boolean values that can be recovered given a certain amount of noise may prove insightful.

Another aspect that should be investigated is how to choose parameters that optimize the performance of the algorithm, such as the parameters of the logistic function or the step size *δ* and threshold *τ* of the gradient descent.

As Boolean networks can generate a diverse range of dynamic behaviors, the accuracy of reconstructing trajectories that arise in different dynamic regimes should also be characterized. For example, are chaotic trajectories harder to reconstruct than those that display order? More simulation tests can better define the relationships between the quality of data and different classes of networks.

Current experimental techniques produce an ever-greater number of measurements, and there is a pressing need for methods that will enable researchers to interpret them accurately and without bias. An accurate method for inferring the state of a cell can translate this richness of data into important discoveries.

## References

- Kauffman SA: **Metabolic stability and epigenesis in randomly constructed genetic nets.** *J. Theor. Biol.* 1969, **22:** 437-467. 10.1016/0022-5193(69)90015-0
- Kauffman SA: *The Origins of Order: Self-Organization and Selection in Evolution*. Oxford: Oxford University Press; 1993.
- Yu X, Schneiderhan-Marra N, Joos TO: **Protein microarrays and personalized medicine.** *Ann. Biol. Clin.* 2011, **69**(1): 17-29.
- Liu F, Kuo WP, Jenssen TK, Hovig E: **Performance comparison of multiple microarray platforms for gene expression profiling.** *Methods Mol. Biol.* 2012, **802:** 141-155. 10.1007/978-1-61779-400-1_10
- Roy NC, Alterman E, Park ZA, McNabb WC: **A comparison of analog and next-generation transcriptomic tools for mammalian studies.** *Brief. Funct. Genomics* 2011, **10**(3): 135-150. 10.1093/bfgp/elr005
- Karlebach G, Shamir R: **Modeling and analysis of gene regulatory networks.** *Nat. Rev. Mol. Cell Biol.* 2008, **9:** 770-780. 10.1038/nrm2503
- Pan Y, Durfee T, Bockhorst J, Craven M: **Connecting quantitative regulatory-network models to the genome.** *Bioinformatics* 2007, **23:** 367-376. 10.1093/bioinformatics/btm228
- Akutsu T, Miyano S, Kuhara S: **Identification of genetic networks from a small number of gene expression patterns under the Boolean network model.** *Pac. Symp. Biocomput.* 1999, 17-28.
- Sharan R, Karp RM: **Reconstructing Boolean models of signaling.** *J. Comput. Biol.* 2013, **3:** 1-9.
- Gat-Viks I, Tanay A, Raijman D, Shamir R: **A probabilistic methodology for integrating knowledge and experiments on biological networks.** *J. Comput. Biol.* 2006, **13:** 165-181. 10.1089/cmb.2006.13.165
- Shannon CE: **A mathematical theory of communication.** *Bell Syst. Tech. J.* 1948, **27:** 379-423, 623-656.
- Karlebach G, Shamir R: **Constructing logical models of gene regulatory networks by integrating transcription factor-DNA interactions with expression data: an entropy-based approach.** *J. Comput. Biol.* 2012, **19:** 30-41. 10.1089/cmb.2011.0100
- Shannon P, Markiel A, Ozier O, Baliga NS, Wang JT, Ramage D, Amin N, Schwikowski B, Ideker T: **Cytoscape: a software environment for integrated models of biomolecular interaction networks.** *Genome Res.* 2003, **13:** 2498-2504. 10.1101/gr.1239303
- Edgar R, et al.: **Gene expression omnibus: NCBI gene expression and hybridization array data repository.** *Nucleic Acids Res.* 2002, **30**(1): 207-210. 10.1093/nar/30.1.207
- Shmulevich I, Zhang W: **Binary analysis and optimization-based normalization of gene expression data.** *Bioinformatics* 2002, **18**(4): 555-565. 10.1093/bioinformatics/18.4.555
- Li F, Long T, Lu Y, Ouyang Q, Tang C: **The yeast cell-cycle network is robustly designed.** *Proc. Natl. Acad. Sci. U. S. A.* 2004, **101:** 4781-4786. 10.1073/pnas.0305937101
- Gardner M: **Mathematical games - the fantastic combinations of John Conway's new solitaire game "life".** *Scientific Am.* 1970, **223:** 120-123.
- Orlando DA, Lin CY, Bernard A, Wang JY, Socolar JES, Iversen ES, Hartemink AJ, Haase SB: **Global control of cell-cycle transcription by coupled CDK and network oscillators.** *Nature* 2008, **453:** 944-948. 10.1038/nature06955
- Zhou X, Wang X, Dougherty E: **Binarization of microarray data based on a mixture model.** *Mol. Cancer Therap.* 2003, **2**(7): 679-684.
- Gat-Viks I, Tanay A, Raijman D, Shamir R: **A probabilistic methodology for integrating knowledge and experiments on biological networks.** *J. Comput. Biol.* 2006, **13:** 165-181. 10.1089/cmb.2006.13.165
- Friedman N, Linial M, Nachman I, Pe'er D: **Using Bayesian networks to analyze expression data.** *J. Comput. Biol.* 2000, **7:** 601-620. 10.1089/106652700750050961
- Shmulevich I, Dougherty ER, Kim S, Zhang W: **Probabilistic Boolean networks: a rule-based uncertainty model for gene regulatory networks.** *Bioinformatics* 2002, **18:** 261-274. 10.1093/bioinformatics/18.2.261

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.