Detecting controlling nodes of boolean regulatory networks

Abstract

Boolean models of regulatory networks are assumed to be tolerant to perturbations, which qualitatively implies that each function can depend only on a few nodes. Biologically motivated constraints further show that functions found in Boolean regulatory networks belong to certain classes of functions, for example, the unate functions. It turns out that these classes have specific properties in the Fourier domain, which motivates us to study the problem of detecting controlling nodes in classes of Boolean networks using spectral techniques. We consider networks with unbalanced functions and functions of an average sensitivity less than $\frac{2}{3}k$, where $k$ is the number of controlling variables of a function. Further, we consider the class of 1-low networks, which includes unate networks, linear threshold networks, and networks with nested canalyzing functions. We show that the application of spectral learning algorithms leads to both better time and sample complexity for the detection of controlling nodes compared with algorithms based on exhaustive search. For a particular algorithm, we state analytical upper bounds on the number of samples needed to find the controlling nodes of the Boolean functions. Further, improved algorithms for detecting controlling nodes in large-scale unate networks are given and numerically studied.

1 Introduction

The reconstruction of genetic regulatory networks from (possibly noisy) expression data is a contemporary problem in systems biology. Modern measurement methods, for example, the so-called microarrays, allow measuring the expression levels of thousands of genes under particular conditions. A major problem is to predict the structure of the underlying regulatory network. The overall goal is to understand the processes in cells, for example, how cells execute and control the operations required for their function. In the Boolean model, this implies that, based on a given set of observed state-transition pairs (samples), the Boolean function attached to each node needs to be identified. In general, this problem is quite hard due to the large number of possible Boolean functions. First results for the noiseless case appeared in 1998 in the work of Liang et al. [1]. Their Reverse Engineering Algorithm (REVEAL) first tries to find the controlling nodes of each node by estimating the mutual information between possible variables and the regulatory function's output. After the inputs have been identified, the truth table of the Boolean function can be determined from the samples. If the number of variables for each function is at most $K$, the REVEAL algorithm considers each of the $\binom{n}{K}$ combinations of variables, where $n$ is the number of nodes in the network.

The numerical results in [1] suggest that it is possible to identify a Boolean network using a small number of samples. Akutsu et al. [2] gave an analytical and constructive proof that it is possible to identify the network using only $O(\log n)$ samples with high probability. For constant values of $K$, the given algorithm, BOOL, has time complexity $O(n^{K+1} m)$, where $m$ is the number of samples. Later it was shown that a similar algorithm also works in the presence of (low-level^a) noise [3]. These algorithms are based on exhaustive search in two ways. First, they search through all $\binom{n}{K}$ possible combinations of controlling nodes. Second, they search through all of the $2^{2^K}$ possible Boolean functions. Lähdesmäki et al. [4] overcame the need to search through all possible Boolean functions, reducing the doubly exponential factor to roughly $2^K$. But their algorithm still searches through all $\binom{n}{K}$ possible variable combinations and hence runs roughly in time $n^K$. If $n$ is large, applying such an algorithm is prohibitive even for moderate values of $K$.

The algorithms above implicitly solve two distinct problems. First, the controlling nodes of all nodes have to be detected, and second, each function has to be determined. This paper is dedicated to algorithms for detecting controlling nodes in Boolean networks. In general, this problem can be solved by exhaustive search in time $n^K$. By exploiting structural properties of certain classes of functions, the time and sample complexity of the algorithms can be reduced. The sample complexity of an algorithm is the number of samples needed to detect the controlling nodes with a predefined probability. In fact, one can readily apply methods stemming from the area of PAC (probably approximately correct) learning theory [5], as the network identification problem can be reduced to the problem of learning Boolean juntas, i.e., Boolean functions that depend^b only on a small number of their arguments. This problem was studied by Arpe and Reischuk [6], extending earlier work of Mossel et al. [7, 8].

The particular inference problem studied here is the following: given a synchronous Boolean network, a set of input/output patterns

$\{(X_1, Y_1), (X_2, Y_2), \ldots, (X_m, Y_m)\}$

is observed, where $X_l$ and $Y_l$ describe noisy observations of two successive network states at times $t_l$ and $t_l + 1$, respectively. The network state at time $t_l$ is modeled using a uniformly distributed random variable $X$.

The task of detecting the controlling nodes can be reduced to the problem of finding the essential variables of the Boolean functions. This problem is easier to solve for some classes of functions, namely for nearly all unbalanced functions and functions of an average sensitivity less than $\frac{2}{3}k$, where $k$ is the number of controlling variables of a function. Further, the class of 1-low networks, which includes unate networks, linear threshold networks, and networks with nested canalyzing functions, is considered. The application of spectral learning algorithms leads to both better time and sample complexity for the detection of controlling nodes compared with exhaustive search. In particular, a slight improvement of the algorithm given in [6] is presented, for which analytical bounds on the number of samples needed to find the controlling nodes are derived. It is notable that for the class of 1-low networks, the time complexity of the resulting algorithms is roughly $n^2$. The algorithm is then further improved, with the main focus on the identification of controlling nodes in large-scale unate networks.

Finally, the performance of the improved algorithms is evaluated for large-scale unate networks with 500 nodes using numerical simulations. Further, the problem is studied in a Boolean network model of a control network of the central metabolism of Escherichia coli with 583 nodes [9]. Preliminary results of this work were presented in [10, 11].

The outline of the paper is as follows. In Section 2, Boolean networks are defined and the detection problem is formally stated. The two classes of functions considered here are introduced and discussed. Section 3 gives a brief introduction to the Fourier analysis of Boolean functions and discusses the spectral properties of the two classes of functions. Further, the algorithms are stated and analyzed in Sections 3.3 and 3.4. Simulation results are presented in Section 3.5.

2 Regulatory networks and inference

2.1 Boolean regulatory networks

A Boolean network (BN) of $n$ nodes can be described by a numbered list $F = \{f_1, f_2, \ldots, f_n\}$ of Boolean functions (BFs) $f_i: \{-1,+1\}^n \to \{-1,+1\}$. Each node $i$ in the network has a binary state variable $x_i(t) \in \{-1,+1\}$ assigned, which may vary with time $t$. The network's state at time $t$ is given by $x(t) = (x_1(t), x_2(t), \ldots, x_n(t)) \in \{-1,+1\}^n$. The state of node $i$ at time $t + 1$ is given as

$x_i(t+1) = f_i(x(t)),$

i.e., it is determined by the pre-state of the network $x(t)$ and the Boolean function $f_i$.

In general, not all of the $n$ possible variables of a function $f_i$ are essential. The $i$th variable is called essential to $f$ if and only if there exists at least one $x \in \{-1,+1\}^n$ such that $f(x_1, \ldots, x_i, \ldots, x_n) \neq f(x_1, \ldots, -x_i, \ldots, x_n)$. An equivalent terminology is that the function $f$ depends on the $i$th variable. For any function $f$, the set $\mathrm{var}(f) \subseteq \{1, \ldots, n\}$ is defined by

$i \in \mathrm{var}(f)$ if and only if the $i$th variable is essential to $f$;

hence, $\mathrm{var}(f)$ is called the set of essential variables of $f$. If $|\mathrm{var}(f)| \leq k$, a function $f$ with $n$ variables is usually called an $(n, k)$-junta.

Finally, note that each BN can be associated with a directed graph, which allows describing the network in graph-theoretic terms. Let $G(V, E)$ be a directed graph, where $V = \{1, 2, \ldots, n\}$ is the set of nodes and $E \subseteq V \times V$ is the set of edges. The set $E$ is defined by

$(i, j) \in E$ if and only if $i \in \mathrm{var}(f_j)$.
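As an illustration of these definitions, the following minimal Python sketch (ours, not part of the original work) computes $\mathrm{var}(f)$ by brute force and assembles the edge set $E$ for a toy network; the example functions are chosen purely for illustration.

```python
# A minimal sketch (our illustration, not the paper's code): var(f) by brute
# force over {-1,+1}^n, and the edge set E of the associated graph G(V, E).
from itertools import product

def essential_variables(f, n):
    """var(f): indices i whose flip changes f(x) for at least one x."""
    essential = set()
    for x in product((-1, +1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] = -y[i]                      # flip the i-th variable
            if f(x) != f(tuple(y)):
                essential.add(i)
    return essential

# Toy network on n = 3 nodes (0-indexed): f_0 = AND(x_0, x_1), f_1 = x_2,
# f_2 = constant +1. Edge (i, j) is present iff i is essential to f_j.
n = 3
F = [lambda x: +1 if x[0] == +1 and x[1] == +1 else -1,
     lambda x: x[2],
     lambda x: +1]
E = {(i, j) for j, f in enumerate(F) for i in essential_variables(f, n)}
print(E)   # contains (0, 0), (1, 0), (2, 1)
```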

2.2 The detection problem

Assume that there exists an unknown BN that is an appropriate description of an underlying dynamical process, for example, a regulatory network. An experiment generates state-transition pairs by observing the process, but in general, the measurements of the state-transitions are noisy. The challenge is now to detect the functional dependencies between the nodes of the network.

This problem can be restated as follows: assume that a function $f$ is chosen at random from a class of functions. A single state-transition contains a pre-state $X_l \in \{-1,+1\}^n$, chosen according to a well-defined distribution, and the corresponding output of the function, $Y_l = f(X_l)$. Each component $X_{l,i}$ and the output $Y_l$ is independently flipped with probability $\epsilon$. In the following, $\epsilon$ is called the noise rate. In this way, a set of $m$ noisy observations or samples,

$\mathcal{X}_m = \{(X_1, Y_1), (X_2, Y_2), \ldots, (X_m, Y_m)\},$

is obtained. In the following, it is assumed that X is uniformly distributed. Some comments on choosing X uniformly distributed will be given in the last section. Given a set of samples, the task is to detect the set of essential variables of f. This should be achieved in an efficient way, since the number of nodes can be very large in realistic problems. Further, the probability of a detection error should be as small as possible.

2.3 Classes of regulatory functions

Different classes of functions have been proposed to model regulatory functions. The authors do not attempt to settle this discussion. Instead, the approach taken here is to show that many of the proposed functions fall into two classes for which Fourier-based algorithms provide an advantage in running time over algorithms based on exhaustive search. A precise definition is given later. Two classes of functions that may be reasonable models of functions in genetic regulatory networks are presented. For both of these classes, it is assumed that the number of essential variables is less than or equal to $k$. The first class, denoted by $C_{\frac{2}{3}k}$, includes

  • functions with average sensitivity less than $\frac{2}{3}k$, and

  • unbalanced functions,

where it is assumed that for any function $f$, any restriction $f'$ on $k' > 1$ of its essential variables has an average sensitivity less than or equal to $\frac{2}{3}k'$ or is an unbalanced function (or both). Note that a restriction $f'$ is obtained from $f$ by setting some of its variables to fixed values. The second class, $C_1$, includes

  • unate functions, which further include

  • nested canalyzing functions, and

  • linear threshold functions.

The average sensitivity of a Boolean function f is defined as

$\mathrm{as}(f) = \sum_{i} I_i(f),$

where $I_i(f)$ is the influence of variable $i$ on $f$ [12], defined as

$I_i(f) = \Pr\{f(X_1, \ldots, X_i, \ldots, X_n) \neq f(X_1, \ldots, -X_i, \ldots, X_n)\}.$
(1)

Basically, low average sensitivity is a prerequisite of non-chaotic behavior in random Boolean networks (RBNs); in particular, the expectation of the average sensitivity has to be less than or equal to 1 [13]. This motivates the study of the class $C_{\frac{2}{3}k}$, as it is widely assumed that Boolean models of biological networks are tolerant to perturbations. Unbalanced functions^c are of interest for a similar reason; namely, it is well known that the average sensitivity of balanced functions is lower bounded by 1 [14]. Hence, a function that has average sensitivity less than 1 is necessarily unbalanced.
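For concreteness, the influence of Equation 1 and the average sensitivity can be computed by exhaustive enumeration under the uniform distribution; the following sketch (our illustration, feasible only for small $n$) does exactly that.

```python
# A minimal sketch (our illustration) of the influence I_i(f) of Eq. (1) and
# the average sensitivity as(f), by exhaustive enumeration over {-1,+1}^n.
from itertools import product

def influence(f, n, i):
    count = 0
    for x in product((-1, +1), repeat=n):
        y = list(x)
        y[i] = -y[i]
        if f(x) != f(tuple(y)):
            count += 1
    return count / 2 ** n

def average_sensitivity(f, n):
    return sum(influence(f, n, i) for i in range(n))

# Majority of three variables: a variable is pivotal exactly when the other
# two disagree, so each influence is 1/2 and as(f) = 3/2.
maj3 = lambda x: +1 if sum(x) > 0 else -1
print([influence(maj3, 3, i) for i in range(3)])   # [0.5, 0.5, 0.5]
print(average_sensitivity(maj3, 3))                # 1.5
```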

Unate functions were shown to be of interest in the biological context by Grefenstette et al. [15]. These functions arise as a consequence of a biochemical model. They can be defined in terms of monotone functions. A function $f$ is called monotone if $f(x) \leq f(y)$ holds for every $x \leq y$, where $x \leq y$ means $x_i \leq y_i$ for all $i$. A function $f(x) = f(x_1, x_2, \ldots, x_n)$ is said to be unate if there exists some fixed $\sigma \in \{-1,+1\}^n$ such that $f(x_1 \sigma_1, x_2 \sigma_2, \ldots, x_n \sigma_n)$ is a monotone function. Besides the results of Grefenstette et al., the class of unate functions is considered very promising because each variable of a unate function is correlated with the function's output; this property was conjectured to be important from the earliest days [1]. Second, the class contains the nested canalyzing functions and the linear threshold functions, which can often be found in Boolean models of regulatory networks. Kauffman et al. [16] discussed nested canalyzing functions in the context of RBNs and found them to have a stabilizing effect on the networks. Notably, Samal et al. [17] reported that in the large-scale Boolean model of the regulatory network of the E. coli metabolism [9], the input functions of 579 out of 583 genes are at least canalyzing. Further investigations by the authors of the present paper revealed that all of these functions are unate. Linear threshold functions (LTFs) often appear in Boolean models of regulatory networks, for example, [18, 19]. A Boolean function is an LTF if it can be represented by

$f(x_1, x_2, \ldots, x_n) = \begin{cases} +1 & \text{if } w_0 + \sum_{i=1}^{n} w_i x_i \geq 0 \\ -1 & \text{otherwise}, \end{cases}$

where $w_i \in \mathbb{R}$. For $n < 4$, the classes of unate and linear threshold functions coincide [20].
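The unateness condition can be checked by brute force over all orientations $\sigma$; the following sketch (ours, exponential in $n$ and meant only for tiny examples) makes the definition concrete.

```python
# A brute-force sketch (our illustration) of the unateness test: f is unate
# iff some orientation sigma in {-1,+1}^n makes
# g(x) = f(x_1*sigma_1, ..., x_n*sigma_n) monotone.
from itertools import product

def is_monotone(g, n):
    for x in product((-1, +1), repeat=n):
        for i in range(n):
            if x[i] == -1:
                y = list(x)
                y[i] = +1
                if g(x) > g(tuple(y)):   # raising a coordinate lowered g
                    return False
    return True

def is_unate(f, n):
    for sigma in product((-1, +1), repeat=n):
        g = lambda x, s=sigma: f(tuple(xi * si for xi, si in zip(x, s)))
        if is_monotone(g, n):
            return True
    return False

# The LTF f(x) = sign(1 + x_0 - 2*x_1) is unate (decreasing in x_1).
f = lambda x: +1 if 1 + x[0] - 2 * x[1] >= 0 else -1
print(is_unate(f, 2))   # True
```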

3 Learning essential variables of regulatory functions

3.1 Fourier analysis and learning

Let $f: \{-1,+1\}^n \to \{-1,+1\}$ be an $n$-ary BF. Any such function can be represented by its Fourier expansion

$f(x) = \sum_{U \subseteq [n]} \hat f(U)\, \chi_U(x),$
(2)

where [n] = {1, 2, ..., n} and

$\chi_U(x) = \prod_{i \in U} x_i$

are the parity functions on the variables in $U$. The Fourier coefficients $\hat f(U)$ appearing in Equation 2 are given by

$\hat f(U) = 2^{-n} \sum_{x \in \{-1,+1\}^n} f(x)\, \chi_U(x).$
(3)

The number of Fourier coefficients is $2^n$; each takes values in the interval $[-1, 1]$ and is a multiple of $2^{-n+1}$. Parseval's theorem can be stated as

$\sum_{U \subseteq [n]} \hat f(U)^2 = 1.$
(4)
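The following worked sketch (our illustration) evaluates Equation 3 by brute force for the 3-variable majority function and confirms Parseval's theorem, Equation 4.

```python
# A worked sketch (our illustration) of Eq. (3), with a check of Parseval's
# theorem, Eq. (4), for the 3-variable majority function.
from itertools import combinations, product

def chi(U, x):
    p = 1
    for i in U:
        p *= x[i]
    return p

def fourier_coefficient(f, n, U):
    """hat f(U) = 2^{-n} * sum_x f(x) chi_U(x), Eq. (3)."""
    return sum(f(x) * chi(U, x) for x in product((-1, +1), repeat=n)) / 2 ** n

n = 3
maj3 = lambda x: +1 if sum(x) > 0 else -1
coeffs = {U: fourier_coefficient(maj3, n, U)
          for r in range(n + 1) for U in combinations(range(n), r)}
print(coeffs)                                # singletons 0.5, hat f({0,1,2}) = -0.5
print(sum(c ** 2 for c in coeffs.values()))  # 1.0, as Parseval demands
```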

A particular property that is used later is the following: if $f$ does not depend on the variable $i$, then

$\hat f(U) = 0 \quad \text{if } i \in U.$
(5)

Using this fact, Parseval's theorem implies that for a constant function f,

$|\hat f(\emptyset)| = 1 \quad \text{and} \quad \hat f(U) = 0 \text{ for all } U \neq \emptyset.$

Further, if $f$ is an $(n, k)$-junta, all coefficients $\hat f(U)$ with $|U| > k$ are zero, which reduces the maximal number of non-zero coefficients to $2^k$. All coefficients are multiples of $2^{-k+1}$, i.e., for some $c \in \mathbb{Z}$,

$\hat f(U) = c \cdot 2^{-k+1} \quad \text{with } |c| \leq 2^{k-1}.$
(6)

Hence, for any non-zero $\hat f(U)$,

$\min_{U : \hat f(U) \neq 0} |\hat f(U)| \geq 2^{-k+1}.$
(7)

Spectral learning techniques identify a function or its dependencies from randomly drawn samples by estimating the spectral coefficients. Given a set of samples $\mathcal{X}_m = \{(X_1, Y_1), \ldots, (X_m, Y_m)\}$, an estimator $\hat h(U)$ of the coefficient $\hat f(U)$ is given by

$\hat h(U) = \frac{1}{m (1 - 2\epsilon)^{|U|+1}} \sum_{j=1}^{m} Y_j\, \chi_U(X_j).$
(8)

A similar approach was first proposed in [21] for the noiseless case and can also be used in the presence of noise [22]. It can be shown that

$E\{\hat h(U)\} = \hat f(U),$
(9)

see, for example, [22]. As the number of samples $m$ grows, the estimator in Equation 8 converges to its expected value, namely $\hat f(U)$.
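A minimal sketch of the estimator follows (our notation; the sample generator follows Section 2.2). Note how the factor $(1-2\epsilon)^{-(|U|+1)}$ undoes the attenuation caused by the bit-flipping noise.

```python
# A minimal sketch (ours, not the paper's code) of the estimator in Eq. (8):
# the empirical correlation of Y with chi_U(X), rescaled by
# (1 - 2*eps)^(-(|U|+1)) to undo the attenuation caused by independently
# flipping each observed bit with probability eps.
import random

def chi(U, x):
    p = 1
    for i in U:
        p *= x[i]
    return p

def noisy_samples(f, n, m, eps):
    """Samples as in Section 2.2: uniform pre-states, every component of x
    and the output y flipped independently with probability eps."""
    out = []
    for _ in range(m):
        x = tuple(random.choice((-1, +1)) for _ in range(n))
        y = f(x)
        x_obs = tuple(xi if random.random() >= eps else -xi for xi in x)
        y_obs = y if random.random() >= eps else -y
        out.append((x_obs, y_obs))
    return out

def estimate_coefficient(samples, U, eps):
    """h_hat(U) per Eq. (8)."""
    m = len(samples)
    scale = (1 - 2 * eps) ** (-(len(U) + 1))
    return scale * sum(y * chi(U, x) for x, y in samples) / m

# Majority on the first 3 of 10 variables has hat f({0}) = 0.5.
maj = lambda x: +1 if x[0] + x[1] + x[2] > 0 else -1
samples = noisy_samples(maj, n=10, m=20000, eps=0.05)
print(estimate_coefficient(samples, (0,), eps=0.05))  # approx. 0.5
```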

3.2 Spectral properties of specific regulatory functions

The Boolean functions mentioned in Section 2.3 can be categorized according to their lowness [6].

Definition 1. A Boolean function $f: \{-1,+1\}^n \to \{-1,+1\}$ is $\tau$-low if for any $i \in \mathrm{var}(f)$ there exists a set $U \subseteq [n]$ with $0 < |U| \leq \tau$ such that $i \in U$ and

$|\hat f(U)| > 0.$

Clearly, any function that is $\tau$-low is also $\tau'$-low for $\tau' > \tau$. The notion of lowness allows defining the following families of classes.

Definition 2. $C_\tau$ is the set of functions that are $\tau$-low.

In this paper, the focus is on $\frac{2}{3}k$-low and 1-low functions. The latter class is considered first. All unate functions are 1-low. This follows from

$|\hat f(\{i\})| = I_i(f) \quad \text{if } f \text{ is unate},$
(10)

[23], and the fact that for any Boolean function the influence of an essential variable is larger than zero. Hence, if the $i$th variable of a unate function $f$ is essential, the Fourier coefficient $\hat f(\{i\})$ is non-zero.

Now the class $C_{\frac{2}{3}k}$ is discussed; first, the following definition is needed.

Definition 3. A function $f: \{-1,+1\}^n \to \{-1,+1\}$ is $m$th-order correlation immune if for all $U \subseteq [n]$ with $1 \leq |U| \leq m$,

$\hat f(U) = 0.$

Correlation immune functions were considered by Siegenthaler [24] who used a different definition. The definition in terms of the Fourier coefficients as used here is due to Xiao and Massey [25]. These functions are of interest in cryptography, for example, to design combining functions of stream ciphers.

Unbalanced correlation immune functions cannot exist for too large an $m$, as the next theorem shows.

Theorem 1 (Mossel et al. [8]). Let $f: \{-1,+1\}^n \to \{-1,+1\}$ be an unbalanced, $m$th-order correlation immune function. Then $m \leq \frac{2}{3}n$.

A similar proposition holds for functions with low average sensitivity.

Proposition 1. Let $f: \{-1,+1\}^n \to \{-1,+1\}$ be an $m$th-order correlation immune function such that $\mathrm{as}(f) \leq \frac{2}{3}n$, where $X \in \{-1,+1\}^n$ is uniformly distributed. Then $m \leq \frac{2}{3}n$.

Proof. If $f$ is unbalanced, the proposition is true by Theorem 1. Suppose $f$ is balanced. Assume for contradiction that

$\hat f(U) = 0 \quad \text{for all } 1 \leq |U| \leq m, \text{ where } m > \tfrac{2}{3}n.$
(11)

From Parseval's theorem and $\hat f(\emptyset) = 0$ (as $f$ is balanced), it follows that

$\mathrm{as}(f) = \sum_{U \subseteq [n]} |U|\, \hat f(U)^2 = \sum_{|U| > m} |U|\, \hat f(U)^2 > m \sum_{U \neq \emptyset} \hat f(U)^2 = m \left(1 - \hat f(\emptyset)^2\right) = m > \tfrac{2}{3}n,$

which contradicts the assumption $\mathrm{as}(f) \leq \frac{2}{3}n$ of the proposition.   □

Proposition 2. Let $f$ be a function with $k \geq 2$ essential variables (out of $n$) such that any restriction $f'$ on $k'$ of its essential variables, where $1 < k' \leq k$, has an average sensitivity less than or equal to $\frac{2}{3}k'$ or is an unbalanced function (or both). Then $f$ is $\frac{2}{3}k$-low.

Proof. First note that if $k = 2$, the proposition is true. Now consider a function with $k > 2$. By assumption, there is a variable $i \in \mathrm{var}(f)$ with a "low" coefficient, that is, a set $U$ with $i \in U$ and $|U| \leq \frac{2}{3}k$. Consider the restrictions of $f$ to the variable $i$, denoted by $f_{-1}$ and $f_{+1}$. It is straightforward to show that

$\hat f(U) = \frac{1}{2} \left( \hat f_{+1}(U \setminus \{i\}) + (-1)^{|\{i\} \cap U|}\, \hat f_{-1}(U \setminus \{i\}) \right).$
(12)

For a variable $j \neq i$ there is a set $V \ni j$ with $i \notin V$ and $|V| \leq \frac{2}{3}(k-1)$ such that either $\hat f_{+1}(V) \neq 0$ or $\hat f_{-1}(V) \neq 0$. Equation (12) implies that $\hat f(V)$ or $\hat f(V \cup \{i\})$ is not equal to zero. In the worst case, one has to consider the coefficient $\hat f(V \cup \{i\})$. Now note that, as $|V \cup \{i\}|$ is an integer,

$|V \cup \{i\}| \leq \tfrac{2}{3}(k-1) + 1 \quad \text{implies} \quad |V \cup \{i\}| \leq \left\lceil \tfrac{2}{3}k \right\rceil.$

This argument can now be repeated recursively (applying Equation (12) to $f_{-1}$ and $f_{+1}$), which shows the proposition.   □

1 Input: $\mathcal{X}$, $n$, $d$
2 Output: $\tilde{R}$, the essential variables
3 Global parameters: $\tau$, $\epsilon$
4 begin
5    $\tilde{R} \leftarrow \emptyset$;
6    foreach $U \subseteq [n]$ with $1 \leq |U| \leq \tau$ do
7       $\hat h(U) \leftarrow (1 - 2\epsilon)^{-|U|-1}\, m^{-1} \sum_{(x,y) \in \mathcal{X}} y\, \chi_U(x)$;
8       if $|\hat h(U)| \geq 2^{-d}$ then
9          $\tilde{R} \leftarrow \tilde{R} \cup U$;
10      end
11   end
12 end
Algorithm 1: $\tau$-NOISY-FOURIER$_d$

3.3 The $\tau$-NOISY-FOURIER$_d$ algorithm

A simple algorithm to find the essential variables of $\tau$-low $(n, k)$-juntas follows directly from Equations 6 and 7. First, all Fourier coefficients up to weight $\tau$ are estimated. The absolute value of each estimated coefficient $\hat h(U)$ is compared with a threshold. If a coefficient $\hat f(U)$ is non-zero, its absolute value cannot be smaller than $2^{-k+1}$; see Equation 7. Hence, if $|\hat h(U)|$ is larger than $2^{-k}$, the variables corresponding to $U$ are classified as essential. The algorithm was given in [6], but with $2^{-d-1}$ as threshold (see Line 8).
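Under our reading of the pseudocode, Algorithm 1 amounts to the following sketch; the helper names are ours.

```python
# A direct sketch of Algorithm 1 (tau-NOISY-FOURIER_d), under our reading of
# the pseudocode: estimate every coefficient of weight 1..tau and return the
# union of all index sets whose estimate reaches the 2^{-d} threshold.
from itertools import combinations

def chi(U, x):
    p = 1
    for i in U:
        p *= x[i]
    return p

def noisy_fourier(samples, n, d, tau, eps):
    m = len(samples)
    found = set()
    for r in range(1, tau + 1):
        scale = (1 - 2 * eps) ** (-(r + 1))
        for U in combinations(range(n), r):
            h = scale * sum(y * chi(U, x) for x, y in samples) / m
            if abs(h) >= 2 ** (-d):     # threshold of Line 8
                found |= set(U)
    return found
```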

The following theorem appeared first in [6] but with a different bound.

Theorem 2. Let f be a τ-low (n, k)-junta and

$m \geq 2 \cdot 2^{2k}\, (1 - 2\epsilon)^{-2\tau - 2}\, \ln\!\left(\frac{2 n^\tau}{\delta}\right).$
(13)

Then Algorithm 1 identifies all essential variables with probability at least $1 - \delta$.

The bound holds even if $\epsilon$ is only an upper bound on the noise rate. The theorem follows from applying standard Hoeffding bounds. Note that the bound above differs from [6]. If $\tau = 1$, the number of samples required to reach a predefined probability of error is smaller by a factor of 4, which follows directly from the different threshold used here. If $\tau > 1$, it was claimed in [6] that $n^\tau$ can be replaced by $n$, but simulation results of the authors (not shown) contradict this result; hence, we rely here on the weaker result stated in Theorem 2. This issue will be discussed in future work.
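For convenience, the right-hand side of Equation 13 can be evaluated numerically; the following small helper (ours, not part of the paper's algorithms) returns the smallest such $m$.

```python
# Smallest m satisfying the sample bound of Eq. (13).
from math import ceil, log

def sample_bound(n, k, tau, eps, delta):
    return ceil(2 * 2 ** (2 * k) * (1 - 2 * eps) ** (-2 * tau - 2)
                * log(2 * n ** tau / delta))

# E.g., tau = 1, k = 3 on a 500-node network with error probability 0.01:
print(sample_bound(n=500, k=3, tau=1, eps=0.05, delta=0.01))
```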

3.4 Improved algorithms

In the following section, two algorithms are discussed that lead to better numerical results than Algorithm 1, especially for low $k$. The first algorithm is a straightforward modification of the $\tau$-NOISY-FOURIER algorithm and is discussed in Section 3.4.1. The second algorithm requires a further assumption on the functions to which it is applied; namely, suppose that $f$ is $\tau$-low. If a variable of the function $f$ is set to a fixed value, i.e., $-1$ or $+1$, a restricted version of $f$ is obtained (this will be discussed in more detail later on). It has to be assumed that the restricted function is still $\tau$-low, i.e., the functions have to be recursively $\tau$-low. While it is possible to define such classes, only unate functions are considered here. On the one hand, they naturally fulfill the constraint defined above, as any restriction of a unate function is again a unate function. On the other hand, they seem to be the most important class of functions, as discussed earlier. Nevertheless, the following algorithms are formulated in a way that makes clear how to apply them to recursively $\tau$-low functions.

3.4.1 A modification of the $\tau$-NOISY-FOURIER$_d$ algorithm

Algorithm 1 suffers from a high number of so-called type-1-errors, i.e., it classifies non-essential variables as essential, especially for a small number of samples $m$. Hence, a simple modification is to return only a limited number of essential variables, taking only the variables that correspond to the coefficients with the largest absolute values. The algorithm is denoted by $\tau$-NOISY-FOURIER$_d^{\mathrm{mod}}$ and is shown below as Algorithm 2. Its computational complexity increases compared with Algorithm 1: in line 9, the $\binom{n}{\tau}$-many estimated coefficients have to be sorted, which can be done in roughly $n^{2\tau}$ operations in the worst case [26].^d In Figure 1, the effect of the modification on the detection error is studied numerically.

Figure 1. The average detection error in 10,000 trials: theoretical bound (dashed), original (triangle), and modified (box) $\tau$-NOISY-FOURIER$_d$, for unate functions with $n = 500$, $\epsilon = 0.05$, $d = k = 1$ (red), 2 (blue), 3 (black), 4 (yellow), 5 (brown).

3.4.2 The KJUNTA algorithm

The second algorithm is based on the original idea of Mossel et al. [8], who recursively applied their algorithm to restricted versions of the original function. While they did so for other reasons, a slight modification of their approach can be used to reduce the number of samples needed. The running time of the algorithm increases by a factor that is exponential in $k$.

1 Input: $\mathcal{X}$, $n$, $d$
2 Output: $\tilde{R}$, the essential variables
3 Global parameters: $\tau$, $\epsilon$
4 begin
5    $\tilde{R} \leftarrow \emptyset$;
6    foreach $U \subseteq [n]$ with $|U| \leq \tau$ do
7       $\hat h(U) \leftarrow (1 - 2\epsilon)^{-|U|-1}\, m^{-1} \sum_{(x,y) \in \mathcal{X}} y\, \chi_U(x)$;
8    end
9    $U_i$: $|\hat h(U_1)| \geq |\hat h(U_2)| \geq \cdots \geq |\hat h(U_l)|$;   // mod: sorted index
10   for $i = 1$ to $l$ do
11      if $|\tilde{R}| < d$ then   // mod: limiting condition
12         if $|\hat h(U_i)| \geq 2^{-d}$ then $\tilde{R} \leftarrow \tilde{R} \cup U_i$;
13      end
14   end
15 end
Algorithm 2: $\tau$-NOISY-FOURIER$_d^{\mathrm{mod}}$
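Under our reading of the pseudocode, Algorithm 2 amounts to the following sketch; tie-breaking in the sorting and the helper names are our choices.

```python
# A sketch of Algorithm 2 (tau-NOISY-FOURIER_d^mod) as we read it: sort the
# estimated coefficients by absolute value and accept index sets, largest
# first, while fewer than d variables are found and the threshold holds.
from itertools import combinations

def chi(U, x):
    p = 1
    for i in U:
        p *= x[i]
    return p

def noisy_fourier_mod(samples, n, d, tau, eps):
    m = len(samples)
    estimates = []
    for r in range(1, tau + 1):
        scale = (1 - 2 * eps) ** (-(r + 1))
        for U in combinations(range(n), r):
            h = scale * sum(y * chi(U, x) for x, y in samples) / m
            estimates.append((abs(h), U))
    found = set()
    for h_abs, U in sorted(estimates, key=lambda t: -t[0]):  # mod: sorted index
        if len(found) >= d:                                  # mod: limiting condition
            break
        if h_abs >= 2 ** (-d):
            found |= set(U)
    return found
```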

To describe the algorithm, some additional definitions are needed. Define an $(n, d)$-restriction $\rho = (\rho_1, \rho_2, \ldots, \rho_n)$ as a vector of length $n$ that consists of symbols in $\{+1, -1, *\}$, where the symbol $*$ occurs exactly $d$ times. The restricted function $f|_\rho$ is obtained from $f$ by fixing arguments in the following way: if $\rho_i \neq *$, then $x_i = \rho_i$. All $x_i$ for $i$ such that $\rho_i = *$ remain the arguments of $f|_\rho$; hence, it depends on at most $d$ arguments. A vector $x$ of length $n$ matches $\rho$ if $x_i = \rho_i$ holds for all $i$ with $\rho_i \neq *$. The restricted sample set $\mathcal{X}_\rho$ is defined as the subset of $\mathcal{X}$ that contains all samples $(x, y)$ such that $x$ matches the restriction $\rho$, i.e.,

$\mathcal{X}_\rho = \{(x, y) \in \mathcal{X} \mid x \text{ matches } \rho\}.$
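The following sketch (our data layout) implements the matching test and the restricted sample set; projecting matching pre-states onto the free positions is our assumption, made so that a recursive call sees an $(n - |I|)$-variable problem.

```python
# A small sketch (our data layout) of an (n, d)-restriction and the restricted
# sample set X_rho: rho is a tuple over {+1, -1, '*'}, with '*' marking the d
# free positions.
def matches(x, rho):
    return all(r == '*' or xi == r for xi, r in zip(x, rho))

def restrict_samples(samples, rho):
    """Keep samples whose pre-state matches rho; drop the fixed positions."""
    free = [i for i, r in enumerate(rho) if r == '*']
    return [(tuple(x[i] for i in free), y) for x, y in samples if matches(x, rho)]

rho = (+1, '*', -1, '*')      # fixes x_0 = +1 and x_2 = -1, leaving d = 2 free
samples = [((+1, -1, -1, +1), +1), ((-1, -1, -1, +1), -1)]
print(restrict_samples(samples, rho))   # [((-1, +1), +1)]
```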

The algorithm is now described as follows. Suppose there exists a procedure IDENTIFY that can identify at least one essential variable of a function f given a number of samples. If no essential variables exist, i.e., if f is constant, the procedure returns the empty set Ø.

Suppose an $(n, k)$-junta $f$, with $k > 0$, is given together with a set $I \subseteq \mathrm{var}(f)$ of essential variables that are already known, and assume that there is a restriction $\rho$ that fixes exactly the variables in $I$. The function $f|_\rho$ is either constant or depends on some of the variables that are not yet fixed. In the latter case, suppose that at least one new variable can be identified using the procedure IDENTIFY, and denote the set of newly identified variables by $I$. The procedure is then continued with all of the $2^{|I|}$ new restrictions that fix the variables in $I$, until all these sub-restrictions are constant. The resulting algorithm in recursive form is given as Algorithm 3. Initially, the algorithm is started with KJUNTA($\mathcal{X}$, $n$, $d$), where the global parameters ($\tau = 1$, $\epsilon$) are fixed.

Most of the algorithm has already been explained. First, note that passing $n$ as an argument is not strictly necessary, because it is an implicit parameter of the samples.

1 Input: $\mathcal{X}$, $n$, $d$
2 Output: $\tilde{R}$, the essential variables
3 Global parameters: $\tau$, $\epsilon$
4 begin
5    $I \leftarrow$ IDENTIFY($\mathcal{X}$, $d$);
6    $\tilde{R} \leftarrow I$;
7    if ($d > |I| > 0$) then
8       $\tilde{R}' \leftarrow \emptyset$;
9       foreach restriction $\rho$ of the variables in $I$ do
10         $\tilde{R}' \leftarrow \tilde{R}' \cup$ KJUNTA($\mathcal{X}_\rho$, $n - |I|$, $d - |I|$);
11      end
12      $\tilde{R} \leftarrow$ COMBINE($\tilde{R}$, $\tilde{R}'$, $\rho$);
13   end
14 end
Algorithm 3: KJUNTA

1 Input: $\mathcal{X}$, $n$, $d$
2 Output: $I$, the variables found
3 Global parameters: $\tau$, $\epsilon$
4 begin
5    $I \leftarrow \emptyset$;
6    foreach $U \subseteq [n]$ with $|U| \leq \tau$ do
7       $\hat h(U) \leftarrow (1 - 2\epsilon)^{-|U|-1}\, m^{-1} \sum_{(x,y) \in \mathcal{X}} y\, \chi_U(x)$;
8    end
9    $M \leftarrow \arg\max_{U : 0 < |U| \leq \tau} |\hat h(U)|$;
10   if CONST($\hat h(M)$, $\hat h(\emptyset)$, $d$) = false then $I \leftarrow M$;
11 end
Algorithm 4: IDENTIFY

Further comments should be given on line 9 of Algorithm 3. The foreach loop is executed for each of the $2^{|I|}$ possible restrictions of the variables contained in $I$. For each restriction, the corresponding restricted sample set is calculated and passed in a new call to KJUNTA. Each of these calls runs on a smaller problem, namely finding variables of an $(n - |I|, d - |I|)$-junta. Notably, each of these runs is independent of the others. The variables found are then combined with $\tilde{R}$ in line 12 using the procedure COMBINE. This is not just a union of sets, since one has to take care of the labeling of the variables: the indices returned by a recursive call refer to the positions left free by $\rho$. For example, if $\tilde{R} = \{1\}$ and a subsequent call of KJUNTA returns $\tilde{R}' = \{1, 3\}$, combining both leads to $\tilde{R} = \{1, 2, 4\}$.
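The relabeling performed by COMBINE can be sketched as follows; this is our reconstruction of the bookkeeping, written with 0-indexed variables.

```python
# COMBINE bookkeeping (our reconstruction): indices returned by a recursive
# call refer to the free positions of rho only and must be mapped back to the
# original numbering before taking the union with the known variables.
def combine(known, returned, rho):
    free = [i for i, r in enumerate(rho) if r == '*']  # free positions of rho
    return known | {free[j] for j in returned}

# Variable 0 is already known and fixed by rho. A recursive call reporting
# {0, 2} in restricted numbering means original variables 1 and 3, so the
# result is {0, 1, 3}: the paper's example {1} and {1, 3} -> {1, 2, 4},
# rewritten with 0-indexed variables.
print(combine({0}, {0, 2}, (+1, '*', '*', '*')))   # {0, 1, 3}
```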

The IDENTIFY procedure. The question remains how to identify some of the essential variables and how to decide whether the function is constant. For $\tau$-low functions, it is sufficient to estimate all coefficients $\hat f(U)$ with $|U| \leq \tau$. In [7], it was proposed to search for the first coefficient that is above a certain threshold. The approach here is different: all coefficients of weight less than or equal to $\tau$ are computed, and the coefficient with the maximum absolute value is compared with the zero coefficient to distinguish between a constant and a non-constant function. How this can be done is discussed below. The resulting procedure is formulated as Algorithm 4. In line 10, the procedure CONST is called, which tries to distinguish between a constant and a non-constant function. If a non-constant function is found, the variables in $M$ are returned; otherwise, the empty set is returned.

The CONST procedure. In the following, it is discussed how a constant function can be distinguished from a non-constant function, given that the function depends on at most $k$ variables. This is done based on the zero coefficient $\hat f(\emptyset)$ and the coefficient with the largest absolute value, denoted by $\hat f(M)$. Note that $f$ is constant if and only if $|\hat f(\emptyset)| = 1$ and $\hat f(U) = 0$ for any set $U \neq \emptyset$, by Parseval's theorem. If $f$ is non-constant, then $|\hat f(\emptyset)| < 1$ and there exists at least one set $U \neq \emptyset$ with $|\hat f(U)| > 0$; hence, it follows that $|\hat f(M)| > 0$.

To distinguish between a constant and a non-constant function, different procedures exist. The simplest one was proposed by Mossel et al. and will be denoted by CONST$_1$. There, the function is declared constant if $|\hat h(\emptyset)| > 1 - 2^{-d}$ or $|\hat h(M)| < 2^{-d}$.

For small $d$, a better procedure that requires fewer samples exists; it is denoted by CONST$_2$. Given the 2-tuple $(\hat h(\emptyset), \hat h(M))$, compute the (in Euclidean distance) closest tuple $(\alpha, \beta)$ such that $\alpha < 1$ and $\beta > 0$ are multiples of $2^{-d+1}$. The function is then declared constant if

$\mathrm{dist}\left((\hat h(\emptyset), \hat h(M)), (1, 0)\right) < \mathrm{dist}\left((\hat h(\emptyset), \hat h(M)), (\alpha, \beta)\right),$

where $\mathrm{dist}(\cdot,\cdot)$ denotes the Euclidean distance.
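As we read it, CONST$_2$ amounts to the following decision rule; taking absolute values of the estimates is our assumption (a constant function may have $\hat h(\emptyset)$ near $-1$).

```python
# A sketch of the CONST_2 decision rule as we read it: compare the estimated
# pair against the constant-function point (1, 0) and against the nearest
# point (alpha, beta) that a non-constant d-junta could produce, where
# alpha < 1 and beta > 0 are multiples of 2^{-d+1}.
from math import hypot

def const2(h_empty, h_max, d):
    step = 2.0 ** (-d + 1)
    a, b = abs(h_empty), abs(h_max)
    alpha = min(round(a / step) * step, 1 - step)  # nearest feasible alpha < 1
    beta = max(round(b / step) * step, step)       # nearest feasible beta > 0
    return hypot(a - 1, b) < hypot(a - alpha, b - beta)

print(const2(0.93, 0.04, d=2))   # True: looks like a constant function
print(const2(0.40, 0.45, d=2))   # False: looks non-constant
```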

A note on the computational complexity. As mentioned, Algorithm 3 has an increased complexity compared with Algorithm 1. In the worst case, the algorithm is called $2^k$ times, but each time on a smaller problem. If it is assumed that $\hat h(U)$ can be computed in time $O(nm)$, the algorithm runs in $O(2^k n^2 m)$ for 1-low functions. Obviously, for constant $k$, this reduces to $O(n^2 m)$.

3.5 Simulation results for unate networks

To compare the performance of the different algorithms, the following procedure is used. Suppose a BF $f$ is chosen uniformly at random from a class of $n$-ary $\tau$-low functions, where $\tau$ and $n$ are known. For the function $f$, a set of $m$ noisy state-transitions $\mathcal{X}_m = \{(X_l, Y_l) \mid l = 1, \ldots, m\}$ is generated as described in Section 2.2. The noise rate is fixed to $\epsilon = 0.05$.

The most important indicator is the probability of a detection error. Define the error event as $\{\tilde{R} \neq \mathrm{var}(f)\}$, where $\tilde{R}$ is the detected variable set. The detection error probability

$P = \Pr\{\tilde{R} \neq \mathrm{var}(f)\}$

is the primary indicator of an algorithm's performance. It should be mentioned that if there exists a function $f$ such that $|\mathrm{var}(f)| > d$, the detection error probability $P$ does not vanish, even for large $m$.

Further evaluation criteria, used in Section 3.5.3, are the precision rate $\rho$ and the false-negative rate $\beta$. In the present context, the precision rate is defined as the conditional probability that a detected variable is indeed an essential variable, i.e.,

$\rho = \Pr\{i \in \mathrm{var}(f) \mid i \in \tilde{R}\}.$

An equivalent way of stating this is that a predicted edge $e$ is indeed in $E$, where $G(V, E)$ is the associated graph of the network. The false-negative rate is defined as the conditional probability that an essential variable is not detected as being essential,

$\beta = \Pr\{i \notin \tilde{R} \mid i \in \mathrm{var}(f)\}.$

In a network, this can be interpreted as the fraction of edges that have not been detected. The definitions above are consistent with Zhao et al. [27], who defined the type-1-error as the event that a node $i$ is classified as a controlling node of some node $j$ although this is not the case. Consequently, the type-2-error is defined as the event $\{i \notin \tilde{R} \mid i \in \mathrm{var}(f)\}$.
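Given a batch of experiments, the empirical counterparts of $\rho$ and $\beta$ can be computed as follows (standard estimators; the data layout is our choice).

```python
# Empirical precision and false-negative rate over a batch of experiments:
# each entry pairs the detected set with the true var(f).
def precision_and_fn(results):
    tp = sum(len(pred & true) for pred, true in results)
    n_pred = sum(len(pred) for pred, _ in results)
    n_true = sum(len(true) for _, true in results)
    rho = tp / n_pred if n_pred else 1.0               # precision rate
    beta = (n_true - tp) / n_true if n_true else 0.0   # false-negative rate
    return rho, beta

print(precision_and_fn([({0, 1}, {0, 2}), ({3}, {3})]))  # (0.666..., 0.333...)
```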

3.5.1 $\tau$-NOISY-FOURIER$_d$ versus $\tau$-NOISY-FOURIER$_d^{\mathrm{mod}}$

First, the modified version of the $\tau$-NOISY-FOURIER$_d$ algorithm is compared with the original algorithm. In 10,000 independent experiments, unate functions with exactly $k$ essential variables are randomly drawn. The parameter $d$ is always set to $k$. The results are presented in Figure 1, together with the upper bounds on the detection error probability (Theorem 2). As expected, $\tau$-NOISY-FOURIER$_d^{\mathrm{mod}}$ outperforms the original algorithm.

Figure 2. The average detection error in 10,000 trials: $\tau$-NOISY-FOURIER$_d^{\mathrm{mod}}$ (box) and KJUNTA with the CONST$_1$ (circle) and CONST$_2$ (diamond) procedures, for unate functions ($n = 500$, $\epsilon = 0.05$, $d = k = 1$ (red), 2 (blue), 3 (black), 4 (yellow), 5 (brown)).

3.5.2 $\tau$-NOISY-FOURIER$_d^{\mathrm{mod}}$ versus KJUNTA

Again, a subset of unate functions with exactly $k$ essential variables is used to compare the $\tau$-NOISY-FOURIER$_d^{\mathrm{mod}}$ algorithm with the KJUNTA algorithm. The parameter $d$ is always set to $k$. The results are shown in Figure 2. For functions with a low number of essential variables, KJUNTA with the CONST$_1$ procedure outperforms the $\tau$-NOISY-FOURIER$_d^{\mathrm{mod}}$ algorithm, but this advantage vanishes with an increasing number of variables.

3.5.3 $\tau$-NOISY-FOURIER$_d$ versus KJUNTA on an E. coli network

In this simulation, the functions are chosen from the regulatory functions of the control network of the E. coli metabolism [9]. This set includes functions with different numbers of essential variables; some constant functions are included, and some functions occur several times. Each function $f$ has 583 possible arguments but depends on at most eight variables. The distribution of the functions over their number of essential variables is given in Table 1 and is equivalent to the in-degree distribution of the corresponding network.^e The results in Figure 3 are obtained by applying the algorithms to each function in the set; this experiment is performed 100 times.

Table 1 In-degree distribution of the Boolean network (see text).
Figure 3. Simulation results for the modified $\tau$-NOISY-FOURIER$_d^{\mathrm{mod}}$ (box) and KJUNTA with the CONST$_1$ (circle) procedure, applied to the regulatory functions of a network of E. coli (see text); $n = 583$, $\epsilon = 0.05$, $d = 1$ (red), 2 (blue), 3 (black), 4 (yellow), 5 (brown).

Remarkable results: In the previous simulations, the parameter $d$ was always set to $k$, and only functions with exactly $k$ essential variables were chosen. Here, the parameter $d$ is usually smaller than $k$, which implies that not all variables can be found. Only variables with influence larger than or equal to $2^{-d}$ can be detected; this is implied by Equations 10 and 7. On the other hand, even if $d < k$ for some function $f$, the algorithm can still detect some of the essential variables of $f$.

4 Conclusion

In this paper, the problem of detecting controlling nodes in Boolean networks is discussed. Boolean functions that are relevant for modeling genetic networks seem to belong to classes of functions for which spectral-based algorithms provide an efficient solution, both in computational complexity and in the amount of data needed. Especially the algorithms for unate functions are highly efficient in both running time and the number of samples needed to identify controlling nodes. Further, analytical bounds on the probability of a detection error can be stated.

If the samples are chosen according to a uniform distribution, the results are promising. Applying the methods to the E. coli control network, with 583 nodes, shows that with approximately 200 samples it is possible to find nearly 40% of all edges in the network with a precision rate close to one. On the other hand, a wrong selection of the parameter $d$ can have a dramatic effect on the precision. For example, if under the same conditions $d = 4$ is chosen, the precision drops below 0.5. Fortunately, the choice of the parameter can be guided by the available analytical bounds on the detection error probability. The latter is dominated by the probability that the estimator $\hat h(\{i\})$ deviates from $\hat f(\{i\})$ by more than $\pm 2^{-d}$, which also determines the precision of the algorithm. Suppose that 200 samples are obtained from the E. coli network; the analytical bounds shown in Figure 1 suggest choosing $d = 1$, which indeed leads to a high precision (see Figure 3).

Clearly, our assumption of uniformly distributed samples is too optimistic. Fortunately, known results from PAC learning [6] show that it is possible to use similar algorithms for product-distributed samples, i.e., in a random vector $X$ each $X_i$ is chosen independently of the others such that $-1 < E\{X_i\} = \mu_i < 1$. But there is a major problem: if $\mu_{\max} = \max_{1 \leq i \leq n} |\mu_i|$ gets close to 1, the number of samples needed increases roughly as $(1 - \mu_{\max})^{-2k}$. In unate networks, this coincides with the fact that the influences of the variables can become very small. Hence, further investigations in this direction are necessary; this would be a major step toward the application of spectral algorithms in a real-world scenario.

Endnotes

^a The theoretical analysis requires the noise level to be bounded below a small value.

^b This will be defined more precisely later.

^c A function is unbalanced if the numbers of +1 and -1 entries in its truth table differ.

^d Using a better implementation of Algorithm 2, this can be reduced to $2\tau \log N$.

^e The detailed table of the used functions can be found in the supplementary material.

References

1. Liang S, Fuhrman S, Somogyi R: REVEAL, a general reverse engineering algorithm for inference of genetic network architectures. Proceedings of the Pacific Symposium on Biocomputing 1998, 18-29.

2. Akutsu T, Miyano S, Kuhara S: Identification of genetic networks from a small number of gene expression patterns under the Boolean network model. Proceedings of the Pacific Symposium on Biocomputing 1999, 17-28.

3. Akutsu T, Miyano S, Kuhara S: Inferring qualitative relations in genetic networks and metabolic pathways. Bioinformatics 2000, 16(8):727-734. doi:10.1093/bioinformatics/16.8.727

4. Lähdesmäki H, Shmulevich I, Yli-Harja O: On learning gene regulatory networks under the Boolean network model. Mach Learn 2003, 52(1-2):147-167.

5. Valiant LG: A theory of the learnable. Commun ACM 1984, 27(11):1134-1142. doi:10.1145/1968.1972

6. Arpe J, Reischuk R: Learning juntas in the presence of noise. Theor Comput Sci 2007, 384(1):2-21. doi:10.1016/j.tcs.2007.05.014

7. Mossel E, O'Donnell R, Servedio RP: Learning juntas. In Proceedings of the ACM Symposium on Theory of Computing. ACM, San Diego, CA, USA; 2003:206-212.

8. Mossel E, O'Donnell R, Servedio RA: Learning functions of k relevant variables. J Comput Syst Sci 2004, 69(3):421-434. doi:10.1016/j.jcss.2004.04.002

9. Covert MW, Knight EM, Reed JL, Herrgard MJ, Palsson BO: Integrating high-throughput and computational data elucidates bacterial networks. Nature 2004, 429(6987):92-96. doi:10.1038/nature02456

10. Schober S, Mir K, Bossert M: Reconstruction of Boolean genetic regulatory networks consisting of canalyzing or low sensitivity functions. Proceedings of the International ITG Conference on Source and Channel Coding (SCC'10) 2010.

11. Schober S, Heckel R, Kracht D: Spectral properties of a Boolean model of the E. coli genetic network and their implications for network inference. In Proceedings of the International Workshop on Computational Systems Biology. Luxembourg; 2010.

12. Ben-Or M, Linial N: Collective coin flipping, robust voting schemes and minima of Banzhaf values. Proceedings of the IEEE Symposium on Foundations of Computer Science 1985, 408-416.

13. Lynch JF: Dynamics of random Boolean networks. In Current Developments in Mathematical Biology: Proceedings of the Conference on Mathematical Biology and Dynamical Systems. Edited by Culshaw R, Mahdavi K, Boucher J. World Scientific Publishing Co; 2007:15-38.

14. Kahn J, Kalai G, Linial N: The influence of variables on Boolean functions. Proceedings of the IEEE Symposium on Foundations of Computer Science 1988, 68-80.

15. Grefenstette J, Kim S, Kauffman S: An analysis of the class of gene regulatory functions implied by a biochemical model. Biosystems 2006, 84(2):81-90. doi:10.1016/j.biosystems.2005.09.009

16. Kauffman SA, Peterson C, Samuelsson B, Troein C: Genetic networks with canalyzing Boolean rules are always stable. PNAS 2004, 101(49):17102-17107. doi:10.1073/pnas.0407783101

17. Samal A, Jain S: The regulatory network of E. coli metabolism as a Boolean dynamical system exhibits both homeostasis and flexibility of response. BMC Syst Biol 2008, 2(1):21. doi:10.1186/1752-0509-2-21

18. Li F, Long T, Lu Y, Ouyang Q, Tang C: The yeast cell-cycle network is robustly designed. PNAS 2004, 101(14):4781-4786. doi:10.1073/pnas.0305937101

19. Davidich MI, Bornholdt S: Boolean network model predicts cell cycle sequence of fission yeast. PLoS ONE 2008, 3(2):e1672. doi:10.1371/journal.pone.0001672

20. McNaughton R: Unate truth functions. IRE Trans Electron Comput 1961, 10:1-6.

21. Linial N, Mansour Y, Nisan N: Constant depth circuits, Fourier transform, and learnability. J ACM 1993, 40(3):607-620. doi:10.1145/174130.174138

22. Bshouty NH, Jackson JC, Tamon C: Uniform-distribution attribute noise learnability. Inf Comput 2003, 187(2):277-290. doi:10.1016/S0890-5401(03)00135-4

23. Gotsman C, Linial N: Spectral properties of threshold functions. Combinatorica 1994, 14(1):35-50. doi:10.1007/BF01305949

24. Siegenthaler T: Correlation-immunity of nonlinear combining functions for cryptographic applications. IEEE Trans Inf Theory 1984, 30(5):776-780. doi:10.1109/TIT.1984.1056949

25. Xiao G-Z, Massey JL: A spectral characterization of correlation-immune combining functions. IEEE Trans Inf Theory 1988, 34(3):569-571. doi:10.1109/18.6037

26. Knuth DE: The Art of Computer Programming, Volume 3: Sorting and Searching. 2nd edition. Addison-Wesley, Reading, MA; 1998.

27. Zhao W, Serpedin E, Dougherty ER: Inferring connectivity of genetic regulatory networks using information-theoretic criteria. IEEE/ACM Trans Comput Biol Bioinf 2008, 5(2):262-274.


Author information

Correspondence to Steffen Schober.

Additional information

5 Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Schober, S., Kracht, D., Heckel, R. et al. Detecting controlling nodes of boolean regulatory networks. J Bioinform Sys Biology 2011, 6 (2011). https://doi.org/10.1186/1687-4153-2011-6
