As shown in Table 1, two time series of variables x1 and x2 are considered. First, the records of the variables are transformed into a discrete space (Alfonso 2010):

X = a \lfloor x/a \rfloor    (5)
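A minimal sketch of this quantization, with function and variable names chosen for illustration, rounding each record down to its nearest lowest integer multiple of a:

```python
import math

def quantize(x, a=1):
    # Equation (5): round x down to its nearest lowest integer multiple of a
    return a * math.floor(x / a)

# Recorded discharges x1 from Table 1, discretised with a = 1
x1 = [3.24, 4.25, 5.3, 5.33, 5.45, 5.7, 6.55, 5.42, 5.4, 5.25]
X1 = [quantize(v) for v in x1]
# X1 -> [3, 4, 5, 5, 5, 5, 6, 5, 5, 5]
```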
Equation (5) rounds the original variable x down to its nearest lowest integer multiple of a, denoted X. In this paper, the recorded discharges are rounded to integer values (a = 1) so that the grouping property of mutual information can be applied. This choice allows the information retained by the stations to be quantified more precisely and provides good estimates of the entropy of the original variables (Samuel et al. 2013). Accordingly, the time series in Table 1 are transformed into discrete values by rounding (columns 3 and 4). Table 1 illustrates the mechanism of calculating the entropy of variables X1 and X2 using Equation (1). To calculate the joint entropy H(X1, X2), two approaches can be followed. The first uses Equation (2) directly, as illustrated in columns 1 to 3 of Table 2; this approach, however, is not straightforward for more than two variables. The second approach agglomerates the variables. For two variables X1 and X2, agglomeration is done by A = 10X1 + X2, which assigns a unique value of the new variable A to each unique pair (X1, X2); the mapping is one-to-one here because every discrete X2 value is a single digit. The probabilities of the samples of A are then calculated as shown in column 5 of Table 2. Finally, the entropy H(A) is computed with Equation (1); it is exactly equal to H(X1, X2) obtained from Equation (2) by the first approach.
Table 1

Process of calculating the entropy (Equation (1)) for separate variables (Alfonso 2010)

x1   | x2   | X1 | X2 | p(X1) | log p(X1) | −p(X1)·log p(X1) | p(X2) | log p(X2) | −p(X2)·log p(X2)
3.24 | 2.20 | 3  | 2  | 0.1   | −1        | 0.1              | 0.3   | −0.523    | 0.157
4.25 | 2.08 | 4  | 2  | 0.1   | −1        | 0.1              | –     | –         | –
5.30 | 1.15 | 5  | 1  | 0.7   | −0.155    | 0.108            | 0.1   | −1        | 0.1
5.33 | 4.81 | 5  | 4  | –     | –         | –                | 0.3   | −0.523    | 0.157
5.45 | 5.40 | 5  | 5  | –     | –         | –                | 0.2   | −0.699    | 0.140
5.70 | 4.36 | 5  | 4  | –     | –         | –                | –     | –         | –
6.55 | 4.60 | 6  | 4  | 0.1   | −1        | 0.1              | –     | –         | –
5.42 | 5.21 | 5  | 5  | –     | –         | –                | –     | –         | –
5.40 | 3.13 | 5  | 3  | –     | –         | –                | 0.1   | −1        | 0.1
5.25 | 2.91 | 5  | 2  | –     | –         | –                | –     | –         | –
H(X1) = 0.408;  H(X2) = 0.654
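The entropy computation of Table 1 can be sketched as follows. Base-10 logarithms are assumed, since they reproduce the tabulated values; the function name is illustrative:

```python
import math
from collections import Counter

def entropy(values):
    # Equation (1): H = -sum_i p_i * log10(p_i), with p_i estimated
    # from the relative frequency of each discrete value in the sample
    n = len(values)
    return -sum((c / n) * math.log10(c / n) for c in Counter(values).values())

# Discretised series X1 and X2 (columns 3 and 4 of Table 1)
X1 = [3, 4, 5, 5, 5, 5, 6, 5, 5, 5]
X2 = [2, 2, 1, 4, 5, 4, 4, 5, 3, 2]
print(round(entropy(X1), 3))  # 0.408
print(round(entropy(X2), 3))  # 0.654
```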
Table 2

Process of agglomerating two variables and calculating joint entropy (Alfonso 2010)

p(X1, X2) | log p(X1, X2) | −p(X1, X2)·log p(X1, X2) | A  | p(A) | log p(A) | −p(A)·log p(A)
0.1       | −1            | 0.1                      | 32 | 0.1  | −1       | 0.1
0.1       | −1            | 0.1                      | 42 | 0.1  | −1       | 0.1
0.1       | −1            | 0.1                      | 51 | 0.1  | −1       | 0.1
0.2       | −0.699        | 0.140                    | 54 | 0.2  | −0.699   | 0.140
0.2       | −0.699        | 0.140                    | 55 | 0.2  | −0.699   | 0.140
–         | –             | –                        | 54 | –    | –        | –
0.1       | −1            | 0.1                      | 64 | 0.1  | −1       | 0.1
–         | –             | –                        | 55 | –    | –        | –
0.1       | −1            | 0.1                      | 53 | 0.1  | −1       | 0.1
0.1       | −1            | 0.1                      | 52 | 0.1  | −1       | 0.1
H(X1, X2) = 0.88;  H(A) = 0.88
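Both approaches of Table 2 can be sketched in a few lines. Base-10 logarithms and illustrative names are assumed, as before:

```python
import math
from collections import Counter

def entropy(values):
    # Equation (1) with sample-frequency probabilities, base-10 logarithm
    n = len(values)
    return -sum((c / n) * math.log10(c / n) for c in Counter(values).values())

X1 = [3, 4, 5, 5, 5, 5, 6, 5, 5, 5]
X2 = [2, 2, 1, 4, 5, 4, 4, 5, 3, 2]

# First approach (Equation (2)): entropy of the joint sample of pairs
H_joint = entropy(list(zip(X1, X2)))

# Second approach: agglomerate into A = 10*X1 + X2 (one-to-one here
# because every discrete X2 value is a single digit)
A = [10 * a + b for a, b in zip(X1, X2)]
H_A = entropy(A)
# H_joint == H_A, both ~ 0.88
```

Because distinct pairs (X1, X2) map to distinct values of A, the two samples have identical frequency distributions, so the two entropies agree exactly.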