Abstract
To improve the prediction accuracy of ammonia nitrogen in water monitoring networks, the combination of a bio-inspired algorithm and a back propagation neural network (BPNN) is often deployed. However, owing to the limitations of the bio-inspired algorithm, such hybrid models can still fall into local optima. In this paper, the seagull optimization algorithm (SOA) was first used to optimize the structure of the BPNN to obtain a better prediction model. Then, an improved SOA (ISOA) was proposed, and common benchmark functions were used to verify its optimization performance. Finally, the ISOA was applied to improve the BPNN, yielding the improved seagull optimization algorithm–back propagation (ISOA–BP) model. The simulation results showed that the prediction accuracy of ammonia nitrogen was greatly improved, and that the proposed model can be applied to the prediction of complex water quality parameters in water monitoring networks.
HIGHLIGHTS
The structure of BPNN was optimized to obtain a better prediction model by using the seagull optimization algorithm (SOA).
We proposed an improved SOA (ISOA) and used common benchmark functions to verify its optimization performance.
The ISOA was used to improve the BPNN, yielding the improved seagull optimization algorithm–back propagation (ISOA–BP) model.
INTRODUCTION
As an important part of the new generation of information technology, the Internet of Things (IoT) has been widely researched and applied in various scenarios (Fang et al. 2016, 2020a; Khan et al. 2020). Although the emergence of the IoT has greatly improved our lives, with the increase in the type and number of sensor devices, the amount of data to be processed is also increasing (Taonameso et al. 2019; Fang et al. 2020b, 2020c). How to process these data efficiently and accurately has therefore become a research focus. As a data fusion method, neural networks abstract the neurons of the human brain from the perspective of information processing, establish models, and form different networks according to different connection schemes, which can be used for analysis and prediction. However, neural networks also have shortcomings, such as falling into local optima and poor generalization ability.
As one of the most widely used neural network structures, back propagation neural network (BPNN) is widely used in different fields. However, it is often unable to achieve global convergence and falls into local minima when solving complex problems, resulting in an invalid learning process. Additionally, the learning algorithm converges slowly, especially near the target (Han & Huang 2019). Therefore, in practical applications, the bio-inspired algorithm is often used to optimize its model structure. For example, the particle swarm optimization (PSO) algorithm is used to optimize BPNN to overcome the sensitivity and error fluctuation of the initial value of gradient descent method, and obtain the global optimal initial parameters which make the neural networks converge quickly (Wu et al. 2018). These algorithms also include fruit fly optimization algorithm (FOA) (Wu et al. 2019), genetic algorithm (GA) (Li et al. 2017), as well as mind evolutionary algorithm (MEA) (Wang et al. 2018).
As a newly proposed bio-inspired algorithm, the seagull optimization algorithm (SOA) has been proven to achieve better performance than some traditional algorithms (Dhiman & Kumar 2019). Therefore, in this paper, SOA was used to optimize the structure of BPNN to obtain an improved ammonia nitrogen prediction model. Then, considering the shortcomings of the SOA, an improved algorithm was proposed, and some benchmark functions were used to verify its performance. Finally, the improved SOA (ISOA) was applied to improve the BPNN to obtain a better prediction model. The simulation results verified that the performance of the model is better than that of the traditional model and the model using PSO to optimize BPNN. In other words, the proposed algorithm can be applied to a more complex water quality environment for water quality detection.
MATERIALS AND METHODS
SOA
The SOA is a novel bio-inspired algorithm for solving computationally expensive problems. It has good global search ability: it imitates the way a group of seagulls migrates and then circles over and attacks its prey, and this attacking behaviour provides the algorithm's local search ability (Dhiman & Kumar 2019; Jia et al. 2019).
Mathematical models of seagull migration and attacking are used. During migration, the algorithm simulates how a group of seagulls moves from one position to another, where each seagull must satisfy the conditions in Equations (1)–(5).
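The equations are not legible here; in the formulation published by Dhiman & Kumar (2019), which this paper follows, the migration (exploration) and attacking (exploitation) behaviours take the following form — our transcription, whose numbering may differ from the paper's own (1)–(5):

```latex
% Migration: collision avoidance, movement towards the best seagull, convergence
C_s = A \cdot P_s(x), \qquad A = f_c - x \left( \frac{f_c}{Max_{iteration}} \right)
M_s = B \cdot \left( P_{bs}(x) - P_s(x) \right), \qquad B = 2 A^2 \cdot rd
D_s = \left| C_s + M_s \right|
% Attack: spiral movement around the prey
x' = r \cos k, \quad y' = r \sin k, \quad z' = r k, \quad r = u\, e^{kv}, \quad k \in [0, 2\pi]
P_s(x) = \left( D_s \cdot x' \cdot y' \cdot z' \right) + P_{bs}(x)
```

Here $P_s(x)$ is a seagull's position at iteration $x$, $P_{bs}(x)$ is the best seagull, $rd$ is a random number in $[0, 1]$, $f_c$ controls the decrease of $A$, and $u$, $v$ define the spiral shape.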
SOA is summarized in Table 1 (Dhiman & Kumar 2019).
SOA procedure
Input: seagull population P_s
Output: optimal search agent P_bs
1: procedure SOA
2: Initialize the parameters A, B and Max_iteration
3: Set f_c ← 2
4: Set u ← 1
5: Set v ← 1
6: while (x < Max_iteration) do
7: P_bs ← FitnessCompute(P_s)
8: C_s ← A × P_s(x)
9: M_s ← B × (P_bs(x) − P_s(x))
10: r ← u × e^(kv), k ← rand[0, 2π]
11: Calculate the distance D_s ← |C_s + M_s|
12: x′ ← r cos(k), y′ ← r sin(k), z′ ← r × k
13: P_s(x) ← (D_s × x′ × y′ × z′) + P_bs(x)
14: x ← x + 1
15: end while
16: return P_bs
17: end procedure

1: procedure FitnessCompute(P_s)
2: for i ← 1 to n do
3: FIT(i) ← fitness(P_s(i))
4: end for
5: Best ← SelectBest(FIT)
6: return Best
7: end procedure

1: procedure SelectBest(FIT)
2: Best ← FIT(1)
3: for i ← 1 to n do
4: if FIT(i) < Best then
5: Best ← FIT(i)
6: end if
7: end for
8: return Best
9: end procedure
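The SOA procedure in Table 1 can be sketched in Python as follows. This is a minimal illustration of the standard SOA formulation, not the authors' code; the function name, parameters and bounds are our own assumptions.

```python
import numpy as np

def soa_minimize(fitness, dim, n_seagulls=50, max_iter=200,
                 lb=-100.0, ub=100.0, fc=2.0, u=1.0, v=1.0, seed=0):
    """Minimal SOA sketch: migration (C_s, M_s, D_s) then spiral attack."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lb, ub, size=(n_seagulls, dim))        # seagull positions P_s
    best = pos[np.argmin([fitness(p) for p in pos])].copy()  # best search agent P_bs
    for x in range(max_iter):
        A = fc - x * (fc / max_iter)           # linearly decreasing control parameter
        for i in range(n_seagulls):
            B = 2.0 * A**2 * rng.random()      # balances exploration and exploitation
            Cs = A * pos[i]                    # position after collision avoidance
            Ms = B * (best - pos[i])           # movement towards the best seagull
            Ds = np.abs(Cs + Ms)               # distance to the best seagull
            k = rng.uniform(0.0, 2.0 * np.pi)  # spiral angle
            r = u * np.exp(k * v)              # spiral radius
            spiral = (r * np.cos(k)) * (r * np.sin(k)) * (r * k)
            pos[i] = np.clip(Ds * spiral + best, lb, ub)
        cand = pos[np.argmin([fitness(p) for p in pos])]
        if fitness(cand) < fitness(best):      # keep the best agent found so far
            best = cand.copy()
    return best

# Example: minimize the 5-dimensional sphere function
best = soa_minimize(lambda p: float(np.sum(p ** 2)), dim=5)
```

Because the best agent is only replaced when a strictly better candidate appears, the returned solution is never worse than the best member of the initial population.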
ISOA
The idea of the ISOA is to apply a chaotic map after each iteration to the position of the seagull with the best fitness, thereby increasing population diversity. First, the original variables are mapped to chaotic variables using Equation (13); the chaotic iteration is then performed with Equation (12); finally, Equation (14) maps the result back to the original search space. If the position after the chaotic perturbation is better than before, it is saved; otherwise, the position before the perturbation is kept.
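A minimal sketch of this chaotic refinement, assuming the logistic map for Equation (12) and min–max mapping for Equations (13) and (14) — the common choice for chaos-based improvements; the function and parameter names are ours:

```python
import numpy as np

def chaotic_refine(best_pos, fitness, lb, ub, n_chaos=50, mu=4.0):
    """Refine the best seagull's position with a logistic-map chaotic search.

    Assumes Eq. (13) maps the position into (0, 1), Eq. (12) is the logistic
    map z' = mu*z*(1 - z), and Eq. (14) maps back to the range [lb, ub].
    """
    best = np.asarray(best_pos, dtype=float)
    best_fit = fitness(best)
    z = (best - lb) / (ub - lb)          # Eq. (13): map into (0, 1)
    z = np.clip(z, 1e-6, 1.0 - 1e-6)     # avoid the fixed points 0 and 1
    for _ in range(n_chaos):
        z = mu * z * (1.0 - z)           # Eq. (12): chaotic logistic iteration
        cand = lb + z * (ub - lb)        # Eq. (14): back to the search space
        f = fitness(cand)
        if f < best_fit:                 # keep the perturbed position only if better
            best, best_fit = cand.copy(), f
    return best, best_fit

# Example: refine a point for the sphere function
p, f = chaotic_refine(np.array([1.0, -2.0, 0.5]),
                      lambda v: float(np.sum(v ** 2)), lb=-5.0, ub=5.0)
```

Since candidates are accepted only when they improve the fitness, the refined solution is never worse than the input position.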
Function tests
In this section, the improved algorithm is tested on some unimodal and multimodal benchmark functions (Dhiman & Kumar 2019). The information of these functions is shown in Table 2.
Information of benchmark functions
| Type | Name | Expression | Domain of definition | Global optimum | Optimal value |
|---|---|---|---|---|---|
| Unimodal | Sphere | F1(x) = Σᵢ xᵢ² | [−100, 100]ⁿ | (0, …, 0) | 0 |
| Unimodal | Schwefel's 2.22 | F2(x) = Σᵢ abs(xᵢ) + Πᵢ abs(xᵢ) | [−10, 10]ⁿ | (0, …, 0) | 0 |
| Unimodal | Schwefel's 1.2 | F3(x) = Σᵢ (Σⱼ₌₁..ᵢ xⱼ)² | [−100, 100]ⁿ | (0, …, 0) | 0 |
| Unimodal | Schwefel's 2.21 | F4(x) = maxᵢ abs(xᵢ) | [−100, 100]ⁿ | (0, …, 0) | 0 |
| Unimodal | Noise | F5(x) = Σᵢ i·xᵢ⁴ + random[0, 1) | [−1.28, 1.28]ⁿ | (0, …, 0) | 0 |
| Multimodal | Rastrigin | F6(x) = Σᵢ [xᵢ² − 10 cos(2πxᵢ) + 10] | [−5.12, 5.12]ⁿ | (0, …, 0) | 0 |
| Multimodal | Ackley | F7(x) = −20 exp(−0.2 √(Σᵢ xᵢ²/n)) − exp(Σᵢ cos(2πxᵢ)/n) + 20 + e | [−32, 32]ⁿ | (0, …, 0) | 0 |
| Multimodal | Griewank | F8(x) = (1/4000) Σᵢ xᵢ² − Πᵢ cos(xᵢ/√i) + 1 | [−600, 600]ⁿ | (0, …, 0) | 0 |
Here, the PSO, the traditional SOA and the ISOA are compared. The parameter settings of each algorithm are shown in Table 3; the maximum number of iterations is 500, the number of seagulls is 100 and the dimension of the search space is 30.
Parameter settings of each algorithm
| Algorithm | Parameter | Value |
|---|---|---|
| ISOA | u | 1 |
| ISOA | v | 0.1 |
| ISOA | f_c | 2 |
| SOA | u | 1 |
| SOA | v | 0.1 |
| SOA | f_c | 2 |
| PSO | c₁ | 1.49445 |
| PSO | c₂ | 1.49445 |
| PSO | w | 0.5 |
Table 4 compares the optimization results of each function.
Optimization results of each function
| Function/algorithm | ISOA | SOA | PSO |
|---|---|---|---|
| F1 | 1.867 × 10⁻³³ | 2.888 × 10⁻¹⁸ | 0.01132 |
| F2 | 3.964 × 10⁻¹⁵ | 6.757 × 10⁻⁷ | 0.618 |
| F3 | 9.142 × 10⁻²⁶ | 2.076 × 10⁻¹⁵ | 0.2642 |
| F4 | 3.201 × 10⁻¹⁷ | 1.576 × 10⁻⁸ | 0.1465 |
| F5 | 0.000192 | 0.005356 | 0.1678 |
| F6 | 0 | 0 | 5.386 |
| F7 | 6.839 × 10⁻¹⁴ | 5.152 × 10⁻¹¹ | 0.1454 |
| F8 | 0 | 0 | 0.001802 |
As can be seen from the figures and tables, the optimization effect of the SOA is better than that of the PSO, and its convergence speed is faster: after about 100 iterations, the SOA already achieves a better result than the PSO does after 500 iterations. Furthermore, the ISOA achieves better optimization results and a faster convergence speed than the traditional SOA.
IMPROVED NEURAL NETWORK AMMONIA NITROGEN PREDICTION MODEL
BPNN
There are many kinds of neural networks, among which the BPNN is one of the most widely used. It has the advantages of a simple structure, self-learning, self-organization, self-adaptation, fast training speed, local approximation and global convergence, and it has been widely used in the field of prediction. It is generally composed of an input layer, a hidden layer and an output layer. The main idea of the BPNN is to divide learning into forward propagation of the signal and back propagation of the error. Specifically, in the learning process, the sample is fed in through the input layer and transferred to the output layer through the operations of the hidden layer neurons. The error between the actual data and the predicted data of the output layer is then calculated and fed into the back propagation stage. During back propagation, the connection weights between the neurons of each layer are continually adjusted by the gradient descent strategy until the deviation between the final predicted value and the actual value is minimized (Yang & Wang 2018; You et al. 2018; Zhang et al. 2019b). The model of the BPNN is shown in Figure 5.
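As a concrete illustration of the forward and backward passes described above, the following is a minimal sketch of a tiny BPNN trained by gradient descent — it is not the authors' 6-7-1 network, and the data (the logical AND function) are chosen purely for demonstration; biases play the role of the thresholds mentioned in the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training data: the logical AND function (a deliberately simple example)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [0.], [0.], [1.]])

# 2-3-1 network: input -> hidden -> output
W1, b1 = rng.normal(0.0, 1.0, (2, 3)), np.zeros(3)
W2, b2 = rng.normal(0.0, 1.0, (3, 1)), np.zeros(1)

lr = 0.5
for _ in range(20000):
    H = sigmoid(X @ W1 + b1)                  # forward propagation: hidden layer
    O = sigmoid(H @ W2 + b2)                  # forward propagation: output layer
    dO = (O - y) * O * (1.0 - O)              # back propagation: output error term
    dH = (dO @ W2.T) * H * (1.0 - H)          # back propagation: hidden error term
    W2 -= lr * H.T @ dO; b2 -= lr * dO.sum(axis=0)   # gradient descent updates
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

mse = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))
```

With random initial weights, the final error depends on where gradient descent ends up, which is exactly the sensitivity that motivates optimizing the initial weights and thresholds with a bio-inspired algorithm.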
Suppose the BPNN model has n neurons in the input layer, a hidden layer with l neurons and m neurons in the output layer. The input is X = (x₁, x₂, …, x_n) and the output is Y = (y₁, y₂, …, y_m), for samples q = 1, 2, …, Q, where Q represents the number of samples. The connection weight from the ith neuron in the input layer to the jth neuron in the hidden layer is w_ij, and the threshold of the jth neuron in the hidden layer is a_j. The connection weight from the jth neuron in the hidden layer to the kth neuron in the output layer is w_jk, and the corresponding threshold is b_k. The input of the jth neuron in the hidden layer is h_j, and o_k is the input of the kth neuron in the output layer.
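With conventional BPNN notation (our symbols: w_ij and a_j for the input-to-hidden weights and thresholds, w_jk and b_k for the hidden-to-output layer, f the activation function), the layer inputs described above are usually written as:

```latex
h_j = \sum_{i=1}^{n} w_{ij} x_i - a_j, \qquad j = 1, 2, \ldots, l
o_k = \sum_{j=1}^{l} w_{jk}\, f(h_j) - b_k, \qquad k = 1, 2, \ldots, m
```

where f is typically the sigmoid function.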
ISOA–BP
The main idea of the ISOA–BP hybrid model is to optimize the weights and thresholds of the back propagation network with the ISOA. The main steps are as follows.
Step 1: Set the parameters of the ISOA, including the number of seagulls. Encode the weights and thresholds of the BPNN to be optimized as the initial seagull population.
Step 2: Initialize the positions of the seagulls and use Equation (7) to update their positions so as to avoid collisions.
Step 3: Calculate the fitness of all current seagulls and select the best one as the best seagull of this iteration.
Step 4: Chaotic perturbation (Equations (12)–(14)): logistic mapping is used to map the best seagull's position to (0, 1) for chaotic iteration; after the iteration, the inverse mapping returns the result to the original solution space. Calculate the fitness of the new solution and output it when it is better than the old one.
Step 5: If the maximum number of iterations or the required accuracy is reached, output the final position as the optimal seagull position; otherwise, return to Step 3.
Step 6: Decode the optimal output into the initial weights and thresholds of the BPNN, and train the neural network until it meets the requirements.
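The encoding and decoding between a seagull's position vector and the network's weights and thresholds can be sketched as follows. This is a minimal illustration under our own naming; the shapes follow the 6-7-1 structure used in the simulations, and the tanh hidden layer and MSE fitness are assumptions, not the authors' exact choices:

```python
import numpy as np

N_IN, N_HID, N_OUT = 6, 7, 1   # BPNN structure 6-7-1

def decode(vec):
    """Decode a flat seagull position into BPNN weights and thresholds."""
    i = 0
    W1 = vec[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = vec[i:i + N_HID]; i += N_HID
    W2 = vec[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = vec[i:i + N_OUT]
    return W1, b1, W2, b2

def fitness(vec, X, y):
    """Fitness of a seagull = prediction MSE of the decoded network."""
    W1, b1, W2, b2 = decode(vec)
    H = np.tanh(X @ W1 + b1)    # hidden layer (tanh as an example activation)
    pred = H @ W2 + b2          # linear output layer
    return float(np.mean((pred - y) ** 2))

# Dimension of each seagull: 6*7 + 7 + 7*1 + 1 = 57
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT
```

The optimizer then searches over vectors of length `DIM`, and the best vector found is decoded into the BPNN's initial weights and thresholds before the usual gradient-based training.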
Data pre-processing
Data from May to August in 2016 for a river in Qinghai province were collected once a day, including water temperature (°C), pH, dissolved oxygen (mg/L), conductivity (μs/cm), turbidity (nephelometric turbidity units, NTU), permanganate index (mg/L) and ammonia nitrogen (mg/L). A total of 123 groups of data were collected. The first 100 groups of data were used to train the network and the last 23 groups of data were used to verify the network performance.
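The pre-processing equations are not legible in this copy; min–max normalization of each input to [0, 1], fitted on the training data only, is the usual choice before BPNN training. A sketch (the function name and synthetic data are ours):

```python
import numpy as np

def minmax_scale(train, test):
    """Scale each column to [0, 1] using the training data's range.

    The test data are scaled with the training min/max so that no
    information leaks from the validation set into pre-processing.
    """
    lo, hi = train.min(axis=0), train.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant columns
    return (train - lo) / span, (test - lo) / span

# Example with the 100/23 train/validation split and 7 variables described above
data = np.random.default_rng(2).random((123, 7))
train_s, test_s = minmax_scale(data[:100], data[100:])
```

The network's predictions are then mapped back to physical units (mg/L) by inverting the same transform.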
Simulation results
In this subsection, the back propagation (BP), PSO–BP, SOA–BP and ISOA–BP models are compared to verify the performance of the BPNN optimized by the ISOA. The parameters of each model are shown in Table 5; the maximum number of training iterations is 1,000, the training function is trainrp, the number of seagulls is 100, the number of seagull iterations is 30 and the structure of the BP network is 6-7-1.
Parameter settings of different models
| Algorithm | Parameter | Value |
|---|---|---|
| ISOA | u | 1 |
| ISOA | v | 0.1 |
| ISOA | f_c | 2 |
| SOA | u | 1 |
| SOA | v | 0.1 |
| SOA | f_c | 2 |
| PSO | c₁ | 1.49445 |
| PSO | c₂ | 1.49445 |
| PSO | w | 0.5 |
The convergence comparison of different models is shown in Figure 6. As can be seen from Figure 6, the proposed ISOA–BP model has a smaller convergence value, which is better than that of the BP, PSO–BP and SOA–BP models.
The simulation results of the 23 groups of validation data from the different models are shown in Figure 7. As can be seen from Figure 7, the proposed ISOA–BP model converges to the optimal value faster, and its convergence value is better than that of the BP, PSO–BP and SOA–BP models; in other words, the predicted values of the proposed algorithm are closer to the actual values.
Figure 8 shows the error comparison of the four models, where each value is the absolute value of the error. As can be seen from the figure, the proposed ISOA–BP model has the lowest error on the verification samples and the lowest average error, which indicates that its predicted values are closest to the actual values and that its prediction accuracy is higher than that of the traditional BP, PSO–BP and SOA–BP prediction models.
The following two evaluation methods are used to evaluate the prediction accuracy of different models (Yu & Bai 2018).
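The two metrics in Table 6 are the root mean square error (RMSE) and the Nash–Sutcliffe efficiency coefficient (NS); a sketch of both (function names are ours):

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error: lower is better."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def nash_sutcliffe(obs, pred):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; <= 0 means the model
    predicts no better than the mean of the observations."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(1.0 - np.sum((obs - pred) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))
```

An NS value closer to 1 and an RMSE closer to 0 both indicate predictions closer to the observed ammonia nitrogen values.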
The results calculated by the above two evaluation methods are shown in Table 6.
Comparison of different evaluation algorithms
| Algorithm/evaluation | RMSE | NS |
|---|---|---|
| BP | 0.658100 | 0.699794 |
| PSO–BP | 0.109997 | 0.949822 |
| SOA–BP | 0.053962 | 0.975384 |
| ISOA–BP | 0.046361 | 0.978851 |
As can be seen from Table 6, compared with the traditional single neural network prediction model and the PSO–BP and SOA–BP models, the proposed improved optimization model, namely the ISOA–BP model, has a higher prediction accuracy.
CONCLUSIONS
Since the traditional BPNN easily falls into local optima, which leads to low accuracy in ammonia nitrogen prediction, the SOA, with its strong optimization performance, is adopted in this paper to optimize the weights and thresholds of the BPNN. To address the shortcomings of the SOA, this paper further proposes an improved algorithm that uses chaos and applies it to optimize the BPNN. The simulation results show that the prediction accuracy of the proposed model is higher than that of the traditional BPNN, PSO–BP and SOA–BP models; the new model can therefore be applied to predict ammonia nitrogen in more complex water environments.
ACKNOWLEDGEMENTS
This work is supported by the National Natural Science Foundation of Qinghai Province, China (No. 2020-ZJ-724).
DATA AVAILABILITY STATEMENT
All relevant data are included in the paper or its Supplementary Information.