Around the world, it is growing harder to provide clean and safe drinking water. In wastewater treatment, sensors are employed, and the Internet of Things (IoT) is used to transmit data. Chemical oxygen demand (COD), biochemical oxygen demand (BOD), total nitrogen (T-N), total suspended solids (TSS), and total phosphorus (T-P) all contribute to eutrophication, which must be avoided. The wastewater sector has lately made efforts to become carbon neutral; however, the environmental impact and the road to carbon neutrality have received very little attention, and the resulting challenges stem from poor prediction. This research proposes a deep learning modified neural network (DLMNN) with the Binary Spotted Hyena Optimizer (BSHO) for modeling and calculation to address this challenge. All efforts for resource recovery, water reuse, and energy recovery partially attain this objective. In contrast to previous modeling techniques, the training and validation of the DLMNN-BSHO demonstrated outstanding accuracy, shown by the model's high coefficient of determination (R2) for both training and testing. Recent developments and problems with nanomaterials made from sustainable carbon and graphene quantum dots, as well as their uses in the treatment and purification of wastewater, are also covered. The proposed DLMNN-BSHO model achieved 95.936% precision, 95.326% recall, 93.747% F-score, and 99.637% accuracy.

  • Carbon neutrality has received much attention in water treatment.

  • A deep learning modified neural network (DLMNN) with the Binary Spotted Hyena Optimizer (BSHO) was used for modeling.

  • The proposed model DLMNN-BSHO achieved 95.936% precision, 95.326% recall, 93.747% F-score, and 99.637% accuracy.

Intelligent models have developed to the point where they can represent complex processes in water treatment simulations. It is challenging to evaluate and forecast the behavior of complex ecological systems (Amoueyan 2017). Numerous factors influence environmental impacts, and their interactions can be complicated for the environmental engineers who study them, making evaluation challenging. It is difficult and demanding to operate industrial wastewater treatment systems with changing effluent quality and quantity (Elgallal et al. 2016). Industrial wastewater must be treated despite any barriers.

Wastewater treatment is one environmental industry that has benefited from artificial intelligence techniques such as artificial neural networks (ANNs). Treatment of water is a complicated process. Thanks to recent advancements in intelligent approaches, they can now be used in complex modeling systems (Liu & Chung 2014). They can enhance performance prediction because of their accuracy, dependability, and engineering applications. A wastewater treatment facility's effectiveness is affected by several variables: biological oxygen demand (BOD), total suspended solids (TSS), and chemical oxygen demand (COD). Recent WWTP assessments used these characteristics as model inputs (Courault et al. 2017).

Pollutants from wastewater are removed at WWTPs by the activated sludge process (ASP) (Nasrollahzadeh et al. 2020a). Nitrogen, phosphorus, and biochemicals are examples of contaminants. A nitrogen-based carbon-doped ammonia sensor is used to predict wastewater quality from total suspended solids (TSS), TN, TP, and biological oxygen demand (BOD). These measurements are combined with mathematical and physical models of the sewage system to improve wastewater treatment under varying weather conditions (Khan et al. 2020a). Deep learning or machine learning must be used to process these indicators of influent wastewater (Pakzad et al. 2019). IoT devices can deliver data at a much higher level of detail. For the energy usage of a building, for instance, IoT devices can collect data at the level of individual rooms or equipment rather than depending on aggregated data. This granularity enables better insights into energy usage trends and areas for improvement or optimization. IoT devices can also monitor the functionality and state of infrastructure and machinery, and forecasting models can determine the need for maintenance before an equipment failure by analyzing real-time data. With this proactive strategy, energy waste from broken equipment is minimized, maintenance schedules are optimized, and the carbon impact of reactive repairs or replacements is decreased.

Organic contaminants and pharmaceutically active substances can be removed from water by nanomaterials using a variety of procedures, including adsorption, photocatalysis, advanced oxidation processes (AOPs), filtration, and others (Cai et al. 2018). Examples of emerging contaminants in wastewater discharge include pesticides, textile dyes, plasticizers, disinfection byproducts, PCBs, PAHs, PFOA and PFOS, endocrine-disrupting substances, pharmaceuticals, and personal care products. When used as adsorbents, carbonaceous nanoparticles can remove PFASs from water; adsorption of PFOS and PFOA involves ligand exchange and hydrophobic, electrostatic, and hydrogen-bonding interactions. Large-surface-area nanomaterials with high reactivities (Zhang et al. 2019) show great promise for eliminating these pollutants.

Developing ‘greener’ and more environmentally friendly procedures has received considerable attention; examples include using fewer dangerous solvents and fewer chemical reagents, precursors, and catalysts. Green chemistry and nanotechnology can address the challenges posed by emerging and significant contaminants and microorganisms, and new, suitable methods are needed to destroy hazardous contaminants and pollutants safely. The environmental sustainability of processes that lead to negative externalities, such as water treatment using hydrogen and renewable energy, can be improved by green nanotechnology. Reduced costs and increased efficacy in wastewater treatment are possible thanks to nanoscale filtration, pollutant adsorption on nanoparticles (NPs), and contaminant breakdown by nanocatalysts (Sajjadi et al. 2020). Both the developing and developed worlds need more potable fresh water because micropollutants contaminate water sources (Ali et al. 2017; Das et al. 2018). Current decontamination techniques (chlorination and ozonation) produce toxic byproducts and use chemicals excessively (Westerhoff et al. 2016).

Nanoparticles' vast surface area, high reactivity, mechanical characteristics, low cost, and low power requirements make them ideal for treating and restoring water. These specifically defined and manageable compounds might work well as adsorbents (Yan et al. 2020). The use of sophisticated nanomaterials enables low-cost, improved water treatment processes for ‘real-time’ and ongoing water quality monitoring. Wastewater treatment, prevention, and remediation all involve nanomaterials. While preserving water's cleanliness, usability, and availability, these recyclable materials can detect biological and chemical pollutants in produced, municipal, or industrial effluent. Graphene-based nanomaterials, carbon nanotubes, and carbon and graphene quantum dots are some of the nanomaterials utilized to remove pharmaceutical contaminants and treat wastewater, as described in the following section (Figure 1).
Figure 1

Specific compensations of different nanoparticles made of carbon.


Due to their distinctive advantages, nanomaterials are attractive alternatives to traditional wastewater treatment methods, even in the face of possible health hazards, higher manufacturing costs, and concerns about selectivity, durability, and recyclability (Lu & Astruc 2020). More research is needed on nanomaterials' environmental effects, toxicity, removal kinetics, simulation, and the behavior of dangerous pollutants. Drugs, endocrine disruptors, pesticides, toxic organic dyes, personal care items, detergents, and other novel and resistant contaminants have been the primary focus of research (Mukherjee et al. 2020).

This research introduces a deep learning modified neural network (DLMNN) with the Binary Spotted Hyena Optimizer (BSHO) to model and compute solutions to this issue. The methods utilized within and outside wastewater treatment facilities to treat wastewater carbon-neutrally are then discussed. All efforts for resource recovery, water reuse, and energy recovery partly meet this goal. In contrast to previous modeling techniques, the training and validation of the recommended DLMNN-BSHO model demonstrated outstanding accuracy, as evidenced by the model's high determination coefficient (R2) for both the training and testing stages. Recent advancements and issues with nanomaterials made from sustainable carbon and graphene quantum dots and their uses in the purification and treatment of wastewater are also discussed.

Multiple boundaries for carbon accounting of the wastewater system

Sewage treatment plants cannot perform their primary function, removing pollutants and maintaining a clean environment, without supporting municipal infrastructure such as sludge collection facilities and disposal sites. Wastewater treatment facilities are transitioning from conventional operations to energy and resource recovery. These two elements can be used to gradually widen the scope of carbon accounting for wastewater treatment to include the entire ecological system, as opposed to the current focus on specific wastewater treatment processes (Figure 2). To address the multifunctionality of WWTPs, the methodology uses a ‘system expansion’ strategy. Using products recovered from wastewater could reduce production elsewhere and lower the system's carbon footprint by substituting them for comparable products already on the market.
Figure 2

A sketch of multiple boundaries for wastewater treatment carbon accounting.


There are three boundaries: the WWTP itself, human society, and the ecological system. As flows enter a boundary, the inputs for most of the facilities within Boundaries 1 and 2 are streamlined, because these resources are needed for operation. The indicators represent specific direct carbon emissions, while the consumption of energy and chemicals and the flows of recycled products imply indirect carbon emissions, which can affect the carbon balance both positively and negatively.

The water line is the most critical component in inventories of carbon emissions produced by wastewater treatment facilities. The entire process is covered by this line, which begins with the intake of incoming sewage and ends with the effluent discharge after going through numerous physical, chemical, and biological treatment procedures. These inventories monitor the system's CO2 emissions. Depending on area effluent targets and rules, wastewater treatment might be optional. Secondary biological treatment effluent may be released or applied to land as irrigation (Mainardis et al. 2022). A tertiary treatment step, like adding a membrane reactor, is necessary to reuse effluent in industrial production or as reclaimed water (Perumal et al. 2022).

Sludge, a crucial by-product of wastewater treatment, tends to accumulate contaminants, so the importance of successfully treating and managing sludge should be taken into consideration when assessing carbon emissions in the context of wastewater treatment (Geetha et al. 2022). After thickening, sludge can be disposed of in several ways (Arias et al. 2021), and its volume can be decreased through conditioning and dewatering. Anaerobic digestion (AD), composting, and pyrolysis are a few techniques that can be used to recover the energy and resources in sludge; the remaining sludge is disposed of in landfills, incinerators, or on farmland. The first circle therefore also covers the CO2 emissions caused by WWTP sludge treatment and disposal. To achieve energy independence at the WWTP, the first circle considers the water line, the sludge line, and a variety of other energy conservation and usage strategies, such as installed solar panels, wind turbines, heat pumps, and other technologies.

WWTP infrastructure expansion to urban infrastructure

Water must be collected and routed through the sewer system before being processed in WWTPs; the existing sewer network and the WWTPs together form an urban wastewater system. Municipal facilities incorporate methodical management practices as more functions are added to sewage treatment. Co-digesting municipal waste, especially food waste, with sludge in WWTPs boosts biogas generation and decreases the amount of sludge transferred to landfills or incinerators. Some tertiary treatment WWTPs also produce reclaimed water for use in municipal or industrial applications, and such facilities are now widespread. Carbon-neutral wastewater treatment therefore affects the entire water and wastewater system and the wider urban infrastructure, extending well beyond the treatment plant itself.

Further expansion to human society and ecological systems

The third circle includes human society, water, soil, vegetation, and related systems. Water bodies receive treated wastewater in addition to serving as water sources. The distribution of resources recovered during wastewater treatment may lower the demand for comparable industrial goods; agriculture and human society both share these resources. Sewage treatment may also vary depending on whether a region is urban or rural: decentralized sewage treatment systems are used in rural areas because they are more environmentally friendly there than the centralized systems used in densely populated urban areas. Despite the advantages of both centralized and distributed models, resource recovery through decentralization is increasingly popular.

Anomaly detection and sensor calibration

If a sensor malfunctions or the data contain unusual values, anomaly detection and sensor calibration methods should be applied. These methods can identify and correct erroneous sensor readings, making the forecasting model more accurate. The model's performance can be improved by detecting and then removing or correcting anomalous sensor data, as sketched below.
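As a concrete illustration, the following sketch flags readings whose rolling z-score exceeds a threshold and replaces them with the last accepted value. It is a minimal example only: the window size, threshold, hold-last-value correction, and the synthetic COD series are illustrative assumptions, not settings or data from this study.

import numpy as np

def flag_anomalies(readings, window=24, z_thresh=3.0):
    """Flag readings whose rolling z-score against the previous window exceeds z_thresh."""
    readings = np.asarray(readings, dtype=float)
    flags = np.zeros(readings.shape, dtype=bool)
    for t in range(window, len(readings)):
        hist = readings[t - window:t]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0:
            flags[t] = abs(readings[t] - mu) / sigma > z_thresh
    return flags

def correct_anomalies(readings, flags):
    """Replace flagged readings with the last accepted value (simple hold correction)."""
    corrected = np.asarray(readings, dtype=float).copy()
    for t in range(1, len(corrected)):
        if flags[t]:
            corrected[t] = corrected[t - 1]
    return corrected

# Example: a synthetic hourly COD series with one spurious spike.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(300, 10, 100), [900.0], rng.normal(300, 10, 50)])
flags = flag_anomalies(series)
cleaned = correct_anomalies(series, flags)
print(int(flags.sum()), "reading(s) flagged")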

Robust system design

DLMNN-BSHO primarily focuses on the forecasting model, but a robust system design can also help prevent system outages. Backup and fallback mechanisms can be set up to ensure the forecasting system keeps working despite short-term problems, for example with backup power sources, redundant data storage, and fault-tolerant infrastructure; one simple fallback pattern is sketched below.
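The sketch below illustrates such a fallback: readings that cannot be delivered to the primary store are buffered locally for later replay. The primary_send callable and the backlog file name are hypothetical placeholders, not components of the proposed system.

import json
import time
from pathlib import Path

def store_reading(reading, primary_send, buffer_path="backlog.jsonl", retries=3):
    """Try to push one reading to the primary store; buffer it locally on failure.

    primary_send: a caller-supplied function that uploads one reading and raises on
    failure (for example, a thin wrapper around an HTTP or MQTT client).
    """
    for attempt in range(retries):
        try:
            primary_send(reading)
            return True
        except Exception:
            time.sleep(2 ** attempt)  # simple exponential backoff between retries
    # Primary store unreachable: append to a local JSON-lines backlog for later replay.
    with Path(buffer_path).open("a") as fh:
        fh.write(json.dumps(reading) + "\n")
    return False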

Classification using DLMNN classifier

DLMNNs are fed pre-processed data values in this procedure (Figure 3). Weights are applied and all the inputs are summed. The nodes in the following layer are called ‘hidden nodes’; these nodes multiply the input values by their node weight vectors.
Figure 3

DLMNN architecture for the classification.


The BSHO is used to optimize the DLNN weight values, which reduces the amount of back-propagation (BP) required to achieve the desired result; because the network is improved in this way, it is referred to as the deep learning modified neural network (DLMNN). DLMNN layers use a hidden activation layer, and the outputs of these layers are passed on to the next layer; they have a significant impact on the classifier's output. The following processes are involved in DLMNN classification:

  • First, use Equation (1) to set the pre-processed data values and their related weights.
    (1)
    (2)

Here, w1, w2, w3, and so on denote the weights, and the n pre-processed data values and their corresponding weights are represented by Pd_a and w_a, respectively, in this example.

  • A random weight vector is used to multiply the pre-processed data, and the results are then summed together, as shown mathematically by
    G = Σ_{a=1}^{n} Pd_a · w_a     (3)
    where G is the sum of the individual weighted values
  • Use the equation to determine the activation function (AF),
    (4)
    (5)

Here, FA_a denotes the activation applied to the weighted input Pd_a. Other activation functions can be used with the proposed DLMNN, such as the sigmoid or hyperbolic tangent functions.

Calculate the output of the next hidden layer using the following equation.
(6)
where Ba_i denotes the bias value, the pre-processed data values are as specified above, and w_a is the weight applied between the input and hidden layers.
  • For each DLMNN layer, repeat the preceding three steps. Finally, by combining the weighted input signals, estimate the output by determining the values of the output-layer neurons.
    (7)
    where Oa indicates the value of the layer preceding the output layer, wj defines the hidden layer weights, and Va signifies the in-question output component.
  • The output of the network is compared with the desired value; the difference between these two values forms the error signal, expressed mathematically in Equation (8) as,
    (8)

Ti specifies the desired output, while Es indicates the error signal.

  • The target value is compared with the output unit to determine the error, which is then distributed back to all other network nodes connected to it.
    (9)
  • The BP approach determines the weight adjustment, which may be compared to the original weight.
    (10)

WCA is the weight correction, λ is the momentum term, and δa is the error distributed throughout the network.

Algorithm 1 is mentioned below:

Algorithm 1: DLMNN 
for i = 1 to n do 
  Use the BSHO algorithm to obtain the connection vector V for neuron i. 
  Use the BSHO algorithm to obtain the synaptic weights sw for neuron i. 
  Use the BSHO algorithm to determine neuron i's bias bf. 
  Obtain neuron i's transfer function index tf. 
End for 
for k = n + 1 to DLMNNm do 
  Compute neuron k's output from its individual connections. 
End for 
for k = DLMNNm to DLMNN do 
  Compute neuron k's output. 
End for 
Perform weight optimization with the DLMNN algorithm by compiling the output at the end. 
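To make the procedure concrete, the listing below is a minimal Python sketch of the forward pass and BP-style weight update described by Equations (3)-(10). It assumes a single hidden layer, sigmoid activations, and illustrative learning-rate and momentum values; it is not the authors' implementation, and in the proposed method the weights are additionally tuned by the BSHO rather than by back-propagation alone.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SimpleDLMNN:
    """Minimal one-hidden-layer network in the spirit of Equations (3)-(10)."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.1, momentum=0.9):
        self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))   # input-to-hidden weights (w_a)
        self.b1 = np.zeros(n_hidden)                              # hidden biases (Ba_i)
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))   # hidden-to-output weights (w_j)
        self.b2 = np.zeros(n_out)
        self.lr, self.momentum = lr, momentum
        self.prev_dW1 = np.zeros_like(self.W1)
        self.prev_dW2 = np.zeros_like(self.W2)

    def forward(self, pd):
        # Weighted sum of the pre-processed inputs plus bias, then activation (Eqs (3)-(6)).
        self.h = sigmoid(pd @ self.W1 + self.b1)
        # Output-layer value from the hidden outputs (Eq. (7)).
        self.out = sigmoid(self.h @ self.W2 + self.b2)
        return self.out

    def backprop(self, pd, target):
        es = target - self.out                       # error signal (Eq. (8))
        delta_out = es * self.out * (1 - self.out)   # error at the output layer (Eq. (9))
        delta_hid = (delta_out @ self.W2.T) * self.h * (1 - self.h)
        # Weight correction with a momentum term (Eq. (10)).
        dW2 = self.lr * np.outer(self.h, delta_out) + self.momentum * self.prev_dW2
        dW1 = self.lr * np.outer(pd, delta_hid) + self.momentum * self.prev_dW1
        self.W2 += dW2
        self.W1 += dW1
        self.b2 += self.lr * delta_out
        self.b1 += self.lr * delta_hid
        self.prev_dW2, self.prev_dW1 = dW2, dW1
        return float(np.sum(es ** 2))

# Example: one forward/backward step on a synthetic five-indicator influent sample.
net = SimpleDLMNN(n_in=5, n_hidden=8, n_out=1)
x, y = rng.random(5), np.array([1.0])
net.forward(x)
print("squared error after one update:", net.backprop(x, y))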

The binary spotted hyena optimizer

The BSHO is adopted in this study. The algorithm simulates a discrete binary search space using the hunting behavior of spotted hyenas, which are renowned for hunting cooperatively: they locate their prey, then encircle and attack it. The BSHO treats the search space as a hypercube and restricts search-agent movement to the cube's corners; since each solution is binary, it can only take the values 0 or 1. The spotted hyenas continually move around, following the position of the best agent as fresh information becomes available. Their positions are dynamically adjusted using the hyperbolic tangent function, and the updated positions calculated with this transfer function lie between ‘0’ and ‘1’.

Spotted hyena optimizer

Encircling the prey, searching for it, and attacking it are the three critical components of SHO. The prey corresponds to the current best solution. Once the best solution is found, the remaining spotted hyenas are retained and attempt to update their positions relative to it. A mathematical model of the spotted hyenas' encircling behavior is provided below:
(11)
(12)
Here, the first quantity represents the distance between the spotted hyena and its target prey in the current situation, and the coefficient vectors are left unchanged. The letter P denotes the current iteration, and the two position vectors represent the prey and the spotted hyena, respectively. The coefficient vectors are determined as follows:
(13)
(14)
(15)

Here, the control parameter is lowered linearly from 5 to 0 over the iterations, which maintains the balance between exploration and exploitation, and the associated random vectors fall within [0, 1]. To give the spotted hyenas access to more areas close to their current positions, the coefficient values are adjusted. The spotted hyenas then update their positions in a random pattern around the prey using Equations (11) and (12).

To mimic the hunting behavior seen in spotted hyenas, it is assumed that the best search agent already knows where the prey is. The other search agents realign themselves based on the leading search agent once this agent forms a close-knit circle with its peers. The mathematical representation of the hunting mechanism is shown in the following equations:
(16)
(17)
(18)
where the first vector gives the position of the best spotted hyena and the remaining vectors give the positions of the other spotted hyenas. The number of spotted hyenas, denoted by the letter N, can be determined with the following equation:
(19)
where a randomly generated vector within the range [0.5, 1] is used. The ‘NS’ variable represents the number of solutions in the search space that are comparable to the optimal solution, and the N top solutions are compiled into a cluster. The mathematical equation for attacking the prey is as follows.
(20)

where the positions of all other search agents are updated according to the position of the best search agent. Hyenas can use BSHO to update their positions and attack their prey.

SHO was created to address continuous optimization problems and cannot directly handle discrete ones; binary SHO (BSHO) addresses this issue. Since variables can only be 0 or 1, BSHO substitutes binary encoding for SHO's float encoding (Wei et al. 2020). The spotted hyena's position-updating mechanism can make local binary searches more effective. The hyperbolic tangent function is used to map the spotted hyenas' positions onto the ‘0’ and ‘1’ states, so each dimension of the search space is restricted to the range between 0 and 1. In contrast to previous binary metaheuristics, BSHO uses a cluster creation technique, which the equations that follow illustrate. Algorithm 2 is presented below.

Algorithm 2: Binary Spotted Hyena Optimization 
Input: the population of n spotted hyenas (i = 1, 2, …, n) 
Output: the best spotted hyena 
1. Initialize the population by randomly seeding n hyenas. 
2. Evaluate the fitness of each search agent. 
3. While (p < Max iterations) do 
4.   For each spotted hyena do 
5.     Update the search agent's position according to Equation (16). 
6.   End for 
7.   Update the control parameters U, T, h, and N. 
8.   Evaluate each spotted hyena's fitness. 
9.   If the new solution is superior to the old one, update the best solution. 
10.  Update the cluster with respect to the best solution. 
11.  p = p + 1 
12. End while 

(21)
Let the above stand for the collection of optimal solutions produced by the approach.
(22)
(23)

Let RAND denote a uniformly distributed random number in [0, 1]. In this case, the spotted hyena's position is represented by the binary code, where d stands for the dimension and s for the iteration.
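The following Python sketch pulls these pieces together under simplifying assumptions: the cluster formation of Equations (18)-(20) is reduced to following the single best agent, the population size, iteration count, and fitness function are illustrative, and the tanh-based bit-flip rule mirrors the style of Equations (21)-(23) rather than reproducing the exact formulation.

import numpy as np

rng = np.random.default_rng(1)

def binary_update(continuous_step, current_bits):
    """Map a continuous SHO step onto the binary hypercube with a tanh transfer rule."""
    prob = np.abs(np.tanh(continuous_step))       # transfer value in [0, 1)
    flip = rng.random(current_bits.shape) < prob  # compare with a uniform RAND
    return np.where(flip, 1 - current_bits, current_bits)

def bsho(fitness, dim, n_hyenas=20, iters=50):
    """Minimal binary spotted hyena optimizer sketch (maximizes the given fitness)."""
    pop = rng.integers(0, 2, size=(n_hyenas, dim))
    best = max(pop, key=fitness).copy()
    for p in range(iters):
        h = 5 - p * (5 / iters)                   # control parameter lowered linearly from 5 to 0
        for i in range(n_hyenas):
            B = 2 * rng.random(dim)               # random coefficient vector
            E = 2 * h * rng.random(dim) - h       # balances exploration and exploitation
            D = np.abs(B * best - pop[i])         # encircling distance to the best agent
            step = best - E * D                   # continuous position update
            pop[i] = binary_update(step, pop[i])
            if fitness(pop[i]) > fitness(best):
                best = pop[i].copy()
    return best

# Example: select a subset of binary-encoded wastewater indicators maximizing a toy score.
score = lambda bits: bits.sum() - 0.5 * abs(bits.sum() - 3)
print(bsho(score, dim=8))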

This method employs Bayesian K-nearest neighbors (BKNN), the Energy-based Learning approach for Multi-Agent activity forecasting (ELMA), and the fusion of DLMNN-BSHO with parametric values to predict wastewater quality, using Python (version 3.1) for data collection and analysis. Evaluation metrics were used to report the classification performance for water quality: accuracy, specificity, sensitivity, precision, and F-score.

Evaluation metrics

Precision, recall, F-score, accuracy, and root mean squared error (RMSE) are standard metrics. These metrics are generally calculated from the four main outcomes of a positive/negative binary classification: true positive (TP) and true negative (TN), which represent states that were correctly identified, and false positive (FP) and false negative (FN), which signify states that were incorrectly identified. The following are the statistical validation and evaluation parameters for our proposed wastewater treatment architecture:

The accuracy of the classifier is measured by the percentage of correct predictions it makes and specifies the overall performance of the classifier; it is defined as
    Accuracy = (TP + TN) / (TP + TN + FP + FN)     (24)
Precision is the proportion of correctly predicted positive observations to all predicted positive observations; a high level of precision corresponds to a low false positive rate:
    Precision = TP / (TP + FP)     (25)
Recall is a valuable evaluation indicator that measures the percentage of actual positives that are correctly classified; it is calculated from the TP and FN values:
    Recall = TP / (TP + FN)     (26)
The weighted average of precision and recall gives the F1-score, a statistical metric that accounts for both false positives and false negatives and can be used to assess classifier performance:
    F1-score = 2 × (Precision × Recall) / (Precision + Recall)     (27)
The RMSE represents the average squared difference between the actual and predicted values and is obtained from Equation (28):
    RMSE = sqrt((1/n) Σ (y_i − ŷ_i)²)     (28)
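A short Python sketch of these standard formulas is given below; the toy label vectors in the final line are purely illustrative.

import numpy as np

def evaluate(y_true, y_pred):
    """Compute accuracy, precision, recall, F1, and RMSE for binary 0/1 labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "rmse": rmse}

print(evaluate([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))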

Precision

In Figure 4 and Table 1, the precision of the DLMNN-BSHO approach is compared to that of other frequently employed methods. The graph illustrates that the deep learning approach yields more accurate results. For 100 data samples, DLMNN-BSHO has a precision of 95.764%, whereas the ANN, CNN, BKNN, and ELM models have precision values of 88.947, 91.647, 86.547, and 85.103%, respectively. The DLMNN-BSHO model performed best across the range of data sizes: with 300 data samples, it reaches a precision of 95.936%, compared with 90.546, 93.748, 88.446, and 86.544% for the ANN, CNN, BKNN, and ELM, respectively.
Table 1

Precision analysis for the DLMNN-BSHO method using existing systems

Number of data from dataset | ANN | CNN | BKNN | ELM | DLMNN-BSHO
100 | 88.947 | 91.647 | 86.547 | 85.103 | 95.764
150 | 89.647 | 91.537 | 87.446 | 85.537 | 94.038
200 | 90.877 | 93.847 | 87.904 | 86.836 | 94.637
250 | 92.543 | 91.974 | 88.747 | 87.746 | 95.635
300 | 90.546 | 93.748 | 88.446 | 86.544 | 95.936
Figure 4

Precision analysis for the DLMNN-BSHO approach using existing systems.


Recall

The recall of the DLMNN-BSHO technique is compared to other existing designs in Figure 5 and Table 2. The graph shows how deep learning methods significantly improved recall. For instance, the recall for 100 data samples for the DLMNN-BSHO model is 93.448%, compared to recalls of 83.864, 92.437, 89.546, and 86.084% for the ANN, CNN, BKNN, and ELM models. The DLMNN-BSHO model performed well across the range of data sizes: it had a recall of 95.326% for 300 data samples, compared to recalls of 84.653, 93.747, 90.964, and 87.226% for the ANN, CNN, BKNN, and ELM models.
Table 2

Analysis of DLMNN-BSHO approach recall using existing systems

Number of data from dataset | ANN | CNN | BKNN | ELM | DLMNN-BSHO
100 | 83.864 | 92.437 | 89.546 | 86.084 | 93.448
150 | 84.873 | 93.764 | 90.308 | 86.844 | 94.833
200 | 83.048 | 92.837 | 89.647 | 88.647 | 94.038
250 | 85.327 | 92.984 | 90.647 | 88.225 | 95.536
300 | 84.653 | 93.747 | 90.964 | 87.226 | 95.326
Figure 5

Analysis of DLMNN-BSHO approach recall using existing systems.


F-score

In Table 3 and Figure 6, the DLMNN-BSHO approach is compared to other widely used methodologies in terms of F-score. The graph illustrates how the F-score has improved due to deep learning. For instance, with 100 data samples the F-scores for the ANN, CNN, BKNN, and ELM models are 79.763, 83.408, 85.974, and 88.947%, respectively, while the F-score for the DLMNN-BSHO model is 92.748%. The DLMNN-BSHO model performed well across the range of data sizes: with 300 data samples its F-score is 93.747%, whereas those for the ANN, CNN, BKNN, and ELM models are 81.436, 84.947, 87.747, and 90.547%, respectively.
Table 3

Analysis of F-scores for the DLMNN-BSHO approach using existing systems

Number of data from dataset | ANN | CNN | BKNN | ELM | DLMNN-BSHO
100 | 79.763 | 83.408 | 85.974 | 88.947 | 92.748
150 | 80.527 | 83.826 | 85.736 | 89.064 | 91.747
200 | 80.043 | 84.863 | 86.436 | 88.646 | 92.647
250 | 79.536 | 85.227 | 86.847 | 89.546 | 91.394
300 | 81.436 | 84.947 | 87.747 | 90.547 | 93.747
Figure 6

F-score analysis for DLMNN-BSHO technique with existing systems.


Accuracy

The accuracy of the DLMNN-BSHO methodology is contrasted with that of other currently used processes in Figure 7 and Table 4. The graph shows that accuracy has increased using the deep learning method. In comparison to the ANN, CNN, BKNN, and ELM models with data 100, the DLMNN-BSHO has an accuracy rate of 98.574% as opposed to 96.436, 94.567, 88.735, and 93.038%. However, the DLMNN-BSHO model performed well for a range of data sizes. Similarly, the DLMNN-BSHO has an accuracy of 99.637% under 300 data, compared to accuracy values of 97.735, 95.546, 91.563, and 94.907% for ANN, CNN, BKNN, and ELM, respectively.
Table 4

Accuracy analysis for DLMNN-BSHO method with existing systems

Number of data from dataset | ANN | CNN | BKNN | ELM | DLMNN-BSHO
100 | 96.436 | 94.567 | 88.735 | 93.038 | 98.574
150 | 96.753 | 94.903 | 88.393 | 93.674 | 98.054
200 | 97.843 | 93.943 | 89.158 | 94.275 | 98.325
250 | 97.325 | 94.363 | 89.335 | 93.827 | 99.745
300 | 97.735 | 95.546 | 91.563 | 94.907 | 99.637
Figure 7

Accuracy analysis for DLMNN-BSHO method with existing systems.


Root mean squared error

The RMSE of the DLMNN-BSHO methodology is compared to those of other recent methods in Figure 8 and Table 5. The graph demonstrates that the deep learning strategy achieved better results with a lower RMSE. The DLMNN-BSHO model's RMSE for 100 data samples is 42.86%, which is lower than the RMSEs for the ANN, CNN, BKNN, and ELM models of 54.732, 50.536, 46.826, and 44.432%, respectively. The DLMNN-BSHO model also maintains low RMSE values across the range of data sizes: for 300 data samples, its RMSE is 42.827%, whereas it is 56.03, 51.82, 48.282, and 45.62% for the ANN, CNN, BKNN, and ELM models, respectively. The proposed method thus performs best, with the minimum RMSE values.
Table 5

Analysis of the RMSE for the DLMNN-BSHO approach with existing systems

Number of data from dataset | ANN | CNN | BKNN | ELM | DLMNN-BSHO
100 | 54.732 | 50.536 | 46.826 | 44.432 | 42.86
150 | 54.63 | 51.73 | 47.453 | 44.738 | 42.08
200 | 55.98 | 50.93 | 47.22 | 45.07 | 43.625
250 | 55.637 | 51.532 | 46.53 | 45.972 | 43.946
300 | 56.03 | 51.82 | 48.282 | 45.62 | 42.827
Figure 8

Analysis of the RMSE for the DLMNN-BSHO approach with existing systems.


Training and testing validation

The training and testing validation analysis for the DLMNN-BSHO technique is described in Table 6 and Figure 9. According to the data, the proposed DLMNN-BSHO approach performed well in all respects. The DLMNN-BSHO training and testing validation values are 1.25 and 1.23 at 10 epochs. Similarly, at 100 epochs, the DLMNN-BSHO has training and testing validation values of 0.15 and 0.13, respectively.
Table 6

Training and testing validation analysis for DLMNN-BSHO technique with existing systems

Epochs | Training validation | Testing validation
– | 1.36 | 1.34
10 | 1.25 | 1.23
20 | 1.06 | 1.03
30 | 0.85 | 0.83
40 | 0.72 | 0.71
50 | 0.66 | 0.64
60 | 0.53 | 0.51
70 | 0.44 | 0.40
80 | 0.24 | 0.22
90 | 0.18 | 0.16
100 | 0.15 | 0.13
Figure 9

Training and testing validation analysis for DLMNN-BSHO technique with existing systems.


According to predictions, activated carbon will absorb unwanted influent indicators in water treatment plants. To address this issue, this paper uses DLMNN and BSHO for modeling and calculations. The techniques used within and outside wastewater treatment plants to achieve carbon-neutral wastewater treatment are then described; all resource recovery, water reuse, and energy recovery efforts contribute to this goal. Compared to previous modeling techniques, the training and validation of the recommended DLMNN-BSHO model demonstrated exceptional accuracy, as evidenced by the model's high determination coefficient (R2) for both the training and testing stages. Recent developments and issues with nanomaterials made from sustainable carbon and graphene quantum dots, and how they can be used to treat and purify wastewater, are also discussed. The model was evaluated using precision, recall, F-score, RMSE, training and testing validation, and accuracy, which averaged 95.936%, 95.326%, 93.747%, 42.827%, 0.15, and 99.637%, respectively. ANN, CNN, BKNN, and Extreme Learning Machines (ELM) are the existing systems used for comparison in this paper. The work could be expanded to optimize the wastewater treatment model's performance across the range of analysis states; performance enhancement through swarm intelligence remains a goal.

L. S. S. rendered support in data curation and drafting the article. H. A. conceptualized the study deep learning algorithm. A. H. A. developed the methodology, rendered support in formal analysis and reviewed and edited the draft. V. R. A. developed the methodology and rendered support in formal analysis. All authors have read and agreed to this version of the manuscript.

All relevant data are included in the paper or its Supplementary Information.

The authors declare there is no conflict.

Ali, I., Peng, C., Naz, I., Khan, Z. M., Sultan, M., Islam, T. & Abbasi, I. A. 2017 Phytogenic magnetic nanoparticles for wastewater treatment: a review. RSC Advances 7, 40158–40178.

Amoueyan, E. 2017 Quantifying pathogen risks associated with potable reuse: a risk assessment case study for cryptosporidium. Water Research 119, 252–266.

Cai, Z., Dwivedi, A. D., Lee, W. N., Zhao, X., Liu, W., Sillanpaa, M., Zhao, D., Huang, C. H. & Fu, J. 2018 Application of nanotechnologies for removing pharmaceutically active compounds from water: development and future trends. Environmental Science: Nano 5, 27–47.

Courault, D., Albert, I., Perelle, S., Fraisse, A., Renault, P., Salemkour, A. & Amato, P. 2017 Assessment and risk modeling of airborne enteric viruses emitted from wastewater reused for irrigation. Science of the Total Environment 592, 512–526.

Das, S., Chakraborty, J., Chatterjee, S. & Kumar, H. 2018 Prospects of biosynthesized nanomaterials for the remediation of organic and inorganic environmental contaminants. Environmental Science: Nano 5, 2784–2808.

Geetha, B. T., Kumar, P. S., Bama, B. S., Neelakandan, S., Dutta, C. & Babu, D. V. 2022 Green energy aware and cluster-based communication for future load prediction in IoT. Sustainable Energy Technologies and Assessments 52, 102244.

Khan, A., Colmenares, J. C. & Glaser, R. 2020 Lignin-based composite materials for photocatalysis and photovoltaics. Lignin Chemistry 376 (3), 1–31.

Mainardis, M., Cecconet, D., Moretti, A., Callegari, A., Goi, D., Freguia, S. & Capodaglio, A. G. 2022 Wastewater fertigation in agriculture: issues and opportunities for improved water management and circular economy. Environmental Pollution 296, 118755.

Mukherjee, S., Philip, L. & Pradeep, T. 2020 Sustainable materials for affordable point-of-use water purification. In: Frontiers in Water-Energy-Nexus – Nature-Based Solutions, Advanced Technologies and Best Practices for Environmental Sustainability (Naddeo, V., Balakrishnan, M. & Choo, K.-H., eds.). Springer, Cham, pp. 125–128.

Nasrollahzadeh, M., Nezafat, Z., Gorab, M. G. & Sajjadi, M. 2020 Recent progresses in graphene-based (photo)catalysts for reduction of nitro compounds. Molecular Catalysis 484, 110758.

Perumal, S. K., Kallimani, J. S., Ulaganathan, S., Bhargava, S. & Meckanizi, S. 2022 Controlling energy aware clustering and multihop routing protocol for IoT assisted wireless sensor networks. Concurrency and Computation: Practice and Experience 34, e7106.

Wei, L., Zhu, F., Li, Q., Xue, C., Xia, X., Yu, H., Zhao, Q., Jiang, J. & Bai, S. 2020 Development, current state and future trends of sludge management in China: based on exploratory data and CO2-equivalent emissions analysis. Environment International 144, 106093.

Westerhoff, P., Alvarez, P., Li, Q., Gardea-Torresdey, J. & Zimmerman, J. 2016 Overcoming implementation barriers for nanotechnology in drinking water treatment. Environmental Science: Nano 3, 1241–1253.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY 4.0), which permits copying, adaptation and redistribution, provided the original work is properly cited (http://creativecommons.org/licenses/by/4.0/).