Research Article

Journal of Korea Water Resources Association. June 2020. 395-408
https://doi.org/10.3741/JKWRA.2020.53.6.395




1. INTRODUCTION

Streamflow forecasting based on accurate measurements can be used to design flood mitigation structures for agricultural and urban basins and to build water allocation systems for agricultural, industrial, and commercial purposes (Rezaie-Balf et al., 2019; Samsudin et al., 2011). The complex and significant variability of streamflow can be explained by its spatial and temporal characteristics, which has guided the development and application of different approaches for estimation, modeling, forecasting, and prediction (Martins et al., 2011). Forecasting of hydrological time series (e.g., streamflow, water stage, evaporation, and groundwater stage) is important for understanding the internal relationships of natural processes (Krishna et al., 2011). Since the complexity of hydrological time series requires tools suited to non-linear and non-stationary dynamic systems, diverse forecasting methods have been proposed for hydrological forecasting (Sivakumar et al., 2001).

Data-driven approaches, although credible for hydrological forecasting, cannot depict physical processes, because they only consider an adequate selection of hydrological variables with temporal input-output relationships. These approaches are typically categorized into two types: classical and computational intelligence approaches. The classical approaches include linear regression (LR), auto-regressive integrated moving average (ARIMA), and ARIMA with exogenous input (ARIMAX). The machine learning approaches include artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS), genetic programming (GP), gene expression programming (GEP), model trees (MT), extreme learning machines (ELM), support vector machines (SVM), and multivariate adaptive regression splines (MARS). The field of machine learning has undergone innovative changes through novel techniques for data simulation and processing (Chandwani et al., 2015).

For the past three decades, the ANN model, inspired by biological neuron systems, has been applied to many types of hydrological forecasting. Various studies have validated the accuracy and effectiveness of the ANN model for streamflow forecasting (Biswas and Jayawardena, 2014; Badrzadeh et al., 2013; Moradkhani et al., 2004; Cigizoglu, 2003; Abrahart and See, 2000). The ANFIS model (Jang, 1993), which combines the merits of the ANN model and fuzzy systems, has also been employed for streamflow forecasting (Talei et al., 2010; Nasr and Bruen, 2008; Keskin et al., 2006; Lohani et al., 2006).

Methodologies combining wavelets and machine learning have also been utilized for streamflow forecasting, and such combined approaches have outperformed conventional models (Nourani et al., 2014). Wavelet-based machine learning approaches, including wavelet-based neural networks (WNN), wavelet-based support vector regression (WSVR), and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS), have been effectively implemented for forecasting streamflow, water stage, runoff, and groundwater (Seo et al., 2015; Kamruzzaman et al., 2013; Partal, 2009; Partal and Kisi, 2007). Many researchers have attempted to forecast streamflow using wavelet-based machine learning approaches (Zakhrouf et al., 2016, 2020; Shoaib et al., 2014; Badrzadeh et al., 2013; Nourani et al., 2013; Guo et al., 2011; Tiwari and Chatterjee, 2010).

The optimal structure of wavelet-based machine learning approaches can be identified through a search method. A method for designing machine learning models using an evolutionary optimization algorithm is proposed here to determine the best model structures. Evolutionary machine learning approaches for modeling hydrologic systems have been suggested by Zakhrouf et al. (2018), Kalteh (2015), Sahay and Srivastava (2014), and Asadi et al. (2013).

The k-fold cross validation (CV) method is one way to assess algorithmic generalization, while the out-of-sample cross validation (OOS-CV) method is frequently used in hydrological modeling. This paper develops a new approach combining wavelet transformation, a machine learning approach, an evolutionary optimization algorithm, and the k-fold cross validation method for multi-step (i.e., t+1, t+2, and t+3 days) streamflow forecasting in the Seybous River, North Algeria. The paper is organized into five parts. The first part provides a brief introduction. The second part describes the data and the modeling tools, including ANN, ANFIS, WNN, GA, and k-fold CV. The third part details the methodology, and the fourth part presents the results and discussion. Conclusions are stated in the final part.

2. MATERIAL

2.1 Data description

The data for the training and testing phases of the developed models were obtained from the Seybous River basin, Algeria. The basin lies between latitudes 36.0°N and 38.0°N and longitudes 7.0°E and 8.0°E in Algeria, North Africa, and covers a total catchment area of 6,471.0 km² (Fig. 1). Daily streamflow data for 14 years (September 1982 to August 1996) were obtained from the Mirebeck (14 06 01) gauging station of the National Agency of Water Resources and were utilized for multi-step (days) streamflow forecasting. The selected lead times are adequate considering the rapid surface streamflow in the Mediterranean basin and the watershed size. The first ten years (70% of the data) were utilized for the training phase, and the remaining four years (30% of the data) for the testing phase, as shown in Fig. 2.

http://static.apub.kr/journalsite/sites/kwra/2020-053-06/N0200530601/images/kwra_53_06_01_F1.jpg
Fig. 1.

The Seybous River basin and Mirebeck station (Aichouri et al., 2015)

http://static.apub.kr/journalsite/sites/kwra/2020-053-06/N0200530601/images/kwra_53_06_01_F2.jpg
Fig. 2.

The observed streamflow hydrograph (14 years)

2.2 Artificial neural networks (ANN)

ANN is a parallel computational system based on the architectural and functional principles of biological neural networks (Imrie et al., 2000). The feedforward neural network (FFNN) is one of the most commonly applied architectures. It consists of an input layer (the first layer), which receives external information directly; an output layer (the final layer); and one or more intermediate (hidden) layers that separate the input and output layers. The neurons in the layers are usually fully connected, passing signals from the first layer to the final layer (Zhang et al., 1998). Each neuron handles incoming signals with a consolidation function, which first combines all incoming signals, and an activation function, which then transforms the neuron's output (Günther and Fritsch, 2010). Fig. 3 shows a conventional FFNN architecture.
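The forward pass described above (weighted-sum consolidation followed by an activation function in each layer) can be sketched in a few lines of Python. The 3-2-1 layer sizes, random weights, and zero biases below are hypothetical, chosen only to illustrate the signal flow:

```python
import math
import random

def forward(x, weights, biases, activations):
    """Propagate an input vector through a fully connected FFNN.

    weights[l] is a matrix (list of rows) mapping layer l to layer l+1;
    activations[l] is the transfer function applied in layer l+1.
    """
    a = x
    for W, b, f in zip(weights, biases, activations):
        # Consolidation: weighted sum plus bias; activation: transfer function.
        a = [f(sum(w_ij * a_j for w_ij, a_j in zip(row, a)) + b_i)
             for row, b_i in zip(W, b)]
    return a

logsig = lambda s: 1.0 / (1.0 + math.exp(-s))   # log-sigmoid transfer function
purelin = lambda s: s                            # linear transfer function

# Hypothetical 3-2-1 network: 3 inputs, one hidden layer of 2 neurons, 1 output.
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
W2 = [[random.uniform(-1, 1) for _ in range(2)]]
y = forward([0.5, 0.1, -0.3], [W1, W2], [[0.0, 0.0], [0.0]], [logsig, purelin])
```

In training, the weights and biases would be adjusted (e.g., by backpropagation) rather than fixed at random as here.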

http://static.apub.kr/journalsite/sites/kwra/2020-053-06/N0200530601/images/kwra_53_06_01_F3.jpg
Fig. 3.

The conventional FFNN model architecture

2.3 Adaptive neuro-fuzzy inference system (ANFIS)

ANFIS combines an adaptive neural network and a fuzzy inference system (FIS). The basic structure of a FIS is a mapping from inputs to membership functions (Moosavi et al., 2013). Since ANFIS is based on a FIS, it can be described using fuzzy IF-THEN rules (Jang et al., 1997). There are two types of FIS: Mamdani (Mamdani and Assilian, 1975) and Sugeno (Takagi and Sugeno, 1985). They differ in the consequent part: Mamdani's method utilizes fuzzy membership functions (MFs), whereas Sugeno's method applies linear or constant functions. In this study, the procedures presented by Jang et al. (1997) using Sugeno's approach were utilized for multi-step (days) streamflow forecasting. Fig. 4 shows the conventional ANFIS architecture. The ANFIS was trained using the least-squares method (LSM) and the backpropagation gradient descent method (BP-GDM). A detailed explanation of the ANFIS model can be found in Jang (1993).

http://static.apub.kr/journalsite/sites/kwra/2020-053-06/N0200530601/images/kwra_53_06_01_F4.jpg
Fig. 4.

The conventional ANFIS model architecture (Seo et al., 2015)

2.4 Wavelet-based neural networks (WNN)

The fundamental objective of the wavelet transform (WT) is to obtain a complete time-scale representation of localized and transient events occurring at different time scales (Labat et al., 2000). There are two categories of WT: the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT). The CWT can be applied to continuous functions or time series (Nourani et al., 2009; Mallat, 1989), but it is time-consuming and requires large computational resources, whereas the DWT is more easily applied (Nourani et al., 2009; Mallat, 1989). The DWT requires less calculation time, is less complicated (Ghaemi et al., 2019; Quilty and Adamowski, 2018; Seo et al., 2015, 2018; Kisi, 2011; Nourani et al., 2009; Mallat, 1989), and involves choosing scales and positions (Adamowski and Sun, 2010).

A WNN model is a combination of the WT and an ANN, where the original time series feeding each input neuron of the ANN is decomposed into several multi-frequency time series by the WT. The decomposed details (D) and approximation (A) serve as new inputs to the ANN architecture, as shown in Fig. 5. The WNN model is thus a two-stage algorithm. In the first stage, the original time series is decomposed by the multi-level WT after choosing the decomposition level. The second stage comprises the training and testing phases of the ANN model, with the details and approximation obtained in the first stage used as its inputs. In the WNN, the number of decomposition levels (details and approximation) for each input is calculated from the full data length (Nourani et al., 2009) according to Eq. (1).

http://static.apub.kr/journalsite/sites/kwra/2020-053-06/N0200530601/images/kwra_53_06_01_F5.jpg
Fig. 5.

The schematic diagram of WNN model with one input model

$$L=\mathrm{int}\lbrack\log(N)\rbrack$$ (1)

where L defines the decomposition level, N denotes the number of time series data, and int[·] depicts the integer-part function. In this study, the data comprised N = 5110 samples; therefore, the decomposition level was set to L = 3.
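As a minimal illustration of the quantities involved, the sketch below evaluates Eq. (1) (assuming a base-10 logarithm, which the paper does not state explicitly but which reproduces L = 3 for N = 5110) and computes one level of the Haar DWT by hand, yielding the approximation (A) and detail (D) coefficients that feed the ANN:

```python
import math

def decomposition_level(n):
    """Eq. (1): L = int[log(N)], assuming a base-10 logarithm."""
    return int(math.log10(n))

def haar_dwt(x):
    """One level of the Haar DWT: pairwise averages (approximation, A)
    and pairwise differences (detail, D), scaled by 1/sqrt(2)."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return approx, detail

level = decomposition_level(5110)      # 3 levels for the 5110-sample series
a, d = haar_dwt([4.0, 2.0, 5.0, 7.0])  # toy 4-sample series
```

In the study itself, deeper wavelet families (Daubechies, Symlets, etc.) were selected by the GA; the Haar filter is used here only because its coefficients can be written out directly.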

2.5 Genetic algorithm (GA)

GA, developed by Holland (1975), is a heuristic evolutionary optimization technique based on evolutionary and genetic theory (Kim and Kim, 2008). The GA procedure starts with random strings representing design or decision variables. Each string is then evaluated to assign a fitness value based on the objective and constraint conditions, after which the termination condition is checked (Gowda and Mayya, 2014). This study focuses on optimizing the structure of three machine learning approaches (i.e., ANN, ANFIS, and WNN) using an evolutionary optimization algorithm (i.e., GA).

The first step is to choose the GA chromosome; each individual in the population represents a possible configuration of the machine learning architecture. Following evolutionary optimization theory, GA starts with a population of chromosomes, which evolves towards optimal solutions through the GA operators of selection, crossover, and mutation. These steps are reiterated from one generation to the next with the aim of arriving at the global optimal solution (Kim and Kim, 2008).
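The selection-crossover-mutation loop can be sketched as follows. Tournament selection, one-point crossover, Gaussian mutation, and the quadratic fitness function are illustrative choices only, not the exact operators or parameters used in the study:

```python
import random

def genetic_algorithm(fitness, n_genes, pop_size=20, generations=50,
                      p_mut=0.1, seed=0):
    """Minimal real-coded GA minimizing `fitness`: tournament selection,
    one-point crossover, and Gaussian mutation, repeated per generation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():  # tournament of size 2: keep the fitter of two individuals
            a, b = rng.sample(pop, 2)
            return a if fitness(a) < fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_genes) if n_genes > 1 else 0  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g + rng.gauss(0, 0.1) if rng.random() < p_mut else g
                     for g in child]                               # mutation
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Hypothetical fitness: squared distance of the chromosome from (0.5, -0.2).
f = lambda c: (c[0] - 0.5) ** 2 + (c[1] + 0.2) ** 2
best = genetic_algorithm(f, n_genes=2)
```

In the study, a chromosome instead encodes model hyperparameters (input delay, neuron counts, activation and membership function types), and the fitness is the cross validation error of Eq. (3).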

2.6 K-fold cross validation (CV)

CV is a statistical methodology for comparing and evaluating training algorithms by separating data into two parts: one is utilized to train a model and the other to test it (Stone, 1974). K-fold CV assesses the generalization of algorithms in the evolutionary machine learning approach. In k-fold CV, the data is first separated into k equally sized parts. Subsequently, k iterations of training and testing are performed such that, within each iteration, a different part of the data is held out for testing while the remaining (k-1) parts are used for training (Zhao et al., 2018).

Suppose we have a model f(α) with one or more unknown tuning parameters α and a data set to which the model can be fitted. To estimate the tuning parameter α, k-fold CV divides the data into k roughly equal parts (Fig. 6). The model computes the mean squared error (MSE) for each fold i = 1, 2, …, k, and the cross validation error (CVE) is then calculated as Eq. (2).

http://static.apub.kr/journalsite/sites/kwra/2020-053-06/N0200530601/images/kwra_53_06_01_F6.jpg
Fig. 6.

Representation of k-fold cross validation (CV)

$$CVE(\alpha)=\frac1k\sum_{i=1}^kMSE_i(\alpha)$$ (2)

where CVE denotes the cross validation error, k the number of folds, and MSE_i the mean squared error of the i-th fold.
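Eq. (2) can be illustrated with a short sketch. The mean-predictor model and the six-point data set below are hypothetical, used only to show how the fold-wise MSEs are averaged into the CVE:

```python
def kfold_cve(data, k, fit, mse):
    """Eq. (2): CVE = (1/k) * sum of the fold-wise MSEs.

    `fit` trains a model on the (k-1) training folds;
    `mse` scores the fitted model on the held-out fold.
    """
    n = len(data)
    folds = [data[i * n // k:(i + 1) * n // k] for i in range(k)]
    errors = []
    for i in range(k):
        test = folds[i]                                            # held-out fold
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        model = fit(train)
        errors.append(mse(model, test))
    return sum(errors) / k

# Hypothetical model: predict the training mean; score by MSE on the held-out fold.
fit = lambda train: sum(train) / len(train)
mse = lambda m, test: sum((x - m) ** 2 for x in test) / len(test)
cve = kfold_cve([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], k=3, fit=fit, mse=mse)  # 6.25
```

In the study, `fit` corresponds to training an ANN/ANFIS/WNN candidate structure and the CVE becomes the GA fitness value to be minimized.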

3. METHODOLOGY

3.1 Models development

The machine learning approaches (i.e., ANN, ANFIS, and WNN) were utilized for multi-step (days) streamflow forecasting using previous time series values. The training phase of a machine learning approach provides a non-linear mapping between inputs and outputs and is useful for identifying patterns in complicated data (Liu and Chung, 2014). Since the backpropagation (BP) algorithm is not guaranteed to find the global minimum of the error surface and its convergence can be slow, this problem was addressed with a fitness function that tests how well an architecture learns from the data. The fitness function can be expressed as Eq. (3).

$$F=Min\lbrack CVE\rbrack$$ (3)

where F denotes the fitness function. In general, GA can significantly reduce the weaknesses of the BP algorithm. The data utilized for the training phase (75% of the data) were sub-divided into 10 subsets (10-fold cross validation). Real coding was utilized to find the favorable topology for the ANN, ANFIS, and WNN models. The coding used to find the best architecture and parameters of the machine learning approaches (i.e., ANN, ANFIS, and WNN) is described as follows.

3.2 Coding for the ANN, ANFIS, and WNN models

Feedforward neural network (FFNN) models with two hidden layers and one neuron in the final layer were utilized. A chromosome was built from a series of genes (Fig. 7) to find: the input delay (D); the number of neurons in the hidden layers (NHL1 and NHL2); the activation functions in the hidden and final layers (AFHL1, AFHL2, and AFOL), chosen among the linear transfer function (Purelin), symmetric saturating linear transfer function (Satlins), log-sigmoid transfer function (Logsig), and hyperbolic tangent sigmoid transfer function (Tansig); and the initial connection weights and bias coefficients (IWB).

http://static.apub.kr/journalsite/sites/kwra/2020-053-06/N0200530601/images/kwra_53_06_01_F7.jpg
Fig. 7.

Chromosome encoding for the ANN model

A chromosome for the ANFIS model was created from a series of genes (Fig. 8) to find: the input delay (D); the number of membership functions (NMF); the type of membership functions (TMF), including Π-shaped (Pimf), trapezoidal-shaped (Trapmf), triangular-shaped (Trimf), Gaussian curve (Gaussmf), and difference of two sigmoidal functions (Dsigmf); the definition of the if-then rule type (AND operation/OR operation) (DRT); and the firing strength of a rule (FSR).

http://static.apub.kr/journalsite/sites/kwra/2020-053-06/N0200530601/images/kwra_53_06_01_F8.jpg
Fig. 8.

Chromosome encoding for the ANFIS model

In this study, two methods were used as the firing strength of the AND rule, Minimum (Min) and Product (Prod), and two methods as the firing strength of the OR rule, Maximum (Max) and the probabilistic OR method (Probor).
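A brief sketch of how two of the membership function types and the four firing-strength operators named above evaluate; the input value and MF parameters are hypothetical:

```python
import math

def gaussmf(x, sigma, c):
    """Gaussian membership function with width sigma and center c."""
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Firing strength of a two-antecedent rule under the four operators in the text:
mu1 = gaussmf(2.0, 1.0, 1.5)          # membership degree of antecedent 1
mu2 = trimf(2.0, 0.0, 2.0, 4.0)       # membership degree of antecedent 2
and_prod = mu1 * mu2                  # AND via Prod
and_min = min(mu1, mu2)               # AND via Min
or_max = max(mu1, mu2)                # OR via Max
or_probor = mu1 + mu2 - mu1 * mu2     # OR via Probor
```

The GA gene DRT then selects between the AND and OR rule forms, and FSR selects which operator realizes them.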

A chromosome for the wavelet-based neural network (WNN) model was created from a series of genes (Fig. 9) to find: the input delay (D); the type of mother wavelet (TMW), chosen from the five most frequently used wavelet families, Haar (Har), Daubechies (Db), Coiflets (Coif), Symlets (Sym), and Biorthogonal (Bior); the number of neurons in the hidden layers (NHL1 and NHL2); the activation functions in the hidden and output layers (AFHL1, AFHL2, and AFOL), including the linear transfer function (Purelin), symmetric saturating linear transfer function (Satlins), log-sigmoid transfer function (Logsig), and hyperbolic tangent sigmoid transfer function (Tansig); and the initial connection weights and bias coefficients (IWB).

http://static.apub.kr/journalsite/sites/kwra/2020-053-06/N0200530601/images/kwra_53_06_01_F9.jpg
Fig. 9.

Chromosome encoding for the WNN model

3.3 Measures of accuracy

To assess the performance of the three machine learning approaches (i.e., ANN, ANFIS, and WNN) in forecasting multi-step (days) streamflow during the testing phase, four statistical indices (measures of accuracy) were applied: root mean square error (RMSE), Nash-Sutcliffe efficiency (NSE), correlation coefficient (R), and peak flow criteria (PFC).

RMSE varies from zero, which indicates perfect forecasting, to large values as the difference between observed and forecasted streamflow grows. NSE is considered for evaluating the predictive ability of hydrological models (Rezaie-Balf et al., 2019; Nash and Sutcliffe, 1970). R assesses the precision of hydrologic modeling and is used for comparing alternative models; a perfect match produces R = 1.0 (Kim and Kim, 2008).

These statistical indices (i.e., RMSE, NSE, and R) may not reflect model performance for extreme streamflow events (e.g., floods and droughts). Therefore, it is essential to evaluate the extreme distributions using PFC when forecasting extreme events (Rezaie-Balf et al., 2019), as PFC tracks how well a model reproduces the extremes. The RMSE, NSE, R, and PFC indices can be represented as Eqs. (4)~(7).

$$RMSE=\sqrt{\frac1N\sum_{i=1}^N\left[Qt_i-\widehat Qt_i\right]^2}$$ (4)
$$NSE=\left(1-\frac{{\displaystyle\sum_{i=1}^N}\left[Qt_i-\widehat Qt_i\right]^2}{{\displaystyle\sum_{i=1}^N}\left[Qt_i-\overline Qt\right]^2}\right)\times100$$ (5)
$$R=\frac{{\displaystyle\sum_{i=1}^N}\left[Qt_i-\overline Qt\right]\;\left[\widehat Qt_i-\widetilde Qt\right]}{\sqrt{{\displaystyle\sum_{i=1}^N}\left[Qt_i-\overline Qt\right]^2\;{\displaystyle\sum_{i=1}^N}\left[\widehat Qt_i-\widetilde Qt\right]^2}}$$ (6)
$$PFC=\frac{\sqrt[4]{{\displaystyle\sum_{i=1}^{T_p}}\left[Qt_i-\widehat Qt_i\right]^2\cdot Qt_i^2\;}}{\sqrt{{\displaystyle\sum_{i=1}^{T_p}}Qt_i^2}}$$ (7)

where $Qt_i$ denotes the observed streamflow, $\widehat Qt_i$ the forecasted streamflow, $\overline Qt$ the mean observed streamflow, $\widetilde Qt$ the mean forecasted streamflow, $N$ the number of data, and $T_p$ the number of peak streamflow values greater than one-third of the observed mean peak streamflow.
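Eqs. (4)~(7) translate directly into code. The four-point series below is hypothetical, and note that in the paper PFC is evaluated only over the T_p peak values, whereas this sketch applies the formula to whatever series it is given:

```python
import math

def rmse(obs, fc):
    """Eq. (4): root mean square error."""
    return math.sqrt(sum((o - f) ** 2 for o, f in zip(obs, fc)) / len(obs))

def nse(obs, fc):
    """Eq. (5): Nash-Sutcliffe efficiency, expressed in percent."""
    mean_o = sum(obs) / len(obs)
    num = sum((o - f) ** 2 for o, f in zip(obs, fc))
    den = sum((o - mean_o) ** 2 for o in obs)
    return (1.0 - num / den) * 100.0

def corr(obs, fc):
    """Eq. (6): correlation coefficient R."""
    mo, mf = sum(obs) / len(obs), sum(fc) / len(fc)
    num = sum((o - mo) * (f - mf) for o, f in zip(obs, fc))
    den = math.sqrt(sum((o - mo) ** 2 for o in obs) *
                    sum((f - mf) ** 2 for f in fc))
    return num / den

def pfc(obs, fc):
    """Eq. (7): peak flow criterion (pass only the T_p peak values)."""
    num = sum((o - f) ** 2 * o ** 2 for o, f in zip(obs, fc)) ** 0.25
    den = math.sqrt(sum(o ** 2 for o in obs))
    return num / den

obs = [10.0, 52.0, 31.0, 8.0]   # hypothetical observed streamflow (m3/sec)
fc = [12.0, 47.0, 33.0, 9.0]    # hypothetical forecasted streamflow (m3/sec)
```

Lower RMSE and PFC and higher NSE and R indicate a better forecast, which is the basis of the comparisons in Table 4.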

4. Results and Discussion

Table 1 presents the optimal structure of the ANN model found using GA. For (t+1) day forecasting: input delay = 3; neurons in the 1st and 2nd hidden layers = 9 and 8; activation functions = log-sigmoid (1st hidden layer), linear (2nd hidden layer), and linear (final layer). For (t+2) day forecasting: input delay = 3; neurons = 9 and 8; activation functions = log-sigmoid, log-sigmoid, and linear. For (t+3) day forecasting: input delay = 5; neurons = 5 and 8; activation functions = symmetric saturating linear, log-sigmoid, and linear.

Table 1.
The optimal structure for ANN model using GA
ANN parameters t+1 t+2 t+3
D 3 3 5
NHL1 9 9 5
NHL2 8 8 8
AFHL1 Logsig Logsig Satlins
AFHL2 Purelin Logsig Logsig
AFOL Purelin Purelin Purelin

Table 2 shows the optimal structure of the ANFIS model found using GA. For (t+1) day forecasting: input delay = 1; number of membership functions = 3; membership function type = Gaussian curve; firing strength of a rule = product; if-then rule type = AND. For (t+2) day forecasting: input delay = 2; number of membership functions = 3; membership function type = trapezoidal-shaped; firing strength = product; rule type = AND. For (t+3) day forecasting: input delay = 2; number of membership functions = 2; membership function type = Gaussian curve; firing strength = probabilistic OR; rule type = OR.

Table 2.
The optimal structure for ANFIS model using GA
ANFIS parameters t+1 t+2 t+3
D 1 2 2
NMF 3 3 2
TMF Gaussmf Trapmf Gaussmf
FSR Prod Prod Probor
DRT And And Or

Table 3 presents the optimal structure of the WNN model found using GA. For (t+1) day forecasting: input delay = 4; mother wavelet = Symlets 7; neurons in the 1st and 2nd hidden layers = 5 and 10; activation functions = log-sigmoid (1st hidden layer), log-sigmoid (2nd hidden layer), and linear (final layer). For (t+2) day forecasting: input delay = 4; mother wavelet = Daubechies 9; neurons = 8 and 10; activation functions = symmetric saturating linear, log-sigmoid, and linear. For (t+3) day forecasting: input delay = 5; mother wavelet = Symlets 6; neurons = 8 and 7; activation functions = linear, symmetric saturating linear, and linear.

Table 3.
The optimal structure for WNN model using GA
WNN parameters t+1 t+2 t+3
D 4 4 5
TMW Sym7 Db9 Sym6
NHL1 5 8 8
NHL2 10 10 7
AFHL1 Logsig Satlins Purelin
AFHL2 Logsig Logsig Satlins
AFOL Purelin Purelin Purelin

Table 4 presents the performances of the ANN, ANFIS, and WNN models for the different lead times (i.e., t+1, t+2, and t+3 days). The standalone models (i.e., ANN and ANFIS) yielded similar performances based on the RMSE, NSE, R, and PFC indices in the training and testing phases, but did not match the hybrid model (i.e., WNN). For example, the RMSE and PFC values of the WNN model (RMSE = 8.590 m3/sec and PFC = 0.252 for (t+1) day forecasting) were lower than those of the ANN (RMSE = 19.120 m3/sec and PFC = 0.446) and ANFIS (RMSE = 18.520 m3/sec and PFC = 0.444) models in the testing phase. In addition, the NSE and R values of the WNN model (NSE = 92.000% and R = 0.969) were higher than those of the ANN (NSE = 60.400% and R = 0.783) and ANFIS (NSE = 62.860% and R = 0.793) models. The hybrid model was thus superior to the standalone models.

As the lead time increased from (t+1) to (t+3) days, the accuracy of all three models decreased. The hybrid model used sub-time series obtained by the WT as model input, while the standalone models used the original time series without the WT. Table 4 suggests that using these sub-time series as input can improve the performance of the conventional standalone models. Fig. 10 shows the scatter diagrams for the ANN, ANFIS, and WNN models in the testing phase, and Fig. 11 presents the relative errors of peak flow for the three models. Figs. 10 and 11 confirm that the hybrid model outperformed the standalone models, and that the performance for (t+1) day forecasting was superior to that for (t+2) and (t+3) days. In addition, Figs. 12(a)~12(c) display the daily streamflow hydrographs for the ANN, ANFIS, and WNN models for (t+1) day forecasting in the testing phase.

Table 4.
Comparison between the performance results obtained by the models: ANN, ANFIS, and WNN in the training and testing phases
Training Testing
RMSE (m3/sec) NSE (%) R PFC RMSE (m3/sec) NSE (%) R PFC
ANN t+1 23.850 70.100 0.837 0.463 19.120 60.400 0.783 0.446
t+2 30.990 49.520 0.704 0.513 25.390 30.200 0.560 0.501
t+3 37.270 27.000 0.520 0.645 30.800 -2.710 0.352 0.552
ANFIS t+1 25.610 65.530 0.810 0.479 18.520 62.860 0.793 0.444
t+2 35.910 32.240 0.568 0.626 26.090 26.310 0.535 0.510
t+3 38.470 22.210 0.472 0.653 27.140 20.260 0.457 0.546
WNN t+1 3.530 99.350 0.997 0.103 8.590 92.000 0.969 0.252
t+2 5.230 98.560 0.993 0.128 13.360 80.680 0.917 0.318
t+3 7.950 96.680 0.983 0.103 12.950 81.850 0.907 0.391

The machine learning approaches were combined with a data pre-processing technique and an evolutionary optimization algorithm based on k-fold cross validation. In addition, the attempt at multi-step (days) streamflow forecasting differs from previous studies (Rezaie-Balf et al., 2019; Seo et al., 2018; Zakhrouf et al., 2018; Abdollahi et al., 2017; Ravansalar et al., 2017; Zakhrouf et al., 2016). The forecasts confirmed the accuracy and effectiveness of the developed models, based on the statistical indices and scatter diagrams. Further studies could combine other machine learning approaches (e.g., gene expression programming (GEP), support vector machines (SVM), extreme learning machines (ELM), model trees (MT), and multivariate adaptive regression splines (MARS)) with other data pre-processing techniques (e.g., principal component analysis (PCA), maximum entropy spectral analysis (MESA), empirical mode decomposition (EMD), and ensemble empirical mode decomposition (EEMD)) and other evolutionary optimization algorithms (e.g., particle swarm optimization (PSO), ant colony optimization (ACO), harmony search (HS), tabu search (TS), and the crow search algorithm (CSA)).

http://static.apub.kr/journalsite/sites/kwra/2020-053-06/N0200530601/images/kwra_53_06_01_F10.jpg
Fig. 10.

Scatter diagrams for the ANN, ANFIS, and WNN models in the testing phase

http://static.apub.kr/journalsite/sites/kwra/2020-053-06/N0200530601/images/kwra_53_06_01_F11.jpg
Fig. 11.

Relative errors of the peak flows for the ANN, ANFIS, and WNN models in the testing phase

http://static.apub.kr/journalsite/sites/kwra/2020-053-06/N0200530601/images/kwra_53_06_01_F12.jpg
Fig. 12.

Daily streamflow hydrographs for the ANN, ANFIS, and WNN models based on the (t+1) day forecasting in the testing phase

5. Conclusion

This study suggests three machine learning approaches, including artificial neural networks (ANN), adaptive neuro-fuzzy inference system (ANFIS), and wavelet-based neural networks (WNN) models for multi-step (days) streamflow forecasting in Seybous River basin, Algeria. The accuracy and effectiveness of developed models (i.e., ANN, ANFIS, and WNN) are evaluated based on four statistical indices, including root mean square error (RMSE), Nash-Sutcliffe efficiency (NSE), correlation coefficient (R), and peak flow criteria (PFC).

Based on the combination of the evolutionary optimization algorithm and k-fold cross validation, the statistical results of the hybrid (i.e., WNN) model are superior to those of the standalone (i.e., ANN and ANFIS) models for all lead times. Also, the results of (t+1) day streamflow forecasting are superior to those of (t+2) and (t+3) days, based on the statistical indices and scatter diagrams. The combination of a machine learning approach and a data pre-processing technique based on an evolutionary optimization algorithm and k-fold cross validation can be a potential tool for accurate multi-step (days) streamflow forecasting. Hybrid methodologies combining diverse machine learning approaches and data pre-processing techniques based on other evolutionary optimization algorithms and cross validation schemes are recommended for further studies.

References

1

Abdollahi, S., Raeisi, J., Khalilianpour, M., Ahmadi, F., and Kisi, O. (2017). "Daily mean streamflow prediction in perennial and non-perennial rivers using four data driven techniques." Water Resources Management, Vol. 31, No. 15, pp. 4855-4874.

10.1007/s11269-017-1782-7
2

Abrahart, R.J., and See, L. (2000). "Comparing neural network and autoregressive moving average techniques for the provision of continuous river flow forecasts in two contrasting catchments." Hydrological Processes, Vol. 14, No. 11‐12, pp. 2157-2172.

10.1002/1099-1085(20000815/30)14:11/12<2157::AID-HYP57>3.0.CO;2-S
3

Adamowski, J., and Sun, K. (2010). "Development of a coupled wavelet transform and neural network method for flow forecasting of non-perennial rivers in semi-arid watersheds." Journal of Hydrology, Vol. 390, No. 1, pp. 85-91.

10.1016/j.jhydrol.2010.06.033
4

Aichouri, I., Hani, A., Bougherira, N., Djabri, L., Chaffai, H., and Lallahem, S. (2015). "River flow model using artificial neural networks." Energy Procedia, Vol. 74, pp. 1007-1014.

10.1016/j.egypro.2015.07.832
5

Asadi, S., Shahrabi, J., Abbaszadeh, P., and Tabanmehr, S. (2013). "A new hybrid artificial neural networks for rainfall-runoff process modeling." Neurocomputing, Vol. 121, pp. 470-480.

10.1016/j.neucom.2013.05.023
6

Badrzadeh, H., Sarukkalige, R., and Jayawardena, A.W. (2013). "Impact of multi-resolution analysis of artificial intelligence models inputs on multi-step ahead river flow forecasting." Journal of Hydrology, Vol. 507, pp. 75-85.

10.1016/j.jhydrol.2013.10.017
7

Biswas, R.K., and Jayawardena, A.W. (2014). "Water level prediction by artificial neural network in a flashy transboundary river of Bangladesh." Global Nest Journal, Vol. 16, No. 2, pp. 432-444.

10.30955/gnj.001226
8

Chandwani, V., Vyas, S.K., Agrawal, V., and Sharma, G. (2015). "Soft computing approach for rainfall-runoff modelling: A review." Aquatic Procedia, Vol. 4, No. 2015, pp. 1054-1061.

10.1016/j.aqpro.2015.02.133
9

Cigizoglu, H.K. (2003). "Estimation, forecasting and extrapolation of river flows by artificial neural networks." Hydrological Sciences Journal, Vol. 48, No. 3, pp. 349-361.

10.1623/hysj.48.3.349.45288
10

Ghaemi, A., Rezaie-Balf, M., Adamowski, J., Kisi, O., and Quilty, J. (2019). "On the applicability of maximum overlap discrete wavelet transform integrated with MARS and M5 model tree for monthly pan evaporation prediction." Agricultural and Forest Meteorology, Vol. 278, p. 107647.

10.1016/j.agrformet.2019.107647
11

Gowda, C.C., and Mayya, S.G. (2014). "Comparison of back propagation neural network and genetic algorithm neural network for stream flow prediction." Journal of Computational Environmental Sciences, Vol. 2014, p. 290127.

10.1155/2014/290127
12

Günther, F., and Fritsch, S. (2010). "Neuralnet: Training of neural networks." The R journal, Vol. 2, No. 1, pp. 30-38.

10.32614/RJ-2010-006
13

Guo, J., Zhou, J., Qin, H., Zou, Q., and Li, Q. (2011). "Monthly streamflow forecasting based on improved support vector machine model." Expert Systems with Applications, Vol. 38, No. 10, pp. 13073-13081.

10.1016/j.eswa.2011.04.114
14

Holland, J.H. (1975). Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor, M.I., U.S.

15

Imrie, C.E., Durucan, S., and Korre, A. (2000). "River flow prediction using artificial neural networks: Generalisation beyond the calibration range." Journal of Hydrology, Vol. 233, No. 1, pp. 138-153.

10.1016/S0022-1694(00)00228-6
16

Jang, J.S.R. (1993). "ANFIS: Adaptive-network-based fuzzy inference system." IEEE Transactions on Systems, Man, and Cybernetics, Vol. 23, No. 3, pp. 665-685.

10.1109/21.256541
17

Jang, J.S.R., Sun, C.T., and Mizutani, E. (1997). Neuro-fuzzy and soft computing: A computational approach to learning and machine intelligence. Prentice-Hall, New Jersey, U.S.

10.1109/TAC.1997.633847
18

Kalteh, A.M. (2015). "Wavelet genetic algorithm-support vector regression (wavelet GA-SVR) for monthly flow forecasting." Water Rsources Mnagement, Vol. 29, pp. 1283-1293.

10.1007/s11269-014-0873-y
19

Kamruzzaman, M., Metcalfe, A.V., and Beecham, S. (2013). "Wavelet-based rainfall-stream flow models for the southeast Murray Darling basin." Journal of Hydrologic Engineering, Vol. 19, No. 7, pp. 1283-1293.
10.1061/(ASCE)HE.1943-5584.0000894

Keskin, M.E., Taylan, D., and Terzi, Ö. (2006). "Adaptive neural-based fuzzy inference system (ANFIS) approach for modelling hydrological time series." Hydrological Sciences Journal, Vol. 51, No. 4, pp. 588-598.
10.1623/hysj.51.4.588

Kim, S., and Kim, H.S. (2008). "Neural networks and genetic algorithm approach for nonlinear evaporation and evapotranspiration modeling." Journal of Hydrology, Vol. 351, No. 3, pp. 299-317.
10.1016/j.jhydrol.2007.12.014

Kisi, O. (2011). "A combined generalized regression neural network wavelet model for monthly streamflow prediction." KSCE Journal of Civil Engineering, Vol. 15, No. 8, pp. 1469-1479.
10.1007/s12205-011-1004-4

Krishna, B., Satyaji Rao, Y.R., and Nayak, P.C. (2011). "Time series modeling of river flow using wavelet neural networks." Journal of Water Resource and Protection, Vol. 3, pp. 50-59.
10.4236/jwarp.2011.31006

Labat, D., Ababou, R., and Mangin, A. (2000). "Rainfall-runoff relations for karstic springs. Part II: Continuous wavelet and discrete orthogonal multiresolution analyses." Journal of Hydrology, Vol. 238, pp. 149-178.
10.1016/S0022-1694(00)00322-X

Liu, W.C., and Chung, C.E. (2014). "Enhancing the predicting accuracy of the water stage using a physical-based model and an artificial neural network-genetic algorithm in a river system." Water, Vol. 6, No. 6, pp. 1642-1661.
10.3390/w6061642

Lohani, A.K., Goel, N.K., and Bhatia, K.K.S. (2006). "Takagi-Sugeno fuzzy inference system for modeling stage-discharge relationship." Journal of Hydrology, Vol. 331, pp. 146-160.
10.1016/j.jhydrol.2006.05.007

Mallat, S.G. (1989). "A theory for multiresolution signal decomposition: The wavelet representation." IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 11, No. 7, pp. 674-693.
10.1109/34.192463

Mamdani, E.H., and Assilian, S. (1975). "An experiment in linguistic synthesis with a fuzzy logic controller." International Journal of Man-Machine Studies, Vol. 7, No. 1, pp. 1-13.
10.1016/S0020-7373(75)80002-2

Martins, O.Y., Sadeeq, M.A., and Ahaneku, I.E. (2011). "ARMA modelling of Benue river flow dynamics: Comparative study of PAR model." Open Journal of Modern Hydrology, Vol. 1, pp. 1-9.
10.4236/ojmh.2011.11001

Moosavi, V., Vafakhah, M., Shirmohammadi, B., and Behnia, N. (2013). "A wavelet-ANFIS hybrid model for groundwater level forecasting for different prediction periods." Water Resources Management, Vol. 27, pp. 1301-1321.
10.1007/s11269-012-0239-2

Moradkhani, H., Hsu, K.L., Gupta, H.V., and Sorooshian, S. (2004). "Improved streamflow forecasting using self-organizing radial basis function artificial neural networks." Journal of Hydrology, Vol. 295, No. 1, pp. 246-262.
10.1016/j.jhydrol.2004.03.027

Nash, J.E., and Sutcliffe, J.V. (1970). "River flow forecasting through conceptual models part I: A discussion of principles." Journal of Hydrology, Vol. 10, pp. 282-290.
10.1016/0022-1694(70)90255-6

Nasr, A., and Bruen, M. (2008). "Development of neuro-fuzzy models to account for temporal and spatial variations in a lumped rainfall-runoff model." Journal of Hydrology, Vol. 349, pp. 277-290.
10.1016/j.jhydrol.2007.10.060

Nourani, V., Alami, M.T., and Aminfar, M.H. (2009). "A combined neural-wavelet model for prediction of Ligvanchai watershed precipitation." Engineering Applications of Artificial Intelligence, Vol. 22, No. 3, pp. 466-472.
10.1016/j.engappai.2008.09.003

Nourani, V., Baghanam, A.H., Adamowski, J., and Kisi, O. (2014). "Applications of hybrid wavelet-artificial intelligence models in hydrology: A review." Journal of Hydrology, Vol. 514, pp. 358-377.
10.1016/j.jhydrol.2014.03.057

Nourani, V., Hosseini, B.A., Adamowski, J., and Gebremicheal, M. (2013). "Using self-organizing maps and wavelet transforms for space-time pre-processing of satellite precipitation and runoff data in neural network based rainfall-runoff modeling." Journal of Hydrology, Vol. 476, pp. 228-243.
10.1016/j.jhydrol.2012.10.054

Partal, T. (2009). "Modelling evapotranspiration using discrete wavelet transform and neural networks." Hydrological Processes, Vol. 23, No. 25, pp. 3545-3555.
10.1002/hyp.7448

Partal, T., and Kisi, O. (2007). "Wavelet and neuro-fuzzy conjunction model for precipitation forecasting." Journal of Hydrology, Vol. 342, pp. 199-212.
10.1016/j.jhydrol.2007.05.026

Quilty, J., and Adamowski, J. (2018). "Addressing the incorrect usage of wavelet-based hydrological and water resources forecasting models for real-world applications with best practices and a new forecasting framework." Journal of Hydrology, Vol. 563, pp. 336-353.
10.1016/j.jhydrol.2018.05.003

Ravansalar, M., Rajaee, T., and Kisi, O. (2017). "Wavelet-linear genetic programming: A new approach for modeling monthly streamflow." Journal of Hydrology, Vol. 549, pp. 461-475.
10.1016/j.jhydrol.2017.04.018

Rezaie-Balf, M., Kim, S., Fallah, H., and Alaghmand, S. (2019). "Daily river flow forecasting using ensemble empirical mode decomposition based heuristic regression models: Application on the perennial rivers in Iran and South Korea." Journal of Hydrology, Vol. 572, pp. 470-485.
10.1016/j.jhydrol.2019.03.046

Sahay, R.R., and Srivastava, A. (2014). "Predicting monsoon floods in rivers embedding wavelet transform, genetic algorithm and neural network." Water Resources Management, Vol. 28, pp. 301-317.
10.1007/s11269-013-0446-5

Samsudin, R., Saad, P., and Shabri, A. (2011). "River flow time series using least squares support vector machines." Hydrology and Earth System Sciences, Vol. 15, pp. 1835-1852.
10.5194/hess-15-1835-2011

Seo, Y., Kim, S., Kisi, O., and Singh, V.P. (2015). "Daily water level forecasting using wavelet decomposition and artificial intelligence techniques." Journal of Hydrology, Vol. 520, pp. 224-243.
10.1016/j.jhydrol.2014.11.050

Seo, Y., Kim, S., and Singh, V. (2018). "Machine learning models coupled with variational mode decomposition: A new approach for modeling daily rainfall-runoff." Atmosphere, Vol. 9, No. 7, p. 251.
10.3390/atmos9070251

Shoaib, M., Shamseldin, A.Y., and Melville, B.W. (2014). "Comparative study of different wavelet based neural network models for rainfall-runoff modeling." Journal of Hydrology, Vol. 515, pp. 47-58.
10.1016/j.jhydrol.2014.04.055

Sivakumar, B., Berndtsson, R., Olsson, J., and Jinno, K. (2001). "Evidence of chaos in the rainfall-runoff process." Hydrological Sciences Journal, Vol. 46, No. 1, pp. 131-145.
10.1080/02626660109492805

Stone, M. (1974). "Cross-validatory choice and assessment of statistical predictions." Journal of the Royal Statistical Society: Series B (Methodological), Vol. 36, No. 2, pp. 111-147.
10.1111/j.2517-6161.1974.tb00994.x

Takagi, T., and Sugeno, M. (1985). "Fuzzy identification of systems and its applications to modeling and control." IEEE Transactions on Systems, Man, and Cybernetics, Vol. 15, No. 1, pp. 116-132.
10.1109/TSMC.1985.6313399

Talei, A., Chye, C.L.H., and Wong, T.S.W. (2010). "Evaluation of rainfall and discharge inputs used by adaptive network-based Fuzzy Inference Systems (ANFIS) in rainfall-runoff modeling." Journal of Hydrology, Vol. 391, pp. 248-262.
10.1016/j.jhydrol.2010.07.023

Tiwari, M.K., and Chatterjee, C. (2010). "Development of an accurate and reliable hourly flood forecasting model using wavelet-bootstrap-ANN (WBANN) hybrid approach." Journal of Hydrology, Vol. 394, pp. 458-470.
10.1016/j.jhydrol.2010.10.001

Zakhrouf, M., Bouchelkia, H., and Stamboul, M. (2016). "Neuro-wavelet (WNN) and neuro-fuzzy (ANFIS) systems for modeling hydrological time series in arid areas. A case study: The catchment of Aïn Hadjadj (Algeria)." Desalination and Water Treatment, Vol. 57, No. 37, pp. 17182-17194.
10.1080/19443994.2015.1085908

Zakhrouf, M., Bouchelkia, H., Stamboul, M., Kim, S., and Heddam, S. (2018). "Time series forecasting of river flow using an integrated approach of wavelet multi-resolution analysis and evolutionary data-driven models. A case study: Sebaou River (Algeria)." Physical Geography, Vol. 39, No. 6, pp. 506-522.
10.1080/02723646.2018.1429245

Zakhrouf, M., Bouchelkia, H., Stamboul, M., and Kim, S. (2020). "Novel hybrid approaches based on evolutionary strategy for streamflow forecasting in the Chellif River, Algeria." Acta Geophysica, Vol. 68, No. 1, pp. 167-180.
10.1007/s11600-019-00380-5

Zhang, G., Patuwo, B.E., and Hu, M.Y. (1998). "Forecasting with artificial neural networks: The state of the art." International Journal of Forecasting, Vol. 14, No. 1, pp. 35-62.
10.1016/S0169-2070(97)00044-7

Zhao, X., Guo, X., Luo, J., and Tan, X. (2018). "Efficient detection method for foreign fibers in cotton." Information Processing in Agriculture, Vol. 5, No. 3, pp. 320-328.
10.1016/j.inpa.2018.04.002