
## Global Optimization (GOpt) Module

HYDRUS historically implemented a **Marquardt-Levenberg** type parameter estimation technique [Šimůnek and Hopmans, 2002] for inverse estimation of soil hydraulic [Hopmans et al., 2002] and/or solute transport and reaction [Šimůnek et al., 2002] parameters from measured transient or steady-state flow and/or transport data.

The **GOpt** module includes three global optimization algorithms:

- Particle swarm optimization (PSO) [Kennedy and Eberhart, 1995; Shi and Eberhart, 1998]
- Comprehensive learning particle swarm optimization (CLPSO) [Liang et al., 2006]
- Gradient-based comprehensive learning particle swarm optimization (G-CLPSO) [Brunetti et al., 2022]

### Particle Swarm Optimization (PSO)

PSO is a gradient-free search strategy based on a social-psychological metaphor, in which the Sp [-] individuals of a swarm interact with each other to reach an optimum state [Kennedy and Eberhart, 1995; Shi and Eberhart, 1998]. The j-th particle’s position Xj is a vector whose components are the values of the calibrated parameters; it is updated using the following equations:

see eq. 9.3 in the Technical Manual

where Vj is the velocity of the j-th particle, ω is an inertia weight that balances the local and global search capability of the swarm [-], U is a random number sampled from a uniform distribution between 0 and 1 [-], φp and φg are cognitive and social coefficients [-], respectively, Xjbest is the best position recorded for the j-th particle [-], and Xgbest is the best position recorded in the entire swarm [-]. Preliminary indications on how to set Sp, ω, φp, and φg can be found in Pedersen [2010]. However, these parameters should be adapted to the specific calibration problem. The PSO algorithm is recommended for low- to moderate-dimensional inverse problems (e.g., < 20 parameters).
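The velocity and position update described above can be sketched in a few lines of code. The following is a minimal, self-contained illustration of standard PSO (not the HYDRUS implementation); the coefficient values and the function and variable names are illustrative assumptions, not GOpt defaults.

```python
import random

def pso(f, lb, ub, swarm_size=20, iters=100,
        omega=0.7, phi_p=1.5, phi_g=1.5, seed=0):
    """Minimal PSO minimizing f over the box [lb, ub]^d (illustrative sketch)."""
    rng = random.Random(seed)
    d = len(lb)
    # uniform random initialization within the parameter bounds
    X = [[rng.uniform(lb[i], ub[i]) for i in range(d)] for _ in range(swarm_size)]
    V = [[0.0] * d for _ in range(swarm_size)]     # zero initial velocities
    pbest = [x[:] for x in X]                      # personal best positions
    pval = [f(x) for x in X]                       # personal best objective values
    g = min(range(swarm_size), key=lambda j: pval[j])
    gbest, gval = pbest[g][:], pval[g]             # global best of the swarm
    for _ in range(iters):
        for j in range(swarm_size):
            for i in range(d):
                up, ug = rng.random(), rng.random()   # U ~ uniform(0, 1)
                # inertia + cognitive pull toward Xjbest + social pull toward Xgbest
                V[j][i] = (omega * V[j][i]
                           + phi_p * up * (pbest[j][i] - X[j][i])
                           + phi_g * ug * (gbest[i] - X[j][i]))
                X[j][i] += V[j][i]
            val = f(X[j])
            if val < pval[j]:                      # update personal best
                pbest[j], pval[j] = X[j][:], val
                if val < gval:                     # update global best
                    gbest, gval = X[j][:], val
    return gbest, gval
```

On a simple convex test function such as the 3-D sphere, a swarm of this size typically collapses onto the optimum within a few dozen iterations, which is why PSO works well for low-dimensional problems.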

### Comprehensive Learning Particle Swarm Optimization (CLPSO)

CLPSO addresses the premature convergence problems observed when using PSO for multimodal, high-dimensional problems [Liang et al., 2006]. The algorithm uses a different learning strategy that preserves individual behavior in the swarm and allows different movements in different dimensions. The j-th particle’s movement in the i-th dimension is described as follows:

see eq. 9.4 in the Technical Manual

where Vj,i and Xj,i are the velocity and position of the i-th parameter of the j-th particle, respectively; ω is an inertia weight that is reduced as the number of iterations grows to favor exploitation [-], Ui is a random number sampled from a uniform distribution between 0 and 1 [-], c is a learning parameter typically set to 1.4995 [-], and fj(i) defines which particle’s personal best the particle j should follow in the i-th dimension. The exemplar can be the corresponding dimension of any particle’s personal best, including particle j’s own, and the decision depends on the probability Pcj, referred to as the learning probability [Liang et al., 2006]. A random number is generated for each dimension of the particle j. If this number is lower than Pcj, the corresponding dimension learns from other particles based on a tournament selection procedure; otherwise, it learns from the particle’s own personal best. The learning process is continuously monitored: if a particle ceases to improve for a certain number of iterations (i.e., the refreshing gap, m [-]), the exemplar from which the particle is learning is reassigned. Indications on how to set Sp and m can be found in Liang et al. [2006] and Brunetti et al. [2022].

The main advantage of the CLPSO learning strategy is that all particles can potentially be used to guide the search direction of other individuals, and this learning process can differ for each dimension of a particle. Due to its high exploration capabilities, CLPSO should be used for high-dimensional inverse problems (i.e., > 20 parameters), especially when a high correlation between parameters is expected.
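The per-dimension exemplar assignment is the core of the CLPSO learning strategy. The sketch below shows one way the learning-probability decision and the two-particle tournament could look; the function name and signature are hypothetical, not GOpt's internals.

```python
import random

def assign_exemplars(j, swarm_size, dim, Pc, pval, rng):
    """For particle j, pick one exemplar particle index per dimension (CLPSO-style).
    pval holds the personal-best objective values of all particles (lower is better).
    With probability Pc a dimension learns from the winner of a two-particle
    tournament; otherwise it learns from particle j's own personal best."""
    exemplar = []
    for _ in range(dim):
        if rng.random() < Pc:
            a, b = rng.sample(range(swarm_size), 2)   # tournament of two particles
            exemplar.append(a if pval[a] < pval[b] else b)
        else:
            exemplar.append(j)                        # learn from its own pbest
    return exemplar
```

Because each dimension can follow a different particle, any individual in the swarm can guide part of another particle's search, which is what gives CLPSO its exploration capability.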

### Gradient-based Comprehensive Learning Particle Swarm Optimization (G-CLPSO)

G-CLPSO increases the exploitation features of CLPSO by combining it with the Marquardt-Levenberg algorithm [Brunetti et al., 2022]. In particular, CLPSO is used first for NL iterations [-]. Then, one random individual Xrand is selected from the swarm and used as the starting point for the Marquardt-Levenberg local search. If the fitness value calculated for the local optimum is lower than that of the corresponding personal best, the personal best is replaced by the optimum found by the local search. By doing so, the new personal best can enter the CLPSO tournament selection procedure and improve the swarm without significantly reducing its diversity. The main advantage is the possibility of restarting the Marquardt-Levenberg algorithm every NL iterations from a different point. Indications on how to set Sp and m can be found in Brunetti et al. [2022]. Even though G-CLPSO can be used for any inverse modeling problem, care must be taken if a very high correlation between parameters is expected, since the Marquardt-Levenberg algorithm can exhibit very slow convergence in these conditions.
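The hybrid refinement step can be summarized schematically as follows. Here `local_search` is a stand-in callable for the Marquardt-Levenberg local search, and the function name and signature are assumptions made for illustration only.

```python
import random

def local_refinement(pbest, pval, f, local_search, rng):
    """One G-CLPSO-style refinement step (schematic): start a local search
    from a random particle's personal best and accept the result only if it
    improves that particle's personal-best fitness. Returns the chosen index."""
    r = rng.randrange(len(pbest))          # select one random individual
    x_loc = local_search(f, pbest[r])      # stand-in for the ML local search
    v_loc = f(x_loc)
    if v_loc < pval[r]:                    # accept only improvements
        pbest[r] = x_loc
        pval[r] = v_loc
    return r
```

Because only one personal best is replaced, and only when the local search improves it, the refined point can enter the tournament selection while the rest of the swarm retains its diversity.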

### Initialization, Boundary Handling, and Convergence Criterion

Initialization and Boundary Handling: Swarm particles are initialized using a multidimensional uniform random distribution spanning the user-defined lower, lb, and upper, ub, parameter bounds. To embody prior knowledge about the calibrated parameters (e.g., laboratory-estimated values) in the optimization, the initial estimate defined in the GUI is included in the initial population. The initial velocity is set to zero for all particles. If, during the optimization process, a proposed particle position falls outside the allowed parameter space, its velocity is set to zero and a reflect boundary-handling technique is used to repair its position:

see eq. on page 151 in the Technical Manual
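A reflect repair of this kind can be sketched as below. This is one common boundary-handling scheme (mirror the violating coordinate about the violated bound, reset its velocity, and clip if the mirrored point still overshoots); the exact rule used by GOpt is the one given in the Technical Manual.

```python
def reflect(x, v, lb, ub):
    """Repair an out-of-bounds position by reflection and zero the velocity
    of each violating coordinate (illustrative sketch of reflect handling)."""
    x_new, v_new = x[:], v[:]
    for i in range(len(x)):
        if x_new[i] < lb[i] or x_new[i] > ub[i]:
            v_new[i] = 0.0                             # velocity reset on violation
            if x_new[i] < lb[i]:
                x_new[i] = lb[i] + (lb[i] - x_new[i])  # mirror about lower bound
            else:
                x_new[i] = ub[i] - (x_new[i] - ub[i])  # mirror about upper bound
            # if the reflected point still overshoots, clip to the bound
            x_new[i] = min(max(x_new[i], lb[i]), ub[i])
    return x_new, v_new
```

Coordinates already inside the bounds are left untouched, so a particle that violates only one parameter keeps its momentum in all other dimensions.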

During the optimization, non-convergent model runs are identified and a large positive value is assigned to the objective function Ф. To improve convergence when optimizing soil hydraulic parameters, the value of the minimum allowed pressure head at the soil surface, hCritA, is internally adjusted so that the corresponding water content is at least 0.005 higher than the residual water content. Nevertheless, it is strongly recommended that the number of non-convergent runs be minimized by properly implementing the numerical model (i.e., appropriate spatial and temporal discretization, physically realistic parameter bounds).

Convergence Criterion: The swarm is evolved for a user-defined maximum number of iterations, after which the algorithm stops. During the iterations, the algorithms are considered to have converged if all particles’ best positions and the global best position recorded for the entire swarm simultaneously exhibit negligible improvements over the last Nc consecutive iterations. An improvement is considered negligible if the relative change in the objective function between two consecutive iterations is below a user-defined tolerance value rtol. Once the maximum number of iterations is reached or convergence is formally achieved, the resulting global optimum is used as the starting point for the Marquardt-Levenberg algorithm, which further refines the optimization and calculates other important metrics (e.g., the correlation matrix and confidence intervals).
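The relative-change stopping rule for a single objective-value sequence can be sketched as follows. In GOpt the rule is applied to all personal bests and the global best simultaneously; this hypothetical helper checks just one sequence (e.g., the global best history) under that assumption.

```python
def converged(history, Nc, rtol):
    """Return True if the last Nc consecutive relative changes of the
    objective value are all below rtol (illustrative stopping check).
    history holds the best objective value recorded at each iteration."""
    if len(history) < Nc + 1:
        return False                       # not enough iterations recorded yet
    for prev, curr in zip(history[-Nc - 1:-1], history[-Nc:]):
        # relative change between two consecutive iterations
        rel = abs(curr - prev) if prev == 0.0 else abs(curr - prev) / abs(prev)
        if rel >= rtol:
            return False                   # a non-negligible improvement occurred
    return True
```

Requiring Nc consecutive negligible changes, rather than a single one, prevents the search from stopping on a temporarily stalled iteration.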

### References

Brunetti, G., C. Stumpp, and J. Šimůnek, Balancing exploitation and exploration: A novel hybrid global-local optimization strategy for hydrological model calibration, Environmental Modelling & Software, 150, 105341, doi: 10.1016/j.envsoft.2022.105341, 2022.

Kennedy, J., and R. Eberhart, Particle swarm optimization, in: Proceedings of ICNN’95 - International Conference on Neural Networks, IEEE, pp. 1942–1948, doi: 10.1109/ICNN.1995.488968, 1995.

Liang, J. J., A. K. Qin, P. N. Suganthan, and S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Trans. Evol. Comput., 10, 281–295, doi: 10.1109/TEVC.2005.857610, 2006.

Shi, Y., and R. Eberhart, A modified particle swarm optimizer, in: Proceedings of the 1998 IEEE International Conference on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 69–73, doi: 10.1109/ICEC.1998.699146, 1998.