A modified nonlinear conjugate gradient algorithm for unconstrained optimization
Citation: Babaie-Kafaki, S., Ghanbari, R. and Mahdavi-Amiri, N. (2009). A modified nonlinear conjugate gradient algorithm for unconstrained optimization. Maltepe Üniversitesi. p. 359.
Conjugate gradient (CG) algorithms play a special role in solving large-scale nonlinear optimization problems with smooth objective functions f : R^n → R. Search directions in CG algorithms are generated by the sequence d_1 = −∇f(x_1) and d_k = −∇f(x_k) + β_k d_{k−1}, for k ≥ 2. By introducing different conjugacy conditions, researchers have proposed different formulas for β_k, and the related CG algorithms may behave quite differently on general functions. Recently, Dai and Liao proposed new formulas for β_k based on the standard secant equation. Building on Dai and Liao's idea, researchers have made further efforts to obtain new formulas for β_k [2, 4, 5]. Here, we first modify the secant equation proposed by Zhang and Xu, and then, using our modified secant equation and Dai and Liao's approach, we propose a new conjugacy condition and obtain a new formula for β_k. Under proper conditions, our CG algorithm can be shown to be globally convergent for general functions. Numerical results show that our algorithm is competitive with, and sometimes preferable to, some recently proposed CG algorithms.
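The recursion above (d_1 = −∇f(x_1), d_k = −∇f(x_k) + β_k d_{k−1}) can be sketched as follows. Since the abstract does not state the paper's modified β_k formula, the classical Fletcher–Reeves β_k and a simple Armijo backtracking line search are used here as stand-ins; both choices, and the function name `nonlinear_cg`, are illustrative assumptions, not the authors' method.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, max_iter=500, tol=1e-8):
    """Minimal nonlinear CG sketch: d_1 = -grad f(x_1),
    d_k = -grad f(x_k) + beta_k d_{k-1} for k >= 2.

    beta_k here is Fletcher-Reeves, a stand-in for the paper's
    modified formula, which the abstract does not give.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # d_1 = -grad f(x_1)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                       # safeguard: restart if d is not a descent direction
            d = -g
        # Armijo backtracking line search along d
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves beta_k (stand-in)
        d = -g_new + beta * d                # d_k = -grad f(x_k) + beta_k d_{k-1}
        x, g = x_new, g_new
    return x
```

For example, on a convex quadratic f(x) = ½ xᵀAx − bᵀx with A symmetric positive definite, the iterates converge to the solution of Ax = b.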
Source: International Conference of Mathematical Sciences