Hybrid back-propagation training with evolutionary strategies

Abstract

This work presents a hybrid algorithm for neural network training that combines the back-propagation (BP) method with an evolutionary algorithm. In the proposed approach, BP updates the network connection weights while a (1+1) Evolution Strategy (ES) adaptively modifies the main learning parameters. The algorithm can incorporate different BP variants, such as gradient descent with adaptive learning rate (GDA), in which case the learning rate is dynamically adjusted both by the stochastic (1+1)-ES and by the deterministic adaptive rules of GDA, a combined optimization strategy known as a memetic search. The proposal is tested on three different domains, time series prediction, classification and biometric recognition, using several problem instances. Experimental results show that the hybrid algorithm can substantially improve upon the standard BP methods. In conclusion, the proposed approach provides a simple extension to basic BP training that improves performance and lessens the need for parameter tuning in real-world problems.
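The core idea, pairing gradient-descent weight updates with a (1+1)-ES that mutates the learning rate and keeps the mutation only when it lowers the loss, can be sketched as follows. This is a minimal illustrative reconstruction on a toy least-squares problem, not the authors' implementation; the function names, the log-normal mutation, and all hyperparameter values are assumptions.

```python
import random

def loss(w, data):
    # Mean squared error of the one-parameter model y = w * x.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w, data):
    # Gradient of the mean squared error with respect to w.
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

def train(data, epochs=100, lr=0.3, sigma=0.2, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        # Parent: one gradient-descent (BP) step with the current learning rate.
        w_parent = w - lr * grad(w, data)
        # Offspring: mutate the learning rate and take the same step with it.
        lr_child = lr * (2 ** rng.gauss(0, sigma))
        w_child = w - lr_child * grad(w, data)
        # (1+1) selection: adopt the mutated learning rate only if it did better.
        if loss(w_child, data) <= loss(w_parent, data):
            w, lr = w_child, lr_child
        else:
            w = w_parent
    return w, lr

data = [(x, 3.0 * x) for x in (-2, -1, 1, 2)]  # noiseless data, true slope 3
w, lr = train(data)
```

In this sketch the weight update is always applied, so the ES only steers the step size; mutations that overshoot raise the offspring loss and are discarded, which mimics how the hybrid lessens the need for manual learning-rate tuning.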

  1. José Parra, Leonardo Trujillo and Patricia Melin. Hybrid back-propagation training with evolutionary strategies. Soft Computing, pages 1-12, 2013. DOI: 10.1007/s00500-013-1166-8. BibTeX:

    @article{parra2013hybrid,
    	year = 2013,
    	issn = "1432-7643",
    	journal = "Soft Computing",
    	doi = "10.1007/s00500-013-1166-8",
    	title = "Hybrid back-propagation training with evolutionary strategies",
    	url = "http://dx.doi.org/10.1007/s00500-013-1166-8",
    	publisher = "Springer Berlin Heidelberg",
    	keywords = "Neural networks; Back-propagation; Evolutionary strategies",
    	author = "Parra, José and Trujillo, Leonardo and Melin, Patricia",
    	pages = "1-12",
    	language = "English"
    }
    
