Research

Optimization is one of the oldest challenges in science and engineering. A primitive example of an optimization problem (though one many people still tackle daily) is finding the shortest distance or path between two places. This problem may be constrained in some way; for example, one may only be able to travel on public roads to reach the destination. Constraints, of course, greatly affect the optimization process and its outcome. In the Science, Technology, Engineering and Mathematics (STEM) fields, optimization is an overarching goal, especially in the current era of budget austerity. An optimization problem in these fields may take the form of minimizing the weight of an aircraft's frame without sacrificing strength, or shaping an airfoil to produce the best lift-to-drag ratio. Similarly, aerospace vehicle trajectory optimization is an important problem that has been studied for both aircraft and spacecraft.

Recently, the use of Evolutionary Computation (EC) in the applied sciences and engineering has exploded. As an engineer, I am most familiar with Evolutionary Algorithms (EAs) and Evolutionary Strategies (ESs), though other examples, such as neural networks, do exist. In their most basic formulations, EAs and ESs are population-based heuristic search techniques that often model physical or biological processes such as mating, mutation, or swarm behavior. It is generally accepted that the principal differences are that ESs 1) employ deterministic search steps and 2) almost always work with vectors of real numbers that represent the solutions, while EAs use stochastic processes coupled with selection to produce ever-improving potential solutions. Both are useful tools for optimization when one is unwilling or unable to employ traditional gradient-based techniques.
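To make the population-based loop concrete, a minimal elitist EA (mutate, evaluate, select) can be sketched in a few lines. The sphere test function and all parameter values below are my own illustrative choices, not taken from any of the works cited:

```python
import random

def sphere(x):
    """Simple test objective: sum of squares, with its minimum at the origin."""
    return sum(xi * xi for xi in x)

def simple_ea(f, dim=2, mu=10, lam=20, sigma=0.3, generations=100, seed=1):
    """Minimal (mu + lambda)-style evolutionary loop: mutate, evaluate, select."""
    rng = random.Random(seed)
    # Initialize a random parent population in [-5, 5]^dim.
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        # Mutation: each offspring is a Gaussian perturbation of a random parent.
        offspring = []
        for _ in range(lam):
            parent = rng.choice(pop)
            offspring.append([xi + rng.gauss(0, sigma) for xi in parent])
        # Selection: keep the mu best of parents plus offspring (elitist).
        pop = sorted(pop + offspring, key=f)[:mu]
    return pop[0]

best = simple_ea(sphere)
print(sphere(best))  # close to 0 for this smooth, unimodal test function
```

Even this toy version shows the two ingredients the paragraph above highlights: stochastic variation (the Gaussian mutations) coupled with selection pressure.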

Aided by [1], it is possible to present a brief overview of the early history of EC. The origins of EC date back to the late 1950s; examples from this era include [2], [3], [4] and [5]. Mostly due to the lack of processing power, the area was largely neglected in the early literature. This situation began to change during the 1970s, when the foundational works of Holland [6], Rechenberg [7], Schwefel [8] and Fogel [9] led to a paradigm shift and a renewal of interest in EC. Assuming Moore's Law [10] continues to hold, one could reasonably expect interest in EC to continue unabated into the near future.

Recently, I compared the performance of three EC algorithms [11] on a classic problem in the literature, the Hohmann transfer. The Hohmann transfer is well studied and has proven solutions (see, e.g., Hohmann's original work [12], proofs using the calculus of variations in [13] and [14], Green's Theorem [15], graphical construction [16], Lagrange multipliers [17], and elementary calculus [18] and [19]). As such, it makes a good test problem: it enables one to debug the implementation of these algorithms and verify their efficacy against a well-known solution. This is important because EC algorithms are non-deterministic by design, so bugs or logic errors can be very difficult to locate.
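The known solution that the algorithms are benchmarked against follows from elementary two-body relations. As a sketch (the low-Earth-orbit-to-geostationary radii below are illustrative values, not figures from [11]):

```python
import math

def hohmann_delta_v(r1, r2, mu):
    """Total delta-v for a two-impulse Hohmann transfer between coplanar
    circular orbits of radii r1 and r2, for gravitational parameter mu.
    These are the standard textbook formulas."""
    # First impulse: circular orbit at r1 onto the transfer ellipse.
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
    # Second impulse: transfer ellipse onto the circular orbit at r2.
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
    return abs(dv1) + abs(dv2)

# Example: low Earth orbit (~300 km altitude) to geostationary orbit.
mu_earth = 398600.4418                           # km^3/s^2
total = hohmann_delta_v(6678.0, 42164.0, mu_earth)
print(round(total, 3))                           # roughly 3.9 km/s
```

Because this closed-form answer exists, any candidate solution returned by an EC algorithm can be checked against it directly, which is exactly what makes the problem useful for debugging.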

The first algorithm explored was Particle Swarm Optimization (PSO), first introduced by Kennedy and Eberhart in 1995 [20]. PSO is classified as a swarm intelligence method and is easy to use because very few parameters need to be adjusted, unlike in the more commonly seen genetic algorithms. A very simple description of PSO is given by Poli [21]. The work by Pontani and Conway [22] formed the basis of my comparison; the authors of [22], however, focused only on the implementation and use of PSO, while my work examines this and other algorithms.
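The PSO update rule itself is compact enough to sketch in full. The following is a minimal, self-contained illustration of the classic velocity/position update; the sphere test function and all parameter values are my own illustrative assumptions, not the settings used in [20] or [22]:

```python
import random

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Minimal PSO: each particle remembers its personal best position and
    the swarm tracks a global best; velocities blend inertia (w), a pull
    toward the personal best (c1), and a pull toward the global best (c2)."""
    rng = random.Random(seed)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]
    pbest_f = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pbest_f[i]:         # update personal best
                pbest[i], pbest_f[i] = list(x[i]), fx
                if fx < gbest_f:        # update global best
                    gbest, gbest_f = list(x[i]), fx
    return gbest, gbest_f

best, best_f = pso(lambda p: sum(xi * xi for xi in p))
```

Note that the only tunable quantities are w, c1, and c2 (plus population size), which is the "very few parameters" property mentioned above.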

The next algorithm explored was Bacterial Foraging Optimization (BFO), first proposed by Passino in [23]. In his original paper, he described the biology and physics underlying the foraging (chemotactic) behavior of the bacterium Escherichia coli (E. coli). E. coli occurs naturally as part of the normal flora of the gastrointestinal system and produces vitamin K2 [24]. The algorithm simulates the behavior of bacteria searching for nutrients; an excellent description of this process can be found in [25].
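The core of that nutrient search is the tumble-and-swim chemotactic step. The sketch below illustrates only that step; Passino's full algorithm also includes reproduction, elimination-dispersal, and cell-to-cell swarming terms, all omitted here, and the test function and parameters are my own illustrative choices:

```python
import math
import random

def bfo_chemotaxis(f, dim=2, n_bacteria=20, steps=100, step_size=0.1,
                   swim_len=4, seed=3):
    """Chemotaxis-only sketch of BFO: each bacterium 'tumbles' to a random
    direction and keeps 'swimming' in that direction while its cost
    (nutrient landscape) keeps improving."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bacteria)]
    cost = [f(b) for b in pop]
    for _ in range(steps):
        for i in range(n_bacteria):
            # Tumble: pick a random unit direction.
            d = [rng.gauss(0, 1) for _ in range(dim)]
            norm = math.sqrt(sum(di * di for di in d))
            d = [di / norm for di in d]
            # Swim: keep moving in that direction while the cost improves.
            for _ in range(swim_len):
                trial = [pop[i][k] + step_size * d[k] for k in range(dim)]
                c = f(trial)
                if c < cost[i]:
                    pop[i], cost[i] = trial, c
                else:
                    break
    return min(cost)

best = bfo_chemotaxis(lambda p: sum(xi * xi for xi in p))
```

Even stripped down this far, the tumble/swim mechanic captures the biased random walk that E. coli uses to climb nutrient gradients.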

Finally, Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES) was originally set out by Hansen and Ostermeier in [26] and is a matrix-based algorithm designed to have a computational cost of O(n²) per generation. The paper introducing the algorithm relies heavily on [27], which proposed the Generating Set Adaptation (GSA), the first Evolutionary Strategy that was invariant to coordinate system rotations and that learned a particular problem's scaling. GSA did not include a formal covariance matrix. Building on this foundation, Hansen in [26] introduced the (1,λ)-ES CMA-ES algorithm. The new algorithm used a covariance matrix with great success, yielding a method that, while still heuristic, has a more concrete theoretical basis in statistical analysis than many other algorithms in Evolutionary Computation. After declarations and initializations, the details of CMA-ES can be written in around 20 lines of MATLAB code; Hansen provides source code on his website [28].
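The central idea is easy to caricature even though the full algorithm is subtle. The sketch below is emphatically not Hansen's CMA-ES: it keeps only the core loop (sample from an adapted multivariate normal, select the best, re-estimate the mean and covariance) and replaces the evolution paths and step-size control of the real algorithm with a crude geometric decay. All functions and parameters are my own illustrative choices:

```python
import math
import random

def cholesky(C):
    """Cholesky factor L of a symmetric positive-definite matrix C (C = L L^T)."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(C[i][i] - s) if i == j else (C[i][j] - s) / L[j][j]
    return L

def simplified_cma_like_es(f, dim=2, lam=20, mu=5, sigma=1.0, gens=80, seed=4):
    """Greatly simplified covariance-adapting ES (NOT full CMA-ES):
    sample candidates from N(m, sigma^2 C), keep the mu best, and
    re-estimate m and C from the selected steps."""
    rng = random.Random(seed)
    m = [3.0] * dim                       # initial mean, away from the optimum
    C = [[float(i == j) for j in range(dim)] for i in range(dim)]  # identity
    for _ in range(gens):
        L = cholesky(C)
        cands = []
        for _ in range(lam):
            # Sample x = m + sigma * L z with z standard normal.
            z = [rng.gauss(0, 1) for _ in range(dim)]
            cands.append([m[i] + sigma * sum(L[i][k] * z[k] for k in range(dim))
                          for i in range(dim)])
        cands.sort(key=f)
        sel = cands[:mu]                  # selection: the mu best candidates
        steps = [[(x[i] - m[i]) / sigma for i in range(dim)] for x in sel]
        m = [sum(x[i] for x in sel) / mu for i in range(dim)]  # move the mean
        # Re-estimate C from the selected steps, with a small ridge term
        # to keep it positive definite.
        C = [[sum(s[i] * s[j] for s in steps) / mu + (1e-6 if i == j else 0.0)
              for j in range(dim)] for i in range(dim)]
        sigma *= 0.95                     # crude stand-in for step-size control
    return m, f(m)

m, fm = simplified_cma_like_es(lambda x: sum(xi * xi for xi in x))
```

What the real CMA-ES adds on top of this skeleton, chiefly the cumulation of evolution paths and principled step-size adaptation, is precisely what gives it the statistical grounding noted above; Hansen's reference code [28] is the authoritative version.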

As previously stated, recent work focused on benchmarking these algorithms using the Hohmann transfer. The role of fixed and varying penalties was explored for each algorithm, as more fully described in [11]. Each algorithm was run 1000 times and the performance metrics were compared:

Algorithm   Penalty    Mean CPU time (s)   Mean error   Median error
PSO         fixed      0.138               1.30%        0.48%
PSO         varying    0.107               1.78%        0.43%
BFO         fixed      0.655               2.19%        1.91%
BFO         varying    0.727               0.27%        0.36%
CMA-ES      fixed      0.572               0.26%        0.00%
CMA-ES      varying    0.582               0.43%        0.43%

Each algorithm excelled in some areas and performed poorly in others, especially as the penalty scheme varied. A clear result is that algorithm selection is problem-dependent, which is anticipated behavior given the heuristic nature of the algorithms. Detailed results are available in [11].

The future of Evolutionary Computation is bright. Many variants exist in the literature that address some of the inherent flaws of these algorithms. Beyond the better-studied single-objective algorithms, more recent approaches include multi-objective Evolutionary Algorithms (MOEAs). MOEAs are similar to single-objective algorithms, except that more than one fitness function is optimized simultaneously. Many early MOEAs relied on vague quantities such as a niche radius. An older yet practical example of an MOEA is the Non-dominated Sorting Genetic Algorithm II (NSGA-II) [29]. More recently, an algorithm that takes advantage of ϵ-dominance, known as ϵ-MOEA, was proposed in [30]. In [31], a colleague of mine used ϵ-MOEA to solve several spacecraft station-keeping problems, with very promising results. A very recent algorithm is Borg, proposed in [32]; it recently received a patent from the U.S. Patent and Trademark Office. Clearly, multi-objective optimization is an important area of current research in Evolutionary Computing.
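The ϵ-dominance relation at the heart of ϵ-MOEA is simple to state in code. The sketch below uses the additive variant under a minimization convention; the example objective vectors are my own illustrative numbers, not data from [30] or [31]:

```python
def dominates(a, b):
    """Standard Pareto dominance (minimization): a dominates b if a is no
    worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def eps_dominates(a, b, eps):
    """Additive epsilon-dominance (minimization): a epsilon-dominates b if
    a, relaxed by eps in every objective, is no worse than b. This lets one
    archived solution 'cover' a whole neighborhood of objective space, which
    is how epsilon-MOEA keeps its archive both bounded and well spread."""
    return all(x - eps <= y for x, y in zip(a, b))

# Two objective vectors that are mutually non-dominated in the ordinary
# sense can still be covered under epsilon-dominance:
a, b = (1.0, 2.0), (1.05, 1.98)
print(dominates(a, b))            # False: neither dominates the other
print(eps_dominates(a, b, 0.1))   # True: a covers b within eps = 0.1
```

Coarsening the dominance relation this way trades a small loss in Pareto-front resolution (controlled by eps) for guaranteed diversity, which is the practical appeal of ϵ-MOEA noted above.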

[1]         Bäck, T., Hammel, U. and Schwefel, H-P., “Evolutionary Computation: Comments on the History and Current State,” IEEE Transactions on Evolutionary Computation, Vol. 1, No. 1, April 1997, pp. 3-17.
[2]         Bremermann, H.J., “Optimization through evolution and recombination”, in Self-Organizing Systems, edited by M. C. Yovits et al., Spartan, Washington, DC, 1962.
[3]         Friedberg, R.M., “A Learning Machine: Part I,” IBM Journal, Vol. 2, No. 1, January 1958, pp. 2-13.
[4]         Friedberg, R.M., “A Learning Machine: Part II,” IBM Journal, Vol. 3, No. 7, July 1959, pp. 282-287.
[5]         Box, G.E.P., “Evolutionary Operation: A Method for Increasing Industrial Productivity,” Applied Statistics, Vol. VI, No. 2, 1957, pp. 81-101.
[6]         Holland, J.H., “Outline for a Logical Theory of Adaptive Systems,” Journal of the Association for Computing Machinery, Vol. 9, Issue 3, July 1962, pp. 297-314.
[7]         Rechenberg, I., “Cybernetic Solution Path of an Experimental Problem,” Royal Aircraft Establishment, Library Translation No. 1122, Farnborough, Hants., U.K., August 1965.
[8]         Schwefel, H-P., “Experimentelle Optimierung einer Zweiphasendüse Teil I”, AEG Research Institute Project MHD-Staustrahlrohr 11034/68 Technical Report 35.
[9]         Fogel, L.J., “Autonomous Automata,” Industrial Research Magazine, Vol. 4, No. 2, February 1962, pp. 14-19.
[10]     Moore, G.E., “Cramming More Components Onto Integrated Circuits,” Electronics, Vol. 38, No. 8, April 1965, pp. 114-117.
[11]     Sottile, B.J., “Evolutionary Computation for Spacecraft Trajectory Optimization,” M.S. Thesis, Department of Aerospace Engineering, The Pennsylvania State University, University Park, PA, August 2013. [Available at https://etda.libraries.psu.edu/paper/19135/.]
[12]     Hohmann, W., Die Erreichbarkeit der Himmelskörper, Oldenbourg, Munich, 1925; also available as NASA Technical Translation F-44, 1960.
[13]     Lawden, D.F., Optimal Trajectories for Space Navigation, Butterworths, London, 1963, Chapter 6.
[14]     Barrar, R.B., “An Analytic Proof that the Hohmann-Type Transfer is the True Minimum Two-Impulse Transfer,” Astronautica Acta, Vol. 9, No. 1, 1963, pp. 1-11.
[15]     Hazelrigg, G.A., Jr., “The Use of Green’s Theorem to Find Globally Optimal Solutions to a Class of Impulsive Transfers,” American Astronautical Society, AAS Paper 68-092, Sept. 1968.
[16]     Marec, J.P., Optimal Space Trajectories, Elsevier, Amsterdam, 1979, Chapter 2, pp. 21-27.
[17]     Battin, R.H., An Introduction to the Mathematics and Methods of Astrodynamics, AIAA Education Series, AIAA, New York, 1987, pp. 529-530.
[18]     Palmore, J.I., “An Elementary Proof of the Optimality of Hohmann Transfers,” Journal of Guidance, Control and Dynamics, Vol. 7, No. 5, 1984, pp. 629-630.
[19]     Prussing, J.E., “Simple Proof of the Global Optimality of the Hohmann Transfer,” Journal of Guidance, Control and Dynamics, Vol. 15, No. 4, 1992, pp. 1037-1038.
[20]     Kennedy, J., and Eberhart, R. “Particle Swarm Optimization,” Proceedings of IEEE International Conference on Neural Networks, Vol. 4, Institute of Electrical and Electronics Engineers, Western Australia, 1995, pp. 1942-1948.
[21]     Poli, R. “Analysis of the Publications on the Applications of Particle Swarm Optimization,” Journal of Artificial Evolution and Applications, Vol. 2008, Article ID 685175, 10 pages, 2008.
[22]     Pontani, M. and Conway, B.A., “Particle Swarm Optimization Applied to Space Trajectories,” Journal of Guidance, Control and Dynamics, Vol. 33, No. 5, 2010, pp. 1429-1441.
[23]     Passino, K.M., “Biomimicry of Bacterial Foraging for Distributed Optimization and Control,” IEEE Control System Magazine, Vol. 22, Issue 3, pp. 52-66, June 2002.
[24]     Bentley, R., and Meganathan, R., “Biosynthesis of Vitamin K (Menaquinone) in Bacteria,” Microbiological Reviews, Vol. 46, No. 3, pp. 241-280, September 1982.
[25]     Melton, R.G., “Hybrid Methods for Determining Time-Optimal, Constrained Spacecraft Reorientation Maneuvers,” Acta Astronautica, Vol. 94, No. 1, p. 294, January 2014.
[26]     Hansen, N. and Ostermeier, A., “Adapting Arbitrary Normal Mutation Distributions in Evolution Strategies: The Covariance Matrix Adaptation,” Proceedings of the 1996 IEEE International Conference on Evolutionary Computation, 1996, pp. 312-317.
[27]     Hansen, N., Ostermeier, A., and Gawelczyk, A., “On the Adaptation of Arbitrary Normal Mutation Distributions in Evolution Strategies: The Generating Set Adaptation,” Proceedings of the Sixth International Conference on Genetic Algorithms, Edited by Eshelman, Pittsburgh, PA, 1995, pp. 57-64.
[28]     Hansen, N., personal website. [https://www.lri.fr/~hansen/. Accessed 7/1/2013.]
[29]     Deb, K., Pratap, A., Agarwal, S., and Meyarivan, T., “A Fast and Elitist Multi-Objective Genetic Algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, Vol. 6, No. 2, 2002, pp. 182-197.
[30]     Deb, K., Mohan, M., and Mishra, S., “Evaluating the ε-Domination Based Multi-Objective Evolutionary Algorithm for a Quick Computation of Pareto Optimal Solutions,” Evolutionary Computation, Vol. 13, No. 4, 2005, pp. 501-525.
[31]     Myers, P.L., “Application of a Multi-Objective Evolutionary Algorithm to the Spacecraft Stationkeeping Problem,” M.S. Thesis, Department of Aerospace Engineering, The Pennsylvania State University, University Park, PA, May 2013. [Available at https://etda.libraries.psu.edu/paper/18122/.]
[32]     Hadka, D. and Reed, P. “Borg: An Auto-Adaptive Many-Objective Evolutionary Computing Framework,” Evolutionary Computation, Vol. 21, No. 2, pp. 231-259, Summer 2013.