
N. A. Fountas et alii, Frattura ed Integrità Strutturale, 50 (2019) 584-594; DOI: 10.3221/IGF-ESIS.50.49

even enhancing current ones. In addition, this simplicity contributes to the rapid understanding of metaheuristics for their application to problem-solving. Flexibility refers to the applicability of metaheuristics to a variety of problems without changing the basic algorithmic infrastructure. Applying a metaheuristic to a problem involves only the inputs and outputs of a system; therefore, what needs to be known is how to properly formulate a specific problem so that it can be handled by a metaheuristic. Furthermore, the majority of metaheuristics are derivative-free. As opposed to gradient-based optimization techniques, metaheuristics optimize problems stochastically: the search starts from randomly created candidate solutions, and no derivatives of the search space need to be computed to converge to optimal solutions. This makes metaheuristics highly suitable for real-world problems with expensive or unknown derivative information. Finally, metaheuristics exhibit superior abilities to avoid local trapping compared with traditional optimization techniques, owing to their stochastic nature, which allows them to escape stagnation in local regions and to search the solution space thoroughly. In real-world problems, solution spaces are usually unknown and quite complex, with a vast number of local optima; hence, metaheuristics are suitable tools for optimizing such challenging problems. According to the No Free Lunch theorem [26], there is no metaheuristic best suited for successfully solving all optimization problems. This implies that a particular metaheuristic may exhibit quite promising results on one set of problems, while achieving poor performance on a different set of problems.
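As a minimal illustration of such derivative-free, stochastic search, the sketch below keeps the best of many random samples without ever computing a gradient (the sphere objective and all parameter values are illustrative assumptions, not settings from the paper):

```python
import random

def sphere(x):
    """Illustrative test objective: sum of squares, minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def random_search(f, dim, bounds, iters=5000, seed=0):
    """Derivative-free stochastic minimization: initialize randomly and
    greedily keep the best of many random samples. No derivative of f
    is ever evaluated."""
    rng = random.Random(seed)
    lo, hi = bounds
    best_x = [rng.uniform(lo, hi) for _ in range(dim)]
    best_f = f(best_x)
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        fx = f(x)
        if fx < best_f:  # accept only improving samples
            best_x, best_f = x, fx
    return best_x, best_f

x_best, f_best = random_search(sphere, dim=3, bounds=(-5.0, 5.0))
# f_best is a small non-negative value close to the optimum 0
```

Pure random search is the simplest scheme of this kind; metaheuristics such as the grey wolf algorithm add intelligent operators on top of this stochastic skeleton to balance exploration and exploitation.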
The No Free Lunch theorem thus keeps this research field highly active, with contributions enhancing current algorithms and/or developing novel ones published very often.

Fundamental features of the grey wolf algorithm and optimization problem setup

To optimize cutting parameters during turning of CuZn39Pb3 brass alloy, the multi-objective version of the grey wolf optimization algorithm was implemented [16]. According to the nature of the optimization problem, the responses were set to be optimized as follows:
1. Minimization of the arithmetic surface roughness average, Ra;
2. Minimization of the maximum height of the profile, Rt;
3. Minimization of the main cutting force, Fc.
The three-objective optimization method used is a population-based, non-dominated intelligent algorithm developed by Mirjalili et al. [16], following the behavior and social life of grey wolves. The algorithm simulates the hunting hierarchy of wolves towards capturing their prey. It establishes candidate solutions according to the hierarchy of grey wolves in nature, i.e., alpha, beta, delta and omega. Mirjalili et al. developed the mathematical relations concerning the unique features of this algorithm based on the physical characteristics of grey wolves. The main stages that grey wolves follow to capture their prey are:
1. Tracking, chasing, and approaching the prey;
2. Pursuing, encircling, and harassing the prey until it stops moving; and
3. Attacking the prey.
These stages mimic the different intelligent operators an artificial algorithm requires in order to be effective: attacking the prey corresponds to the exploitation of the search space, whilst searching for prey simulates its exploration. Of particular interest and practical merit is the C-vector in the grey wolf optimization algorithm.
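The hunting stages above, including the role of the C-vector, can be sketched as a minimal single-objective grey wolf loop; the multi-objective version used in the study additionally maintains an archive of non-dominated solutions. The sphere objective and all parameter values here are illustrative assumptions, not settings from the paper:

```python
import random

def gwo_minimize(f, dim, bounds, n_wolves=20, iters=100, seed=1):
    """Minimal single-objective grey wolf optimizer sketch.
    Alpha, beta and delta (the three best wolves) estimate the prey's
    probable position; every wolf moves toward their average suggestion."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 - 2.0 * t / iters          # decreases linearly from 2 to 0
        for i in range(n_wolves):
            x = wolves[i]
            new_x = []
            for d in range(dim):
                step = 0.0
                for leader in (alpha, beta, delta):
                    A = 2.0 * a * rng.random() - a  # |A| > 1 favours exploration
                    C = 2.0 * rng.random()          # C-vector: random prey weight
                    D = abs(C * leader[d] - x[d])   # stochastic distance to leader
                    step += leader[d] - A * D       # encircling/attacking move
                new_x.append(min(hi, max(lo, step / 3.0)))
            wolves[i] = new_x
    wolves.sort(key=f)
    return wolves[0], f(wolves[0])

best, val = gwo_minimize(lambda x: sum(xi * xi for xi in x), dim=3, bounds=(-5.0, 5.0))
```

Note how exploitation (small |A| late in the run, wolves attacking the estimated prey) and exploration (large |A| early on, plus the random C weight) emerge from the same update rule.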
The C-vector assigns random weights to the prey so as to stochastically emphasize or partially ignore its effect in defining the Euclidean distance. This contributes to the algorithm's random behavior, removing the inherent bias towards elite solutions and facilitating exploration as well as local-optima avoidance. In addition, the C-vector can be interpreted as the effect of obstacles encountered when approaching the prey in nature: depending on the position of a wolf, it may randomly assign the prey a weight that makes it harder to reach, effectively increasing its distance, or vice versa. The search is initialized by randomly creating a population of grey wolves (candidate solutions). Over the course of iterations, the alpha, beta, and delta wolves estimate the probable position of the prey, and each candidate solution updates its distance from the prey accordingly. The algorithm-specific parameters of the grey wolf algorithm are set so as to balance exploration and exploitation. Finally, the algorithm terminates once the optimization criteria have been satisfied or the maximum number of generations has been reached.

Optimization results

The results obtained from the grey wolf algorithm for the responses Ra, Rt and Fc with respect to the turning parameters n (rpm), f (mm/rev) and a (mm) are presented in Fig. 8, which depicts the non-dominated Pareto-optimal solutions obtained. By examining the spread of the non-dominated solutions appearing in the three-objective Pareto front, it can be concluded that the non-dominated solutions are distributed such that all potential values for the objectives are covered. However, since all three optimization criteria are of minimization type, the non-dominated solutions close to the Pareto front's origin are those of particular benefit.
Therefore, the non-dominated solutions close to the axes representing the objectives Ra, Rt and Fc are considered the optimal points for achieving simultaneous optimization, each characterized by its corresponding values of the independent variables n (rpm), f (mm/rev) and a (mm).
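The non-dominated filtering behind such a Pareto front can be sketched as follows; the sample (Ra, Rt, Fc) triples are hypothetical values for illustration only, not results from the study:

```python
def dominates(u, v):
    """u dominates v when u is no worse in every objective and strictly
    better in at least one (all objectives minimized, as for Ra, Rt, Fc)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Keep only the non-dominated points from a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (Ra [um], Rt [um], Fc [N]) triples -- illustrative values only.
candidates = [(0.8, 4.2, 120.0),
              (1.1, 3.9, 110.0),
              (0.9, 4.5, 130.0),   # dominated by the first point
              (1.2, 5.0, 140.0)]   # dominated by the first point
front = pareto_front(candidates)

# One common way to pick a compromise point near the front's origin:
# smallest Euclidean distance to the origin of the objective space.
best = min(front, key=lambda p: sum(c * c for c in p) ** 0.5)
```

In practice the objectives should be normalized before measuring distance to the origin, since a force in newtons is orders of magnitude larger than roughness values in micrometres and would otherwise dominate the selection.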
