
R. Sepe et alii, Frattura ed Integrità Strutturale, 37 (2016) 369-381; DOI: 10.3221/IGF-ESIS.37.48

… uniform, in order to avoid the clustering of results around the mode value. It is also pointed out that the process sometimes fails to accomplish its task because of the existing physical limits; in any case, SDI allows one to quickly assess the feasibility of a specific design, thus making its improvement easier. Of course, other stochastic variables may be present in the problem (the so-called background variables): they can be characterized by any type of statistical distribution included in the code library, but they are not modified during the process.

The SDI process is therefore quite different from, for example, classical design optimization, where the designer tries to minimize a given objective function with no previous knowledge of the minimum value, at least at the stage of problem formulation. In the SDI process, on the contrary, the value that the objective function has to reach, i.e. its target value, is stated first, according to a particular criterion that can be expressed in terms of maximum displacement, maximum stress, and so on. The SDI process gives information about the possibility of reaching the objective within the physical limits of the problem and determines which values the project variables must take in order to achieve it. In other words, the designer specifies the value that an assigned output variable has to reach, and the SDI process determines those values of the project variables which ensure that the objective variable becomes equal, in the mean sense, to the target.

Therefore, according to the requirements of the problem, the user defines a set of variables as control variables, which are then characterized by a uniform statistical distribution (natural variability) within which the procedure can let them vary, observing the corresponding physical (engineering) limits.
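The iterative scheme described above (sample the control variables from uniform windows within the physical limits, keep the best trial, re-center the windows) can be sketched as follows. This is a minimal illustration, not the authors' code: the function and parameter names (`sdi_improve`, `half_width`, `n_runs`) are hypothetical, and the objective function stands in for the actual finite-element response.

```python
import random

def sdi_improve(objective, bounds, target, n_trials=50, half_width=0.5, n_runs=20):
    """Toy SDI-style improvement loop (illustrative sketch, not the paper's code).

    bounds   -- list of (lo, hi) physical limits, one pair per control variable
    target   -- desired value of the single output (objective) variable
    """
    # start the uniform windows at the middle of the physical ranges
    centers = [(lo + hi) / 2.0 for lo, hi in bounds]
    for _ in range(n_runs):
        best_x, best_d = None, float("inf")
        for _ in range(n_trials):
            # sample each control variable from its current uniform window,
            # clipped to the physical (engineering) limits
            x = [min(max(random.uniform(c - half_width, c + half_width), lo), hi)
                 for c, (lo, hi) in zip(centers, bounds)]
            d = abs(objective(x) - target)   # distance from the target, Eq. (15)
            if d < best_d:
                best_x, best_d = x, d
        # redefine the uniform distributions around the "best" trial,
        # preserving the amplitude of the natural variability
        centers = best_x
    return centers, best_d
```

With a simple analytic objective (e.g. the sum of two variables bounded in [0, 10], target 12), the centers drift from the initial mid-range point toward a combination whose output matches the target in a few runs.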
In the case of a single output variable, the procedure evaluates the Euclidean or Mahalanobis distance of the objective variable from the target after each trial:

$d_i = \left| y_i - y^* \right|, \quad i = 1, 2, \ldots, N$   (15)

where $y_i$ is the value of the objective variable obtained from the i-th iteration, $y^*$ is the target value and $N$ is the number of trials per run. Then, it is possible to find, among the performed trials, the one for which this distance attains its smallest value; subsequently, the procedure redefines each project variable according to a new uniform distribution with a mean value equal to that used in such "best" trial. The limits of natural variability are accordingly shifted by the same amount as the mean, so as to preserve the amplitude of the physical variability.

If the target is defined by a set of output variables, the displacement toward the condition where each one has a desired (target) value is carried out considering the distance as expressed by:

$d_i = \sqrt{\sum_k \left( y_{i,k} - y_k^* \right)^2}$   (16)

where $k$ represents the generic output variable. If the variables are dimensionally different, it is advisable to use a normalized expression of the Euclidean distance:

$d_i = \sqrt{\sum_k \omega_k \, \delta_{i,k}^2}$   (17)

where:

$\delta_{i,k} = \dfrac{y_{i,k}}{y_k^*} - 1, \quad \text{if } y_k^* \neq 0; \qquad \delta_{i,k} = y_{i,k}, \quad \text{if } y_k^* = 0$   (18)

In this case, however, it is of course essential to assign the weight factors $\omega_k$ in order to define the relative importance of each variable.
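The normalized, weighted distance of Eqs. (17)–(18) is straightforward to compute. The sketch below is illustrative (the function name `normalized_distance` is a hypothetical label, not from the paper); it applies the zero-target fallback of Eq. (18) component by component.

```python
import math

def normalized_distance(y, y_target, weights):
    """Weighted, normalized Euclidean distance in the sense of Eqs. (17)-(18).

    y        -- output values y_{i,k} from one trial
    y_target -- target values y_k^*
    weights  -- weight factors omega_k (relative importance of each variable)
    """
    total = 0.0
    for y_ik, y_k, w_k in zip(y, y_target, weights):
        # Eq. (18): relative deviation from the target, falling back to the
        # raw value when the target component is zero
        delta = y_ik / y_k - 1.0 if y_k != 0.0 else y_ik
        total += w_k * delta ** 2   # Eq. (17), term by term
    return math.sqrt(total)
```

For example, with outputs (2.0, 0.5), targets (1.0, 0.0) and unit weights, the first term contributes (2/1 − 1)² = 1 and the second, via the zero-target branch, 0.5² = 0.25, giving √1.25.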
