Can someone help me understand how GD "learns"?

This refers to the process inherent to the NSGA-II algorithm, which Generative Design uses.

Simply put, for any optimization study:

A population of input values is generated, with each input sampled between the 'min' and 'max' of its domain. This is the first generation of results.
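
As a rough sketch (not GD's actual code), seeding a first generation might look like this; the input names, bounds, and population size of 20 are invented for the example:

```python
import random

# Hypothetical input domains for the example: {input name: (min, max)}
domains = {"width": (1.0, 10.0), "height": (0.5, 5.0)}

# First generation: each input sampled uniformly between its min and max
population = [{name: random.uniform(lo, hi) for name, (lo, hi) in domains.items()}
              for _ in range(20)]
```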

The best-performing of those results are added to the 'hall of fame' (HOF), and the input values that produced them are used to inform the input values for the next generation. These are the results you see in the solution browser.

This repeats, with the higher-performing results added to the HOF at the conclusion of each generation, until you finish processing the given number of generations.
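
Something like this simplified sketch shows the whole loop, with a single made-up objective standing in for NSGA-II's multi-objective non-dominated sorting; the domains, fitness function, mutation scheme, and HOF size are all invented for illustration:

```python
import random

# Toy input domains, repeated from the sketch above
domains = {"width": (1.0, 10.0), "height": (0.5, 5.0)}

def fitness(ind):
    """Stand-in for a single objective; GD evaluates your graph's outputs instead."""
    return -(ind["width"] - 7.0) ** 2 - (ind["height"] - 2.0) ** 2

def mutate(parent):
    """Nudge a parent's inputs to produce a child, clamped back into the domain."""
    return {name: min(max(val + random.gauss(0, 0.5), domains[name][0]),
                      domains[name][1])
            for name, val in parent.items()}

# First generation: random inputs across the domain
population = [{n: random.uniform(lo, hi) for n, (lo, hi) in domains.items()}
              for _ in range(20)]

hof = []  # hall of fame: the best results seen across all generations
for generation in range(4):
    ranked = sorted(population, key=fitness, reverse=True)
    # Fold this generation's top performers into the HOF, keeping only the best
    hof = sorted(hof + ranked[:5], key=fitness, reverse=True)[:5]
    # The inputs behind the best results inform the next generation
    population = [mutate(random.choice(ranked[:5])) for _ in range(20)]

for ind in hof:
    print({k: round(v, 2) for k, v in ind.items()}, "fitness:", round(fitness(ind), 3))
```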

The best way to study this is to build a 'toy problem' that lets you iterate quickly and run a series of studies. The graphs from the AU session I presented last year with Alexandra Nelson, or the sample graphs which are optimization-ready, will work for this. The first study would be 1 generation with a population of 20; the second, 2 generations with a population of 20; the third, 3 generations of 20; the fourth, 4 of 20; and so on until you start to see something approaching the optimal in your solution space.
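
To show what that study series reveals, here is a runnable toy stand-in (not GD itself; in practice you would run each study through the Generative Design interface). Watch the best result creep toward the optimum as the generation count grows:

```python
import random

def run_toy_study(generations, pop_size=20):
    """Toy stand-in for a GD study: optimize x in [0, 10] toward the optimum x = 7."""
    fitness = lambda x: -(x - 7.0) ** 2
    pop = [random.uniform(0.0, 10.0) for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations - 1):
        parents = sorted(pop, key=fitness, reverse=True)[:5]
        pop = [min(max(random.choice(parents) + random.gauss(0, 0.5), 0.0), 10.0)
               for _ in range(pop_size)]
        best = max([best] + pop, key=fitness)
    return best

# The study series: 1 x 20, 2 x 20, 3 x 20, ...
for gens in range(1, 8):
    print(f"{gens} generation(s) of 20 -> best input: {run_toy_study(gens):.3f}")
```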

With that data set you can then see how the HOF updates on each successive run by comparing the results of any one run against the results of the run after it.
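
If you record each run's HOF entries somewhere you can read them back, a quick set comparison shows the movement between successive runs; the snapshots here are made-up values, purely to show the pattern:

```python
# Hypothetical HOF snapshots from two successive runs (made-up values)
hof_run_1 = {("width=6.1", "height=2.4"), ("width=7.3", "height=1.9")}
hof_run_2 = {("width=7.3", "height=1.9"), ("width=6.9", "height=2.1")}

print("Carried over:", hof_run_1 & hof_run_2)  # survivors from the earlier run
print("New entries: ", hof_run_2 - hof_run_1)  # improvements found by the later run
print("Displaced:   ", hof_run_1 - hof_run_2)  # results that fell out of the HOF
```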

You can also see how the HOF updates over time in a long study by looking at your HOF history results in this folder: %appdata%\GenerativeDesign
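
On Windows you can reach that folder from Python too; this sketch only lists what is there (the history file format isn't documented here, so no parsing is attempted), assuming the folder exists on your machine:

```python
import os

# Expand %appdata% the same way the Windows shell does
hof_dir = os.path.expandvars(r"%APPDATA%\GenerativeDesign")
for name in sorted(os.listdir(hof_dir)):
    print(name)
```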
