Can someone help me understand how GD "learns"?

Hello there,

Does anyone know what Autodesk means by “learns”:
“It tests and learns from each iteration what works and what doesn’t.”

Where exactly in the algorithm does it “learn”, and how?
Perhaps they’re talking about some conditional operators in their algorithm?

This is a bit of a stretch. It doesn’t actually learn. It just finds the best-performing inputs from one evolution and uses similar inputs in the next evolution. Over the course of multiple evolutions, GD will “learn” the best inputs for a specified/optimized output.

I think this may be useful reading:
https://www.generativedesign.org/

My guess is that the statement refers to optimization: for the most part (though not necessarily), each iteration should get closer to optimal, depending on the optimization/search algorithm.

GD optimizes based on the values in your Dynamo graph that you try to maximize or minimize.

@Nick_Boyts
@Michael_Kirschner2

Alright, good to know. I was afraid I was missing something AI/ML-related.
Thank you for the clarification.

This refers to the process inherent to the NSGA-II algorithm, which Generative Design utilizes.

Simply put for any optimization study:

A population of inputs is generated, with each input drawn from between the ‘min’ and ‘max’ values of its domain. This is the first generation of results.

The best performing of those results are added to the ‘hall of fame’ (HOF), and the input values which produced those results are used to inform the input values for the next generation. These are the results you see in the solution browser.

This repeats, with the ‘higher performing results’ added to the HOF at the conclusion of each population, until you finish processing the given number of generations.
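
If it helps to see that loop spelled out, here is a minimal, single-objective sketch in plain Python of the population / hall-of-fame cycle described above. It is not the actual Generative Design engine or full NSGA-II (which is multi-objective and adds non-dominated sorting and crowding distance); every name in it (`evaluate`, `run_study`, the toy objective, the mutation step) is made up for illustration.

```python
import random

def evaluate(x):
    # Toy objective to minimize; a real study would run the Dynamo graph here.
    return (x - 3.0) ** 2

def run_study(generations, population_size, lo=-10.0, hi=10.0, hof_size=5, seed=0):
    rng = random.Random(seed)
    # First generation: inputs sampled between the 'min' and 'max' of their domain.
    population = [rng.uniform(lo, hi) for _ in range(population_size)]
    hall_of_fame = []  # best (objective, input) pairs seen so far

    for _gen in range(generations):
        scored = sorted((evaluate(x), x) for x in population)

        # Add the best performers of this generation to the hall of fame.
        hall_of_fame = sorted(set(hall_of_fame + scored[:hof_size]))[:hof_size]

        # The inputs behind the best results inform the next generation:
        # keep the top half as parents and fill the rest by mutating them.
        parents = [x for _, x in scored[: population_size // 2]]
        children = [min(hi, max(lo, rng.choice(parents) + rng.gauss(0, 0.5)))
                    for _ in range(population_size - len(parents))]
        population = parents + children

    return hall_of_fame

if __name__ == "__main__":
    for score, x in run_study(generations=4, population_size=20):
        print(f"input={x:.3f}  objective={score:.4f}")
```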

The best way to study this is to build a ‘toy problem’ which will allow you to iterate quickly, and then run a series of studies. The graphs from the AU session I presented last year with Alexandra Nelson, or the sample graphs which are optimization-ready, will work for this. The first study would be 1 generation with a population of 20. The second study would be 2 generations with a population of 20. The third study would be 3 generations of 20. The fourth study would be 4 generations of 20, and so on, until you start to see something approaching the optimal in your solution space.
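
As a quick sanity check on the cost of that series (assuming a fixed population of 20, as above), each added generation adds another 20 candidate evaluations:

```python
# Hypothetical tally of the study series above: the population stays at 20
# and each successive study adds one more generation.
population = 20
for study, generations in enumerate(range(1, 5), start=1):
    print(f"Study {study}: {generations} generation(s) x {population} "
          f"candidates = {generations * population} evaluations")
```

The point of scaling the series this way is that the first study is essentially a random sample of the solution space, and each later study shows one more round of HOF-driven refinement on top of it.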

With that data set you can then see how the HOF updates on each successive run by comparing the results of any one run against the results of the run after it.

You can also see how the HOF updates over time in a long study by looking at your HOF history results in this folder: %appdata%\GenerativeDesign
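
For a quick look at what is in that folder, something like the snippet below works (assuming the standard %APPDATA% location on Windows; the file names and formats will be whatever your install actually writes there):

```python
import os

# List whatever the Generative Design HOF history folder currently contains.
# Path per the post above; adjust if your install uses a different location.
folder = os.path.join(os.environ["APPDATA"], "GenerativeDesign")
for root, _dirs, files in os.walk(folder):
    for name in files:
        print(os.path.join(root, name))
```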
