To start with let’s think about the way a genetic algorithm works. You start by taking a random sampling, evaluate all of the results to find which inputs performed better, and then take another (less random) sampling based on the better-performing results. Each sampling is random, of a given size, and repeated a set number of times.

The **Seed** value controls what the random values provided will be. From a functional standpoint the inputs could be completely random (or as random as computers can be) and this setting removed, but by allowing seed values we can ensure there is a pre-set list of sample inputs which are explored. This lets us verify that changes to your graph don’t make things worse, by pushing the same inputs in the same order, while still allowing us to change that order when we want to.
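Generative Design’s internals aren’t public, but the general idea of a seed is easy to sketch. In this hypothetical Python snippet, `sample_inputs` is a made-up helper showing how a fixed seed yields the same “random” inputs every run:

```python
import random

def sample_inputs(seed, count, low=0.0, high=10.0):
    """Draw a reproducible list of 'random' slider values.

    The same seed always yields the same sequence, so a study can be
    re-run and compared after the graph changes."""
    rng = random.Random(seed)  # isolated generator with a fixed starting point
    return [rng.uniform(low, high) for _ in range(count)]

# Identical seeds explore identical inputs...
assert sample_inputs(42, 5) == sample_inputs(42, 5)
# ...while a different seed explores the space in a different order.
assert sample_inputs(42, 5) != sample_inputs(7, 5)
```

The assertions at the bottom are the whole point: with a seed you can re-run a study and know the exact same inputs were pushed through your graph.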

The **Population** value determines how many randomized outcomes are taken. So if you set the value to 12 you get 12 random values (based on the seed) which produce the first set of results. After each population is evaluated, the top 25% of each metric (and combinations thereof) is weighted to find whereabouts the next population’s inputs should be taken from.

The **Generations** value is the number of times to take a population. Each time we take a population the ‘space’ being explored is narrowed down. This makes sense both from a ‘there is a finite number of studies’ standpoint and from the perspective that ‘previous results inform the next generation’.
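The three settings above can be tied together in a minimal sketch. This is not Generative Design’s actual solver (which is multi-objective and more sophisticated); it’s a toy single-slider version where the fitness function, the top-25% selection, and the Gaussian resampling spread are all my assumptions:

```python
import random

def optimize(fitness, seed=0, population=12, generations=8):
    """Minimal genetic-style search over a single slider in [0, 10]."""
    rng = random.Random(seed)                                # Seed: reproducible inputs
    pop = [rng.uniform(0, 10) for _ in range(population)]    # Population: first random sampling
    for _ in range(generations):                             # Generations: repeat and narrow
        # keep the top 25% by fitness as the 'elites' of this generation
        elites = sorted(pop, key=fitness, reverse=True)[: population // 4]
        # draw the next population near those elites, clamped to the slider's range
        pop = [min(10, max(0, rng.gauss(rng.choice(elites), 1.0)))
               for _ in range(population)]
    return max(pop, key=fitness)

# A made-up objective with its best value at 7.3
best = optimize(lambda x: -(x - 7.3) ** 2)
```

Each generation only ever tests `population` values, yet `best` lands close to 7.3 without exhaustively sweeping the slider, which is the whole trick.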

Now knowing this we can start to think about how the CPU explores the design space.

The first option would be to have the computer run all possible studies. With 100 options this is actually feasible, but most studies I work with have design spaces so much larger that the numbers are incomprehensible. Yesterday I was working on a problem with a design space of `19! * 12! * 10! * 3!`, which has been fairly average for me of late. If we do the math it rolls out to something on the order of `1.27 × 10^33` possible options. If the computer could run each study in 1 second (it can’t) and ran six in parallel (how Generative Design works), that would take roughly `6.7 × 10^24` years to complete, which is longer than the inevitable heat death of the universe (~`10^14` years). This is kind of like how *Cross Product* or *Space Evenly* (depending on your Generative Design version) works, except the team was smart enough to limit it to 10 steps per slider (meaning you’d need an inconceivable number of sliders to approach a fraction of that design space).
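If you want to check that arithmetic yourself, it’s a few lines of Python:

```python
import math

# Size of the design space described above
space = (math.factorial(19) * math.factorial(12)
         * math.factorial(10) * math.factorial(3))
print(f"{space:.3e}")   # ≈ 1.269e+33 possible combinations

# One study per second, six running in parallel
seconds = space / 6
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.3e}")   # ≈ 6.705e+24 years
```

`math.factorial` keeps everything in exact integers, so the only rounding happens when the result is printed.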

The second option would be to randomly explore some of that space, having the CPU randomly select a handful of input values, and pick out some we like. This is doable in the sense that if you can submit a large enough random sampling you might get lucky; however getting lucky in a space this large is highly unlikely - possible, but not likely. This is how the *Randomize* option works in Generative Design, with the capability to control which random values are selected via the seed option. The *Like This* option is similar to *Randomize*, but limits the values to within 10% of the overall domain (so one less zero on that massive number above).
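Pure random exploration is even simpler to sketch than the genetic loop. Again this is a toy single-slider stand-in of my own, not the actual *Randomize* implementation: draw inputs, keep the best one seen, and never learn anything between draws.

```python
import random

def random_search(fitness, seed, samples):
    """Pure random exploration: keep the best value seen so far.

    There is no feedback between draws, so luck is the only strategy."""
    rng = random.Random(seed)  # the seed controls which random values are drawn
    best = None
    for _ in range(samples):
        candidate = rng.uniform(0, 10)
        if best is None or fitness(candidate) > fitness(best):
            best = candidate
    return best

# Same made-up objective as before, best value at 7.3
best = random_search(lambda x: -(x - 7.3) ** 2, seed=1, samples=100)
```

On one slider, 100 draws will usually land close to the optimum; the catch is that in a `10^33`-sized space the equivalent coverage would need an absurd number of samples.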

The third option is to utilize the genetic algorithm available via the *Optimize* option. By taking a random sampling, learning a bit about which results are good, then taking another random sampling and learning some more, we eventually land on something close to our optimum.

Now in design spaces as large as the one noted above you can safely assume that a true optimum won’t be found (if one even exists). However the goal of generative design isn’t to ‘find THE answer’, but rather to give the human doing the design insights which they wouldn’t otherwise have had.