Back-To-Back GD Scripts - Data Best Practice?

Hello Again!
I am wondering what your best practices are for managing data between two major steps in a generative design study. I have an idea for a study that could be either one heavier script or two smaller scripts, but one script would typically be run right after the other to continue the next step of the study. If I go the route of two smaller scripts, what is the best method to pass the data from one study into the next?

The few options I see:

  • Write the results of the study back into Civil 3D and run the next script off the drawing.
  • Write the results of the study into a standalone JSON file and then run the next script off the JSON (see the sketch after this list).
  • Create one large script, leverage the Data.Remember node, then freeze unneeded sections and turn off inputs where they are not needed.
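
For the JSON route, the hand-off can be as small as a write step at the end of study 1 and a read step at the start of study 2. Here is a minimal Python sketch of that pattern; the shared path and key names are hypothetical placeholders, not anything from a real study.

```python
import json

SHARED_PATH = r"C:\GD\study1_results.json"  # hypothetical hand-off location

def write_results(alignment_name, station_offsets):
    """End of study 1: persist the chosen option as plain JSON."""
    results = {
        "alignmentName": alignment_name,    # e.g. a Civil 3D alignment name
        "stationOffsets": station_offsets,  # lists of numbers serialize cleanly
    }
    with open(SHARED_PATH, "w") as f:
        json.dump(results, f, indent=2)

def read_results():
    """Start of study 2: load the prior study's outputs as its inputs."""
    with open(SHARED_PATH) as f:
        return json.load(f)
```

A nice side effect of this option is that the JSON file doubles as an audit trail of exactly what each study handed off.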

How do you guys typically segment these out?

I can describe my scenario and what I’m doing currently.

I have a bigger workflow which is subdivided across 6 scripts, and to integrate the output of the previous file as the input of the next one, two things have come in handy for me.

  1. For geometry: exporting the geometry as a SAT file to a specified location and importing it back from the same path in the next file.

  2. For data: Excel plays the same role here that the SAT file plays for geometry; I write the data to a specific path and fetch it back (a path-convention sketch follows below).
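
One way to make this fixed-path hand-off less error-prone is to generate the paths from a single convention, so the exporting script and the importing script can never disagree. A small sketch, assuming a shared pipeline folder and stage-numbered file names (both hypothetical); in Dynamo, the resulting paths would feed nodes such as Geometry.ExportToSAT / Geometry.ImportFromSAT and the Excel import/export nodes.

```python
import os

PIPELINE_DIR = r"C:\GD\pipeline"  # assumed shared working folder

def stage_path(stage, kind):
    """Return the hand-off file for a given pipeline stage.

    kind is "sat" for geometry or "xlsx" for tabular data."""
    return os.path.join(PIPELINE_DIR, "stage{:02d}.{}".format(stage, kind))

# Script 3 exports to stage_path(3, "sat") and script 4 imports from the
# very same call, so a path is never typed twice across the six scripts.
```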

I’m sure there are many other ways, and I would also like to hear others’ perspectives. That was all from my side.

Regards,
BM


Maybe use the Data.Remember node to pass the data through the scripts?


This is a REALLY good question. And like many questions in this space, the answer is: "Well, that depends on how well you know the content you are designing, the quality of your graph, your end user's technical capabilities, the domain of each of the tasks, the capacity to 'find optimal', the degree to which abstraction can be utilized, and likely more stuff I am not thinking of. But the fewer times you hit the 'create study' button, the better."

The tendency of most in the industry is to think of each 'decision' involved with design in a vacuum, but nothing exists in that context. So while this makes studies easier to build, it also means subsequent studies are held hostage by the static constraints set in earlier studies, with the earliest study having the most impact. So when possible, run everything at once. Doing so requires the most expertise, technical skill, best code, a manageable domain, and the most explorable design topic. That rarely happens, so an intermediate output followed by subsequent studies is often a must.
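
A toy numeric sketch (my own illustration, not from this thread) of that hostage effect: the objective below rewards pushing y up, but only if x moves with it, so a staged search that freezes x from an early study never finds the joint optimum.

```python
def score(x, y):
    # Rewards large y, but only when x keeps up with it.
    return -(x - y) ** 2 + y

domain = range(11)  # both variables range over 0..10

# Staged: study 1 picks x with y at a default of 0; study 2 then picks y
# with x frozen -- the "static constraint" inherited from study 1.
x1 = max(domain, key=lambda x: score(x, 0))           # x1 = 0
y1 = max(domain, key=lambda y: score(x1, y))          # y1 = 0
print("staged best:", score(x1, y1))                  # 0

# All at once: every (x, y) pair is explored in a single study.
x2, y2 = max(((x, y) for x in domain for y in domain),
             key=lambda p: score(*p))                 # (10, 10)
print("joint best:", score(x2, y2))                   # 10
```

The staged search scores 0 while the joint search scores 10; the only difference is whether x was decided in a vacuum.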

When this has to happen, I like to provide the data in the same format the designers are used to, as it reduces the amount of change we have to manage. So if users are happiest looking at the locations of the cabins in the context of a 2D AutoCAD drawing, give them that. If they usually have Civil 3D objects, give them those. If they want 3D geometry in FormIt, go that route. If they want direct shapes in Revit, build that. If they want data in Excel, export to that. And if they want data in Dynamo (cool, they sound like my kind of users!), give them that.

The important bit is to reduce this frequency: each time you bake an intermediate output into the generative workflow, the optimization engine has less insight, because you have set something in stone. To get the best results you want to optimize all the variables at once (thereby making no decisions in a vacuum); that is how you get the most complete set of results in Generative Design.

Does this mean you need to prepare yourself (and your users) for 100-parameter studies with population sizes exceeding 800 and generation counts exceeding 1000? Well no, certainly not yet. But maybe someday. For now, just concentrate on building your graphs to optimize the major aspects of the design, and plan on evolving the process in steps: try to capture the major milestones, build your output in such a way as to allow the designers to manually update portions as they go, and add new evaluations based on those results.
