I’m working with a script in Dynamo that takes a considerable amount of time to execute, which got me wondering: is it possible to implement parallel processing in Dynamo? I’ve come across a few posts here on the forum, but the details still aren’t very clear to me. Additionally, is GPU processing feasible in Dynamo?
Has anyone successfully managed to implement these approaches? If so, is it possible to wrap this functionality in a custom node or call it within a Python script? Any insights or examples would be greatly appreciated.
I’m exploring these options for a model that contains over 5,000 solids and 40,000 surfaces, where I’m trying to find intersections between each solid and all the surfaces.
Assuming you are converting to Dynamo geometry, running roughly 200,000,000 intersection tests (5,000 solids × 40,000 surfaces), then filtering all of those results by bool mask, and then making 5,000 additional edits… no amount of parallel processing will offset the real pain you are feeling here. Instead you need to reduce the memory consumption and data processing by writing your code differently.
If that is all correct, then as a start you could use a FilteredElementCollector in Revit for the geometry intersection and reduce it to an instant’s worth of runtime, without the memory bottleneck, assuming your loop is well written. Doubly so if you manage the rest of the content in the same loop. This could be done via Python (where you explicitly control what is kept active and can reuse variables), C# (where garbage collection is more automated), or DesignScript definitions and custom nodes (as only the result is stored in memory in either of those methods).
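As a rough illustration of that collector approach, here is a minimal Python sketch for a Dynamo Python node. It assumes the driving solids come from Revit elements fed into IN[0] (so their API solids can be pulled from the element geometry) and it collects every model element that intersects each solid; the input structure and the helper name get_solids are placeholders for your own setup, not a definitive implementation.

```python
import clr
clr.AddReference('RevitAPI')
from Autodesk.Revit.DB import (
    FilteredElementCollector,
    ElementIntersectsSolidFilter,
    Options,
    Solid,
)

clr.AddReference('RevitServices')
from RevitServices.Persistence import DocumentManager

doc = DocumentManager.Instance.CurrentDBDocument

# IN[0]: the Revit elements whose solids drive the intersection test (assumption).
source_elements = UnwrapElement(IN[0])
opts = Options()

def get_solids(element):
    # Pull the non-empty API solids out of an element's own geometry.
    # (Geometry nested inside GeometryInstance objects is ignored here for brevity.)
    solids = []
    geo = element.get_Geometry(opts)
    if geo is None:
        return solids
    for obj in geo:
        if isinstance(obj, Solid) and obj.Volume > 0:
            solids.append(obj)
    return solids

results = []
for element in source_elements:
    hits = []
    for solid in get_solids(element):
        # Let Revit's native filter do the intersection test; only the
        # matching elements ever come back into Dynamo, so nothing else
        # is kept in memory between iterations.
        # Note: the source element itself will also pass the filter;
        # exclude its Id from the hits if that matters for your workflow.
        intersect_filter = ElementIntersectsSolidFilter(solid)
        collector = (FilteredElementCollector(doc)
                     .WhereElementIsNotElementType()
                     .WherePasses(intersect_filter))
        hits.extend(collector.ToElements())
    results.append(hits)

OUT = results
```

Narrowing the collector further (for example with OfCategory, so only the surface-bearing elements are ever tested) is where most of the remaining savings would come from.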
The ‘best’ path forward can’t really be shared without real context on what you are building.
Reading and writing to the Revit file will always be limited to one thread at a time.
Some tasks (a good many, actually) will suffer from parallelization.
Anything where a change to element A will impact element B means parallelizing work on A and B is a no-go.
Some contexts (e.g. Generative Design, Dynamo as a Service) will not permit parallelization, as they already make use of it or don’t have the hardware available.
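That said, if part of your workflow really is independent, pure computation (no Revit API calls, no Dynamo/ASM geometry), that part can be fanned out to a thread pool from inside a Python node, while everything that touches the document stays sequential on the main thread. A minimal sketch of the pattern, assuming a hypothetical heavy_pure_calc function working on plain Python data; in the CPython3 engine, threads only really pay off when that work releases the GIL (I/O, or calls into .NET/native libraries), so measure before relying on it.

```python
# Minimal sketch: parallelize only the pure-Python part of the workflow.
# heavy_pure_calc is a hypothetical, thread-safe placeholder that touches
# neither the Revit API nor Dynamo/ASM geometry objects.
from concurrent.futures import ThreadPoolExecutor

def heavy_pure_calc(values):
    # Stand-in for CPU work on plain numbers/strings only.
    return sum(v * v for v in values)

items = IN[0]  # e.g. lists of numbers prepared upstream in the graph

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(heavy_pure_calc, items))

# Any Revit reads/writes based on these results must still happen here,
# sequentially, on the main thread (typically inside a single transaction).
OUT = results
```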