Dynamo performance based on PC components

Hey all,

This may be a slightly off-topic question for some.
Currently I'm working on my thesis and planning to compare the runtime of my scripts on different devices, to see whether the specs of the PC make a big difference in runtime.

I am wondering whether any of you have experience with performance getting better (or maybe worse) after upgrading your PC.
Besides this, my most important question is: which component makes the biggest difference in performance, if there is any difference at all?

After finishing my tests and gathering your answers, I hope to use this information in the recommendations section of my thesis.
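
A minimal sketch of one way to time a script body inside a Dynamo Python node, assuming the CPython 3 engine (the workload shown is a placeholder, not an actual script):

```python
import time

start = time.perf_counter()

# Placeholder workload -- substitute the script body you want to benchmark.
result = sum(i * i for i in range(1000000))

elapsed = time.perf_counter() - start
# Dynamo Python nodes hand their result back through OUT.
OUT = "elapsed: {:.3f} s".format(elapsed)
```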

I thank you in advance.

@manonio ,

I've recently been reading about this.

I think the biggest problem is the single-core tasks performed by Revit. Since 2006, Revit has hardly changed apart from the interface… Regarding Dynamo, I like it very much, but it is also more or less "duct-taping" tasks together…
I hope one day Revit and all the cloud services will be open source…

pyRevit could also be a solution, or at least worth investigating with regard to performance.

KR

Andreas

Lots of things can impact speed. Generally, though, the component that benefits most from "more power" is memory. Much of that can be explained readily by this informal, impromptu list of performance hits.

Size of dataset: Making 1 point takes n time, but making 10,000,000 points will take more than 10,000,000 * n time: at some point you cross the limit of available resources, and page faults start to become frequent (see the first sketch after this list for one way to measure this).
Data types: A mesh of a cube is slower than a cuboid, because the data which defines the first (every triangle, and all the points which make up those triangles) is much larger than the data which defines the second.
Node count: More nodes on canvas means Dynamo has more data points to manage.
Action complexity: Some things are quick and easy (e.g. pulling the perimeter boundaries of a room), but others are more complex (e.g. adding a new point into a toposurface) and will take longer. Generally speaking, if you feel it would take you longer to do by hand, it will take the PC longer.
Bindings: The least understood feature of Dynamo. These can require your graph to execute twice, packing everything it did before into memory before it executes the new run. See the topic and AU session I wrote on "element binding in Revit" to learn more.
Marshaling between host and Dynamo: Every time you move data from the host into Dynamo, you need to maintain a reference back to the object in Revit. This is why I advocate "going to the well once" and filtering the data set down, rather than using multiple selection methods on large data sets.
Data conversion: Any time you convert data from type A to type B you have an added step; when possible utilize the data you’ve got.
Collection type: Dictionaries are faster than lists for lookups, since a hash lookup doesn't need to scan every item (see the second sketch after this list).
System configuration: If we reduce the load on the system, things run faster; similarly, if we asked you to sing while doing work, you'd work less effectively. Things like disabling the geometry preview and turning off node previews will reduce runtime.
Code efficiency: When you do things like load all of the Revit API into your codebase (e.g. from Autodesk.Revit.DB import *), you slow things down; that's a lot of classes to put into the mix if you don't need all of them.
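
To make the dataset-size point concrete, here is a minimal pure-Python sketch (no Revit or Dynamo required) that measures the average per-point cost as the count grows; on a memory-constrained machine you may see that cost climb at the largest size:

```python
import time

def time_points(n):
    """Build n (x, y, z) tuples and return the average seconds per point."""
    start = time.perf_counter()
    points = [(i, i, i) for i in range(n)]
    elapsed = time.perf_counter() - start
    return elapsed / n

# The largest size may stress memory on smaller machines -- which is the point.
for n in (1_000, 100_000, 10_000_000):
    print("{:>12,} points: {:.0f} ns each".format(n, time_points(n) * 1e9))
```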
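
And for the collection-type point, a quick timeit comparison of membership tests; the dictionary's hash lookup stays roughly constant while the list search scans every element:

```python
import timeit

n = 100_000
data_list = list(range(n))
data_dict = {i: True for i in range(n)}

# Worst case for the list: the probed key sits at the very end.
list_time = timeit.timeit(lambda: (n - 1) in data_list, number=1_000)
dict_time = timeit.timeit(lambda: (n - 1) in data_dict, number=1_000)

print("list membership: {:.4f} s".format(list_time))
print("dict membership: {:.4f} s".format(dict_time))
```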

The biggest impact is usually a combination of dataset size and node count. Optimizing code for memory consumption is also key.

Hi, @manonio!

As Dynamo is not optimized for multithreaded parallel calculations, I'd get a CPU with the highest single-threaded performance. And remember to "cache" data from Revit with the Data.Remember node from Generative Design when you're debugging your scripts. Also, use the Bimorph package wherever possible; it's very fast.

Somewhat related:

@john_pierson ran some experiments comparing execution speed between OOTB, C#, and Python nodes:

For Python scripts, the loading of libraries takes a lot of time, as he found later: https://twitter.com/johntpierson/status/1504474996172681227

So don’t forget to measure multiple times, not just the first run!
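
You can reproduce that effect in plain Python: the first import of a module pays the full load cost, while repeat imports just hit the sys.modules cache. A minimal sketch, with json standing in for a heavier library:

```python
import importlib
import time

def time_import(name):
    start = time.perf_counter()
    importlib.import_module(name)
    return time.perf_counter() - start

# The first import pays the disk-load cost; the second hits the sys.modules cache.
print("cold import: {:.3f} ms".format(time_import("json") * 1000))
print("warm import: {:.3f} ms".format(time_import("json") * 1000))
```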

Interesting technical aspect to note on this… first runs with Python are always going to be slower than subsequent runs, as they have to spin up an extra virtual machine to interpret the code, while all the other Dynamo nodes have their machine started up when you launch Dynamo.

This is particularly impactful for people who plan on using Dynamo Player for most runs.

So yes, test multiple runs, but perhaps put a bit more weight on the first one.
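
If you want numbers for both audiences, a small harness can report the cold first run and the warm steady state separately. A sketch, assuming the work can be wrapped in a function (the lambda here is a placeholder workload):

```python
import statistics
import time

def measure(fn, runs=5):
    """Return (cold_first_run, warm_average) timings in seconds for fn."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return times[0], statistics.mean(times[1:])

# Placeholder workload standing in for a graph or script body.
cold, warm = measure(lambda: sum(i * i for i in range(500_000)))
print("first run: {:.4f} s, warm average: {:.4f} s".format(cold, warm))
```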
