Node to Code vs. Custom Node, Speed

Does anyone know if converting nodes to code results in faster performance? Is anyone aware of any automated optimizations inherent in the node-to-code procedure?



Converting nodes to code does not offer any performance improvements. Code in code block nodes and your graph (nodes and connectors) both get converted to an intermediate form called abstract syntax trees (ASTs). These are what the DesignScript virtual machine executes. The ASTs generated from each are the same, hence the execution in the VM is the same.
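As a loose analogy in Python rather than DesignScript, two different surface representations of the same logic can parse to an identical AST, which is why the execution engine can't tell them apart:

```python
import ast

# Two different textual layouts of the same logic.
compact = "total = (1 + 2) * 3"
spaced = "total=(1+2)*3"

# Both parse to structurally identical abstract syntax trees.
tree_a = ast.dump(ast.parse(compact))
tree_b = ast.dump(ast.parse(spaced))

print(tree_a == tree_b)  # → True
```

The same principle is what the reply above describes: once the graph or the code block has been lowered to ASTs, the VM only ever sees the ASTs.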

I hope this helps.

My experience on this is the following:

I recently did a thorough comparison between two identical solutions, one with nodes and the other in code. The node solution ended up being close to 200 nodes and over 450 wires, with a file size of ~200 KB, while the code solution was just two large code blocks with a file size of about 40 KB. Both of those sound tiny; however, keep the following in mind: a definition file is a plain text file with XML formatting, and it stores the position of every single node on the canvas and all of the wires connected to that node's inputs and outputs.
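To get a feel for why the graph file is so much bigger, here's a toy Python sketch (the element names and sizes are made up, not the real .dyn schema) comparing the bookkeeping a graph file carries per node against plain code text:

```python
# Toy illustration: a graph file must record position and wiring metadata
# for every node, while a code file only stores the logic itself.

def graph_size(n_nodes):
    """Approximate byte count of an XML-style graph with n_nodes nodes."""
    node = '<Node guid="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" x="0.0" y="0.0"/>'
    wire = '<Wire start="..." start_index="0" end="..." end_index="0"/>'
    return n_nodes * (len(node) + len(wire))

def code_size(n_lines):
    """Approximate byte count of the equivalent code-block text."""
    return n_lines * len("result = Some.Operation(input);")

# ~200 nodes' worth of metadata vs a few dozen equivalent code lines:
print(graph_size(200), "bytes vs", code_size(40), "bytes")
```

The per-node overhead is fixed and unavoidable in the graph representation, which is why the ratio only gets worse as definitions grow.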

Every time you open a definition, it has to be rebuilt from scratch: first the nodes have to be laid out in the right places, and then all the connecting wires have to be created. That meant the node solution took over 15 seconds just to open, while the code solution opened almost instantly.

The node solution felt sluggish during navigation, while the code solution remained snappy at all times. The node solution also took noticeably longer to close down.

I didn’t time the two, but the code solution felt like it ran marginally faster, at least on the initial execution; afterwards, both solutions took about the same time to re-run.

A single line of code can carry as much information as five or more nodes. So while it takes one cycle to go through that single line of code, it takes multiple cycles to chain the same logic when it is split across separate nodes. My theory here is that it takes longer to convert large numbers of nodes to ASTs because, while a code block’s logic executes linearly downwards (line by line), a node graph’s execution order is much more obfuscated by the many inter-connected wires that interrupt the flow. It might be placebo, so I’ll let the experts confirm this part. Here’s a quick example of how a few wires look inside a Dynamo file:


Note that each node has a unique id, and each in and out port has a sequential number assigned. A node’s output first has to be paired up across multiple identifiers before the correct input on a second node can be zeroed in on.
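A minimal Python sketch of that pairing step, using a hypothetical connector fragment (the tags and attributes are illustrative, not the actual .dyn schema):

```python
import xml.etree.ElementTree as ET

# A made-up fragment mimicking the kind of wiring metadata a graph file
# stores; the attribute names here are illustrative only.
doc = """
<Connectors>
  <Connector start="node-a" start_index="0" end="node-b" end_index="1"/>
  <Connector start="node-a" start_index="1" end="node-c" end_index="0"/>
</Connectors>
"""

# To rebuild the graph, every output port (node id + port index) has to be
# matched with the corresponding input port on the receiving node.
wires = [
    ((c.get("start"), int(c.get("start_index"))),
     (c.get("end"), int(c.get("end_index"))))
    for c in ET.fromstring(doc).findall("Connector")
]

for out_port, in_port in wires:
    print(out_port, "->", in_port)
```

Each of the 450+ wires in the node solution above needs this kind of lookup on open, which is part of why the graph file is slow to rebuild.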

Something else to keep in mind is that code can be modified a lot faster. If you need to take a bunch of nodes and move them closer to the start of the definition, you have to first disconnect all of the group’s inputs and outputs, possibly shift all the other nodes to make room, move the group, and then finally re-connect all of its inputs and outputs. In a code solution, you simply cut the few lines of code and paste them a bit higher up in your code block.

However, nodes do have one huge advantage. When a particular node misbehaves, its colour changes immediately and you can instantly see where a problem occurs. When your whole code is in a single code block and the whole node turns red without any other indication, you’ll have to go through the whole thing line by line. I think that code blocks are in dire need of better debugging options - at the very least, they should tell you at which line an error occurred (just like in every other modern IDE?).
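As a workaround inside a Python node (rather than a code block), you can at least recover the failing line number yourself with the standard traceback module - a minimal sketch:

```python
import sys
import traceback

# A small script with a deliberate error on its third line.
code = """\
a = 1
b = 2
c = a / 0
"""

try:
    exec(code)
except Exception:
    # Pull the line number of the failing statement out of the traceback.
    tb = sys.exc_info()[2]
    line_no = traceback.extract_tb(tb)[-1].lineno
    print("Error on line", line_no)  # → Error on line 3
```

Wrapping the body of a Python node in a try/except like this gives you the line-level error reporting that code blocks currently lack.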


So to summarize, large node definitions are:

  • slower to open, close and navigate

  • slower to modify

  • possibly slower to execute initially

  • definitely much easier to debug.


Thanks Ian for the reply. And thanks Dimitar, your statements make a lot of sense too. A visual graph IS usually easier to debug, and if I’m going for ninja speed on large quantities, a clean code block is likely an upgrade.


With the TuneUp extension, that can now be tested more accurately.

In my experience, there are three representative ways to speed things up:

  1. Use numbers rather than geometry.
  2. Don’t create the same data multiple times.
  3. Use Python.
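Points 1 and 2 can be illustrated with a small plain-Python sketch (the `expensive_profile` function is a made-up stand-in for a costly geometry computation, not a Dynamo API):

```python
import math
from functools import lru_cache

# Point 1: work with plain numbers for as long as possible and only build
# heavier objects (geometry, in Dynamo's case) at the very end.
angles = [i * math.pi / 180 for i in range(0, 360, 10)]
coords = [(math.cos(a), math.sin(a)) for a in angles]  # cheap numeric work

# Point 2: don't create the same data multiple times - cache it instead.
@lru_cache(maxsize=None)
def expensive_profile(radius):
    # Stand-in for a costly computation you don't want to repeat.
    return [(radius * x, radius * y) for x, y in coords]

first = expensive_profile(5.0)
second = expensive_profile(5.0)  # served from the cache, not recomputed
print(first is second)           # → True
```

The same habits carry over to graphs: keep the heavy geometry nodes at the very end of the flow, and reuse an upstream result rather than rebuilding it in several branches.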

Another thing to be cognizant of with nodes is that the geometry preview and watch settings on those nodes can dramatically slow things down, as each node can replicate the geometry or data. You can mitigate this by disabling the preview on each node, but a single code block makes that effort a lot more manageable.