1.3 speed

Is it just me, or does 1.3 seem insanely slow compared to 1.2?

1 Like

Hi @Daniel_Hurtubise

Have you tried restarting your PC?

1 Like

I haven’t seen a noticeable difference. I also haven’t run anything too crazy yet. :thinking:

I stayed with 1.3 for a few days. Just connecting nodes got me the spinning wheel, and no John, it wasn’t anything crazy ahahahha

1 Like

It happens with me sometimes. The solution is always to restart Revit, and sometimes the PC.
BTW, it also happens with the previous versions.

Performance has always been an issue with Dynamo when working with large data sets, mainly due to the functional way in which a Dynamo graph operates. You have to remember that data is never mutated. Instead, it is always copied between nodes.

When you input something into a node, Dynamo’s backend creates a duplicate of it, performs the node’s function on that duplicate, and finally returns the changed copy of your original input. That means that if you have ten nodes in your graph, by the end of the execution your memory footprint could easily have increased tenfold. To top it all off, all of that extra data must be managed at all times, which is no small task either and can quickly add overhead to a graph’s overall performance.
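To make the idea concrete, here is a rough plain-Python sketch of that copy-on-every-node behaviour - purely illustrative, not Dynamo’s actual engine code, and the `node` helper is made up for the example:

```python
# Hypothetical sketch of functional, copy-based dataflow (not Dynamo internals):
# every "node" works on a duplicate of its input and returns a brand new list,
# so each step adds another full-size list to memory.
import copy

def node(data, fn):
    """Mimic a node: duplicate the input, transform the duplicate, return it."""
    duplicate = copy.deepcopy(data)       # defensive copy, input never mutated
    return [fn(x) for x in duplicate]     # the result is yet another new list

values = list(range(1_000_000))           # the original input
step1 = node(values, lambda x: x + 1)     # copy #1
step2 = node(step1, lambda x: x * 2)      # copy #2
step3 = node(step2, lambda x: x - 3)      # copy #3
# After only three nodes we are holding four million values in memory at once.
```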

An ideal and perfectly functioning backend would track the dataflow for changes, invalidate the old copies and free up their resources at a convenient time. However, that’s easier said than done, and if you observe Dynamo’s memory usage during normal operation, you’ll see that this happens very rarely, if at all.

Simply put, Dynamo is a bit of a memory hog, and to make things even worse, it has to run on top of Revit - which, I think we would all agree, is not an application known for its efficiency. That means that Dynamo has to share the memory pool and AppDomain of the currently running Revit instance.

You can test all of the above fairly easily: create a simple number range with 1 million values, start applying some basic arithmetic operations to it, and observe the memory spikes after each new node is connected.
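If you want a rough stand-alone approximation of that test outside of Dynamo, the plain-Python sketch below uses `tracemalloc` to show the same pattern of allocations piling up step by step (inside Dynamo you would simply watch the Revit process in Task Manager instead):

```python
# Approximate the "1 million values plus a few arithmetic nodes" test in plain
# Python and watch the traced memory grow as each new list is created.
import tracemalloc

tracemalloc.start()
values = list(range(1_000_000))           # the Number Range: 1 million values
plus   = [x + 1 for x in values]          # an "add" step
times  = [x * 2 for x in plus]            # a "multiply" step
minus  = [x - 5 for x in times]           # a "subtract" step
current, peak = tracemalloc.get_traced_memory()
print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")
tracemalloc.stop()
```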

Where things go awry is when you start changing inputs and deleting nodes. You’d expect the memory to eventually go down at some point, but unfortunately that doesn’t seem to happen often enough.

I’ve found that avoiding the use of preview bubbles and watch nodes when you have really long lists helps a lot. It seems like their current implementations get bogged down when asked to display a lot of information, resulting in huge memory spikes.

In the end, if your dataset is so large that it causes Dynamo and Revit to grind to a halt, this might just not be the right platform for the task, because it simply wasn’t designed with such a use case in mind in the first place. You should either try an alternative approach or limit the scope of your actions.

13 Likes

Thanks for the explanation Dimitar.
I used the same dataset with 1.2.1 and Dynamo was acting faster, but I’ll definitely keep that in mind :slight_smile:

@Dimitar_Venkov If a group of nodes is reduced using “Node to Code”, does this have any impact on performance?

I think that you might have gained some improvements in the 0.7x days with a single code block instead of individual nodes.

However, the last time I tested this with a more recent version, I did not find any measurable increase in speed.

1 Like

The performance of Dynamo does seem quite unpredictable - for example:

There is always this:

1 Like

@Dimitar_Venkov Thanks!

@Andrew_Hannell I agree there is something nebulous for me about the speed. My experience is that the speed is not necessarily linear either, where 100 elements would take 10 times longer than 10, and 1,000 elements would take 10 times longer than 100. Instead, there seems to be a threshold after which the program takes 20-50 times longer. I am guessing it has to do with RAM and processor capacity, because I have regularly hit over 90% on both when running some scripts.

It is RAM and processor stuff. Open Task Manager, pull up the Performance tab, and watch how the CPU sits with one core at max capacity until your graphs finish running. Meanwhile your memory goes up, and up, and up, and up… the only way I’ve seen memory drop is closing Dynamo. As each node is worked through, another spike hits the RAM. Eventually you reach a point where your RAM is exceeded and you dive into the land of the scratch disk, which appears to require more processing power to keep track of what is written where… all the while most users are just looking at a pinwheel while mumbling “I hope this works, I hope this works” under their breath over and over again until it finishes.
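If you would rather log those numbers than stare at Task Manager, here is a small sketch that polls the Revit process from a separate Python console - it assumes the third-party `psutil` package is installed and that only one Revit.exe is running:

```python
# Poll the Revit process and print its memory and CPU use once a second,
# roughly what the Task Manager Performance tab shows. Assumes psutil is
# installed; run this outside of Revit while the graph is executing.
import psutil

revit = next(p for p in psutil.process_iter(["name"])
             if p.info["name"] and p.info["name"].lower() == "revit.exe")

for _ in range(60):                               # sample for about a minute
    mem_mb = revit.memory_info().rss / 1e6        # resident memory in MB
    cpu_pct = revit.cpu_percent(interval=1.0)     # CPU % over a 1 s interval
    print(f"Revit memory: {mem_mb:,.0f} MB, CPU: {cpu_pct:.0f}%")
```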

I can echo what @Dimitar_Venkov said: watch nodes and preview bubbles are absolute hogs. I have also made use of my fair share of combined code blocks to reduce computation times (RAM usage isn’t drastically decreased, but it’s still noticeably better - which makes sense, as it’s one calculated result being added to RAM instead of one set of values for each node).
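A loose Python analogy for why the combined code block helps (just an illustration of the idea, not what Dynamo does internally): the node-by-node style keeps every intermediate list referenced at once, while a single combined expression only materialises the final result.

```python
values = list(range(1_000_000))

# Node-by-node style: three intermediate lists stay alive at the same time.
a = [x + 1 for x in values]
b = [x * 2 for x in a]
c = [x - 3 for x in b]

# Combined "code block" style: one pass, only the final list is materialised.
combined = [(x + 1) * 2 - 3 for x in values]

assert c == combined   # same values, far less intermediate data kept around
```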

It also feels as if Node to Code output is less efficient than it could be, so I have been writing most of my code blocks myself. It makes it harder to teach coworkers (“wait, what did you type there?”), but it feels faster in the end. Python feels faster still, but I’m too green to even attempt a good solid test.

Has anyone noticed an increase in performance when using the Dynamo Player? I haven’t had a chance to really push it yet (just starting my first Revit 2017 project this week), but I’m hopeful for another performance boost here - that it’s basically running the script entirely on the processor, as if it were just one giant code block. Otherwise tower jobs are going to complain about having to restart Revit every time they run a big script. Blah.

Here at the office there is an even weirder situation…

Two computers running the same script:
Same project file
Same Dynamo script
Same computer specs

One is ready after 5 minutes, while the other has to run it overnight because of the long wait…

How is this even possible?

Can you post a screenshot of the script and a screenshot showing all of the running processes? My guess is that something else happening on the system is causing a complete lack of resources before the script is run.

Is the script you’re running on both PCs stored in a network environment?

Yes, they are. Not a good idea?