I’m running through a strange issue.
I have a fairly complex script which usually takes from 1 to 3 minutes to run and it displays some info as output in the Dynamo Player and as a final step in the graph it generates a CSV report.
The issue is that in certain models, once you hit Run in either the Dynamo Player or Dynamo itself, the graph starts running, at some point exports the CSV (which depends on almost everything in the graph having run correctly), and then keeps running indefinitely. So it never displays any outputs in the Player and the only option is to close.
What makes it stranger is that for a colleague it finishes smoothly in all files, while for me it only finishes in some of them, and most of the team sees the same issue.
Do you have any idea why this would be happening when it's clear that everything was already run and calculated (as reflected by the CSV export)?
After you get to the lockup point, what happens if you resize the Revit window?
Haven't tried it. What makes you think it would make a difference?
I will test it and get back to you.
Was working in what appears to be the same version as you (hard to tell with the limited data you’ve provided) and found that helped. I believe it has to do with background processing files and writing out to a CSV leading to a lack of system resources for the graphics pipeline. Resizing the window forces an update.
Thanks for your answer.
I tried resizing both the Revit window and the Dynamo Player window, but it doesn't work. It still keeps running.
Does task manager show a lot of compute or memory in use?
Are there element bindings in the file?
At some point without posting the graph it may become impossible to identify the issue.
Sorry for not posting the graph earlier.
There you go!
And yeah, I checked with Notepad and didn't find any bindings. Memory use was high, but the same is true even if I only have the graph open without running it.
Automatic Quality Check.dyn (2.0 MB)
This .dyn is 61,592 lines long… My system locks up just opening it.
The issue is almost certainly memory consumption causing page faults, to the point that non-responsiveness should be expected - I'm kind of surprised you get stuff into the CSV consistently. I'm not even sure where to begin, but reducing scope seems somewhat obvious. Compacting into DesignScript, custom nodes, or a single Python node is also likely a good call. Moving to another system entirely should also be a consideration - no work is 'lost' here, as you could not have prototyped these actions as effectively in most other tools, but the end result would be something besides a .dyn. Perhaps a Forge app, or a Revit add-in. A Python node can actually help bridge those gaps in the interim.
At a minimum I’d disable preview bubbles (there is a Dynamo setting for this) and delete all the watch nodes but one, and reduce what you show there (rendering all that text is more memory consumption, which you don’t have to spare). If ‘user seeing the outcome’ is the issue, having the ‘report’ open in a text editor would likely be more effective.
That said, for the short term I think breaking it into many graphs is likely best. You can still output to a CSV (or several) for use in a dashboard or other tool, but attempting to keep all of this in one .dyn is asking for stability issues from where I stand.
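As a rough illustration of the "single Python node" idea: several out-of-the-box steps (filter, sort, export) can collapse into one node that writes the CSV in a single pass. This is a minimal sketch with made-up sample data; in a real Dynamo Python node the inputs would arrive through the `IN` list and the result would be assigned to `OUT`, and you would write to a file path rather than an in-memory buffer.

```python
import csv
import io

# In Dynamo this would be: names, areas = IN[0], IN[1]
# Sample data stands in for element names and computed areas.
names = ["Wall-01", "Wall-02", "Floor-01"]
areas = [12.5, 0.0, 30.2]

# Steps that would otherwise be separate nodes, each holding its
# interim list in memory on the canvas:
rows = [(n, a) for n, a in zip(names, areas) if a > 0]  # drop zero-area items
rows.sort(key=lambda r: r[1], reverse=True)             # largest area first

# Write the report in one pass instead of a separate export node.
# io.StringIO stands in for open(csv_path, "w", newline="").
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["Element", "Area"])
writer.writerows(rows)

report = buffer.getvalue()
# OUT = report  # in Dynamo, assign to OUT instead of keeping a variable
```

None of the interim lists here get pinned by a preview bubble, which is where the memory savings come from.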
I hope none of this comes off as a knock against you or Dynamo - the graph is actually immaculately organized. If I still did training sessions as before I’d use this as an example of a well organized graph.
I'll try to wrap my head around this and let you know of any inefficiencies I see later this week, but no promises, as my calendar is pretty booked up and free time is hard to come by since I'll be relocating soon.
Thanks for the comprehensive response.
Don't worry, I'm aware of some of the inefficiencies.
The goal down the line is to have our developer translate this whole workflow into a C# plugin. I was looking for a short-term solution in the meantime so we don't slow down our current production.
I know that breaking it down into separate .dyn files would be more efficient for processing time, but sadly we think most users would forget to run all of the scripts.
Do you think compacting it further, using Python nodes to achieve the same thing I do with multiple OOB nodes, would be more efficient?
I'll try to follow your tips in the meantime. Thanks a lot for your suggestions, and don't worry if you don't get a chance to analyze it further.
Dynamo is designed to be "inspectable at every stage", which refers to nodes holding on to their data. This helps discoverability and creation, but each node here is using memory to do so. If you wrap it all up into Python or DesignScript, you'll only need to port out one set of data (a singleton, a list, or a list of lists), and the other interim functions (that were nodes) won't hold onto their memory.
So short answer == Yes, you would be saving on memory this way!
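A minimal sketch of the difference (plain Python, no Dynamo API assumed): when the interim steps live inside one function, each intermediate list becomes unreachable as soon as the next step has consumed it, whereas on the canvas every node would keep its copy alive for its preview. The function and data here are hypothetical stand-ins.

```python
def quality_check(values):
    # Each interim result exists only while the next step needs it;
    # once 'doubled' has been consumed, Python can reclaim it.
    # As separate nodes, both interim lists would stay in memory.
    doubled = [v * 2 for v in values]
    positive = [v for v in doubled if v > 0]
    return sum(positive)

result = quality_check([-3, 1, 4])  # -> 10
```

The same reasoning applies to a DesignScript code block or a custom node: one output crosses the boundary, and the intermediates inside are free to be collected.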
Reducing the number of nodes on the canvas would be my first task, be it via DesignScript (better), custom nodes (better+), Python (better+), or zero-touch nodes (better++).
Honestly, from what I saw, transitioning to a Python node inside a custom node might be best overall, and it will help that developer pick up the work a bit quicker, as the code would likely be more familiar to them than the nodes.
Thanks a lot! Very helpful.
I've had this question for a while: are frozen nodes still using memory? I tend to leave some groups frozen for WIP, but I wonder if that could also slow down the graph.
Thanks for your insights, Jacob.
I think that Python will be the best option right now.
Hi @JulioPolo - there are a couple of things at play here:
- If any node has already been executed (as in, has data in its preview bubble), that node's results will still be held in memory, regardless of whether it's Frozen or not.
- If any node is in a Frozen state, the Dynamo Engine will not execute that node or any downstream portion, saving execution time.
So to answer your question, freezing a portion of the graph will help the overall graph execution time remain performant, but will not assist with memory overhead issues.
Freezing before any graph execution can help quite a bit, as it keeps anything from being passed into the nodes (they stay in a 'null' or function state). However, the visual representation of a node still takes up memory no matter what, and that amount of memory is always in flux due to the nature of navigating the graph. As such, my general approach is to break the separate tasks into frozen custom nodes (my reasoning: one node displayed with no execution = one minimal hit on RAM; 50 nodes in a group with no execution = 51 hits on RAM), or even do a save-as, delete 90% of the graph, and focus on what I need.