Two questions here, the first a little more intuitive than the last. First, I have not been able to find a way to interrupt a Dynamo execution once it begins. Is there some simple keyboard command or button I can click to stop a run? For clarification, my program takes at least a handful of minutes to compute: lots of reading from Revit and Excel, extensive comparison and list manipulation, and finally writing back to Excel, with tens of thousands of sub-lists being manipulated, mainly through Python nodes I have coded, whereas I have used all out-of-the-box Dynamo nodes to read from and write to Excel and Revit.
That leads me into my next question: any advice on how to decrease a program's run time? Is having more or fewer custom nodes better for decreasing run time? (I do not care how messy my workspace gets; I'd rather have shorter compute times than an organized workspace.) The Python custom nodes I write are anywhere from 100 to 2,000 lines of code each. Theoretically, let's say a 1,000-line Python custom node performs task Z. Would my one Python node compute faster than, say, 50 Dynamo nodes doing tasks a, b, c, d, e, etc. that eventually complete task Z?
I believe the consensus is that code is faster than nodes. DesignScript and Python are the way to go for faster computing. Also, break major tasks into multiple graphs; running several manageable graphs tends to be faster than running one big all-encompassing graph.
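As a rough illustration of why one code node can beat many small nodes, here is a hypothetical sketch in plain Python (not the Revit API, and the function name and load values are made up): doing the filter, convert, and sum steps in a single pass avoids materializing the intermediate lists that separate Dynamo nodes would each have to store.

```python
# Hypothetical example: one pass through the data versus several
# node-by-node stages, each of which would store its own output.

def load_summary_single_pass(loads, min_va=0):
    """Filter, convert, and total circuit loads in one traversal."""
    total = 0.0
    kept = []
    for va in loads:
        if va >= min_va:          # stage a: filter small loads
            kw = va / 1000.0      # stage b: convert VA -> kW
            kept.append(kw)       # stage c: collect
            total += kw           # stage d: running sum
    return kept, total

loads_va = [1200, 300, 4500, 800]
kept, total = load_summary_single_pass(loads_va, min_va=500)
```

The four "stages" would be four nodes in a graph, each producing (and caching) its own output list; in one Python node only the final result is stored.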
Thanks for the response. Would you mind explaining what you mean by running multiple manageable graphs being faster than one big one? For your reference, my code reads from Revit and reads anywhere from 10 to 150 sheets from one Excel file (the number of referenced sheets changes dynamically based upon which Revit project I'm working with; the details as to why are a little too complex for the nature of this forum).
That might not be something you want to break up. But I believe I've heard that splitting up tasks makes Dynamo run faster, as the computations tend to build on each other. Dynamo keeps track of all changes at all times, so the more individual functions in a graph, the slower the graph runs.
For example: If you had a graph that read information from a model, made a bunch of modifications, exported it to Excel, imported it back into Dynamo, then updated the parameters on thousands of elements it might be slow. It would actually be faster to split each of those tasks into its own graph.
Agh. I think I’m tracking now… this is the first time I’ve used Dynamo, so I’m only slightly familiar with its terminology. To clarify, when you say “it would be better to split each task into its own graph,” is a graph a new .dyn project? Or is “its own graph” referring to a .dyf custom node file?
Thanks, I’ve stumbled across the “graph” terminology a few times throughout other forums and my understanding has been a bit hazy until now. So how exactly would one go about linking an output of one graph to the input of another? Would the Dynamo Player be the key to bridge the gap? I have only seen it on my ribbon, I’ve never looked into the Dynamo Player functionality, so forgive me if that is a stupid question
Not a stupid question - a very important one actually. You can’t really connect the inputs/outputs. You have to break your graph at reasonable points. Usually this involves exporting to Excel or committing changes to Revit. Then the next graph in the process reads the information again from either Excel or Revit. Rather than stringing these graphs together with one output going to the next input, you have to consider them as totally separate graphs that can just pick up where the last one left off.
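A hypothetical sketch of that "pick up where the last one left off" pattern, in plain Python with a CSV file standing in for the handoff (in actual Dynamo you would use the out-of-the-box Excel nodes, or commit values to Revit parameters; the file name and data below are invented):

```python
import csv
import os
import tempfile

# Graph A ends by persisting its results; graph B starts by
# reloading them. The two graphs never connect directly.

def graph_a_export(rows, path):
    """End of graph A: write intermediate results to disk."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

def graph_b_import(path):
    """Start of graph B: reload the results and continue."""
    with open(path, newline="") as f:
        return [row for row in csv.reader(f)]

path = os.path.join(tempfile.gettempdir(), "panel_loads.csv")
graph_a_export([["Panel-A", "1200"], ["Panel-B", "4500"]], path)
rows = graph_b_import(path)
```

The point is that the file (or the Revit model itself) is the only contract between the two graphs, so each one can be run, debugged, and re-run independently.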
I see… That seems to be a pretty big flaw in the Dynamo system… with a database as large as Revit being such a big aspect of Dynamo, you’d think they would include some run-time-optimizing features in the out-of-the-box capabilities… The ability to link up multiple Dynamo projects without having to open a bunch of graphs running separately would seem like a simple addition that could dramatically decrease run time. Anyways, thanks so much for the help, it has been very insightful!
You have to remember this isn’t all Dynamo’s fault. Running that many computations in any program is going to take time. Just think of if you were doing all of this manually in Revit… how long would it take? Revit still has to do all the work in the model.
Very good point - I’m taking a full day or two… maybe more depending on the project… of mid-level electrical engineering work time, and Dynamo turned it into about 5 minutes of processing time. Pretty amazing software, especially seeing as I’ve never coded a thing in my life other than an entry-level C++ programming class my freshman year of college years ago.
Generally speaking, if you can reduce everything to a single node your graph will produce the fastest run time. Two nodes, slightly slower. Three nodes… well, you get the idea. Each output has to be stored in RAM or on the scratch drive, so fewer outputs is better.
The next aspect to keep in mind for reducing time is reducing the complexity of the calculations. Pulling a value which already exists in the database will always be faster than re-calculating the data from scratch. Rooms are one good example of this. Why query boundaries, create surfaces, and then ask for the area when the room already stores that data? Better yet if you’re only looking for the area of rooms on a certain level, filter by the value of the level first. And if you only want a particular wing on a certain level, filter by both of those before you ask for the area.
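An illustrative sketch of that filter-first idea, in plain Python rather than the Revit API (the room dictionaries below are invented stand-ins; in Revit the area, level, and wing values already exist as parameters on the element):

```python
# Narrow by cheap stored values (level, wing) BEFORE reading the
# value you actually want (area), and never re-derive data the
# database already stores - e.g. don't rebuild room boundaries and
# surfaces just to compute an area Revit has already computed.

rooms = [
    {"name": "101", "level": "L1", "wing": "East", "area": 250.0},
    {"name": "102", "level": "L1", "wing": "West", "area": 300.0},
    {"name": "201", "level": "L2", "wing": "East", "area": 275.0},
]

def areas_for(rooms, level, wing):
    """Filter by level and wing first, then read the stored area."""
    return [r["area"] for r in rooms
            if r["level"] == level and r["wing"] == wing]

east_l1 = areas_for(rooms, "L1", "East")
```

The same ordering applies in a graph: put the cheap filter nodes upstream so the expensive lookups only ever see the elements you care about.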
Stay away from geometry creation whenever possible if you’re just doing calculations or manipulating existing data.
Lastly, if you’d like help speeding up a graph, it’s best to post it to the forum (I’d start a separate thread) and ask if there are any ideas on how to speed up the process. Generally speaking the input here will be spot on and help you learn, and you’d also help others grow, as we would all be able to refer to your question when answering similar ones in the future.
My understanding is that custom nodes are only used for visual appeal; using code blocks is what is being referenced here. Due to the nature of the program, from what I’ve read, at every input of every node Dynamo actually makes a copy of what is passed to it. It doesn’t directly manipulate what it’s passed; it’s manipulating a copy of the input, storing it, and sending it to the next node. I think nested code block statements all manipulate the same copy of the input, rather than copying an input for every function being called.
It’s tough to tell. I’m still in the debugging process and haven’t run the program on a project other than the one I’ve been doing the testing in. It seems like Dynamo has some intelligence as far as detecting changed nodes and only re-running the nodes that have been added or edited, thus dramatically reducing run time. If I make edits in Revit where data has to be sent from the beginning to the end of my program, it generally takes around 2-3 minutes. A bit less when I delete the write-to-Excel nodes.
I work in the consulting industry on the Electrical Engineering design side of commercial builds… airports, factories, schools, military facilities, etc. My program essentially does this: it searches my Revit project for anything that has an electrical load - receptacles (outlets), lights, machine equipment connections, etc. - sorting it all out and organizing it so every circuit panel board is a sub-list, then comparing it to the Excel document that holds a table with all the equipment, loads, etc., which contractors use to buy the right equipment and which we also use to do all our load calculations. The info from Revit is compared to the Excel: anything that is mismatching is flagged, anything in Excel but not in Revit will be re-written back to Excel, and anything found in Revit but not in the Excel doc will be written to the Excel file. There’s a snippet of the Excel file output. Each circuit panel in the project has its own Excel sheet; the number of panels per project ranges from 10 to 150.
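The comparison described above can be sketched with set operations - a hypothetical plain-Python version (the circuit IDs and load values are invented; the real graph reads these dicts from Revit and Excel nodes):

```python
# Compare {circuit: load} mappings from Revit and from Excel:
# flag mismatched loads, and find circuits present on only one side.

def compare(revit_loads, excel_loads):
    """Return (mismatched, only_in_revit, only_in_excel) circuit sets."""
    shared = revit_loads.keys() & excel_loads.keys()
    mismatched = {c for c in shared
                  if revit_loads[c] != excel_loads[c]}
    only_in_revit = revit_loads.keys() - excel_loads.keys()  # write to Excel
    only_in_excel = excel_loads.keys() - revit_loads.keys()  # keep in Excel
    return mismatched, only_in_revit, only_in_excel

revit = {"A-1": 1200, "A-3": 800, "A-5": 450}
excel = {"A-1": 1200, "A-3": 900, "A-7": 600}
mismatched, new_to_excel, excel_only = compare(revit, excel)
```

Dict/set lookups make each membership test effectively constant-time, which matters once the lists run into the tens of thousands of entries.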
The test project I’m working with is very small compared to what I plan to use this on down the road, but I’m already seeing lists in lengths of over 10,000.
It would be a little bit more of a manual process, but you could try adding an input for a certain number of panels to run at a time. So you could start off by running your script on the first 50 panels, then run it again on panels 51-100, then 101-150, and so on. It means you need to be around for each run, but it could potentially cut down on overall runtime. The other option is to run the script on a separate computer, or overnight/during down time. Running your graph multiple times also means less headache when you encounter errors. Just something to keep in mind.