Interesting post from over a year ago comparing Python, C#, C++, and C++ x64. There is no question that for larger, long-term development, shifting or converting to C# is immensely beneficial, particularly for longer runs of applications or add-ins. Would externalizing add-ins on separate threads also speed things up, or does Revit even allow this?
Python is dramatically slower than C++, by a factor of more than 100 in these examples. Longer runs show even larger performance gains with lower-level languages, i.e. C++ and C#.
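The gap is a constant factor rather than a literally exponential one, but it is real, and you can see a smaller version of the same effect without leaving Python by comparing an interpreted loop against a C-implemented built-in (a rough illustration only; absolute timings will vary by machine):

```python
import timeit

# Summing a million integers: an explicit interpreted loop vs the
# C-implemented built-in sum(). The built-in routinely wins by a large
# constant factor -- the same kind of gap (only bigger) separates
# Python from compiled C#/C++ in the benchmarks discussed above.
data = list(range(1_000_000))

def loop_sum(values):
    total = 0
    for v in values:
        total += v
    return total

interpreted = timeit.timeit(lambda: loop_sum(data), number=5)
compiled = timeit.timeit(lambda: sum(data), number=5)

assert loop_sum(data) == sum(data)  # same answer, very different cost
print(f"loop/built-in time ratio: {interpreted / compiled:.1f}x")
```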
@john_pierson also has a good Dynamo specific comparison here:
The only real constraint to using the most efficient techniques/languages is time, I think. C# requires a better understanding of how computers work, so from what I can see it is an end game for most people coming through visual programming. I've found the entry point for C-based languages is generally higher too, as many people learnt Computer Science as their entry point and treat it as assumed knowledge in their educational content. That, or it's way too basic and never gets past sending messages through the console.
A lot of people settle on Python because it's got forgiving (if a little too flexible) syntax, but I always recommend that if users need to scale and develop complex UIs (not just step-by-step/hit-the-button tools), then C# and Visual Studio will eventually be necessary.
Personally I've settled on pyRevit for now, but I do include loading bars in all my transactions and generally a cancel option to stop these loops, as they aren't always fast on large projects. I've found the bottleneck is more often than not just Revit, however (e.g. deleting a large list of elements takes a fairly long time regardless of which language you use, at least in my experience).
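For anyone curious what that "loading bar + cancel" pattern looks like, here is a minimal, framework-agnostic sketch. In pyRevit the progress UI and cancel button would come from its forms module; here a plain callback and a flag stand in for them (these names are illustrative, not a real pyRevit API):

```python
# Generic cancellable-loop sketch: report progress each iteration and
# stop early if the user has asked to cancel. The function and
# parameter names are hypothetical, not from any Revit/pyRevit API.

def process_elements(elements, apply_change, should_cancel, report=None):
    """Apply a change to each element, reporting progress and stopping
    early if the user has asked to cancel."""
    done = 0
    total = len(elements)
    for element in elements:
        if should_cancel():
            break  # leave the remaining elements untouched
        apply_change(element)
        done += 1
        if report:
            report(done, total)
    return done

# Usage: simulate the user hitting "Cancel" after 3 elements.
cancelled = {"flag": False}
processed = []

def apply_change(element):
    processed.append(element)
    if len(processed) == 3:
        cancelled["flag"] = True  # pretend the Cancel button was clicked

handled = process_elements(list(range(10)), apply_change,
                           lambda: cancelled["flag"])
assert handled == 3 and processed == [0, 1, 2]
```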
A couple of extra thoughts; apologies if any of these are obvious, and please excuse that I'm not an expert:
For most Revit tasks threading is avoided, I believe because the risk of creating instability in the Revit database is not worth the time savings… all the data must come back together seamlessly. I believe that for tasks like rendering this isn't such a risk, so multi-threading is utilised.
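The usual way around that constraint is to let worker threads do the computation while a single thread applies every change to the model, which is the spirit of Revit's ExternalEvent pattern. A toy sketch of the idea (nothing here is a real Revit API call; `model` and `changes` are stand-ins):

```python
import threading
import queue

# Sketch of why Revit-style APIs funnel all writes through one thread:
# worker threads do the (safe) computation, but every model change is
# queued and applied by a single thread, so the database never sees
# concurrent writes. This mirrors the ExternalEvent pattern in spirit
# only -- none of these names are Revit API calls.

model = []                 # stands in for the Revit database
changes = queue.Queue()    # computed results waiting to be applied

def worker(n):
    # Heavy computation is fine off the main thread...
    result = n * n
    # ...but the change itself is only *requested*, never applied here.
    changes.put(result)

threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Single-threaded "transaction": the only place the model is mutated.
while not changes.empty():
    model.append(changes.get())

print(sorted(model))  # [0, 1, 4, 9]
```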
Speed is related to the total time spent… 16 minutes vs 0.1 seconds to run is impressive, but the total time taken for many tasks would be a lot closer, particularly if you are taking a similar piece of code and varying it.
If requirements come up very infrequently and predictably, with heavy geometry work and a sophisticated UI, building an add-in in C# is going to be the most efficient approach. Still, the ongoing management of API updates alone can be a significant commitment.
Of course, for Dynamo, there are certain things it just can't do… Idling events etc., which require you to step outside its environment…
For me, the payoff of the little bit of extra development time (which honestly isn't much if you have a decent template to work from) is the stability and error-handling capabilities. However, I am sure the pyRevit environment is better for that than Dynamo directly.
I think one of the other benefits, which is what most people hate about C#, is that you learn so much more when you have to know what types variables must be cast to in order to make things work. There have been numerous times it "just works" in Python, but it takes some time to determine what is actually being used. This can also be a problem if it isn't giving you what you'd expect. This is probably the fundamental time saving, but some prefer the convenience over the time, and most often that difference isn't perceptible to the user.
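A small example of both failure modes mentioned above (the values here are made up for illustration): the cast that C# would force you to think about at compile time either blows up at runtime in Python, or worse, silently "works" with the wrong type.

```python
# Python's "it just works" conversions can hide what type you actually
# have -- until they don't. In C# the compiler forces the cast question
# up front; in Python it surfaces at runtime, sometimes as wrong output
# rather than an error.

element_id = "101"          # came back as a string, not an int
offset = 1

# Looks like arithmetic, but str + int raises at runtime:
try:
    result = element_id + offset
except TypeError:
    result = int(element_id) + offset   # the cast C# would have demanded

assert result == 102

# Worse than an error: silently "working" with the wrong type.
ids = "101"                             # meant to be a list of ids
assert list(ids) == ["1", "0", "1"]     # iterates characters, not ids
```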
Risk is a part of it. There has actually been a lot of effort to "use more cores" over the years, with the consistent outcome of "this utilizes all the cores, but it's actually significantly slower than just using the single core as a result of this change, for all but these few functions." Those functions then get added to the list of multi-core functions, including but not limited to: exporting, printing, rendering, graphic display, wall joins, etc.
So the lesson we can learn from an outside perspective is that while pulling stuff onto multiple cores sounds like it should make things faster, that is often not the case if you're already optimized in terms of how the code runs and you're dealing with datasets of significant enough complexity. However, if speed is really the goal, then the focus likely shouldn't be on Dynamo nodes, Python code, zero-touch nodes, or an add-in. Instead an external automation, or even better an out-of-process application (similar to how the steel connections toolset works), will not only run faster but also feel faster to users. To me that is one of the brightest spots for the industry with the announced Forma developments (make no purchasing decisions, as this is speculation and I have no more insight than what is already publicly available).
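The "more cores isn't automatically faster" point is easy to demonstrate in Python itself, where CPU-bound work split across threads gains nothing (CPython's GIL serializes the bytecode) while still paying the coordination overhead. Timings vary by machine, so only correctness is asserted here:

```python
import threading
import time

# Splitting CPU-bound work across threads in CPython: the GIL means the
# work is still serialized, and thread bookkeeping adds overhead -- a
# small illustration of parallelism that fails to pay for itself.

def busy(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 200_000

t0 = time.perf_counter()
serial = [busy(N) for _ in range(4)]
serial_time = time.perf_counter() - t0

results = [None] * 4

def run(i):
    results[i] = busy(N)

t0 = time.perf_counter()
threads = [threading.Thread(target=run, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded_time = time.perf_counter() - t0

assert results == serial  # same answers; a speed-up is typically absent
print(f"serial {serial_time:.2f}s vs threaded {threaded_time:.2f}s")
```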
With zero-touch nodes compiled in VS, they leverage the intermediate language, which is optimized before being compiled to machine code. This makes for highly efficient code at runtime, so thank you @GavinNicholls for this and a lead on zero-touch nodes! That might be a really good segue to coding in between, as the Filtered Element Collectors (FECs) take considerably less time (guessing 1/2 to 1/3) to do their magic in gathering elements to manipulate.
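Part of why FECs are quick is that filters are applied before full elements are materialized, rather than pulling everything and filtering afterwards. A generator-based sketch of that same "filter early, materialize late" idea (illustrative only; no Revit API here, and the element dicts are made up):

```python
# Stream elements and filter while iterating, instead of building the
# full element list first and filtering it afterwards -- the lazy
# evaluation style that makes FEC-like collectors cheap.

elements = ({"id": i, "category": "Walls" if i % 2 else "Doors"}
            for i in range(10))        # a generator: nothing built yet

# Only matching elements are ever materialized:
wall_ids = [e["id"] for e in elements if e["category"] == "Walls"]

assert wall_ids == [1, 3, 5, 7, 9]
```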
@jacob.small, Agreed. The app I am thinking of would likely best run externally with a bridge to data in Revit. Would love to see some of the FORGE teams join this conversation for a much broader basis of comparison of run times and flexibility in coding and environments.
Any recommendations on good zero-touch tutorials on LinkedIn or YouTube?
It's quite easy to set up a chat with a member of the team if you're interested - the developer advocates can be booked for a chat as needed for technical or "getting started" content. Feel free to set up a time to chat: Calendly - We have moved! Please see link below.
As far as weighing in on speed… the answer will always be "it depends", as a blanket statement just doesn't work when you have inconsistent sets of scope, scale, inputs and outputs.
Actually this has been around for as long as I can remember, at least as long as I've been at Autodesk… That team has always been very accessible, but not always in the public eye, much like the Dynamo team.