What do you use for Python Code Snippet management?

I’m curious what people are using to organize the Python code base for their Dynamo projects. I’ve mostly given up on custom packages and package management because it is too hard to keep a whole department constantly up to date. It’s too much setup for every version of Revit for every new hire. So the easy solution was to just migrate as many custom nodes to Python as possible for the frequently used graphs. But that means keeping track of the Python somehow. So how do you do it?

Personally, right now I have a OneNote file with dozens of snippets. But this was never meant to be a permanent archive.

My dream situation would be to right click any Python node and have a ‘save to library’ option, and to have the library be searchable as part of the custom nodes. I was wondering if there was anything like this being worked on, or if there is another clever way to manage your code base.

Please let me know how you organize your snippets.

This is exactly what package management is for. I agree with your sentiment, but it’s the same situation and the same problem as dealing with custom packages. While I always recommend minimizing your reliance on custom packages as much as possible, it’s usually going to be easier to just manage someone else’s packages in these cases.

2 Likes

So I am trying to move away from package management. The issue is the number of ceremonial steps. It absolutely is a solution, but all the external referencing and user error creates too many potential points of failure. If I get a bug reported back to me, I have to error check all the way through the reference chain: the script, the user’s setup, the package version, the definitions, the source Python. I can cut out 4 of the 5 steps by relying only on Python and OOB nodes.

The OneNote solution is not ideal, but adding to the code base is a single copy-paste step. With package management, I have to edit the custom node, save it, and republish, and I still have to manage the users’ deployments.

It’s a matter of use case. If my goal were to provide users with custom nodes, then packages would be the best solution. But in my case I tend to develop a lot of one-off scripts for individual projects. The driving need is sharing the graphs, not the custom nodes.

I’ve moved away from external packages as much as possible. The only ones still in my libraries are ones with ZeroTouch features that Python can’t replace.

The idea that you can escape managing user environments is a myth. You’re chasing a white whale that doesn’t exist by shifting the burden of ‘managing packages’ over to the burden of ‘managing graphs’, because eventually your Python has to differ for Dynamo for the 2022, 2023, 2024 and 2025 host application versions. If you ‘put the Python’ into graph 1, written for Dynamo for Revit 2022, and a user saves it to the server somewhere, when it fails to work in 2023 you’re going to have to edit that Python in the graph. Then when they save another copy you’re going to have to edit it again in 2024. And when someone updates the 2022 version with the 2024 version, your graph is now broken in 2022… see the issue yet? You need a way to reference the right code base at the environment level - aka use a package or custom node.

I do understand the pain of having to publish and then distribute - it’s an extra step to make things scale into environments where you don’t have control (i.e. not your office). If you want me to have access to the code you develop, packages are the best path forward. The added benefit of managing list levels for you can’t be overstated either… How many times do you have to deal with ensuring things coming in are a list (or aren’t a list) in your Python? Custom nodes just do that for you. However, it does feel like an added step when you just want the graphs to work on the CPU next to you.

Conveniently, you don’t have to deploy the packages via the Package Manager - write one bit of code to ‘copy from library to the environment’ and move all your .dyf files to the user’s definitions folder (typically %appdata%\Dynamo\Dynamo Type\X.YY\definitions\ - be sure to use the right Dynamo type such as Dynamo Core or Dynamo Revit and the right version such as 2.18 instead of what I have there). The only added step is setting up the script (a once-a-year thing) to copy your library of dyfs from the network to the local users’ systems on demand or at log-in. Your IT team can do this in a heartbeat (or you need better IT).
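
A minimal sketch of what that copy step could look like as a plain Python script - the network path and the Dynamo flavor/version here are assumptions, not a recommendation:

import os
import glob
import shutil

library = r"\\server\dynamo\library"  # hypothetical network dyf library
definitions = os.path.join(
    os.environ["APPDATA"], "Dynamo", "Dynamo Revit", "2.18", "definitions")

if not os.path.exists(definitions):
    os.makedirs(definitions)

for dyf in glob.glob(os.path.join(library, "*.dyf")):
    shutil.copy2(dyf, definitions)  # overwrite older local copies with the network version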

3 Likes

Maybe I’m misunderstanding your situation. You mentioned that your focus is on sharing one-off scripts, not custom nodes, but your initial post is about saving and sharing python nodes which is the same as saving and sharing any other custom node except that it’s not part of a package.

Sorry, maybe I mis-phrased the explanation. The graphs are the one-offs, the python is the part I need to catalog.

The efficiency in avoiding custom nodes comes from encapsulating all the logic in a single dyn. We can introduce dyfs, but then those either need to be managed somehow or bundled with the sharing mechanism.

The advantage of the encapsulation is that I can give anybody a graph, even subcontractors, and the graph will run error-free without any prerequisite setup. The graph will even run on a fresh install.

Personally, I find it easier to manage just a single directory of dyn files, as opposed to dyns, dyfs and packages. It’s not that the other way isn’t possible, it’s just a much simpler organization scheme imo. From the user experience side, they just have to get the graph they need and run it; they don’t have to know which dyfs are referenced. (We don’t run graphs over the network, because the VPN bottlenecks the execution speed.)

Also, not every graph/custom node is going to be error-free. Encapsulation means a fix is a single update to one dyn. If we are relying on an updated dyf, I either have to explain which dyfs to replace, or have everybody in the department run the package update process in case they ever want to use that single graph in the future.

It’s a lot easier to just send an email saying, ‘hey I fixed it, try it now’.

The moment you face issues with scripts not running in newer or older versions of Revit than they were written in, you are going to regret managing the Python yourself instead of simply having the correct package for that version of Revit on the user’s PC.

Sharing a script as you describe has no issues and is easier for the end user and for you - until it is an issue. This is going to be prevalent for a lot of scripts pre-2024 and post-2024 with the Int32/Int64 changes.
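
For example, Revit 2024 moved ElementId values from Int32 to Int64, which is the kind of change that leaves version-sensitive Python carrying small shims like this hedged sketch (not from the thread):

def id_value(element_id):
    # Revit 2024+ exposes .Value (Int64); earlier versions only have .IntegerValue (Int32)
    return element_id.Value if hasattr(element_id, "Value") else element_id.IntegerValue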

Personally, I share scripts as you mention to send to individuals for a project, but I keep the packages installed on my machine and duplicate the graph as an unpacked version to share.

This can be solved with a file naming convention. Since we have to troubleshoot the issue anyway, once it’s solved just save a new version of the graph suffixed with the Revit year: ‘_R23’, ‘_R24’, etc.

Lacing isn’t an issue if you are comfortable with lambda statements. This is the design pattern I use to handle lacing for any item/list/nested list. It returns the same list structure that comes in.

# Recursively apply 'function' to every item of a (possibly nested) list, preserving structure
ProcessLists = lambda function, lists: [ProcessLists(function, item) if isinstance(item, list) else function(item) for item in lists]
# Entry point: handles a single item or any depth of list
ApplyFunction = lambda func, objs: ProcessLists(func, objs) if isinstance(objs, list) else [func(objs)]

def Unwrap(item):
    # UnwrapElement is provided by the Dynamo Python environment
    return UnwrapElement(item)

if isinstance(IN[0], list):
    item = ProcessLists(Unwrap, IN[0])
else:
    item = Unwrap(IN[0])

def func(x):
    return x.GetType()

OUT = ApplyFunction(func, item)

That is a really good suggestion. I implemented something similar - you don’t need IT; you can actually do it yourself in Dynamo. This takes in a source directory and a target location to mirror recursively. I just have users run the graph for new deployments and library updates.

import sys
# Point IronPython at its standard library so subprocess is available inside Dynamo
sys.path.append(r"C:\Program Files (x86)\IronPython 2.7\Lib")
import subprocess
# Mirror the source directory (IN[0]) to the target (IN[1])
subprocess.Popen(["robocopy", IN[0], IN[1], "/mir", "/xf"])
OUT = IN[1]

This covers the deployment process. However, let’s say we updated a graph and a dyf. What is your process to disseminate the fix throughout the office? Do we make everyone periodically recopy our libraries? Do we say, if your graph doesn’t run, update your deployment to latest and try again? Are we going to hold their hands and step them through the update process? I don’t want an employee to find an error in an older script, have them report it, tell them to run updates, the error still persists, then go fix it, then send it back, then republish, then blast out the department email asking everyone to update again. We can cut a lot of these ceremonial steps out.

The big advantage of Python-first design is that if a graph is not running as expected, I’ve already eliminated all the environment variables. I do this intentionally, so that after I’ve released a script only genuine errors can be reported up the chain.

So no element binding there. And your library is now 4x the size. I think some firms I work with would be looking at a directory with ~700 graphs in it at that point which isn’t reasonable. But I guess it works for small firms.

It absolutely is still an issue - lambda or not. Write some Python to generate a point using an X, Y and Z value. Then make that same Python generate a series of points through a list of X, list of Y, and list of Z values. Then make that same Python mix every X and Y value against every Z. Then make that same Python sort those lists by X first instead of Z. Then Y first. Then YZX. ZXY. Etc… One custom node can do this with the same Python. Point creation is likely a straw-man use case, but this also applies to setting a list of parameters to a list of values (unique or not) for a list of elements, loading a list of families into a list of projects, etc.… Lambda is great and it works in simple cases where you know the only possible input/output structure, but the complete flexibility isn’t there the way it is with lacing and list levels.

Update just the dyf - the dyn should be static if you built it well. Then have users request the new build via your tool (or similar), or push it at log-in via Group Policy or a robocopy tool (contact your IT team if you need help with this - very easy once set up). As you only update the dyf, not the dyn, every one of the users leverages the same dyn file in every Revit build, so one path for Player covers 90% of your uses, and for customization users can quickly edit stuff without having to learn Python or ask you to customize it for a more unique application. By doing the IT-led Group Policy/robocopy method you also remove the issue, as people get updates as soon as they log in, or it even happens to everyone across the network all at once (save for the one user who was running the graph when the updates were pushed… they get theirs sometime after closing the app).

Sadly, we can’t control the environment from inside the Python environment. Python is dependent on Dynamo (see the Python engine component), the host application (Revit vs Civil 3D vs Alias vs Dynamo Core) with its many iterations (offhand I count 11 versions of Revit right now) and customizations (installations of an add-in using a conflicting version of Newtonsoft.Json, as an example), and finally Dynamo Core and the many changes we have had there over the last four years (15 versions!), which means things like Curve.Offset impact things you might not expect. So the user who is missing the update gets an error which we couldn’t have accounted for… management of the environments has to happen at some point or things go sideways. The more you partner with IT to make that a reality, the better off you are in the end.

You absolutely can do cross-product lacing with basic Python:

cube_size = 4
points = []
for x in range(cube_size):
    for y in range(cube_size):
        for z in range(cube_size):
            points.append((x, y, z))
# Re-order the same points by whichever axis you care about
by_x = sorted(points, key=lambda point: point[0])
by_y = sorted(points, key=lambda point: point[1])
by_z = sorted(points, key=lambda point: point[2])

But I’ll concede the point that at large numbers this for loop would hang. I’m not going to thread the code to prove a point. I’m not stopping using Dynamo nodes altogether; I’m only replacing our custom packages with direct Python. When I said OOB nodes, I meant I’m still using out-of-the-box nodes (I forgot the T…). The selection nodes are not replaceable.

Lacing is not a problem in Python because there is no scenario where you would write a custom script without knowing the list structure you are processing. There should be no reason you would ever change the lacing on a Python script.

I still stan Python as a package replacement. You are right that lacing is a problem, but it is on the Dynamo side. You aren’t limited to three types of lacing in Python; you write the lacing. Say I want to grab the last 5 items from every second list that includes a “W24” in it, while keeping the rest of the list structure the way it was? In nodes it’s either an unreadable chain or a code block. If someone doesn’t know Python, they can just throw it at GPT and ask what it does.
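
A hedged sketch of that kind of bespoke lacing, assuming IN[0] is a list of lists of strings (the variable names are mine, not from the thread):

sublists = IN[0]

result = []
for index, sublist in enumerate(sublists):
    # every second list (odd index) that contains an item mentioning "W24"
    if index % 2 == 1 and any("W24" in str(item) for item in sublist):
        result.append(sublist[-5:])  # keep only the last 5 items
    else:
        result.append(sublist)  # leave everything else untouched

OUT = result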

How about this scenario: take a 20x20 point cube, twist it around the center 2 times, and remove a random 1/10 of the points. Now, for every point, get the closest point above within a threshold of 2 m and create a centerline representing the columns of our gravity system. The line will be either single height or double height depending on whether or not it is under one of the removed points.

Now, this is possible using just nodes. But think about how you would set it up and the absolute lacing-spaghetti nightmare it would become. We can either have that, or we can get comfortable with Python and have a graph that is 5 or 6 scripts piped together.

Even in step one you hit the Dynamo limitation of comparing a list of points to itself: you have to come up with some workaround just so a point doesn’t return itself as a potential match. Python actually opens up a lot more lacing options; it’s simple to write logic that checks every point in the list against every other point.
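
A hedged sketch of that step, using plain (x, y, z) tuples and an assumed 2 m threshold, just to show the self-exclusion logic:

points = IN[0]  # hypothetical input: a list of (x, y, z) tuples
threshold = 2.0

closest_above = []
for i, p in enumerate(points):
    best = None
    best_dist = None
    for j, q in enumerate(points):
        if i == j:
            continue  # skip comparing a point against itself
        if q[2] <= p[2]:
            continue  # only consider points above this one
        dist = ((q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2 + (q[2] - p[2]) ** 2) ** 0.5
        if dist <= threshold and (best_dist is None or dist < best_dist):
            best, best_dist = q, dist
    closest_above.append(best)  # None if nothing qualifies

OUT = closest_above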

What to use is going to be case by case. There are going to be many situations where Python is more appropriate. True, it is an intermediate skill set, but if you feel you are good with Dynamo, Python is the next level. I started in Dynamo and added Python later; it absolutely simplified my graphs and improved turnaround time.

I’m going to push the Group Policy solution up the chain of command. That was a really good suggestion; I’m stealing it.

But you have no control over the order they are laced in without writing that code. If you want the Z values iterated first, you’re editing your inputs or modifying the order the data goes in and conceding that the variable named X is now the Z data (which has possible ramifications on other parts of the code).

If we simplify things - to only X and Y values - it starts to become a bit clearer. Using a Point.ByCoordinates node with inputs of 0…10, you can set the list level for X to @L1 to get the points ordered by column, or you can set Y to @L1 and get the points ordered by row. This isn’t doable in Python without added controls. As for when you would need to adjust this to change the order: when you are prototyping a graph to use in generative design to optimize the placement of sprinkler heads on a golf course… making a change in the code to go the other direction is a LOT harder than altering that list level.

Again, with simple cases this isn’t a concern, and if you’re using each node a single time per graph you might never need it, as you can control the input order and structure yourself.

All ideas here are freely offered, and as such there is no theft. Only ‘price’ is using the tool and sharing your perspectives and expertise. :slight_smile:

One last thought on this: GitHub or a similar repository-management tool is really ideal for managing code snippets (and even .dyn and .dyf files). Such snippets can then be incorporated into .py files to serve as feeder files for Python definitions, or via a PythonScriptFromString node. As long as everyone can ‘see’ the .py, you’re good to run the graph (so publish a copy to the network as part of the build process).

These snippets can then be loaded into the IDE of your choice to edit (I like Visual Studio and Visual Studio Code, but there are tons of options which work with the git process), and you can quickly set up a debugging method in your node that reads the .py but overrides the input path to your local disk for testing. This not only keeps things maintainable as Python code, but can make things scale better (direct the path to the .py for a particular environment) and allows reuse of one code snippet in many other locations, which reduces update effort.
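
A hedged sketch of that feeder-file pattern - the paths and the DEBUG flag are assumptions for illustration:

import os

DEBUG = False  # flip to True while editing the snippet locally
network_py = r"\\server\dynamo\snippets\get_types.py"  # hypothetical published copy
local_py = r"C:\dev\dynamo\snippets\get_types.py"  # hypothetical working copy

path = local_py if DEBUG and os.path.exists(local_py) else network_py

with open(path) as f:
    code = f.read()

# Run the snippet in this node's scope so it can read IN and set OUT,
# or pass the string out to a PythonScriptFromString node instead (OUT = code)
exec(code)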

1 Like

I love this idea. Maybe we hash the script strings so we only download if updated. Then the user only has to be able to access the GitHub repo. Or we could even set up an endpoint for getting scripts by name. This has a lot of potential.
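
A hedged sketch of that hash check, assuming a small .md5 manifest is published next to each snippet (the paths and the manifest file are hypothetical):

import hashlib
import os
import shutil

source = r"\\server\dynamo\snippets\get_types.py"  # hypothetical network copy
manifest = source + ".md5"  # hypothetical published hash of the network copy
local = r"C:\dynamo\snippets\get_types.py"  # hypothetical local copy

def file_hash(path):
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

with open(manifest) as f:
    published = f.read().strip()

# Only copy when the local file is missing or its hash no longer matches the published one
if not os.path.exists(local) or file_hash(local) != published:
    shutil.copy2(source, local)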

Also, I should let it drop, but… the lacing argument doesn’t make sense to me. I’d prefer my scripters do something like this:

point = (x, y, 0) if not IN[1] else (y, x, 0)
points.append(point)

Now you have a toggle bool that controls the lacing order, and we can reference it downstream so we don’t mess up the logic. The toggle can be made an input and exposed in Player, so we can change the XY orientation without having to edit the graph.
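
A hedged, slightly fuller version of that toggle idea (the grid-size input and variable names are mine):

size = IN[0]  # hypothetical grid size input
swap = IN[1]  # toggle: False = X-major order, True = Y-major order

points = []
for x in range(size):
    for y in range(size):
        point = (x, y, 0) if not swap else (y, x, 0)
        points.append(point)

OUT = points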

I don’t want a design choice like that to be controlled by something as arbitrary as however the last person to use the graph left it when they saved.

For a while I toyed with the sys.path append approach for entire blocks of code, but found in the long term the best approach was to use a lot of functions/classes etc., as this leads into C# more naturally in the longer term. So the Python node was just boilerplate: inputs, sys.path append the library(s) from the server, import the specified functions, run them, declare outputs. It was a pain due to the speed of the server read, and testing was awkward as it wasn’t all natively in Dynamo.

I also ended up having to keep dev/live copies, where I wanted to point to my own copy so that I wasn’t pushing live updates whilst testing, as that would mean users might run nodes that break; the idea of having a ‘live’ Python environment was challenging. Effectively I had a function at the front for the sys.path append which, if it was my Windows username, pointed to my dev folder instead of the company one. A git approach would have been nice, but we don’t have enterprise for all users and most don’t understand push/pull etc. - it would be how I would do this if I had full control/choice though.
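
A hedged sketch of that boilerplate-plus-dev/live switch (the paths, usernames, and imported function are hypothetical):

import sys
import os

def append_library():
    # Point developers at their local working copy; everyone else at the company library
    dev_users = ["jdoe"]  # hypothetical dev usernames
    dev_path = r"C:\dev\dynamo_lib"  # hypothetical local working copy
    live_path = r"\\server\dynamo\lib"  # hypothetical company library
    path = dev_path if os.environ.get("USERNAME", "") in dev_users else live_path
    if path not in sys.path:
        sys.path.append(path)

append_library()

# From here the node is just boilerplate: import the shared functions and run them, e.g.
# from shared_tools import do_the_thing  # hypothetical module and function
# OUT = do_the_thing(IN[0])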

Whilst these days I deploy my libraries with my pyRevit toolbar (copied to user machines locally by IT), when I did this in Dynamo I was toying around with appending in functions that were in effect entire Python nodes beyond the boilerplate, inputs and outputs. It was sort of manageable, but it felt like it would have been just as much work to coordinate with IT to get a company package onto each user’s appdata folder in the end. At that point your libraries could just live in a package’s extra folder. A lot of packages actually do this to store their code snippets.

If you find yourself using more Python these days I’d suggest looking into pyRevit, especially if you have a deployment support team. You can technically run dyns through it as well. It has a lib folder which it knows it can draw libraries/functions out of; some examples below of how this looks the way I use it:

Functions/classes stored in lib:
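
A minimal hypothetical sketch of what such a lib module might contain (the module and function names are assumptions, not the actual library):

# lib\sheet_tools.py - hypothetical module sitting in the extension's lib folder
from pyrevit import revit, DB

def get_sheets(doc=None):
    # Return all ViewSheet elements in the document
    doc = doc or revit.doc
    return DB.FilteredElementCollector(doc).OfClass(DB.ViewSheet).ToElements()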

Importing functions from lib by name in actual code:
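
And a sketch of a tool script importing it by name - pyRevit adds the extension’s lib folder to the search path, so a plain import works (same hypothetical names as above):

# script.py for a hypothetical pushbutton - the lib folder is already on sys.path
from pyrevit import forms
from sheet_tools import get_sheets  # hypothetical module from lib above

sheets = get_sheets()
forms.alert("Found {} sheets in this model.".format(len(sheets)))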

1 Like

I’ll look into pyRevit. I’ve been making everyone install it anyway (revision and sheet set management). I’ve made a few custom toolbars, but I haven’t dived into using it for utility functions yet. It is a good direction for me to study.

Can I ask, do you manage both pyRevit and C# add-ins? Did you lean towards developing in pyRevit over custom add-ins for a particular reason? I noticed the UI management seemed nicer in pyRevit than in C#, but I’m curious to hear a more experienced opinion.

My impression was that with pyRevit I’d be using the developer tools to set up the UI and then VS Code for the scripting? Once you get comfortable with it, do you feel it is about as fast as Dynamo development?

1 Like

We use both, as we have some fully fledged web/C# devs and some that err towards Python (including me). For tools with complex UI and API dependencies outside Revit we tend to business-case them into full apps in C#, which also protects them. For general utility we have our own pyRevit toolbar which I maintain. Our dev group meets/chats from time to time to ensure overall we’re on the same page and not doubling up efforts, and we use git together to maintain our code in one place. I’m aware this isn’t normal and I’m very lucky to work in a team of this nature. For reference, our digital team is 20 people to 700 staff. Some firms like Aurecon have 200+ computational designers to a few thousand staff, so it’s all relative. I think engineering more readily grasps the value of computation, given a lot of engineering is dependent on calculation and optimisation/testing. I’m fairly sure ours is one of the larger digital teams at an architecture firm in the world, and likely the largest in Australia.

pyRevit has whole libraries with basic UI (forms), as well as the rpw.ui library from revitpythonwrapper itself. You can dev WPF/XAML in it (see EF-Tools for reference) but it’s a lot more work. I like pyRevit for general utility: with some effort and good code management you can develop quite capable tools at scale that, whilst a bit slower than C#, still get there, and I can turn things around quickly there as pyRevit covers a lot of the deeper stuff an add-in would need. Having said that, with some work you can dev C# libraries and templates to make development there just as quick, I think.

https://pyrevit.readthedocs.io/en/latest/pyrevit/forms.html

https://revitpythonwrapper.readthedocs.io/en/latest/ui.html

One thing I really like about pyRevit is that you don’t need to compile before runtime, so you can edit code in session and test it instantly, without the workarounds C# needs for that behavior. Because the toolbars are just folder structures it is also very easy to batch deploy, but you can’t protect it easily - that has to be made clear to the firm when selling pyRevit as an option, and it was a tough sell at first. I’ve got my toolbar to the point where it can now reinstall/update itself, which saves so much hassle. All I had to do was write a staged process to download/replace/extract it to the same location and finally use pyRevit’s extension reloading feature.

Any company wanting in-house solutions that can be protected should set its sights on C#/web dev in the longer run. If it’s a smaller firm or early days there is no shame in Dynamo/pyRevit, I think - it’s just not going to be as robust longer term, and you need to watch out for technical debt.

We don’t use Dynamo often, but I use it for project-specific tools where I can predict the version, and to train users interested in programming in Revit. I use it a lot personally to solve smaller tasks, but despite my best efforts I was not able to win users over to Dynamo Player versus buttons in panels/tabs. It holds a special place in my heart; I love that it opened up this side of AEC for myself and others, and I will always swear by it as the best entry point to programming in AEC out there (including Grasshopper etc. - Dynamo is far easier to grasp initially, I think). Noding is technically faster to develop with until the moment you hit a task a node hasn’t been developed for, and that’s typically where it gets complicated to support Dynamo at scale.

2 Likes