Track Dynamo Usage

Hello,

I have developed a Python script that outputs some basic user information so I can track Dynamo script usage across the company. However, instead of copying and pasting this node into every single script, is there a way to read this Python script from a shared location (and what file type would that be)? That way, if I ever need to update it, I will only need to update the shared file.
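Roughly, the node does something like this (a simplified sketch; the log path and fields are placeholders, not our actual setup):

```python
# Simplified sketch of a usage-logging Python node in Dynamo.
# IN and OUT are the Dynamo Python node's input and output variables.
import os
import getpass
from datetime import datetime

# Placeholder shared location for the log file.
log_path = r"\\server\share\dynamo_usage_log.csv"

user = getpass.getuser()
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
script_name = IN[0] if IN[0] else "unknown"  # graph name passed in as an input

# Append one row per run; write a header the first time the file is created.
write_header = not os.path.exists(log_path)
with open(log_path, "a") as f:
    if write_header:
        f.write("user,timestamp,script\n")
    f.write("{},{},{}\n".format(user, timestamp, script_name))

OUT = "Logged usage for {}".format(user)
```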

I am new to Python, so if you could kindly explain in detail, that would be very helpful to me, too. :slight_smile:

Thanks in advance for any suggestions and help!

[screenshot of the Python script node]

Welcome to the forums!

As a single Python node, no, I don’t think that’s possible. I’d suggest looking into shared packages; that’s basically what you’re describing. You can accomplish this by creating a custom node and adding it to all your graphs. Then you would only need to update the custom node in the shared package location to have it updated across all instances.

2 Likes

Thank you @Nick_Boyts

Just to clarify that I understand this correctly: I will need to create a custom node, which will live in all the graphs, and this node reads from the shared packages folder. The custom node in the shared package will be a .dyf file?

Correct. The nodes you place in a graph are really just references to the .dyf files they represent. By creating a custom node (with or without deploying an actual package), you can then reference that node in any graph and maintain the Python code within the actual .dyf. You just need to make sure that everyone running graphs with the custom node has access to where it’s located.

2 Likes

I have a node in the Crumple package which can generate and write to a user-specific log file in a fixed location. I use this in a 400-person firm to track script usage, but opted for native Python in the end to handle the case where users don’t have the package. I believe you could probably write the node to call on a Python library (.py) with the code written there in one place; I do that in pyRevit from time to time successfully. The Bumblebee package is a good reference for how this can be done: it has a .py library in its extras/bin folders.
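Roughly, the calling node ends up looking something like this (the shared folder, module, and function names here are just examples):

```python
# Sketch of a thin "caller" node whose real logic lives in a shared .py file.
# The shared folder, module, and function names are examples only.
import sys

shared_lib = r"\\server\share\dynamo_lib"  # folder holding the shared .py library
if shared_lib not in sys.path:
    sys.path.append(shared_lib)

import usage_logger  # e.g. \\server\share\dynamo_lib\usage_logger.py

# Reload so edits to the shared file are picked up without restarting Dynamo.
try:
    reload(usage_logger)              # IronPython 2.7
except NameError:
    import importlib
    importlib.reload(usage_logger)    # CPython 3 nodes

OUT = usage_logger.log_run(IN[0])  # all of the real code sits in the shared module
```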

From there I use Python to jam all the journals together (written to a server) and read the master CSV in Power BI.
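The combining step is nothing fancy; something along these lines (paths are examples only):

```python
# Rough sketch of stitching the per-user log files into one master CSV
# for Power BI to read. Paths are examples only.
import glob
import os

log_folder = r"\\server\share\dynamo_logs"        # per-user log files live here
master_csv = r"\\server\share\dynamo_master.csv"  # the single file Power BI reads

header_written = False
with open(master_csv, "w") as out:
    for path in glob.glob(os.path.join(log_folder, "*.csv")):
        with open(path) as f:
            lines = f.readlines()
        if not lines:
            continue
        if not header_written:
            out.write(lines[0])       # keep one copy of the header row
            header_written = True
        out.writelines(lines[1:])     # append the data rows from each user file
```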

One metric I didn’t include in my data collection, but would recommend, is checking whether the Dynamo script threw any errors. I think this can be retrieved using the Dynamo API, as long as it runs after all the other nodes have finished.

I also have a video showing how I put this node together:

3 Likes

Thank you @GavinCrump. This is great information; I will definitely look into these as well. I appreciate your help!

1 Like