Looking for Python help - modifying custom node

I’m hoping for some quick help here. I’m attempting to copy and modify Aussie BIM Guru’s custom node Log.Write (Crumple package) to pass a couple of added variables to the CSV export (permission to do this is granted in the linked video below). I’m obviously doing something wrong with how I’m adding the code, but my lack of Python experience isn’t letting me see what.

What I have attempted to do is to add the following lines:

#Time Saved
if IN[3]:
	timesaved = IN[3]
else:
	timesaved = None
	
# Testing bool
if IN[4] == True:
	testing = "TRUE"
else:
	testing = "FALSE"

And modify original line #45 to read:

dataRow = dateStamp + "," + userName + "," + script + ".dyn" + "," + docTitle + ".rvt" + "," + errors + "," + timesaved + "," + testing

When I make these changes and attempt to run it, the modified node now passes nothing but null values in all outputs. Any advice?

Log_Write_Python Script_Original.py (1.4 KB)

Modified custom node inputs:

Tracking Dynamo script use! (revisited) - Aussie BIM Guru


Can you show us a screenshot of the node being run (with all inputs and outputs visible) as well as a copy of the full python code? That will give us a better understanding of what might be going on. As written, those new lines should work.

A screenshot and the full modified python code are below.

Log_Write_Python Script_Modified.py (1.6 KB)

Sorry, I meant paste the full python code here so we can see it without having to open and interact with the file.

Gotcha. Here you go:

# Made by Gavin Crump
# Free for use
# BIM Guru, www.bimguru.com.au

# Boilerplate text
import clr
import os
import datetime

clr.AddReference("RevitServices")
import RevitServices
from RevitServices.Persistence import DocumentManager 

clr.AddReference('DynamoRevitDS')
import Dynamo

# Current doc and title
doc = DocumentManager.Instance.CurrentDBDocument
docTitle = doc.Title

# Check Dynamo workspace properties
dynamoRevit = Dynamo.Applications.DynamoRevit()
currentWorkspace = dynamoRevit.RevitDynamoModel.CurrentWorkspace
script = currentWorkspace.Name

# Get properties for writing the log file
dateStamp   = datetime.datetime.today().strftime("%d/%m/%y")
userName    = os.environ.get('USERNAME')
userProfile = os.environ.get('USERPROFILE')

# Determine the relevant path if provided
if os.path.exists(IN[1]):
	myPath = IN[1] + "\\"
else:
	myPath = userProfile + "\Documents\\"

# Error catch
if IN[2]:
	errors = "TRUE"
else:
	errors = "FALSE"

# Time Saved
if IN[3]:
	timesaved = IN[3]
else:
	timesaved = 0

# Testing bool
if IN[4] == True:
	testing = "TRUE"
else:
	testing = "FALSE"

# Generate data to write
myLog   = myPath + "DynamoLog_" + userName + ".csv"
dataRow = dateStamp + "," + userName + "," + script +".dyn" + "," + docTitle + ".rvt" + "," + errors + "," + timesaved + "," + testing

# Adds new line to log file or creates one if doesn't exist
try:
	with open(myLog, "a") as file:
		file.writelines(dataRow + "\n")
	result = True
except:
	result = False

# Preparing output to Dynamo
OUT = dataRow, myLog, result

It might be because your TimeSavedhrs input is defined as an integer but you’re providing a double. Try changing the input type.


I tried your suggestion but ended up with the same result:

Run the python node (connected) directly in the graph and see what kind of error(s) it returns. Make sure you provide the default values anywhere you aren’t supplying an input.
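One common way to surface those errors without leaving the graph is to wrap the node's body in a try/except and push the full traceback to OUT instead of letting the node return null. A minimal sketch (the `run()` body here is just a stand-in that deliberately fails):

```python
import traceback

def run():
    # The node's real logic would go here; raising deliberately
    # to demonstrate the error reporting (hypothetical failure).
    raise ValueError("example failure")

try:
    OUT = run()
except Exception:
    # Surface the full traceback on the node's output instead of a bare null
    OUT = traceback.format_exc()
```

With this pattern, the node's output preview shows the exact exception type and line number, which is often enough to spot a concatenation or type error immediately.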

I should have thought of this! I was literally scratching my head, wishing I could debug the Python code somehow. :man_facepalming:

I needed to cast the timesaved variable as a string within the dataRow variable. I was trying to concatenate a string with a float.
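For anyone who lands here with the same null outputs, the failure and the fix reduce to this (values are made up):

```python
# Minimal illustration of the TypeError and the str() fix.
timesaved = 1.5  # a float arriving from IN[3]

# This is what the original dataRow concatenation did, and why it failed:
try:
    row = "user," + timesaved + ",TRUE"
except TypeError:
    row = None  # "can only concatenate str (not "float") to str"

# Casting to str first makes the concatenation work:
fixed = "user," + str(timesaved) + ",TRUE"
```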


To expand on the topic, I would suggest storing this code outside the Dynamo script as a separate .py file, then appending its location to the system path. I do that these days using almost the same code at a large firm and it works well, as I can manage the code in one place versus in every script. Effectively, I store the journal creation as a function in that .py file and then call it by name in the Python at the end of the script instead. I’ll have a video on the channel in a few weeks’ time showing how this technique works in Dynamo as well as pyRevit.
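As a rough sketch of that setup (the folder and function names here are made up; in practice the shared .py would sit on a company path, and a temp folder stands in below so the sketch is self-contained):

```python
import os
import sys
import tempfile

# Stand-in for a shared company folder, e.g. a network path your firm controls.
lib_dir = tempfile.mkdtemp()

# Stand-in for the shared module; in practice this file is maintained centrally.
with open(os.path.join(lib_dir, "logtools.py"), "w") as f:
    f.write("def write_log(user, script):\n"
            "    return user + ',' + script + '.dyn'\n")

# The Dynamo Python node would only need these few lines:
if lib_dir not in sys.path:
    sys.path.append(lib_dir)

import logtools
row = logtools.write_log("gavin", "MyGraph")
```

The payoff is that every graph carries only the import-and-call stub, while the logging logic lives in one reviewable file.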


Great. I’ll be sure to check out the upcoming video!

I’m also looking forward to this one @GavinCrump, as I’ve been exploring how best to consolidate my codebase for better integration across more workflows in my company :+1:


Maintaining and managing .py files is a great method. Looking forward to watching your content on it @GavinCrump!

Note that custom nodes wrapping the Python code and published into a package are another option here too. I feel like there is a generation or two who discounted custom nodes as ‘managing dependencies is too difficult’, which has been to the detriment of quite a few.


I’m on the fence re custom packages, really. I tried really hard to make one work in-house where I work (effectively a derivation of Crumple plus more company-specific things), and keeping up with Revit version installs and package deployment status was a real challenge. We have BIM Beats now, and that would have helped an awful lot for sure. pyRevit, spanning multiple Revit builds as both a toolbar and having its own lib path and forms library, ended up being my compromise, but it’s still a big effort to manage the code base well there versus the package management option that Dynamo provides.

In an ideal, security-lax world I’d just have it up on Git and have users pull updates regularly. We use Git already for collaborating amongst coders, but having users be able to do that, or CLI push/pull, would be awesome.

The versioning concerns for code written in method A vs method B are pretty much the same.

Python code in a packaged custom node should be just as manageable as Python code for pyRevit or for loading into a Python node in Dynamo. If it works in 2020 but fails in 2023 for Dynamo, the same should be true for pyRevit, excepting the possible engine issue (which might actually be easier than installing pyRevit).

In both cases the biggest blocker is distribution: how do you get the content from you to all the users? That’s a problem that has plagued AEC since antiquity. I’m hopeful that some of the stuff on the Dynamo roadmap pans out to resolve this for us. I’m also contemplating some tools (what if a .dyn could put its required packages into place for you?) to help with this, but that’ll have to wait until after I settle into life after my move next week.


Yes, mainly what I mean there is that custom extensions draw off one folder regardless of version, so there is some ease of deployment versus having to distribute packages to all Dynamo build folders (easy enough with a flexible search rule in a deployment tool, though). I still have to build my code with app version checks etc. to handle API deprecations, just like custom packages of course, although there are some built-in tools in that area that help prevent tools being run under the wrong document context too.
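A minimal sketch of such a version guard, assuming the version string comes from the Revit API's `app.VersionNumber` (the branch labels below are illustrative, loosely based on the unit-API rework around Revit 2021/2022; substitute whichever deprecation you're actually handling):

```python
# Hedged sketch of an app version guard. In Revit, version_number would come
# from app.VersionNumber, which the API returns as a string like "2023".
def pick_method(version_number):
    if int(version_number) >= 2022:
        return "use_ForgeTypeId"      # newer unit API path (illustrative label)
    return "use_DisplayUnitType"      # deprecated pre-2022 path (illustrative)

# The calling code branches once and reuses the result everywhere.
method = pick_method("2023")
```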

If a .dyn had some means of either auto-downloading what it needed dynamically when run, drawing its code off the web (e.g. storing Python code to pull into a Python block from a company location rather than stuck on a server), or being run from a web environment with immediate context available for related extensions, then that would be awesome. In my mind, dynamically accessing code via the web from a node not necessarily on the web would be an interesting idea, as it would potentially do away with the idea of package versions; the author could push their latest code to the web and the nodes could point to it as it stands.

The issue with that is executing code from the web, which is actually a primary way of attacking systems. It also runs into issues of fidelity, with handling for things like packet loss becoming a must, although most Python would be small enough that it may be a non-issue.

For most orgs, keeping all executed code local is preferable as it makes infosec easier. Distribution shouldn’t be hard either: putting the packaged code from the repository onto the local disc is somewhat trivial, and can actually be part of the user login.
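A login-time sync of a reviewed package could be as simple as the sketch below (temp folders stand in for the real network share and the user's Dynamo packages path, so the sketch runs anywhere):

```python
import os
import shutil
import tempfile

# Stand-ins: in practice, source would be something like a reviewed folder on
# a company share, and target_root the user's Dynamo packages directory.
source = tempfile.mkdtemp()
target_root = tempfile.mkdtemp()

# Fake a package file so the copy has something to move (illustrative only).
with open(os.path.join(source, "pkg.json"), "w") as f:
    f.write("{}")

target = os.path.join(target_root, "MyCompanyPackage")
if os.path.exists(target):
    shutil.rmtree(target)            # replace any stale copy wholesale
shutil.copytree(source, target)      # mirror the reviewed content locally
```

Because the copy runs against content that was reviewed before reaching the share, users never execute anything pulled live from the open web.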

The hard part of distribution (and of any system, actually) is finding the time to build and maintain it. All of that work is another task for a group who’re already stretched thin. I had a call this week with users from a firm who’re still waiting on 2023 deployments; 2024 launched two weeks ago now. Sadly, that is not an uncommon story either…


I guess that technically forms a case against custom packages too, though, if we can’t trust distributed code from an external source. Appreciate that’s a whole new approach; just food for thought. I’m currently experimenting with scraping .py files from SharePoint to explore an internal but connected approach, as I’ve found being online is more common than having a good server connection, at least. If I could pull code off a very accessible source then I might find more ways to do away with mirroring the majority of my code, and could instead just distribute a skeleton toolbar/setup to machines on a far less regular basis.


The thought there is that the content you’re distributing is reviewable before it hits the users. With code executed from the web, content can be modified without anyone’s knowledge, whereas you’ve already got to be ‘in the system’ to modify stuff being pushed from a server. Using a closed web portal (i.e. ACC, Teams, SharePoint, etc.) is less of a concern, as writing to such sources requires access to the system, but even still, there are a LOT of orgs out there who’d avoid if not outright block this.
