I spoke a bit about why the CPython interaction is so different from the IronPython one today in the office hour. Basically you're at a different level of the machine - running closer to the hardware, as it were - and as a result things which were directly translated before now need to be explicitly stated. One example of this is CPython interface classes needing a namespace assigned in order to be called from Revit. This will be odd for a bit. To quote a very skilled Python user I work with: "I can't say I have ever seen a technical reason you would want to do that before."
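To make that namespace oddity concrete, here's a minimal sketch of the pattern. All names here (the class, the namespace string, the filter logic) are illustrative, and the Revit API base class is left out so the snippet runs outside Revit - in a real CPython node the class would inherit `ISelectionFilter` from the Revit API:

```python
# Hedged sketch of the "interface class needs a namespace" pattern in
# Dynamo's CPython engine. In a real node this class would inherit a
# Revit API interface, e.g.: class WallsOnlyFilter(ISelectionFilter).
class WallsOnlyFilter:
    # The CPython (Python.NET) engine needs this so the generated .NET
    # type has a home; IronPython never asked for it.
    __namespace__ = "MyCompany.DynamoFilters"  # illustrative name

    def AllowElement(self, element):
        # Illustrative logic; a real filter would inspect the element's category
        return True

    def AllowReference(self, reference, position):
        return False
```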
And so the specifics of how you interact with Revit, Civil 3D, Excel, and all the rest will need to be different than they were in IronPython. And for a while some things may not be feasible as a result. Hence the IronPython2 package.
The team is aware that this is a lot of work; and so an IronPython 3 implementation is likely something they'll try to put together. This could likely be distributed and implemented like the IronPython2 package was, meaning it could be run side by side with your other Python builds.
This is really the trigger for the situation. The team could have stuck with one or two updates a year and spent their time issuing hot fixes to get old builds working in new hosts, or they could give us the features we were asking for and the features they had to provide from a security standpoint. By breaking into its own release cycle, Dynamo got a span of just over 13 months which saw the release of versions 2.13 - 2.17, which have a LOT of good stuff in them. Obviously not all of these are picked up by any one host application, and we may see some host applications lag behind others, but the overall ecosystem is much more advanced than it was before.
But now as a result the integration specifics will vary from host build to host build. Not ideal from a "one graph runs in all parts of the ecosystem" aspect.
Too many things are changing on the host application side, the Dynamo side, and even the Microsoft side to assume you can have one version of a graph run in all years without excessive work and multiple rounds of "try/except" to deal with version changes (or better yet, some if-statement triggers based on the Dynamo/host version). And so for now my advice for dealing with the changes continues to be this:
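The if-statement approach can be as simple as branching once on the interpreter version instead of wrapping every call in try/except. A rough sketch, with the two read functions standing in for whatever engine-specific code you actually need (both names are placeholders, not real APIs):

```python
import sys

def read_with_cpython(path):
    # Placeholder for the CPython 3 code path (e.g. openpyxl / Open XML)
    return "cpython:" + path

def read_with_ironpython(path):
    # Placeholder for the IronPython 2 code path (e.g. Excel interop)
    return "ironpython:" + path

def engine_specific_read(path):
    # Branch once on the engine instead of scattering try/except blocks;
    # IronPython 2 reports major version 2, CPython 3 reports 3.
    if sys.version_info[0] >= 3:
        return read_with_cpython(path)
    return read_with_ironpython(path)
```

The same gate works for host-version checks if you swap `sys.version_info` for whatever version string your host application exposes.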
Do a save-as at each update, and test for the breaking changes. If nothing fails, it goes back to the library for office-wide use. If something goes wrong, fix it in ~5 minutes if you can, or save it to a new location to fix later.
The entirety of the testing process, less the decision to save to directory A or B, can be automated without too much work, as can most of the "that will impact all of my graphs" issues which you may come across (i.e. bulk-changing all Python in a package to CPython 3 to see which graphs automatically pass the upgrade).
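Since .dyn files are JSON, that bulk engine switch can be sketched in a few lines. This assumes the Dynamo 2.x layout where Python Script nodes sit under a top-level `"Nodes"` array with an `"Engine"` key - verify that against your own files before trusting it, as the schema can shift between builds:

```python
import json

def switch_python_engine(dyn_text, engine="CPython3"):
    """Flip every Python node in a .dyn graph to the given engine.

    Hedged sketch: assumes nodes live under "Nodes" and carry an
    "Engine" key ("IronPython2" / "CPython3"). Returns the rewritten
    JSON text and how many nodes were changed.
    """
    graph = json.loads(dyn_text)
    changed = 0
    for node in graph.get("Nodes", []):
        if "Engine" in node:
            node["Engine"] = engine
            changed += 1
    return json.dumps(graph, indent=2), changed
```

Run it over every .dyn in a package directory, open the results in the new build, and sort the passes from the failures.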
Now rather than focusing on all the other stuff going on in the world of Dynamo (I'm happy to set up a time to chat on such topics separately - just DM me), let's pivot back to the issue Ben has…
I do think the Open XML route is likely the most stable in the long term (managing the deployment of that dependency shouldn't be that hard, right? - cue the video of the PM crying while every block goes in the square hole).
Seeing the size of the dataset and knowing speed is the issue, I can think of a few other possible workarounds for long-term use which would reduce the need to be a part-time software developer focused on Excel interop…
Perhaps defining the read and write range as an added overload method on the current nodes would suffice? You are basically increasing the speed by slashing the scope of the data being read by 50% or more. Assuming this is passed into Dynamo directly as lists, that could reduce scope a fair bit. Altering the structure from a list to a dictionary could also help in that respect, as dictionaries are significantly faster to serialize between nodes, but that would remove the capacity to use any form of iteration in Dynamo without reintroducing the list.
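On the dictionary point, the restructuring itself is cheap: key the rows by one column so downstream nodes can do direct lookups instead of scanning the whole list. A minimal sketch (the key-column choice is illustrative, and as noted above you'd reintroduce the list anywhere you still need iteration):

```python
def rows_to_dict(rows, key_index=0):
    # Index Excel-style rows by a key column so downstream lookups are
    # O(1) instead of a scan; assumes key values are unique, otherwise
    # later rows silently overwrite earlier ones.
    return {row[key_index]: row for row in rows}
```

Usage: `rows_to_dict([["W-101", "Wall", 3200], ["D-204", "Door", 900]])` gives a dictionary keyed by the ID column.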
For now the methods shown by @c.poupin are promising - I will try to have a look at those next week (tied up today, and this weekend I'll be away from my CPU).