Get Structural Column Start Point

Hello,

I’m trying to get the Start Point of a Structural Column to allow the placement of a Structural Connection.

The problem I’m having is getting the correct Start Point regardless of whether the column is Vertical or Slanted.

If I use Element.Geometry -> Solid.Centroid, it places the point in the middle of the geometry, which is wrong.

If I use StructuralFraming.Location, it only works for my slanted columns.

If I use Face.Vertices -> Vertex.PointGeometry, I don’t get accurate X, Y, Z coordinates to be able to place a family without adjusting the coordinate system or something.

 

There must be an easier way to accomplish this, any suggestions?

[Image: Solid Centroid]

Matt-

Since it works for slanted columns, use that to your advantage. Use “SetParameterValueByName” to set all the columns to slanted (don’t worry, vertical columns won’t actually slant - they will remain vertical). Then get the start point however you wish. Afterwards you can switch the vertical columns back to the “vertical” setting.

I have a python script that does all of this within a sub-transaction to get the points, then rolls the transaction back to essentially reset the vertical columns. I’ll try to remember to share a copy when I’m back in the office on Monday.
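In the meantime, a rough sketch of the idea in a Dynamo Python node would look something like this (not the actual script; the SLANTED_COLUMN_TYPE_PARAM parameter and the integer value used for the slanted style are assumptions that may need adjusting for your Revit version):

```python
# Rough sketch of the rollback approach (IronPython in a Dynamo Python node).
# Temporarily mark every column as slanted, read the location curve's start
# point, then roll the transaction back so nothing actually changes.
import clr
clr.AddReference('RevitAPI')
from Autodesk.Revit.DB import Transaction, BuiltInParameter, LocationCurve

clr.AddReference('RevitServices')
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager

clr.AddReference('RevitNodes')
import Revit
clr.ImportExtensions(Revit.GeometryConversion)

doc = DocumentManager.Instance.CurrentDBDocument
columns = UnwrapElement(IN[0])  # expects a list of structural columns

# Close any transaction Dynamo has open so we can manage our own.
TransactionManager.Instance.ForceCloseTransaction()

points = []
t = Transaction(doc, 'Temporarily slant columns')
t.Start()
try:
    for col in columns:
        style = col.get_Parameter(BuiltInParameter.SLANTED_COLUMN_TYPE_PARAM)
        if style is not None and not style.IsReadOnly:
            style.Set(1)  # assumed integer value for a slanted column style
    doc.Regenerate()
    for col in columns:
        loc = col.Location
        if isinstance(loc, LocationCurve):
            points.append(loc.Curve.GetEndPoint(0).ToPoint())
        else:
            points.append(None)  # column style could not be changed
finally:
    t.RollBack()  # undo the temporary changes

OUT = points
```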

Hi Ben,

The method you described worked well. However, while attempting to build a custom node based on core nodes, it doesn’t allow me to set the parameter back to Vertical. I’m assuming that’s because it’s treated as a single operation, and that is probably why you had to create a Python script?

If you can share your Python script, that would be great!

[Image: DynamoColStart]

You could try putting a Transaction.End node after you set the columns to slanted, then start another transaction before you set them back to vertical.

 

Matt-

Here you go.

Ben,

Changing the column style for a few columns is fine, but I wouldn’t recommend it for a larger project. Starting and rolling back transactions is a bit expensive. You could instead simply use clockwork’s “Element.Location” node. It not only handles both line-based and point-based instances but also gives you a bool filter that you can use later on. Simply extract the bottom elevation of the column and add it to the location point like so:
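A rough equivalent in a Python node (not the original node layout; the built-in parameter names are assumptions) would be:

```python
# Sketch of the no-transaction approach: read the column's location directly
# and, for vertical (point-based) columns, rebuild the Z from the base level
# elevation plus the base offset. Parameter names below are assumptions.
import clr
clr.AddReference('RevitAPI')
from Autodesk.Revit.DB import BuiltInParameter, LocationCurve, XYZ

clr.AddReference('RevitServices')
from RevitServices.Persistence import DocumentManager

clr.AddReference('RevitNodes')
import Revit
clr.ImportExtensions(Revit.GeometryConversion)

doc = DocumentManager.Instance.CurrentDBDocument
columns = UnwrapElement(IN[0])  # expects a list of structural columns

points, is_slanted = [], []
for col in columns:
    loc = col.Location
    if isinstance(loc, LocationCurve):
        # Slanted columns already expose a curve; just take its start point.
        points.append(loc.Curve.GetEndPoint(0).ToPoint())
        is_slanted.append(True)
    else:
        p = loc.Point
        level = doc.GetElement(
            col.get_Parameter(BuiltInParameter.FAMILY_BASE_LEVEL_PARAM).AsElementId())
        offset = col.get_Parameter(
            BuiltInParameter.FAMILY_BASE_LEVEL_OFFSET_PARAM).AsDouble()
        # Rebuild the Z from base level elevation + base offset.
        points.append(XYZ(p.X, p.Y, level.Elevation + offset).ToPoint())
        is_slanted.append(False)

OUT = [points, is_slanted]
```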

 

Dimitar-

What exactly do you mean by expensive? Added time? Extra memory? Something else?

What if the start point, end point, and a curve representing the column are all desired? Would the rollbacks still be less preferable than getting the location, getting the base offset and subtracting it, then finding the top level, getting its elevation, subtracting the element location elevation, adding the top offset, creating a point, then creating a curve between the bottom and top points? The Python way seems more direct to me - but because it involves Revit doing some thinking instead of Dynamo, are you saying that the Python way is slower? Or is it perhaps unstable? Element.Location only gives the placement point of the column…
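For concreteness, the calculation route I mean would look roughly like this as Python instead of nodes (untested, and the built-in parameter names are guesses):

```python
# Rough sketch (guessed parameter names) of the calculation route for a vertical
# column: bottom from base level + base offset, top from top level + top offset,
# and a line between the two.
import clr
clr.AddReference('RevitAPI')
from Autodesk.Revit.DB import BuiltInParameter, Line, XYZ

def vertical_column_axis(doc, col):
    """Return (bottom, top, axis_line) as Revit XYZ/Line for a vertical column."""
    p = col.Location.Point
    base = doc.GetElement(
        col.get_Parameter(BuiltInParameter.FAMILY_BASE_LEVEL_PARAM).AsElementId())
    top = doc.GetElement(
        col.get_Parameter(BuiltInParameter.FAMILY_TOP_LEVEL_PARAM).AsElementId())
    base_off = col.get_Parameter(
        BuiltInParameter.FAMILY_BASE_LEVEL_OFFSET_PARAM).AsDouble()
    top_off = col.get_Parameter(
        BuiltInParameter.FAMILY_TOP_LEVEL_OFFSET_PARAM).AsDouble()
    bottom_pt = XYZ(p.X, p.Y, base.Elevation + base_off)
    top_pt = XYZ(p.X, p.Y, top.Elevation + top_off)
    return bottom_pt, top_pt, Line.CreateBound(bottom_pt, top_pt)
```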

I have noticed in the past that some of my scripts that use the Python way tend to leave Dynamo/Revit “not responding” when dealing with a lot of columns rather than just a few. Would you recommend doing all the calculations in Dynamo to get the start/end instead?

Note this would all be easier if the Revit API just treated slanted and vertical columns consistently and provided access to the same info for both. I mean, who wouldn’t want to know where the top and bottom of a vertical column are (without jumping through a bunch of hoops)…

Dimitar-

Your post got me wondering what the differences might be, so I ran a comparison with 9 and with 1500 columns in an otherwise empty file. I did both the Python way and the Dynamo way to get the column tops, bottoms and a reference line for each column. I ran the list of columns from “AllElementsOfCategory” and the resulting curves from the two methods through Clockwork LapTime nodes, then compared the results.

Each time the Python transaction rollback method beat my Dynamo node method for getting the column references. So that tells me the Python method, at least, seems to be faster. But does that mean it is better? Is the Python method expensive elsewhere (memory usage, etc.)?

Attached is my script if you want to compare for yourself (or show an alternate method using DesignScript).

[Attachment: Column Refs Dynamo Vs Python]

[Attachment: Column Ref Comparison]

Hi Ben,

First of all, to properly test two similar solutions, you’ll need to execute them separately in isolated files. Once you split up the files (and simplify the non-transaction solution a bit), we get the following times for an empty (columns-only) file:

 

Python (empty): [screenshot of execution times]

Dynamo (empty): [screenshot of execution times]

In this limited test, the transaction method is indeed faster by a few milliseconds. However, you are forgetting two very important facts:

  1. Nobody is going to run this in an idealized file with just columns

  2. Revit’s internal database is total and utter crap (sorry Autodesk, but we have to be honest here)

These are very important because in an actual project, you’ll have walls and floors interacting with those columns and numerous beams connecting straight into them (also if it’s a structural model, you’ll get double the trouble because of all the analytical elements hiding in the background).

Plainly speaking, whenever you make a change to an element in Revit, the element’s entire entry inside the internal database gets rewritten and tagged for re-evaluation. Then all other elements interacting with the changed element also get tagged for re-evaluation, because that single change in the original element could possibly affect them. By not committing the transaction, you spare yourself the re-evaluation phase at the end, but all of those changes that you made to the database now have to be undone again to bring the project file back to its original state.

So let’s run this on a ~150MB file with about 1,000 column instances:

Python (real project): [screenshot of execution times]

Dynamo (real project): [screenshot of execution times]

The execution times appear virtually the same, but the transaction rollback made Revit unresponsive for over a minute. The above is based purely on my own experience and observations, so take all of this with a grain of salt. No two projects are the same and there’s no guarantee that the above will always be the case. Just try it out on a few actual projects and see for yourself.


Thank you Ben & Dimitar.

In the end I went with the Element.Location node, which I initially tried but without the fix for the Z point, so this is great!

This will allow me to place items at either the XY location or the true column start point, using the method above, regardless of the column being vertical or slanted, which is perfect!

Cheers!


Dimitar-

Thanks for the explanation. What you outlined matches my experience (files locking up); I just never thought it was the rollbacks. I was actually surprised when I ran the test on the isolated file and the Python was faster. Understanding that should help us all build better Dynamo scripts.

I am in total agreement on the mess that is the way Revit works under the hood (especially the analytical). They at least need to put in some way to allow us to temporarily disable analytical calcs during any API interaction (let the script do its thing before trying to look at the analytical implications…wait your turn…)

I like the solutions/debugging above.

To get over the analytical issue, couldn’t you disable the analytical lines, or better still, disable the structural setting for the analytical check and then re-enable it once the above has been completed?

If you toggle the line on/off, it may reset its position to the centre of the element geometry and therefore lose any offsets, depending on how it is implemented.

I wouldn’t classify analytical elements as much worse. What I meant to say is that a lot of people forget they’re there, and they almost double the number of model elements in your project - and thus also the number of elements that have to be reviewed after a change.

They definitely affect performance as well, though I wouldn’t be surprised if they actually have a lower impact than standard elements, given that they’re just simple lines and surfaces without many parameters.

Actually, analytical model checking and updating does seem to cripple a model at times. See this video I shared with some of the Revit development team a while back. (FWIW - I replicated this on the machine of one of the QA team at the Lounge at AU after they swore it did not work that way… they were surprised…)

https://www.dropbox.com/s/s4yo05bipz0c5xq/Structural%20Analytical%20Toggle%20Revit%20vs%20Revit%20Structure.mp4?dl=0

Brendan - if you disable analytical at the element level, do stuff, then re-activate the analytical, Revit remembers any analytical edits you made to the analytical column, beam, etc., which may or may not be the behavior you want depending on what you are doing. As an alternative, you can disable analytical on a per-user basis if (and only if) you are using Revit (not Revit Structure) via Revit > Options. This appears to leave the analytical model intact for other users, but prevents your machine from doing any of the analytical evaluations and the resulting model slowdown.
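If you do go the element-level route from Dynamo, a rough sketch would be something like the following, assuming the STRUCTURAL_ANALYTICAL_MODEL built-in parameter (“Enable Analytical Model”), which may behave differently in newer Revit versions:

```python
# Rough sketch: toggle the "Enable Analytical Model" checkbox per element from a
# Dynamo Python node. The STRUCTURAL_ANALYTICAL_MODEL parameter name is an
# assumption and may not apply to newer Revit versions.
import clr
clr.AddReference('RevitAPI')
from Autodesk.Revit.DB import BuiltInParameter

clr.AddReference('RevitServices')
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager

doc = DocumentManager.Instance.CurrentDBDocument
elements = UnwrapElement(IN[0])  # expects a list of structural elements
enable = IN[1]                   # True to re-enable, False to disable

TransactionManager.Instance.EnsureInTransaction(doc)
for e in elements:
    param = e.get_Parameter(BuiltInParameter.STRUCTURAL_ANALYTICAL_MODEL)
    if param is not None and not param.IsReadOnly:
        param.Set(1 if enable else 0)  # Yes/No parameters store 1/0
TransactionManager.Instance.TransactionTaskDone()

OUT = elements
```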

The issue with Analytical performance seems to be that a change in one place literally has to trickle down through the whole model (moving one beam may ever so slightly change the load paths everywhere - so Revit must check that…)

Which package is that node from?

Could you share the .dyn and the Python script for this?