So given a bulk list of walls, for example, I need to create two lists: a list of walls (in this case generic models, don't ask lol) and a list of all walls that intersect them. My graph reliably does that... on about 20 walls. If I run it on a model of any substantial size, it uses all 32 GB of my RAM for about 20 minutes. Not ideal, obviously. I suspect this graph would bluescreen a computer at some point.
So the problem, two options: 1) more efficiently create my list of elements and the elements to join them to, or
2) create some kind of loop-while function that deals with the data in smaller portions but can safely be allowed to run all night.
A proposed 3rd option: run the tests on a bounding box (or series of bounding boxes) dividing your project up into NxNxN cuboids and test one cuboid at a time. The Revit API has a nice filter to quickly get at model elements which are in a bounding box vs those which are not. I haven’t seen a node for this, but it shouldn’t be too hard to implement.
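For what it's worth, the cuboid idea can be sketched in plain Python. The box representation and function names below are illustrative, not the Revit API; in practice you'd feed one BoundingBoxIntersectsFilter per cuboid instead of the `boxes_overlap` helper here:

```python
# Sketch of the NxNxN cuboid idea. Each element is represented by its
# axis-aligned bounding box as ((min_x, min_y, min_z), (max_x, max_y, max_z)).
# In Revit you'd get these from the element's BoundingBox instead.

def divide_into_cuboids(model_min, model_max, n):
    """Split the model's overall bounding box into n x n x n cuboids."""
    steps = [(model_max[i] - model_min[i]) / n for i in range(3)]
    cuboids = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                lo = (model_min[0] + i * steps[0],
                      model_min[1] + j * steps[1],
                      model_min[2] + k * steps[2])
                hi = (lo[0] + steps[0], lo[1] + steps[1], lo[2] + steps[2])
                cuboids.append((lo, hi))
    return cuboids

def boxes_overlap(a, b):
    """Axis-aligned bounding box overlap test on all three axes."""
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

def bucket_elements(elements, cuboids):
    """Return, per cuboid, the elements whose boxes touch that cuboid."""
    return [[e for e in elements if boxes_overlap(e, c)] for c in cuboids]
```

Testing one cuboid at a time turns a single giant all-vs-all comparison into many small ones, which is exactly what keeps RAM usage flat.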
A proposed 4th option: I can't see your graph (tip: use the camera icon in the top right to export an image of the entire workspace after zooming in so you can read the nodes), but it appears you are using Dynamo geometry to filter out content and reduce check sets. The API is likely faster, and will allow a non-cross-product means of looping (i.e. list.pop).
I did use the camera and tried to get a good screenshot for you. I did the bounding box intersection check before creating geometry in Dynamo and checking it, but the bounding box intersection was not giving me consistent results. I also tried the Elements.Intersect node from Clockwork, but still got inconsistent results. I think the best solution here is a loop, but while I know what to do, I have no idea how to do it lol; my Python skills are woefully lacking. I need to find a way to process the data in smaller pieces instead of all at once. Maybe list management is an option too?
You could do a Geometry.Union on the walls to bring the list down to a more manageable size. Even though it's a slow algorithm, it should (eventually) be faster when working with larger lists. Can you chunk the comparison list by area or by floor?
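Chunking by a parameter like level is cheap. A minimal sketch, assuming each wall is a plain dict with a "level" key (in Dynamo you'd pull the value with a parameter node or the API instead; the field name is illustrative):

```python
# Group walls by a shared key (e.g. level) so each intersection pass
# only compares walls that could plausibly touch each other.
from collections import defaultdict

def chunk_by(walls, key):
    """Return {key_value: [walls with that value]}."""
    groups = defaultdict(list)
    for wall in walls:
        groups[wall[key]].append(wall)
    return dict(groups)
```

Each per-level group can then be run through the intersection test independently, so only one floor's worth of geometry is alive at a time.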
I would export the union geometry to SAT, then run the list comparison against that geometry. It'll save time, since you'll only have to create the file once and can reuse it from then on. It depends on what parameters you are trying to pull out, though. This is good if you're only interested in whether a collision occurred, but it won't give you exact info if you are trying to pull data on the colliding object.
In theory you'd locate the collisions and then compare them back to the original pre-union list. But also in theory, you could export your generic elements and wall elements to Navisworks and get the info you need from the clash detector.
When you use the camera feature, zoom in first until you can see the node header text (it’s fine if the whole graph doesn’t fit on screen). When you take the camera shot it’ll properly capture the full graph and still maintain the node headers so that others can follow along.
It looks like you are making element combinations first, and then afterwards generating bounding boxes? That might be a contributing factor, see below:
For a list of size X you're asking Dynamo to generate [X*(X-1)] bounding boxes. For a medium-sized building with 100 walls, that's 9,900 bounding boxes.
Edit- Another consideration for the Intersects node, you can just check for Intersections > 1 on a list that is compared to itself:
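To illustrate the "count > 1" trick with stand-in geometry (axis-aligned rectangles instead of wall solids; your Intersects node plays the role of `overlaps` here):

```python
# Compare a list to itself and keep elements whose intersection count
# exceeds 1. Every element intersects itself, so a count greater than 1
# means it hits at least one *other* element.

def overlaps(a, b):
    """Rectangle overlap test; rects are (xmin, xmax, ymin, ymax)."""
    return a[0] <= b[1] and b[0] <= a[1] and a[2] <= b[3] and b[2] <= a[3]

def has_neighbor(rects):
    """For each rect, True if it intersects any other rect in the list."""
    return [sum(1 for other in rects if overlaps(r, other)) > 1
            for r in rects]
```

This avoids building the X*(X-1) combination list up front; the self-comparison does the same job with the list as-is.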
By any chance are the generic model category walls still line-based? Line comparison runs quicker than solids comparison.
So my generic model family is still line-based. I did try using the bounding boxes to cut the list down, but it was not giving reliable results; the graph was generating warnings. I considered living with that and removing the warnings after the fact. I also tried Element.Intersects, but with a different issue: it was not reliably joining all of the elements. The only way I could get consistent results was by getting the geometry and checking it with Geometry.Intersects.
Time isn't really the issue, it's RAM; it's alarming to see my RAM maxed out. So what I was thinking is doing a for loop and processing 100 elements at a time until finished, but I'm a total noob in Python, sadly.
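The batching loop described above is only a few lines of plain Python. This sketch just slices the list into batches; inside a Dynamo Python node you'd run your intersection check on each slice (that part is hypothetical here), so only one batch of intermediate results lives in RAM at a time:

```python
# Yield successive slices of `elements`, `size` items at a time.
# Process each slice and collect its results before moving on, instead
# of holding every pairwise comparison in memory at once.

def chunked(elements, size=100):
    for start in range(0, len(elements), size):
        yield elements[start:start + size]
```

Usage would look like: `for batch in chunked(walls): results.extend(check(batch))`, where `check` stands in for whatever intersection test you settle on.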
What is your desired large sample size? 100 walls? 1000?
Also, have you frozen script sections to test what node is consuming your RAM?
If the generic model families are attributed properly to levels/planes and are line-based, you can try using Element.GetLocation to pull the line object that defines the family generation for pre-filter or final filter use.
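As an illustration of why line comparison is cheap: a standard 2D orientation test decides whether two segments intersect in a handful of multiplications. This is generic computational-geometry code, not the Revit API; the segments would come from whatever Element.GetLocation returns for your line-based families:

```python
# Textbook segment-intersection test via orientation signs.
# Points are (x, y) tuples; touching endpoints count as intersecting.

def orient(p, q, r):
    """Signed area of triangle pqr: >0 counter-clockwise, <0 clockwise, 0 collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def on_segment(p, q, r):
    """For collinear p, q, r: is q within the bounding box of segment pr?"""
    return (min(p[0], r[0]) <= q[0] <= max(p[0], r[0]) and
            min(p[1], r[1]) <= q[1] <= max(p[1], r[1]))

def segments_intersect(a1, a2, b1, b2):
    d1 = orient(b1, b2, a1)
    d2 = orient(b1, b2, a2)
    d3 = orient(a1, a2, b1)
    d4 = orient(a1, a2, b2)
    # Proper crossing: endpoints of each segment straddle the other.
    if ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0)):
        return True
    # Collinear / touching cases.
    if d1 == 0 and on_segment(b1, a1, b2): return True
    if d2 == 0 and on_segment(b1, a2, b2): return True
    if d3 == 0 and on_segment(a1, b1, a2): return True
    if d4 == 0 and on_segment(a1, b2, a2): return True
    return False
```

No solids, no bounding boxes: just arithmetic on four points, which is why a line-based pre-filter scales so much better than a solid clash.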
I’ve got a working prototype that reduces an intersection test on 4153 elements to sets of between 1 and 106 elements in about 10 seconds. I have to imagine that is much faster than your bounding box conversions. The sets can go even smaller if you want, and things will reduce further as we exclude annotation elements, undesirable classes, and the like.
Going to test it on a better data set (a 16-story hotel data set where all walls go from level 2 to the roof, slab or not), which as such has TONS of repeat items (there is one wall that appears in 25% of all 72 boxes using one division method I had set up).
Yeah, so the desired sample size is as large as possible lol. As Jacob said, I want to be able to run this on 15-story hotels if needed.
As for freezing, it seems the RAM is being consumed at the point where I am generating geometry and/or checking for intersections.
Can you share a sample model? I think I can cut your tests down to seconds, and the RAM usage goes WAY down because my code runs low-level element filters on the CPU instead of repeatedly passing multiple nested lists between the CPU and RAM.
Yeah sure, here is a Onedrive link.