I generated parts of this post with ChatGPT, because I'm too tired to even phrase it myself.
I’m working with Dynamo in Civil 3D, and I’m trying to extrude polycurves or polygons into solids using Curve.ExtrudeAsSolid. However, the process becomes very slow, especially when handling multiple polylines.
What I’m Doing:
I have a set of zone boundaries from a database, from which I create PolyCurves or Polygons.
Each curve needs to be extruded into one or more solids with different heights. (These are pavement zones with different pavement structures; every layer needs to be modelled.)
The goal is to generate multiple solids efficiently from multiple polylines.
The boundaries also represent the surface top, so the curves are non-planar 3D polygons.
The Issue:
When I use Curve.ExtrudeAsSolid, performance drops significantly.
It gets progressively worse as the number of polygons and solids increases.
Civil 3D/Dynamo sometimes even freezes when extruding multiple curves.
For example, it handled 20 in one batch, but 50 made it freeze; nothing happened even after hours.
The main issue is that there are about 1800 zones and roughly 7000 solids to be created.
What I’ve Tried:
Reducing the number of segments in the polylines to simplify geometry. But I can only do that up to a certain point without losing the required precision.
Breaking the operation into smaller batches rather than extruding everything at once. That raises the question of how to go through all the batches automatically, e.g. 180 batches of 10 curves. So how do I set up Dynamo to run automatically with new inputs until it is done?
Using AutoCAD to create solids instead of Dynamo's built-in extrusion. I thought native solid creation would be the fastest, but AutoCAD could only create surfaces, not solids, maybe because the curves are non-planar. I tried this before resorting to the API, so I have not tried the API yet; I don't want to spend time for nothing.
Tried Solid.ByLoft, but got mixed performance results: for some zones Curve.ExtrudeAsSolid was faster, for others Solid.ByLoft was.
Also, extruding from a Polygon seems to be slightly faster than from a PolyCurve for some reason, but that doesn't really help me.
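To automate stepping through those batches, the chunking itself can be done in plain Python. This is only a sketch of the looping logic; `extrude_batch` is a hypothetical stand-in for whatever node, Python node, or API call actually creates the solids from one batch of curves.

```python
def chunks(items, size):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def process_all(curves, extrude_batch, batch_size=10):
    """Run every batch in sequence and collect the results.

    With ~1800 zone boundaries and batch_size=10 this would make
    roughly 180 calls to extrude_batch, matching the 180x10 idea above.
    """
    solids = []
    for batch in chunks(curves, batch_size):
        solids.extend(extrude_batch(batch))
    return solids
```

In Dynamo terms, `extrude_batch` would wrap the `Curve.ExtrudeAsSolid` step, so each run only holds one batch of geometry in memory at a time.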
Questions:
Is there a more efficient way?
Would it be better to handle the extrusion directly via the AutoCAD API rather than Dynamo nodes? And why doesn't AutoCAD create a solid when the Object.ByGeometry node can?
Are there best practices for optimizing performance when working with Curve.ExtrudeAsSolid in Dynamo?
Also, this is in large-coordinate territory. What scale should I use? I tried scaling the curves down, creating the solids, and scaling back up, but that also gave mixed results.
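On the large-coordinate point, an alternative to scaling is translating everything near the origin before building geometry, then translating the results back. A minimal sketch of that bookkeeping on plain point tuples (not the real Dynamo geometry types, just the idea):

```python
def translate_to_origin(points):
    """Shift points so their bounding-box minimum sits at (0, 0, 0).

    Returns the shifted points plus the offset needed to move results
    back. Working near the origin avoids the precision loss that large
    survey-grid coordinates cause in geometry kernels.
    """
    ox = min(p[0] for p in points)
    oy = min(p[1] for p in points)
    oz = min(p[2] for p in points)
    shifted = [(x - ox, y - oy, z - oz) for x, y, z in points]
    return shifted, (ox, oy, oz)

def translate_back(points, offset):
    """Undo translate_to_origin on result points."""
    ox, oy, oz = offset
    return [(x + ox, y + oy, z + oz) for x, y, z in points]
```

Unlike scaling, a pure translation leaves the thicknesses (0.025 m, 0.05 m, …) untouched, so there is no rescaling step to get wrong.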
Any help or suggestions would be greatly appreciated! Thanks in advance. @jacob.small @zachri.jensen you guys are more familiar with the stuff under the hood; any suggestions?
For the example I created the 3D polylines, but ignore that; I want to do the extrusion directly from the database boundary points, then polygons.
What is interesting is that this curve has four times more vertices than the second one and still finishes faster.
Without the graph and dataset we can’t really help beyond some basic recommendations.
Never ‘split’ a track to use two copies of the same node. Leverage list lacing and levels instead.
Reduce the number of ‘send’ applications by flattening your list first.
Consider using a direct Civil 3D method, or unioning solids into one if discrete parts are not required (looks like GIS context and therefore usually combined is fine if you’re only using it for display purposes).
Confirm which part is ‘slow’ by checking execution times with the TuneUp view extension.
With the image I wanted to demonstrate a test that compares the different solid-creation methods in Dynamo, showing that they have different performance when creating the same result.
Sadly, I cannot share the dataset, but the part we are talking about looks something like this.
My main issue is that for planar 3D curves (all points on the same plane) solid extrusion would be manageable, but in this case performance drops badly.
And despite the huge size of this dataset everything goes smoothly up until the solid extrusion node (whichever method I use from the above example)
Happily, you can share fake data which is similar in nature and size to your data.
If I had time to re-type the image into a CSV/JSON/Text document and a larger sample from the dataset I would, but I’m strapped for time today.
If it’s ‘quick enough’ with a small subset of the data you are likely running into a situation where Dynamo needs more memory than you have readily available. This can be reduced by moving from many nodes to one. Remember each node’s output needs to be cached in memory, and this is not just the bytes that make up the thing but something you can read when Dynamo displays a preview. As such my experience is that reductions in nodes on canvas is the usual easiest way to reduce memory consumption.
As far as how to reduce nodes… you have options. What I see (and some thoughts on each without putting in any effort) are as follows:
Convert most (all but the input?) of the graph to a custom node and thereby reduce the memory consumption.
This will be the easiest to build - taking about 2 minutes total.
It will not share as easily if you mean to send this to others for re-use, as the definition will be local to you and will need to be shared with the graph, best accomplished via a package.
This is the most stable over time (there are .dyf nodes dating to 2015 which haven’t had to be touched in any way over 8 years).
This is within the execution time of the Python node, with speed differing mostly by a margin of error in most of my testing (it's been a while, so that may be worth revisiting), though your mileage may vary here as loading DYFs or any external resource is impacted by your system's settings and configurations.
Convert most (all but the initial input?) to a DesignScript definition.
This is the second easiest to build if you don’t know Python, taking about 15 minutes to build.
This will share readily as the “code” is in the .dyn itself, however it won’t work in other .dyns without some work moving the definition to the new graph.
This will work well enough over time, but if it ever breaks you'll have to spend time debugging line by line in a new definition… pray it doesn't break.
This will be either the second or third fastest if written well, and the slowest if written poorly. Remember you want to take the list, not the items.
Convert most (all but the initial input?) to a Python node.
Easiest to build if you know Python and the Dynamo API. Slowest if you don’t.
Accessing Civil 3D via Python is not for the faint of heart; there is a reason no major package has gone that route, and it isn't the Python engine issues as much as it is stability.
This is somewhat stable over time, and when there are breaks you’re good to go right away.
This will be the second or third fastest, depending on how you write other stuff. The biggest hits will actually be spinning up the Python engine which has to happen once per Python node on canvas. If you test speed be sure to do so in the context of ‘first run’ not back to back runs.
Convert most (all but the initial input?) to C#.
This will take the longest to build, and has the highest degree of difficulty.
This will be the most stable for use, and likely scale second best (after .dyfs) between builds.
This will be the fastest by a mile; usually including if you write it poorly.
AI assistants are WONDERFUL here; usually you can write three lines and it'll give you the next 12.
You’ll need to invest in an additional tool to compile the DLL which produces this.
thanks @jacob.small
I can try all above except C#. I’ll see what I have time for.
As for the AutoCAD API, I found that AutoCAD extrudes Region objects to solids, so if the input is anything else it converts it to a Region first, but a Region object is planar…
So I think solving this within the API would be too complicated for me… although I wanted to do it natively.
How many points do you have in total, and how many shapes are you attempting to build? The last row in excel should have this readily available.
I’m wondering if maybe the ‘slowdown’ is because the incoming geometry is not functionally clean, as I just ran a dataset of 64 random shapes with between 3 and 12 points in them and it finished in under 30 seconds.
1828 polygons from about 223,000 points, producing about 7000 solids.
So maybe it is a memory issue after a certain number of objects fed into the node. I have 32GB DDR5.
Now I selected 120 at random and it has been running for a while; earlier, a batch of about 20 was quite fast.
And it does not look like it is working at all; sometimes the processor goes up to 100% for a few seconds.
I get that 1828 polygons generated from 223,000 points means you have about 122 points per polygon. But how are you converting 1828 polygons into 7000 solids?
Every polygon has a different number of points; some have hundreds, some have dozens.
Every zone has a different pavement structure, so it can be 2 layers (0.025 m and 0.05 m thickness) or 5 layers, etc.
so the input setup is:
geometry: list of polygons on list level one
vector: Z
height: list of thicknesses corresponding to every polygon, on list level 2
that returns the solids per zone for further attribute addition
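If the layers are meant to stack (each layer starting where the previous one ends rather than all starting at the surface), the per-zone thickness lists have to be turned into (offset, thickness) pairs before extruding. A small plain-Python sketch of that bookkeeping; the actual extrusion call is assumed to happen elsewhere:

```python
def layer_offsets(thicknesses):
    """Turn one zone's layer thicknesses into (top_offset, thickness) pairs.

    The first layer starts at the surface (offset 0.0); each following
    layer starts where the previous one ended, so the resulting solids
    stack instead of overlapping.
    """
    pairs = []
    offset = 0.0
    for t in thicknesses:
        pairs.append((offset, t))
        offset += t
    return pairs

# e.g. a 2-layer zone as described above:
# layer_offsets([0.025, 0.05]) -> [(0.0, 0.025), (0.025, 0.05)]
```

Each pair would then drive one extrusion: translate the zone polygon by the offset along the extrusion vector, extrude by the thickness.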
Well yes and no.
The whole road is so segmented (1800 zones over 100 km, 6 lanes, zones in neighbouring lanes not necessarily starting/ending at the same chainage, every zone having a different pavement structure, etc.) that setting up a corridor (even with Dynamo) would be a nightmare.
So I thought workflow wise this is the most simple one, but I didn’t account for the performance issues.
This would be the base model for a digital twin of an existing highway, so I am looking for an optimal and easily reproducible/reusable way to carry out the modelling part. Luckily, the whole road only has to be done once, and then the plan is to only modify the reconstructed parts later.