Hi guys,
This got me very curious, so I decided to dissect the workflow and managed to pinpoint a few bottlenecks along the way.
I generated a topo with > 40k points. At that point Revit itself became quite the bottleneck and started lagging severely on any further attempt to modify the topo element. If I had to put an upper bound on a topo created by points, I’d say you generally don’t want to go above 10k points.
First of all, you don’t need to split your topo into separate files; you can simply use Massing & Site > Modify Site > Subregion and split the topo into more manageable parts.
DISCLAIMER: don’t try any of the below on your work project. I bear no responsibility for any loss of information or damage incurred in the process of doing so.
The first part of the workflow involved extracting the topo geometry: vertex positions, face indices and index groups. All of that completes in just a few seconds.

The next step involves extracting the vertices belonging to each index group (A, B, C). This is where I found the first culprit - “List.GetItemAtIndex”. That node seems to have a serious memory leak/bottleneck and usually leads to a system crash. Luckily, we can avoid using it with some DS syntax.
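For illustration, here’s the idea in a small Python sketch (the list names are made up for the example); in a DS code block the same lookup collapses to something like plain bracket indexing over the whole index list, instead of one node call per item:

```python
# Sketch: pull the vertex for each face's group-A index in a single
# pass, rather than one GetItemAtIndex call per lookup.
# All names here (vertices, index_group_a) are hypothetical.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 1.0)]
index_group_a = [0, 1, 3]  # one vertex index per face

# One comprehension resolves every lookup in the group at once:
group_a_vertices = [vertices[i] for i in index_group_a]
```

The same pattern repeats for groups B and C, so the whole extraction stays a handful of lines.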

Extracting all three groups takes about 10-15 seconds in total. The next challenge is to join and transpose the three lists. The standard List.Transpose function manages that in about 25-30 seconds. At this point we’re looking at about 240k points. With some very simple DS syntax, we can cut that time in half.
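To make the join-and-transpose step concrete, this is the operation in plain Python (the group lists are hypothetical placeholders); the point is that the three per-group lists become one list of [a, b, c] triples, one triple per face:

```python
# Sketch: transpose [A, B, C] into per-face triples.
group_a = ["a0", "a1", "a2"]
group_b = ["b0", "b1", "b2"]
group_c = ["c0", "c1", "c2"]

# zip walks all three lists in lockstep, yielding one triple per face:
faces = [list(triple) for triple in zip(group_a, group_b, group_c)]
```

Each entry of `faces` now holds the three corner vertices of one triangle, ready for surface creation.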

At this point I had to save my progress and restart Dynamo and Revit, because memory was not being released after execution of the graph. After reloading, I proceeded to generate the individual surfaces. That took about 100-110 seconds for the 80k faces, or roughly 800 faces per second.

The final challenge was to merge all the faces together. “PolySurface.ByJoinedSurfaces” was the next huge slow-down. It seems to suffer from a memory leak as well: it usually starts becoming unbearably slow beyond 100 faces, and going higher than that usually leads to crashes. At this point I decided to get sneaky. I wrapped everything up in a single function and started chopping the faces into more manageable chunks.

Ten faces/polysurfaces at a time seemed to be a decent chunk size for “PolySurface.ByJoinedSurfaces”. About 2 minutes later I ended up with a single polysurface consisting of ~80k faces. The process peaked at about 6.8GB of memory usage (Revit and Dynamo ate up about 5GB of that), so you’ll want a system with at least 8GB of RAM before trying anything like this.
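The chunk-and-merge trick can be sketched like this in Python. The `join_surfaces` function below is only a stand-in for “PolySurface.ByJoinedSurfaces” (here it just flattens its inputs so the sketch stays runnable outside Dynamo); the structure of the loop is what matters:

```python
# Sketch: instead of feeding all ~80k faces to one join call,
# join them 10 at a time and then merge the partial results.
def chunks(items, size):
    """Split items into consecutive chunks of at most `size`."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def join_surfaces(surfaces):
    # Hypothetical stand-in for PolySurface.ByJoinedSurfaces:
    # just flattens one level so the sketch runs anywhere.
    joined = []
    for s in surfaces:
        joined.extend(s if isinstance(s, list) else [s])
    return joined

faces = list(range(35))                                   # pretend surfaces
partials = [join_surfaces(c) for c in chunks(faces, 10)]  # 4 partial joins
merged = join_surfaces(partials)                          # merge the partials
```

Keeping each call down to ~10 inputs is what keeps the join node out of its crash-prone range; the final merge then only sees a short list of partial polysurfaces.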

My conclusions from the above exercise are as follows:
- Dynamo has an incredibly robust geometry processing engine and can handle tens of thousands of lists, points and surfaces at the same time.
- Some list management functions need to be reviewed and optimized.
- Surface/polysurface creation might need to be further optimized at some point.
- Memory management should probably be reviewed. What portion of the memory, if any, should be released upon completion of the graph?
- DesignScript syntax seems to be measurably more powerful, efficient and stable than graphical nodes. Is there a way to transfer some of that back to the graphical nodes?