Importing analytical surfaces into RSA is very slow

I am trying to import 23,760 surfaces into RSA 2022 using AnalyticalPanel.BySurface, with divisions set to 1 for each surface. The surfaces sit at 60 different Z levels, with 396 surfaces per level. In Dynamo there are 60 sublists representing the 60 levels, each containing 396 surfaces.

If I connect any single sublist to AnalyticalPanel.BySurface, RSA takes only a short time to show the panels. But when I connect all the sublists at once, it has been running for more than 60 minutes and is still going.

I notice that at the beginning Dynamo uses about 10-20% CPU, and later on RSA takes about 10-20% CPU.

i7-8700K CPU, 32GB memory, 1TB SSD.

It sounds like you have exceeded your available RAM, and as a result Dynamo, Revit, and Robot are spending more time moving data between memory and disk than they are calculating results. Try sending the panels in batches by level instead.
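A minimal sketch of what batching by level could look like in a Dynamo Python node. The 60×396 nested list mirrors the structure described above; `process_level` is a hypothetical stand-in for wiring one sublist into AnalyticalPanel.BySurface, not a real API call.

```python
# Sketch: feed the 60 level-sublists to RSA a few at a time instead of
# all at once. Placeholder strings stand in for the actual surfaces.

def batches(levels, batch_size=1):
    """Yield successive groups of `batch_size` level-sublists."""
    for i in range(0, len(levels), batch_size):
        yield levels[i:i + batch_size]

# 60 levels of 396 placeholder surfaces each, matching the post.
levels = [[f"srf_{z}_{n}" for n in range(396)] for z in range(60)]

for batch in batches(levels, batch_size=5):
    for level in batch:
        pass  # process_level(level)  # hypothetical: send one level to RSA
```

Running the graph once per batch keeps the working set small, at the cost of a little manual bookkeeping.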

Thanks Jacobs.

I had more than 15GB of free memory while running Dynamo and RSA.

Let’s say each surface is defined by only 4 points, and each point by 3 doubles (X, Y, Z). That’s 12 doubles per surface. Multiply that by 23,760 surfaces and you get 285,120 doubles to remember. Now multiply by 3 because there are three applications in play here, and you get 855,360 doubles. Each double takes 8 bytes of memory, and each app likely wraps the raw coordinates in its own heavier object types, so the real footprint per point is a good deal bigger.
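That arithmetic can be checked with a quick sketch, assuming 4 corner points of 3 coordinates each and 8 bytes per IEEE 754 double:

```python
surfaces = 23_760
doubles_per_surface = 4 * 3   # 4 corner points, 3 coordinates (X, Y, Z) each
apps = 3                      # Dynamo, Revit, and Robot each hold a copy
bytes_per_double = 8          # IEEE 754 double precision

total_doubles = surfaces * doubles_per_surface * apps
total_mb = total_doubles * bytes_per_double / 1024 / 1024
print(total_doubles)          # 855360
print(round(total_mb, 1))     # ~6.5 MB for the raw coordinates alone
```

The raw coordinates themselves are small; as the next paragraph notes, the real cost is everything built on top of them.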

And this is before you get into the memory required for the lines which make up the edges of the surfaces, or the surfaces themselves, or the applications in use (i.e. Revit uses memory when you use the API to ask for things, and Robot needs more memory when you create new elements), or the other info you’re passing around between applications.
Suffice it to say that at a certain point in any application the amount of RAM available becomes too little to use effectively, and you get into page-fault territory. This awesome website has some great info on what happens to your application at that point, albeit from a different perspective: page faults (wizardzines.com).

I’d argue that a data set this big will hit that page-fault point eventually, even with a perfectly written Dynamo graph.

You could customize the code base to pass each surface from A to B to C and dispose of the memory after each completed surface, but that would likely take longer than running in sections.
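A rough sketch of that one-surface-at-a-time pattern. `AnalyticalPanel.BySurface` is the real node named in this thread, but the surrounding `stream_panels` loop and the geometry placeholder are hypothetical illustration, not actual Dynamo/RSA API:

```python
# Sketch: create one panel, release its intermediate geometry,
# then move on, so only one surface is "in flight" at a time.

def stream_panels(surface_ids):
    created = 0
    for sid in surface_ids:
        geometry = {"id": sid}   # stand-in: build this surface's geometry
        # panel = AnalyticalPanel.BySurface(geometry, 1)  # real node call
        created += 1
        del geometry             # drop the intermediate object immediately
    return created
```

Peak memory stays roughly constant regardless of the total count, but the per-surface round trips to Robot would likely dominate the runtime, which is why batching by level is the more practical compromise.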

Keep in mind that while Task Manager may say you have 15GB free, that doesn’t mean Revit/Dynamo/Robot have access to all of it; if any one of them is capped for RAM, then effectively all three are. So are all three applications allowed to use more than the default memory allocation Windows provides (usually 16GB, if memory serves)? The amount of RAM in the machine and the amount available to a given application are two different things.