While loop for finding the appropriate surface area

I have been developing a script to reconstruct geometry from the Estonian digital twin system. Along the way I have learned a lot, and my previous posts here have received helpful responses. As I am still relatively new to the Dynamo world, I am having trouble figuring out the [Imperative] block for the while loop.

The problem I am trying to solve is matching the surface area of the original geometry to a threshold area by offsetting the original polycurve. That is, I want to decrease the area by offsetting the original polycurve until it meets the threshold area.

Maybe I do not need a code block for this, but this seems to be the most apparent approach to me now.

The problem with offsets is that at some point you lose the fidelity of the shape.

Why not use the Geometry.Scale node instead?
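For context on why scaling is attractive here: under a uniform 2-D scale, area varies with the square of the factor, so the factor that hits a target area has a closed form. Below is a minimal pure-Python sketch of that relationship; the shoelace-area helper, the square, and the numbers are illustrative stand-ins for Dynamo's own area measurement, not part of the graph discussed here.

```python
import math

def polygon_area(points):
    """Shoelace formula for a simple 2-D polygon given as (x, y) tuples."""
    s = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def scale_factor_for_area(current_area, target_area):
    """Uniform 2-D scaling multiplies area by factor**2, so take a square root."""
    return math.sqrt(target_area / current_area)

# 10 x 10 square (area 100) scaled down to a target area of 25
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
factor = scale_factor_for_area(polygon_area(square), 25.0)
scaled = [(x * factor, y * factor) for x, y in square]
print(factor)                # 0.5
print(polygon_area(scaled))  # 25.0
```

No iteration is needed for a pure scale, which is one reason it can be simpler than an offset search.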


Depending on the accuracy needs, you could make a Python loop with a desired number of iterations (accuracy) that recalculates the area from the offset. I prefer not to use a while loop. If you do use one, cap it so that any mistakes (like briefly clicking away before finishing) will not freeze Dynamo.

Below is a sample that could be improved considerably for speed:

  • can add a way to offset in either direction
  • can be optimized for speed (super rough logic…)

These settings are not too slow for this single surface example…

(Screenshot: sample graph, 2022-03-22)

Python Node Code

# EvolveLAB | https://www.evolvelab.io
# 2022-03-22

# Load the Python Standard and DesignScript Libraries
import sys
import clr
clr.AddReference('ProtoGeometry')
from Autodesk.DesignScript.Geometry import *

# inputs
polyCurveOrig = IN[0][0]
targetArea = IN[1]
offsetStepDist = IN[2]

def areaOfPolyCurve(pCurve):
    area = Surface.ByPatch(pCurve).Area
    return area
    
# vars
polyCurveNew = polyCurveOrig
offsetDist = 0
newArea = areaOfPolyCurve(polyCurveOrig)

# assumes that the original shape is larger than the target
for step in range(100):  # cap at 100 tries so a mistake cannot hang Dynamo
    if newArea < targetArea:
        offsetDist = offsetDist - offsetStepDist  # undo the last offset
        offsetStepDist = offsetStepDist / 2  # halve the step to increase the resolution after overshooting
    offsetDist = offsetDist + offsetStepDist
    polyCurveNew = Curve.Offset(polyCurveOrig, offsetDist)
    newArea = areaOfPolyCurve(polyCurveNew)

OUT = [offsetDist, newArea, polyCurveNew, step]
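The same step-halving logic can be exercised outside Dynamo with a stand-in for Curve.Offset; here, insetting a square of side s by distance d leaves a square of area (s - 2d)². The function name, shape, and numbers below are illustrative assumptions, not the graph above.

```python
# Pure-Python sketch of the step-halving offset search.
# inset_square_area stands in for offsetting a polycurve and re-measuring it.

def inset_square_area(side, d):
    """Area of a square of side `side` after an inward offset of d."""
    return (side - 2.0 * d) ** 2

side = 10.0          # original square: area 100
target_area = 60.0
step = 1.0           # initial offset step
offset = 0.0
area = inset_square_area(side, offset)

for _ in range(100):             # hard cap, as in the Python node above
    if area < target_area:       # overshot the target
        offset -= step           # undo the last offset
        step /= 2.0              # halve the step to refine the search
    offset += step
    area = inset_square_area(side, offset)

# The exact answer is offset = (10 - sqrt(60)) / 2, roughly 1.127
print(round(offset, 3), round(area, 3))
```

Each overshoot halves the step, so the search converges quickly even with a rough initial step.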

Careful with this, not just for the well-discussed while-loop concerns but also for the changes in proportions: notice how the offset shape eventually becomes a triangle. Data can be lost with offsets like this.


Thank you @JacobSmall and @BenGuler for your responses. Until now I have managed to do everything with the Code Blocks and avoided Python. But I will give it a try.


@JacobSmall right on! I was expecting the offset to actually break, so it was a nice surprise that it did create the triangle. There could be additional logic to check if the offset was destructive, by using the edge count. There should also be an exception handler…

A similar logic can be used with scaling instead: substitute the offset step with a scale step.
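A sketch of that substitution, with the offset distance swapped for a scale factor; the area_of helper is a stand-in for re-measuring the scaled surface in Dynamo, and all numbers are illustrative. (Since area varies with factor², a closed-form factor sqrt(target / original) also exists.)

```python
# Step-halving search driven by a scale factor instead of an offset distance.
original_area = 100.0
target_area = 60.0

def area_of(scale):
    """Stand-in for measuring the uniformly scaled surface: area ~ scale**2."""
    return original_area * scale ** 2

scale = 1.0
scale_step = 0.1
area = area_of(scale)

for _ in range(100):              # capped, like the offset version
    if area < target_area:        # shrank too far
        scale += scale_step       # undo the last shrink
        scale_step /= 2.0         # refine the step
    scale -= scale_step
    area = area_of(scale)

print(round(scale, 4))  # converges toward sqrt(0.6), roughly 0.7746
```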

@ergo.pikas I think the Python node could be reworked to use out-of-the-box nodes, but it could take a significant performance hit, as each trial (offset surface) would get generated and rendered in Dynamo. The Python node “hides” them and only outputs the final useful geometry.

The above statement is only true as of Dynamo 2.5.0. Pre-2.5 you will still gain some improvement by not rendering the geometry in the viewport, but the memory is still in use. More info on GitHub

This is of course not an issue if you are disposing of the geometry, which, in the sample .py above, you are not.

If OP is on a version greater than 2.5 (which they probably are), then no worries. But I thought it was worth mentioning because I have seen quite a few people make that (incorrect) assumption that Python is “always” faster. And I do think ideas like “if I can’t see the geometry it must be faster” contribute to that.


Ahh yes!

What I meant was that with OOTB nodes, where each step from the py code produces newly rendered geometry, you could have 100 surfaces and 100 polycurves rendered to the helix.toolkit 3D viewer (unless you use different logic with the nodes, not the py one). You could turn the node visibility off, but as far as I know they are still in memory for the 3D preview (not proto geometry). Because the geometry is renderable, it takes an additional memory footprint and, when visibility is on, causes slower FPS and slower panning/zooming of the Dynamo graph.

Cool to know that the geometry is auto disposed from 2.5.0 onwards! Thanks for sharing the link.
For the above, Dynamo 2.10.1 was used, so a dispose would not be needed here.


The shape scaling method that I mentioned is demonstrated here for reference (produced in Dynamo 2.13, so the UX may look different from what you have):

And the graph itself:

This might not be what you were after, but my experience with a few quick digital twin support efforts has taught me that the shapes derived directly from the RVT are usually what people want, with scaling and transformations applied in the DT environment.


Beautiful!!! I’d mark @JacobSmall’s graph above as the actual solution, especially if the shape needs to maintain integrity + no python :smiley: (I think… a bit hard to read the nodes)

:sparkles: The new UI is beautiful! :sparkles:

In case it’s hard to read:
scale_shape_to_desired_area.dyn (42.8 KB)

Meant to post it before, but it is sometimes hard to move files between systems, and Ben’s response beat me to the edit. :slight_smile:

If Geometry.Scale is moving stuff around too much, you could build a coordinate system instead and use that to locate, orient, and scale the form.

Sorry for the delay. First, thank you all for your comments and input. I have now tried both options, and in my case both are feasible given the context. The level of offset or scaling is typically very small; it is really a matter of fine-tuning the geometry.


I’m glad it worked for you. Future users interested in solving this problem will now have two solutions for reference. I can see how one is more appropriate than the other depending on the use case.