Hi All,
I am trying to create a routine for auto-tagging generic family assemblies that are always linear. The views are always rectangular, with the longer side of the crop frame horizontal on the sheet. I am pretty happy with how the Archi-Lab node “Elements.GetCentroid” works for locating the end point of each leader. I would also like to create points for the leader shoulder and the tag location. I managed to write a Python node that copies the input points (the COGs of the families I am annotating) up to create the tag shoulder points and then to the right for the tag locations, but “up” and “right” end up being a different direction in each view, depending on how the view is oriented in the real-world coordinate system (the views are always placed horizontally on sheets, but in the model they are oriented more or less randomly).
Is there a way to differentiate between a view’s own coordinate system and the world coordinate system?
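From reading around, I think the Revit API may answer this directly: every view exposes RightDirection and UpDirection, which are world-space unit vectors for screen “right” and “up”. Here is a minimal sketch of what I mean, assuming IN[0] is the Dynamo view element (the input name is just illustrative):

```python
import clr
clr.AddReference('RevitAPI')
clr.AddReference('RevitNodes')
import Revit
clr.ImportExtensions(Revit.GeometryConversion)

# Unwrap the Dynamo view to the underlying Revit API view
revit_view = UnwrapElement(IN[0])

# RightDirection and UpDirection are world-space unit vectors describing
# which way "right" and "up" on the sheet point in model coordinates.
right = revit_view.RightDirection.ToVector()
up = revit_view.UpDirection.ToVector()

OUT = right, up
```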
Does it make sense to derive the transformation vectors from the view crop frame curves (whose orientation is always consistent with the directions I need the transformations to happen in)?
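Something along these lines is what I was picturing: sort the four crop curves by length, then take edge directions from the longest (horizontal on sheet) and shortest (vertical on sheet). A rough sketch, assuming IN[0] is the list of four crop-frame curves for one view:

```python
import clr
clr.AddReference('ProtoGeometry')
from Autodesk.DesignScript.Geometry import Vector

crop_curves = IN[0]

# The longest pair of edges runs horizontally on the sheet and the shortest
# pair vertically, so sorting by length separates the two directions.
sorted_curves = sorted(crop_curves, key=lambda c: c.Length)
short_edge = sorted_curves[0]   # candidate "vertical" edge
long_edge = sorted_curves[-1]   # candidate "horizontal" edge

def edge_direction(curve):
    # Unit vector from the curve's start point to its end point
    return Vector.ByTwoPoints(curve.StartPoint, curve.EndPoint).Normalized()

OUT = edge_direction(long_edge), edge_direction(short_edge)
```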
If I manage to do this and use the shorter sides of the crop frame as the vertical vectors, how do I make sure the vector points “up” the view and not down?
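My current idea is to compare the candidate vector against the view’s UpDirection with a dot product and flip it when the result is negative. A minimal sketch, assuming IN[0] is the candidate vertical vector (from the short crop edge) and IN[1] is the view’s UpDirection as a Dynamo vector:

```python
candidate = IN[0]
up = IN[1]

# A negative dot product means the candidate points against the view's "up",
# so reverse it in that case.
if candidate.Dot(up) < 0:
    candidate = candidate.Reverse()

OUT = candidate
```

Is that a reasonable way to orient it, or is there a cleaner approach?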
Attached is a sample view roughly showing the tag distribution I am aiming for, and a CAD sketch explaining the locations of the input, tag shoulder, and tag location points.
My Python code for the shoulder/tag location points:
```python
import clr
clr.AddReference('ProtoGeometry')
from Autodesk.DesignScript.Geometry import Point, Vector

def transform_points(input_points, distance_A, distance_B, distance_C, distance_D):
    points_S = []
    points_T = []
    current_A = distance_A
    previous_point = None
    distance = None
    debug_info = []
    for i, point in enumerate(input_points):
        debug_info.append(f"Processing point {i}: {point.X}, {point.Y}, {point.Z}")
        # Check distance from previous point
        if previous_point:
            distance = point.DistanceTo(previous_point)
            debug_info.append(f"Distance from previous point: {distance}")
            if distance >= distance_B:
                current_A = distance_A  # Reset A if distance >= B
                debug_info.append("Distance >= B, resetting A")
            else:
                current_A = max(0, current_A - distance_D)  # Reduce A, but not below 0
                debug_info.append(f"Distance < B, reducing A to {current_A}")
        else:
            debug_info.append("First point, using initial A")
        # Create point S
        vector_up = Vector.ByCoordinates(1, 0, 0)  # "Up" is along the X-axis
        point_S = point.Add(vector_up.Scale(current_A))
        points_S.append(point_S)
        debug_info.append(f"Created point S: {point_S.X}, {point_S.Y}, {point_S.Z}")
        # Create point T
        if i == 0 or (previous_point and distance >= distance_B):
            # If it's the first point or distance >= B, create T above S
            point_T = point_S.Add(vector_up.Scale(distance_C))
            debug_info.append("Created T above S")
        else:
            # Otherwise, place T in line with the previous T
            prev_T = points_T[-1]
            point_T = Point.ByCoordinates(prev_T.X, point_S.Y, point_S.Z)
            debug_info.append("Created T in line with previous T")
        points_T.append(point_T)
        debug_info.append(f"Created point T: {point_T.X}, {point_T.Y}, {point_T.Z}")
        previous_point = point
    return points_S, points_T, debug_info

# Input parameters
input_points = IN[0]  # List of input points
distance_A = IN[1]    # Distance value A (offset from input point to S)
distance_B = IN[2]    # Distance value B (threshold for resetting A)
distance_C = IN[3]    # Distance value C (offset between S and T)
distance_D = IN[4]    # Distance value D (reduction of A when points are close)

# Run the transformation
output_S, output_T, debug_info = transform_points(input_points, distance_A, distance_B, distance_C, distance_D)

# Assign outputs
OUT = output_S, output_T, debug_info
```
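If the view-orientation approach is right, I imagine the hard-coded X-axis above gets replaced with view-derived vectors. A sketch of how I would wire that in, with IN[5] as a hypothetical extra input for the view:

```python
import clr
clr.AddReference('RevitAPI')
clr.AddReference('RevitNodes')
import Revit
clr.ImportExtensions(Revit.GeometryConversion)

revit_view = UnwrapElement(IN[5])               # hypothetical extra input
vector_up = revit_view.UpDirection.ToVector()   # view "up" in world space
vector_right = revit_view.RightDirection.ToVector()

# transform_points could then take these as parameters and use them in place
# of Vector.ByCoordinates(1, 0, 0), for example:
# point_S = point.Add(vector_up.Scale(current_A))
# point_T = point_S.Add(vector_right.Scale(distance_C))
```

Does that seem like the right direction, or am I overcomplicating this?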
Thank you for all the help!