I would approach this a bit differently.
Attempting to convert a point cloud to design data requires a degree of flexibility on the initial inputs as any point cloud data set will vary from the ‘ideal’ shape for design due to the nature of laser scanners and construction.
- Relative accuracy is good, but absolute accuracy won’t necessarily be as good.
- So you can’t assume the scan will have found the corner where you’ve pulled the profiles.
- We conceptualize things as ‘straight’, but they aren’t, and lasers see those ‘bows’.
As such, a ‘guided’ tool is likely better than a ‘do the work for me’ tool.
From your point 2, I would build a SERIOUSLY NOT GOOD™ polycurve through the points. Then get the angle between each line and the line which precedes it, and compare that to a tolerance. If that angle is more than your desired tolerance, keep the point; otherwise filter it out. This can take some really awkward profiles and straighten them out without too much compute, and can be iterated (inside the node, or by placing it in sequence with increasing tolerance control) as desired. You can then incorporate a Point.PruneDuplicates or a Point.Average method to home in closer on the corners, or allow the user to make additional modifications as needed. This should get you GOOD ENOUGH TO MAKE DECISIONS™ polycurves, which can be further refined by subsequent processing.
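A minimal sketch of that angle-tolerance filter, in case it helps to see it in plain Python. It assumes the profile comes in as an ordered list of (x, y) tuples and the tolerance is in degrees; the names (`filter_by_angle`, `angle_tol`) are mine for illustration, not existing nodes, and you’d swap the tuples for Dynamo Points in a real Python node.

```python
import math

def filter_by_angle(points, angle_tol=5.0):
    """Keep a vertex only if the direction change at that vertex exceeds
    angle_tol (degrees); endpoints are always kept."""
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    for prev, curr, nxt in zip(points, points[1:], points[2:]):
        # Direction of the incoming and outgoing segments of the rough polycurve.
        a1 = math.atan2(curr[1] - prev[1], curr[0] - prev[0])
        a2 = math.atan2(nxt[1] - curr[1], nxt[0] - curr[0])
        # Smallest absolute change in direction, in degrees.
        delta = abs(math.degrees(a2 - a1))
        delta = min(delta, 360.0 - delta)
        if delta > angle_tol:
            kept.append(curr)   # real corner: keep it
        # else: nearly collinear, drop it
    kept.append(points[-1])
    return kept

# Iterating with an increasing tolerance straightens awkward profiles further.
profile = [(0, 0), (1, 0.02), (2, -0.01), (3, 0), (3.02, 1), (3, 2)]
for tol in (2.0, 5.0, 10.0):
    profile = filter_by_angle(profile, tol)
print(profile)
```

The corner points that survive this pass are the ones you’d then feed into Point.PruneDuplicates / Point.Average or hand back to the user for manual cleanup.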
I’ll take a pass at this tomorrow on a real dataset if you can provide the RCP, or later this week / sometime next week after I get ReCap set up, but initial testing seems promising (1,000-point profiles down to ~150 in under half a second with a Python node).