I am interested in implementing the idea of creating a 2D elevation from a point cloud. I have chosen a path to solve this problem and would like to hear your opinion. Perhaps someone has researched this field and will share their experience.
My idea is this:
first, select the necessary points with a section box. This idea has often been discussed on the forum, and I have managed to deal with this part.
Revit itself can divide points by normals, and I think this can be used. But how do I extract this information? Or how can I divide the points by normals myself?
In a .pts file the first three values are X, Y, and Z, encoding location. The next value is the intensity, encoding ‘reflectance’. The last three values are Red, Green, and Blue, encoding color. More info here: pts - Laser scan plain data format
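As a minimal sketch of that layout, here is a hypothetical helper (the name `parse_pts_line` is mine, not part of any library) that splits one data line into the fields described above:

```python
# Minimal .pts line parser (hypothetical helper; field order per the
# description above: X Y Z, then intensity, then R G B).
def parse_pts_line(line):
    parts = line.split()
    x, y, z = (float(v) for v in parts[:3])   # location
    intensity = int(parts[3])                 # 'reflectance'
    r, g, b = (int(v) for v in parts[4:7])    # color
    return {"xyz": (x, y, z), "intensity": intensity, "rgb": (r, g, b)}

record = parse_pts_line("1.23 4.56 7.89 -1024 255 128 0")
```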
Now this doesn’t mean that someone couldn’t encode the XYZ of a normal vector into those RGB spots, but that data would be lost by anyone using the pts file as a standard file.
I’ll try saying what I indicated before another way: to get the direction which is perpendicular to the face at a point, you need to define the face. Point clouds don’t have this data by default - all they know is location, intensity, and color. So you have to mesh the whole thing. There are various ways to do so; however, none of them really scales well compared to the ReCap tool and others.
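One common way to estimate a per-point normal without fully meshing the cloud (this is a standard PCA technique, not anything built into Revit or .pts) is to fit a plane to each point’s nearest neighbours and take the direction of least variance:

```python
import numpy as np

def estimate_normal(points, i, k=8):
    """Estimate the normal at points[i] by PCA over its k nearest neighbours."""
    d = np.linalg.norm(points - points[i], axis=1)
    nbrs = points[np.argsort(d)[:k]]        # k closest points (includes i)
    centered = nbrs - nbrs.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # smallest-variance direction

# Points sampled on the plane z = 0, so the normal should be (0, 0, +/-1).
rng = np.random.default_rng(0)
pts = np.column_stack([rng.random(50), rng.random(50), np.zeros(50)])
n = estimate_normal(pts, 0)
```

Grouping points by similar estimated normals would then give a rough split into “wall-facing” vs “floor-facing” points.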
What was insufficient with the Recap mesh? It may be that you just need to re-work that mesh (a better use of Dynamo) rather than trying to develop something from scratch.
Not a format I’ve seen; again, it could be utilized in a different method, but I haven’t seen any official documentation indicating that’s a valid use. If it is, I guess you could try grouping by common normal as a start, but I’m not sure it’d get us very far. Would need to see that pts file as text - feel free to post it.
As per Jacob’s initial guide, trying to mesh/wrap the points to get a convex hull or similar is a good starting point. From there, look into algorithms such as ‘rolling ball’ as a means to better connect/simplify the resultant surfaces.
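As a small illustration of the convex-hull starting point (a sketch using SciPy, not anything specific to the poster’s workflow), the hull keeps only the outermost points and drops everything interior:

```python
import numpy as np
from scipy.spatial import ConvexHull

# 2D demo: the four corners of a unit square plus one interior point.
# The hull should keep only the corners and drop the interior point.
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]])
hull = ConvexHull(pts)
outer = sorted(hull.vertices.tolist())  # indices of points on the hull
```

Note the convex hull alone will bridge over concavities (doorways, alcoves), which is exactly why the follow-up algorithms like ‘rolling ball’ are needed.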
Technology still hasn’t completely caught up to an easy/reliable way to achieve what you are doing, but there are algorithms and approaches that can work with partially informed data (e.g. the user draws rough faces, then you limit points to those within x distance of a face), or you can cull points using rays: once a ray registers a hit, extend it further and cull all points within x distance of that ray.
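The first idea above, limiting points to those within x distance of a user-drawn face, can be sketched like this (a minimal version assuming the face is approximated by a plane; the function name is mine):

```python
import numpy as np

def points_near_plane(points, origin, normal, tol):
    """Keep only points whose perpendicular distance to the plane
    defined by (origin, normal) is at most tol."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)               # unit normal
    dist = np.abs((points - origin) @ n)    # signed distance, made absolute
    return points[dist <= tol]

# Two points near the plane z = 0, one far away.
pts = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.05], [0.0, 0.0, 2.0]])
near = points_near_plane(pts, origin=np.array([0.0, 0.0, 0.0]),
                         normal=(0, 0, 1), tol=0.1)
```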
Dynamo is probably going to struggle with those types of algorithms. I’ve had some luck with Rhino/Grasshopper but at some point it makes sense to jump into more optimal languages for processing such as C#.
Here’s an example of a custom algorithm I made in Grasshopper to demonstrate an approach (not necessarily the best/optimal) to identifying estimated surface points. I take a cloud of points, make a projection surface, construct a small surface (plane) at each point, and shoot mesh rays at them to find the first plane that registers a hit. This allows both an equidistant reconstruction of the face (as I divide the projection surface), and for the most part it yields the outermost points.
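A crude stand-in for that ray-per-grid-cell idea (my own simplification, not the Grasshopper definition above): bin points into grid cells over the projection plane and keep, per cell, the point nearest the viewer, i.e. the “first hit”:

```python
import numpy as np

def outermost_points(points, cell=1.0):
    """Grid the XY plane and keep, per cell, the point with the largest Z --
    a stand-in for shooting one ray per grid cell toward the cloud and
    keeping the first point each ray would hit."""
    ij = np.floor(points[:, :2] / cell).astype(int)  # cell index per point
    best = {}
    for idx, key in enumerate(map(tuple, ij)):
        if key not in best or points[idx, 2] > points[best[key], 2]:
            best[key] = idx
    return points[sorted(best.values())]

# Two points share a cell (the higher one wins); a third sits in its own cell.
pts = np.array([[0.2, 0.2, 1.0], [0.3, 0.3, 5.0], [1.5, 0.2, 2.0]])
surf = outermost_points(pts, cell=1.0)
```

Like the Grasshopper version, this yields an equidistant sampling (one point per cell) of the outermost surface.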
This is it running in real time; as you can see, results come quite quickly depending on the number of points. I begin with 2000 points and take it down to 500 afterwards, then reveal the planes more clearly at the end. The rays and intersection points are in blue. The creation, meshing, and joining of the planes is the major bottleneck; all other steps are quite efficient.
I decided to use NurbsCurve. In most places it follows the points well, but sometimes it strays from them. I want to analyze the distance from the NurbsCurve to the points, and where it is large, make a gap in that place and fit a separate NurbsCurve. But somehow the distance check doesn’t work.
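One way to get the gap behaviour without measuring curve-to-point distance at all (a sketch of an alternative, assuming the points are already ordered along the elevation edge) is to split the point sequence wherever consecutive points are far apart, then fit one NurbsCurve per run:

```python
import numpy as np

def split_on_gaps(points, max_gap):
    """Split an ordered point sequence into runs wherever the distance
    between consecutive points exceeds max_gap. Each run would then get
    its own NurbsCurve instead of one curve bridging the gap."""
    gaps = np.linalg.norm(np.diff(points, axis=0), axis=1)  # consecutive distances
    cut = np.where(gaps > max_gap)[0] + 1                   # indices to split at
    return np.split(points, cut)

# Three close points, then a large jump, then two more close points.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [10.0, 0.0], [11.0, 0.0]])
runs = split_on_gaps(pts, max_gap=2.0)
```

In Dynamo itself the per-point curve deviation can also be checked with the `Geometry.DistanceTo` node before deciding where to cut.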