Creating a building elevation from a point cloud

I am interested in implementing the idea of creating a 2D elevation from a point cloud. I have chosen an approach to this problem and would like to hear your opinions. Perhaps someone has researched this field and can share their experience.

My idea is this:

  1. Take and select the necessary points with a section box. This has often been discussed on the forum and I have managed to deal with this part.

  2. Revit can itself divide points by normals, and I think this can be used. But how can I extract this information? Or how can I divide the points by normals myself?

  3. Analyze where the normal changes sharply; that is where an edge lies.

  4. Project the edges onto a plane, and a facade should appear.
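For step 2, one mathematical route (outside Revit) is to estimate a normal at each point from its nearest neighbours. Below is a rough sketch in plain Python with NumPy — `estimate_normals` and the parameter `k` are illustrative names, not an existing Dynamo node:

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate a unit normal per point from its k nearest
    neighbours via PCA (the eigenvector of the smallest eigenvalue
    of the local covariance matrix)."""
    points = np.asarray(points, dtype=float)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        # brute-force k nearest neighbours (fine for small clouds)
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k + 1]]  # includes the point itself
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        eigval, eigvec = np.linalg.eigh(cov)  # eigenvalues ascending
        normals[i] = eigvec[:, 0]  # smallest eigenvalue -> surface normal
    return normals
```

For step 3 you could then flag points whose neighbour normals diverge beyond some angle threshold as edge candidates. Note the sign of each normal is ambiguous without a viewpoint to orient against.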

Does anyone have any thoughts? Many thanks for the help.

Points themselves don’t have a concept of a normal, and I doubt the scanner data includes it. You’ll have to create a mesh from the points to pull this data. Dynamo currently doesn’t have the tools to do this; the Recap API might (Revit 2022 and up allows access), but I am not sure that’s the right point of entry. Doing the work in Recap directly is likely faster - https://videos.autodesk.com/zencoder/content/dam/autodesk/www/products/recap/fy23/features/new-features/scan-to-mesh-video-1920x1080.mp4

Could there be a mathematical method “to divide an array of points into normals”?
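If “divide an array of points into normals” means grouping points whose (estimated) normals point roughly the same way, one simple sketch is to quantise each normal into angular bins — `group_by_normal` is an illustrative helper that assumes you already have per-point normals from meshing or a PCA estimate:

```python
import numpy as np
from collections import defaultdict

def group_by_normal(points, normals, step_deg=15.0):
    """Group points whose normals point in roughly the same
    direction by quantising each normal into angular bins.
    Points sharing a bin are candidates for the same planar face."""
    groups = defaultdict(list)
    for p, n in zip(points, normals):
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)
        # spherical angles of the normal, quantised to step_deg bins
        theta = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
        phi = np.degrees(np.arctan2(n[1], n[0])) % 360.0
        key = (round(theta / step_deg), round(phi / step_deg))
        groups[key].append(p)
    return dict(groups)
```

This is only a coarse first pass: normals near a bin boundary can split one face across two groups, so a clustering step would be more robust.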

I also saw on the forum that when exporting to .pts there are some extra values after the coordinates. Aren’t those normal vectors?

I tried the Recap scan-to-mesh tool, but the result was very bad, so I gave up on it. I would rather develop a separate tool in Dynamo.

In a .pts file the first three values are the X, Y, and Z coordinates encoding location. The next value is the intensity, encoding ‘reflectance’. The last three values are red, green, and blue, encoding color. More info here: pts - Laser scan plain data format
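A minimal parser for one data line of that layout, assuming the common 7-field format described above — this is a sketch, and real files may differ (fewer fields, a point-count header line, etc.):

```python
def parse_pts_line(line):
    """Parse one data line of a standard .pts file:
    x y z intensity r g b  (r/g/b are 0..255).
    Returns a dict; raises ValueError on malformed lines."""
    parts = line.split()
    if len(parts) != 7:
        raise ValueError(f"expected 7 fields, got {len(parts)}: {line!r}")
    x, y, z = map(float, parts[:3])
    intensity = int(parts[3])
    r, g, b = map(int, parts[4:7])
    return {"xyz": (x, y, z), "intensity": intensity, "rgb": (r, g, b)}
```

Running it on a line with fewer or extra fields (like the file in question here) would fail loudly, which is a quick way to confirm your export is not the standard layout.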

Now this doesn’t mean that someone couldn’t encode the XYZ of a normal vector into those RGB spots, but that data would be lost by anyone using the pts file as a standard file.

I’ll try saying what I indicated before another way: to get the direction which is perpendicular to the face at a point, you need to define the face. Point clouds don’t have this data by default - all they know is location, intensity, and color. So you have to mesh the whole thing. There are various ways to do so; however, none of them really scale well compared to the Recap tool and others.

What was insufficient with the Recap mesh? It may be that you just need to re-work that mesh (a better use of Dynamo) rather than trying to develop something from scratch.

My *.pts file looks different:

Not a format I’ve seen; again, it could be utilized in a different way, but I haven’t seen any official documentation indicating that’s a valid use. If it is, I guess you could try grouping by common normal as a start, but I’m not sure it would get us very far. I would need that .pts file as text - feel free to post it.

I came up with another method, and to try it I decided to take a small part of the cloud:


but even remote points are picked up when selecting the points. How can I remove them permanently in Recap, so that Revit doesn’t even know about them?

As per Jacob’s initial guidance, trying to mesh/wrap the points to get a convex hull or similar is a good starting point. From there, look into algorithms such as ‘rolling ball’ (ball pivoting) as a means to better connect/simplify the resultant surfaces.
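As a concrete starting point for the hull idea, here is a self-contained 2D convex hull (Andrew’s monotone chain) that could be run on a plan projection or a section of the selected points — a sketch in plain Python, not a Dynamo node:

```python
def convex_hull_2d(points):
    """Andrew's monotone-chain convex hull of 2D points.
    Returns the hull vertices in counter-clockwise order."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means clockwise turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # drop the last point of each chain (it repeats the other chain's first)
    return lower[:-1] + upper[:-1]
```

A convex hull will of course bridge over concave features of a facade, which is exactly where ball pivoting or alpha shapes come in as the next refinement.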

Technology still hasn’t completely caught up with an easy/reliable way to achieve what you are doing, but there are algorithms and approaches that can work with partially informed data (e.g. the user draws rough faces, then you limit points to those within x distance of a face), or you can cull points using rays: once a ray hits a point, extend it further and cull all points within x distance of it.
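The ray-culling idea can be sketched as follows — `cull_near_ray` is a hypothetical helper (plain NumPy, not a Dynamo node) that drops every point within a given radius of a ray:

```python
import numpy as np

def cull_near_ray(points, origin, direction, radius):
    """Remove points within `radius` (perpendicular distance) of a
    ray cast from `origin` along `direction`, considering only the
    forward side of the origin. Returns the surviving points."""
    pts = np.asarray(points, dtype=float)
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    v = pts - o
    t = v @ d                                   # parameter along the ray
    perp = np.linalg.norm(v - np.outer(t, d), axis=1)
    keep = ~((t >= 0) & (perp <= radius))       # cull forward, near points
    return pts[keep]
```

In a full pipeline you would fire a grid of such rays from the viewer side of the facade and keep only the first hit per ray; this sketch just shows the per-ray culling test.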

Dynamo is probably going to struggle with those types of algorithms. I’ve had some luck with Rhino/Grasshopper but at some point it makes sense to jump into more optimal languages for processing such as C#.

Here’s an example of a custom algorithm I made in Grasshopper to demonstrate an approach (not necessarily the best/optimal one) to identify estimated surface points. I take a cloud of points, make a projection surface, construct a small surface at each division point and shoot mesh rays at them to find the first plane that registers a hit. This allows both an equidistant reconstruction of the face (as I divide the projection surface) and, for the most part, yields the outermost points.
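Outside Grasshopper, the gist of that approach can be approximated by bucketing points on a projection grid and keeping the “first hit” per cell — an illustrative sketch (assuming the view direction is +X and the projection plane is YZ), not the actual Grasshopper definition:

```python
import numpy as np

def outermost_points(points, cell=1.0):
    """Project points onto the YZ plane, bucket them into a grid of
    `cell` size, and keep the point with the largest X in each
    bucket (the 'first hit' as seen from +X). Returns one
    representative point per occupied cell."""
    best = {}
    for p in np.asarray(points, dtype=float):
        key = (int(p[1] // cell), int(p[2] // cell))
        if key not in best or p[0] > best[key][0]:
            best[key] = p
    return list(best.values())
```

Like the Grasshopper version, this gives an equidistant sampling of the face, and the cell size trades off detail against noise.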

This is it running in realtime; as you can see, it can run quite quickly depending on the number of points. I begin with 2000 points and take it down to 500 afterwards, then reveal the planes more clearly at the end. The rays and intersection points are in blue. The creation, meshing and joining of the planes is the major bottleneck; all other steps are quite efficient.


I changed the idea a little: instead of “shelling” the points, I am using sections.

Now I have a flat 2D section with points. Is it possible for an algorithm to recognize the shapes on the section and reduce everything to lines?
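As a first building block for recognising straight runs on a section, a total-least-squares line fit (the principal component of a group of 2D points) is useful — an illustrative sketch with NumPy:

```python
import numpy as np

def fit_line_2d(points):
    """Total-least-squares line through 2D points: returns
    (centroid, unit direction), where the direction is the first
    principal component of the centred points."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)
    # right singular vectors; the first is the dominant direction
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[0]
```

Unlike ordinary least squares, this handles vertical walls (where y-on-x regression breaks down), which matters for building sections.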

Most things are possible, but what have you tried? Nothing is possible if you don’t try :wink:

That sort of problem might be solved using a combination of a wandering algorithm and progressive lines of best fit through the wandered points, I think.
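A rough sketch of that wandering idea in plain Python/NumPy: step to the nearest unvisited point and extend the current segment while all wandered points stay close to their best-fit line, starting a new segment at each corner. `wander_segments` and `max_dev` are illustrative names, and this assumes reasonably dense, low-noise section points:

```python
import numpy as np

def wander_segments(points, start=0, max_dev=0.05):
    """Greedy 'wandering' segmentation of 2D points: repeatedly step
    to the nearest unvisited point, extending the current segment
    while every wandered point stays within `max_dev` of the
    segment's total-least-squares line; otherwise start a new
    segment at the corner. Returns lists of point indices."""
    pts = np.asarray(points, dtype=float)
    unvisited = set(range(len(pts))) - {start}
    segments, current = [], [start]
    i = start
    while unvisited:
        # step to the nearest unvisited point
        j = min(unvisited, key=lambda k: np.linalg.norm(pts[k] - pts[i]))
        unvisited.remove(j)
        trial = current + [j]
        if len(trial) >= 3 and _max_line_deviation(pts[trial]) > max_dev:
            segments.append(current)
            current = [i, j]   # new segment continues from the corner
        else:
            current = trial
        i = j
    segments.append(current)
    return segments

def _max_line_deviation(seg_pts):
    """Largest perpendicular distance of points to their TLS line."""
    c = seg_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(seg_pts - c)
    d = vt[0]
    v = seg_pts - c
    return np.max(np.abs(v[:, 0] * d[1] - v[:, 1] * d[0]))
```

On an L-shaped run of points this splits the walk into two segments at the corner; each segment's index list can then be replaced by a single fitted line.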

I decided to use a NurbsCurve. In most places it follows the points well, but sometimes it strays away from them. I want to analyze the distance from the NurbsCurve to the points, and where it is large, make a gap there and fit a separate NurbsCurve. But somehow the distance check doesn’t work.
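One way to debug that distance check outside Dynamo: approximate the NurbsCurve by sampling it into a polyline (in Dynamo, e.g. with Curve.PointAtParameter) and measure point-to-polyline distance directly — the helper below is an illustrative plain-Python sketch, not the Dynamo API:

```python
def point_to_polyline_distance(p, polyline):
    """Minimum distance from 2D point p to a polyline given as a
    list of vertices. Each segment is checked by projecting p onto
    it and clamping the parameter to [0, 1]."""
    best = float("inf")
    for (ax, ay), (bx, by) in zip(polyline, polyline[1:]):
        vx, vy = bx - ax, by - ay
        wx, wy = p[0] - ax, p[1] - ay
        seg_len2 = vx * vx + vy * vy
        # clamped projection parameter of p onto the segment
        t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, (wx * vx + wy * vy) / seg_len2))
        dx, dy = p[0] - (ax + t * vx), p[1] - (ay + t * vy)
        best = min(best, (dx * dx + dy * dy) ** 0.5)
    return best
```

Points whose distance exceeds your threshold mark where to break the curve and fit a separate NurbsCurve through the remaining run.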

And one more observation:

For some reason, the Dynamo NurbsCurve looks different from the one in Revit. Which drawing is the correct one?