How does Revit handle point cloud files? How does it optimize them? I understand that some dumbing down of the dataset happens, but how does it average things out, and where does it cap? Is it at 1% of the total dataset? 2%? Here’s why I ask:
I’m working on a very large adaptive reuse project for an 1800s-era factory building that has about 30 ReCap point cloud files linked in. The floors are not level; there are differences of up to 1’ +/- in certain areas, and they’re not stepped, they’re very much gradual and topo-like. My interest is not in conveying the existing conditions but rather what is going to happen during construction: mainly, the floors will be leveled with geofoam/gypcrete or other means, and documentation will be important in a cut/fill approach. Anyway, I’m using Rhino.Inside to do all my geometry building in Grasshopper, but I’m noticing that I can only extract a very small number of points from the linked point cloud (up to 1 million points, as opposed to the 20+ million in the original file in ReCap). I’m not sure if this route is worth it given the loss of “resolution”, or if I should clean my data in ReCap and then mesh there… Any ideas?
Recap to mesh is a good start.
Installing MeshToolkit is a good plan.
Installing the Sastrugi package is another one.
If you know how to build your Families you can do a lot.
I learned lots when I was working on Notre Dame.
Endless scrolling here: http://grevity.blogspot.com/
We started this project about a month after the fire, so scroll back to May 2019.
By the sounds of it you’ll be best suited to manipulate the points file directly, extract the data you want, save it as a new file (.pts or similar), and build your geometry or do the analysis from there. One thing to note is that the scan will always be a higher resolution than construction tolerances; you seem to have hit the nail on the head with how to work with that info from a “what needs to happen in construction” standpoint, but remember that even in calculating the volume of foam/concrete you’ll want to apply standard estimation practices rather than attempting to count the number of atoms which need to be added.
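For that volume, even something as crude as binning the floor points into an XY grid and summing fill depth per cell will land well within estimation tolerance. A minimal sketch, assuming a list of (x, y, z) tuples in feet that has already been cropped to a single floor (the function name, cell size, and target level are placeholders):

```python
from collections import defaultdict

def estimate_fill_volume(points, target_z, cell_size=1.0):
    """Approximate fill volume (cubic feet) needed to level the floor at target_z."""
    cells = defaultdict(list)
    for x, y, z in points:
        # Key each point by the XY grid cell it falls in
        cells[(int(x // cell_size), int(y // cell_size))].append(z)

    volume = 0.0
    for zs in cells.values():
        avg_z = sum(zs) / len(zs)      # representative floor elevation for the cell
        depth = target_z - avg_z       # fill depth; negative would mean cutting instead
        if depth > 0:
            volume += depth * cell_size ** 2
    return volume

# e.g. level everything up to the highest scanned point on the floor:
# fill_cf = estimate_fill_volume(points, target_z=max(p[2] for p in points))
```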
It’s also worth noting that many point cloud file types are really just dumb text, and as such can be VERY quickly processed with Python to get to the info you’re after. There are several posts on the forum showing how to downsample such files, or you could look into utilizing VASA to build out voxels showing where fill would be needed - that would actually be a great use, since you’re building up to a leveled surface.
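To illustrate how little code that takes, here’s a rough downsampling sketch, assuming a plain-text .pts file where the first line is a point count and each following line is one point (column order and headers vary by exporter, so check your file first):

```python
def downsample_pts(src, dst, keep_every=20):
    """Write a copy of a .pts file keeping only every Nth point."""
    with open(src) as fin:
        fin.readline()                      # skip the original point-count header
        kept = [line for i, line in enumerate(fin) if i % keep_every == 0]
    with open(dst, "w") as fout:
        fout.write(str(len(kept)) + "\n")   # new point count
        fout.writelines(kept)

# downsample_pts("scan_full.pts", "scan_light.pts", keep_every=20)
```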
Interesting. I’ve been using a similar approach of voxelizing the points to simplify the cloud; I did not know there was such a thing in Dynamo. To your point about tolerances and construction precision, I think this is where a post-demo point cloud would be inevitable. I forgot to mention the building is FULL of crap (heavy crap) and unnecessary layers of material, so it’s safe to assume the structure would lose some of the deflection and flex back to a more uniform state once all this dead weight is lifted across its 80K+ floor plate. The same documentation process would apply then and now, however.
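For reference, the voxelizing I mean is roughly this kind of thing; a sketch only, assuming (x, y, z) tuples and a voxel size in the cloud’s units, keeping one representative point per occupied voxel:

```python
def voxel_downsample(points, voxel=0.1):
    """Keep the first point that lands in each occupied voxel."""
    seen = {}
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        if key not in seen:              # first point wins for this voxel
            seen[key] = (x, y, z)
    return list(seen.values())
```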
I’m going to give this a shot in Dynamo and see where it takes me. I’ve been trying not to go back to ReCap (the route @Marcel_Rijsmus pointed out), since that would mean re-referencing and manually cleaning all these files (not in our fee), which is something the scanning consultant should’ve done. Alas, things change and it’s best to adapt. Thanks for the suggestions.
I can think of a few ways to do this, utilising a few different software solutions and Dynamo. Send me a DM next week if you are still struggling @carlosguzman
Thanks, will do. I’m also using CloudCompare to subsample before attempting to voxelize via Volvox in GH. I may try VASA as well. Then the tough part will be figuring out how to select the floor and ceiling/plank with minimal manual labor…
You should be able to identify a maximum height for the floor.
And you should be able to identify a minimum height as well.
As such you can read in the contents of the .pts file and filter out any Z value greater than the maximum or less than the minimum, removing ~90% of the points in one go.
Next up: convert each point to the 9 numbers used to generate the voxels, then invert the model to pull the ‘air’ between the existing floor and the high point which you’ll level to.
I’ll see if I can share a bit of Python to call all of that in the context of one node to simplify the larger effort. My sample point cloud will likely be a LOT less complex than yours though.
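In the meantime, here’s a minimal sketch of what such a node might look like, assuming a plain-text .pts input with a point count on the first line and “x y z …” per line after that; the inputs, column order, and units are all assumptions rather than a finished implementation:

```python
# Dynamo Python node: trim a .pts file down to the points inside a Z band.
src, dst, z_min, z_max = IN[0], IN[1], IN[2], IN[3]

kept = []
with open(src) as fin:
    fin.readline()                            # skip the .pts point-count header
    for line in fin:
        parts = line.split()
        if len(parts) >= 3 and z_min <= float(parts[2]) <= z_max:
            kept.append(line)                 # Z sits inside the floor band, keep it

with open(dst, "w") as fout:
    fout.write(str(len(kept)) + "\n")         # new point count
    fout.writelines(kept)

OUT = dst                                     # hand the trimmed file downstream
```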
Exported .e57 format out of ReCap (the data was submitted as .rcs).
Imported, subsampled, and cleaned the junk on the floors using CloudCompare’s CSF filter and other cleaning tools. @Ewan_Opie this was an absolutely amazing tool to learn. I deal with a lot of LiDAR datasets with GIS and terrain modelling, so this opened up a whole new world of knowledge for me, thanks a lot.
Brought the .e57 back into ReCap, re-exported as .rcs, and brought it back into Revit.
Subsampled again and Delaunay-triangulated the floor geometry (very heavy computation, but doable and manageable; see the sketch after this list).
Generated contours (1" increments) as generic model curves for fast viewing and spot coordination during development and refinement of the Revit model.
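The Delaunay step above, run outside Dynamo, looks roughly like this; a sketch only, assuming a NumPy/SciPy environment and a subsampled plain-text point file (the file name and header row are placeholders):

```python
import numpy as np
from scipy.spatial import Delaunay

# Load x, y, z from a subsampled plain-text point file (first line assumed to be a count)
pts = np.loadtxt("floor_subsampled.pts", skiprows=1, usecols=(0, 1, 2))

tri = Delaunay(pts[:, :2])        # triangulate in plan (XY) only
faces = tri.simplices             # (M, 3) vertex indices per triangle

# Re-attach Z to get the terrain-like floor mesh: three xyz vertices per face
mesh_triangles = pts[faces]       # shape (M, 3, 3)
print(len(faces), "triangles")
```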
Still trying to wrap my head around VASA and using voxels instead to create my floor geometry. I’d imagine this is more stable and faster. I need to spend more time with the example files to get there…
The challenge with filtering my points is that some of my point cloud files have chunks of points from the floors below, and since the whole floor is stepping/sloping, it’s hard to find what the “minimum” or threshold for my Z values should be.