A modified flood fill is the way I would tackle this.
Use a 1.8 x 1.8 m ‘pixel’; it would probably also have to check for wall angles and rotate the whole algorithm to suit.
Scan across from a start corner in 1.8 m jumps; once you hit a collision, bisect back and forth until it aligns with the opposite wall. Keep going, unifying the area as you go.
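A rough sketch of that grid scan in Python, assuming a hypothetical point-in-room test inside(x, y) and leaving out the bisection refinement and the wall-angle rotation:

CELL = 1.8  # 'pixel' size in metres

def cell_fits(x, y, inside):
    # a cell counts only if all four of its corners are inside the room
    return all(inside(x + dx, y + dy) for dx in (0.0, CELL) for dy in (0.0, CELL))

def scan_room(x0, y0, x1, y1, inside):
    cells = []
    y = y0
    while y + CELL <= y1:
        x = x0
        while x + CELL <= x1:
            if cell_fits(x, y, inside):
                cells.append((x, y))
            x += CELL
        y += CELL
    return cells  # the union of these cells approximates the usable area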
Hi all, I have managed to solve my problem. Here’s how I did it:
- I updated my Dynamo.
- I got all the room boundaries and extended them on both ends to the maximum length.
- Then I grouped them into horizontal and vertical lines.
- I extruded the lines, created polysurfaces from them, and used those to cut up the room floor surface.
- After that, I had to use bounding boxes, because the cutting process had also split the borders of the individual surfaces and those needed fixing for the next step.
- I checked whether any edge was shorter than 1.8 meters and, if so, excluded those areas (see the sketch below).
And this is the result:
The next step is to make it work for non-orthogonal rooms too, but I’m really happy I finally figured it out!
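For the edge-length check in the last step, here is a minimal sketch, assuming surfaces is a list of Dynamo Surface objects and the geometry is in the same unit as the 1.8 m threshold:

MIN_EDGE = 1.8  # threshold; adjust if the geometry is not in metres

kept = []
for srf in surfaces:  # `surfaces` is an assumed input list of Dynamo Surfaces
    # measure every edge of the surface
    edge_lengths = [e.CurveGeometry.Length for e in srf.Edges]
    # keep only surfaces whose shortest edge reaches the threshold
    if min(edge_lengths) >= MIN_EDGE:
        kept.append(srf)
OUT = kept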
Here’s an alternative idea using a pseudo-scan.
The process:
- obtain the room orientation
- using the two orientation vectors, scan with rays (unbound line intersections with the room boundaries)
- group the resulting lines (shorter than 1.8 m) by distance
- for each group, get the aligned bounding box and extract its perimeter curves
The result with point 2 only:
Here is the Python code for point 2 (CPython3):
import clr
import sys
import System

clr.AddReference('ProtoGeometry')
from Autodesk.DesignScript.Geometry import *
import Autodesk.DesignScript.Geometry as DS

# import Revit API
clr.AddReference('RevitAPI')
import Autodesk
from Autodesk.Revit.DB import *
import Autodesk.Revit.DB as DB

clr.AddReference('RevitNodes')
import Revit
clr.ImportExtensions(Revit.GeometryConversion)

clr.AddReference('RevitServices')
import RevitServices
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager
doc = DocumentManager.Instance.CurrentDBDocument

import numpy as np

def cosine_similarity(point, vector):
    dot_prod = point.DotProduct(vector)
    mag_point = point.GetLength()
    mag_vector = vector.GetLength()
    # avoid division by zero if either magnitude is zero
    if mag_point == 0 or mag_vector == 0:
        return 0
    return dot_prod / (mag_point * mag_vector)

# prepare input from Dynamo for the Revit API
room = UnwrapElement(IN[0])
out = []
lst_debug = []
DEBUG = False

bound_opt = SpatialElementBoundaryOptions()
bound_opt.SpatialElementBoundaryLocation = SpatialElementBoundaryLocation.Finish
room_curves = [s.GetCurve() for lst_segments in room.GetBoundarySegments(bound_opt) for s in lst_segments]
ds_room_curves = [c.ToProtoType() for c in room_curves]

# coordinate system (vectors) giving the orientation of the room
lst_vector_ray = [XYZ.BasisX, XYZ.BasisY]

# get the bounding box min/max points at the boundary elevation
ptstart = room.get_BoundingBox(None).Min
ptstart = XYZ(ptstart.X, ptstart.Y, room_curves[0].GetEndPoint(0).Z)
ptend = room.get_BoundingBox(None).Max
ptend = XYZ(ptend.X, ptend.Y, room_curves[0].GetEndPoint(0).Z)
max_size = max([ptend.Y - ptstart.Y, ptend.X - ptstart.X])

# scan with vector_ray_A, then with vector_ray_B
for idx, vector_ray in enumerate(lst_vector_ray):
    rayA = DB.Line.CreateUnbound(ptstart, vector_ray)
    # offset the ray a little further at each scan iteration
    for step in np.arange(0, max_size, 0.05):
        perpendicular_vector = lst_vector_ray[abs(idx - 1)]  # switch to the other axis vector
        ray_curve = rayA.CreateOffset(step, perpendicular_vector.CrossProduct(vector_ray))
        # DEBUG: draw a short bound segment of the ray to check its location
        if DEBUG:
            line_bound = ray_curve.Clone()
            line_bound.MakeBound(0.1, 3.9)
            ds_line_bound = line_bound.ToProtoType()
            lst_debug.append(ds_line_bound)
        temp = []
        # find intersections between the ray and the room boundary curves
        for bound_curve in room_curves:
            outResultArray = IntersectionResultArray()
            comparisonResult, outResultArray = ray_curve.Intersect(bound_curve, outResultArray)
            if comparisonResult == SetComparisonResult.Overlap:
                lst_intersect_pts = [i.XYZPoint for i in outResultArray]
                temp.extend(lst_intersect_pts)
        # sort the intersection points along the ray direction
        sorted_points = sorted(temp, key=lambda p: cosine_similarity(p, vector_ray))
        # iterate over consecutive pairs of points
        sorted_points = iter(sorted_points)
        for pta, ptb in zip(sorted_points, sorted_points):
            if 0.1 < pta.DistanceTo(ptb) <= 1.8 * 3.281:  # pair closer than 1.8 m (Revit internal units are feet)
                line_bad_part_area = DB.Line.CreateBound(pta, ptb)
                out.append(line_bad_part_area.ToProtoType())

OUT = out
Wow, that is amazing!
Is it possible to also do this with nodes instead of Python code? Personally, I have a hard time understanding it when it’s in code.
How does it behave with rooms where one of the walls is at an angle, like the picture I drew earlier?
Here are some examples of the results.
But I can’t guarantee it will work in every case; it depends on the orientation of the room.
I have an additional question: how can I group the lines to create a surface? I’ve tried a few methods, but they aren’t robust and don’t work well in other situations.
I successfully did it, but there are still some areas that need to be excluded. Both the red areas and the orange ones are smaller than 1.8m x 1.8m. What would be a good way to tackle this? The raycasting works, but it demands a lot from my office laptop. If I want to process the entire floor, I need to find a more efficient method. For now, I’m thinking of just running it a second time.
Two ideas:
- group by distance (takes time)
- use clustering, for example with sklearn.cluster.DBSCAN (see the sketch below)
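For the second idea, a minimal sketch, assuming lines holds the short scan segments from the previous step and coordinates are in metres:

import numpy as np
from sklearn.cluster import DBSCAN

# represent each scan line by its midpoint (XY only)
midpoints = np.array([[ln.PointAtParameter(0.5).X, ln.PointAtParameter(0.5).Y] for ln in lines])
# eps is the maximum neighbour distance; scan lines 5 cm apart should share a cluster
labels = DBSCAN(eps=0.1, min_samples=2).fit_predict(midpoints)

groups = {}
for ln, label in zip(lines, labels):
    groups.setdefault(label, []).append(ln)
# each value in groups is one cluster of lines (label -1 is noise)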
I would probably do the same thing.
After thinking about it, I wonder whether this problem could be solved with machine learning. It would probably take thousands of examples to build a solid training set, though. @Durmus_Cesur, what do you think?
I solved the clustering problem by using the Traveling Salesman algorithm to order the points. Then I connected the points with a polyline and checked which segments were longer than 50 mm; I removed those and split the list of points accordingly. It’s a bit creative, but it works for now.
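In case it helps, here is a rough sketch of that ordering-and-splitting idea, with a greedy nearest-neighbour pass standing in for a full Traveling Salesman solver, and points assumed to be a list of Dynamo Points in metres:

GAP = 0.05  # 50 mm

def order_points(points):
    # greedy nearest-neighbour ordering (a cheap TSP stand-in)
    remaining = list(points)
    path = [remaining.pop(0)]
    while remaining:
        nearest = min(remaining, key=lambda p: path[-1].DistanceTo(p))
        remaining.remove(nearest)
        path.append(nearest)
    return path

ordered = order_points(points)
# split the ordered sequence wherever two consecutive points are more than 50 mm apart
chunks, current = [], [ordered[0]]
for a, b in zip(ordered, ordered[1:]):
    if a.DistanceTo(b) > GAP:
        chunks.append(current)
        current = []
    current.append(b)
chunks.append(current)
# each chunk can then be closed into its own polyline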
Machine learning would be an interesting option, yes. However, the result of the first calculation gives me polysurfaces, because the removed parts split the total surface into four. You can think of these four as separate ‘rooms’, and I can repeat the process on those. If I’m correct, the same function applies but to a smaller ‘room’, so it might be faster since fewer lines are created.
C. Poupin, I have a problem: the Geometry.Intersect node isn’t processing more than 20,000 lines. I’m trying to make this script work for an entire floor. Do you have any suggestions on how to solve this?
I don’t know your exact use of the Geometry.Intersect node, but you can try to:
- integrate this into a Python node
- split or group the input data (see the sketch below)
- change the algorithm
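For the second suggestion, a minimal sketch of batching the intersections inside a Python node, where lines and target are assumed inputs and the batch size of 500 is just a starting guess to tune:

def chunked(seq, size):
    # yield successive slices of the input list
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

results = []
for batch in chunked(lines, 500):
    for ln in batch:
        # Geometry.Intersect returns the intersection geometries of the two inputs
        results.extend(ln.Intersect(target))
OUT = results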