Instead of manually reviewing the warning list, why not do the following with Dynamo:
- Create a new yes/no project parameter called “warning review” and assign it to all categories.
- Get the warnings from the model.
- Get all the elements from the associated warnings.
- Clear out any duplicated elements.
- Set the “warning review” value to ‘true’ for all elements in the list.
- Generate a new 3D view called “warning review”.
- Set a filter to halftone/transparent all elements with a ‘warning review’ value not equal to true (so null or false).
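The "clear out duplicates" step above is worth getting right, since one element often appears in several warnings. Here's a minimal sketch in plain Python, assuming the warnings have already been flattened into lists of element ids (in Dynamo this data would come from the Revit API's `Document.GetWarnings()` and `FailureMessage.GetFailingElements()`; the function name and sample ids below are made up for illustration):

```python
def unique_warning_elements(warnings):
    """Flatten per-warning element id lists into one de-duplicated list.

    `warnings` is a list of lists of element ids - one inner list per
    warning. Order of first appearance is preserved, which keeps the
    result stable run to run (a plain set would not).
    """
    seen = set()
    ordered = []
    for element_ids in warnings:
        for eid in element_ids:
            if eid not in seen:
                seen.add(eid)
                ordered.append(eid)
    return ordered

# Two warnings share element 102, and 101 shows up twice:
print(unique_warning_elements([[101, 102], [102, 103], [101]]))
# [101, 102, 103]
```

The de-duplicated list is what you'd feed to the "set the parameter" node - setting the same parameter on the same element twice is harmless but slow in big models.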
Anyone can now open that view, see the elements that need fixing, and update them accordingly. You can also put the same parameter in a schedule for easier review (it’d catch the view-specific elements as well). The schedule is especially handy because you can set up a ‘warning review day’ and assign users groups of elements.
Again, with Dynamo:
- Write up a list of users
- Get the list of warnings
- Shuffle list of warnings to spread the pain and learning
- Divide warning list into even lists by number of users
- Assign user name to a ‘responsible to clean’ parameter
- Set up a schedule view for each user, filtering each schedule by user name and grouping by ‘resolved’ status.
- If users can’t resolve something completely, have them note why in a ‘resolution comment’ field.
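The shuffle-and-divide steps above amount to dealing the warnings out round-robin after a shuffle. A minimal sketch, again in plain Python (the user names and `seed` parameter are illustrative; in Dynamo the result would be written to the ‘responsible to clean’ parameter on each warning’s elements):

```python
import random

def assign_warnings(warnings, users, seed=None):
    """Shuffle warnings, then deal them out to users round-robin so
    everyone gets an (almost) even share.

    Returns a dict of user name -> list of warnings. Passing a seed
    makes the shuffle repeatable, which helps when re-running the graph.
    """
    pool = list(warnings)
    random.Random(seed).shuffle(pool)  # spread the pain (and the learning)
    buckets = {user: [] for user in users}
    for i, warning in enumerate(pool):
        buckets[users[i % len(users)]].append(warning)
    return buckets

# Ten warnings split across three (made-up) users:
groups = assign_warnings(range(10), ["Ann", "Bob", "Cam"], seed=42)
for user, assigned in groups.items():
    print(user, len(assigned))
```

Round-robin dealing keeps the group sizes within one of each other even when the warning count doesn’t divide evenly by the user count.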
Personally, I like manually pulling my warning reports and exporting them to HTML instead of reading the warnings live, even though it involves a degree of manual labor. The HTML file can easily be edited to focus on areas of concern (or to mark what to ignore) before I start diving into the work, and Dynamo can pull elements from it just fine, so I can still schedule it. This way I also have a snapshot of what was broken on a given date, which lets me track warnings over time and infer information about user habits (John joined the team in March and left in April; while he was on the team we got 200 ‘off axis’ warnings a week, but since he left we only have a few - there may be a correlation there).
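Pulling element ids back out of the exported report is straightforward text scraping. A hedged sketch: the exact markup of Revit’s HTML warning export varies by version, but the element entries include text like “id 123456”, so a simple pattern scan is usually enough (the sample HTML below is made up for illustration, not a real export):

```python
import re

# Made-up fragment in the general shape of an exported warning report:
sample = """
<tr><td>1</td>
<td>Walls : Basic Wall : id 284771<br>
Walls : Basic Wall : id 284772</td>
<td>Highlighted walls overlap.</td></tr>
"""

def element_ids(report_html):
    """Scan exported report text for 'id <number>' tokens and return
    the ids as integers, in the order they appear."""
    return [int(m) for m in re.findall(r"\bid\s+(\d+)", report_html)]

print(element_ids(sample))
# [284771, 284772]
```

Those ids are what Dynamo (or a Python node calling the Revit API’s `ElementId`) can use to select the elements back in the live model, so the snapshot report still drives the same parameter-setting workflow.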