Python for model information extraction for a Power BI dashboard

Hi community,
recently my boss asked us to create a Power BI dashboard for all our Revit projects in order to check the completion and progress of the models.
The idea would be to run the script periodically (e.g. once a week).
For starters, I was quickly able to extract all the schedules with a small Python script to test how this would look in Power BI.
I guess all kinds of information could be extracted in order to check:

  • correct naming
    • of views, sheets, schedules, view templates, filters, …
  • correct placement of views in the project browser
  • model element counts
  • whether all elements have the required parameters filled out
  • worksets
  • errors
  • … the list is pretty endless.

The idea is also to open part of the overview to the companies in charge of creating the model, so a lot of problems would already solve themselves.

The question is:
what approach would be best to tackle this?

I have some experience with Python, and I guess it’s the fastest option in terms of runtime.
I’m still struggling to get to know the Revit API, though, and spent a crazy amount of time just writing a script to extract all views with some of their parameters, for example.

Any ideas, links, or small scripts you could share to help me dive into the subject faster and get productive?
OR:
am I mistaken, and is there a better approach?

Thanks in advance!

I would recommend you start small: don’t begin with 100 audit criteria to review. Focus first on the structure of your data and how it can be expanded in the future once more criteria come into play. Pick maybe 5-10 aspects that can be compared easily and understood at face value by non-Revit users in Power BI reports.

I would suggest these targets to begin with (all possible in Dynamo + Python, with minimal API work):

  1. Model name
  2. Model file size
  3. Model warning count
  4. Group count
  5. Workset count
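A simple way to structure those five targets is one flat record per model per audit run, appended to a CSV that Power BI reads directly. Here’s a minimal sketch in plain Python — the actual values would come from the Revit API or Dynamo; the field names and numbers here are purely illustrative:

```python
import csv
from datetime import date

# One flat row per model per audit run. Values would come from the
# Revit API / Dynamo -- hard-coded below purely for illustration.
FIELDS = ["audit_date", "model_name", "file_size_mb",
          "warning_count", "group_count", "workset_count"]

def make_record(model_name, file_size_mb, warning_count,
                group_count, workset_count):
    """Build one audit row with a consistent schema."""
    return {
        "audit_date": date.today().isoformat(),
        "model_name": model_name,
        "file_size_mb": file_size_mb,
        "warning_count": warning_count,
        "group_count": group_count,
        "workset_count": workset_count,
    }

def append_to_csv(path, records):
    """Append audit rows to a CSV file that Power BI can refresh from."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # brand-new file: write the header once
            writer.writeheader()
        writer.writerows(records)

# Example usage with made-up numbers:
append_to_csv("model_audit.csv",
              [make_record("Project_A_ARC.rvt", 312.4, 57, 12, 9)])
```

Keeping the schema fixed from day one means you can add new audit criteria later as extra columns without breaking the reports built on the old ones.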

Most people begin this workflow in Dynamo, then move on to developing their own custom Python nodes that can be used company-wide. At some point speed becomes an issue, and most then move on to developing an application to be run across models (either by the user, or automatically when a user opens a model).

It’s important to determine how your reports are collated and compared to previous versions. Most people begin with Excel or CSV dump files that Power BI can draw values from. Eventually this tends to prove ineffective, and most seem to move to sending data from an app or Dynamo to an SQL database instead.
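If dump files stop scaling, the audit rows can go straight into a database instead. A minimal sketch using Python’s built-in sqlite3 as a stand-in for whatever SQL server you’d actually use — table and column names are my own invention:

```python
import sqlite3
from datetime import date

def init_db(path):
    """Open the database and create the audit table if it's missing."""
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS model_audit (
            audit_date    TEXT,
            model_name    TEXT,
            warning_count INTEGER,
            group_count   INTEGER,
            workset_count INTEGER
        )""")
    return conn

def insert_audit(conn, model_name, warnings, groups, worksets):
    """Store one audit row; values would come from the Revit API."""
    conn.execute(
        "INSERT INTO model_audit VALUES (?, ?, ?, ?, ?)",
        (date.today().isoformat(), model_name, warnings, groups, worksets))
    conn.commit()

def warning_trend(conn, model_name):
    """History Power BI could chart: warning count per audit date."""
    cur = conn.execute(
        "SELECT audit_date, warning_count FROM model_audit "
        "WHERE model_name = ? ORDER BY audit_date",
        (model_name,))
    return cur.fetchall()

# Example usage (in-memory DB, made-up numbers):
conn = init_db(":memory:")
insert_audit(conn, "Project_A_ARC.rvt", 57, 12, 9)
print(warning_trend(conn, "Project_A_ARC.rvt"))
```

The history query is the real payoff of a database over flat dumps: week-over-week trends per model become a single SELECT instead of stitching CSV files together.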

I would begin by looking at how the interoperability tools handle this process as a starting point. They’re already pretty decent, and will give you an idea of where you can begin or expand.
