Civil 3D’s API Is Holding Back AI-Driven Engineering

I wish Autodesk would focus more on improving the Civil 3D API.

Right now, I have AI agents built into my hydrology workflow, but I had to reverse engineer .gpw and .stm files just to make it work. Autodesk Civil 3D’s Dynamo has been helpful for accessing catchment data, but I still had to build a separate .exe just to compute curve numbers. On top of that, I have to manually open AutoCAD just to run Hydraflow extensions.

This shouldn’t be this hard.

Give us a real Python API—or at least a CLI we can hook into. Right now, integrating with modern AI workflows feels like working around the platform instead of with it.

I think Autodesk needs to take this seriously, or it will not keep up with what's coming. The pace at which workflows are about to change warrants drastic action, in my opinion.

This may be because the tools are built as add-ons to Civil 3D/AutoCAD, so you have to run them from within those applications.

On the AI front, have you seen the following? Autodesk Civil 3D 2027 Help | Autodesk Assistant | Autodesk


Hydraflow extensions isn't something I am familiar with. @dkyle.willis, can you provide a link to the Autodesk offering? Or is it third party? If it is third party, the issue may be with the extension's authoring, not C3D.

There are also a lot of great new AI powered features coming - the assistant is just the start.

That said, it seems you're not so much asking for an AI integration as for a Python hook for whatever tooling you are building. Dynamo's Python nodes are capable of running pretty much anything you might want; they just have to be configured to do so. The built-in editor would be the first place to start, incorporating the Dynamo Assistant (available as an alpha; there is a pinned post on the forum if you're curious) if you want an AI assistant to help author the code. If you're happy with your Claude setup or other tooling, just have it save the scripts as text files, load those into Dynamo via the 'read text' node, and then leverage the 'Python Script from String' node.

Jacob,

Here’s a link to the help page for the Hydraflow extensions: Help

Workflows are being reimagined and rebuilt around tools that AI agents can interact with. At this point, LLMs are capable enough that a simple command-line interface is all they need.
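To make that concrete, here is a minimal sketch of the kind of agent-friendly CLI meant here. The SCS curve number runoff equation itself is standard, but the flag names and JSON output format are purely illustrative, not anything Hydraflow or Civil 3D provides:

```python
import argparse
import json

def scs_runoff(p_in, cn):
    """SCS runoff depth in inches: Q = (P - 0.2S)^2 / (P + 0.8S), with S = 1000/CN - 10."""
    s = 1000.0 / cn - 10.0
    ia = 0.2 * s                      # initial abstraction
    if p_in <= ia:
        return 0.0                    # all rainfall abstracted, no runoff
    return (p_in - ia) ** 2 / (p_in + 0.8 * s)

def main(argv=None):
    parser = argparse.ArgumentParser(description="Agent-callable runoff calculator")
    parser.add_argument("--rainfall", type=float, required=True, help="storm depth, inches")
    parser.add_argument("--cn", type=float, required=True, help="composite curve number")
    args = parser.parse_args(argv)
    # JSON on stdout is trivial for an LLM agent to parse reliably
    print(json.dumps({"runoff_in": round(scs_runoff(args.rainfall, args.cn), 3)}))

if __name__ == "__main__":
    main()
```

An agent can then shell out to something like `python runoff_cli.py --rainfall 3.0 --cn 80` and parse the JSON reply, with no GUI in the loop.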

Dynamo falls short because its interpreter is packaged inside AutoCAD. That’s a major limitation—I can’t install third-party libraries that are required for even basic workflows. It also doesn’t allow scripts to run outside of the Dynamo environment, which makes it nearly impossible to interface with external LLM APIs or build anything that operates as part of a larger system.

Have you looked at installing or using the Dynamo sandbox? That version is separate from any of the Autodesk product platforms and might let you do a little more without the limitations of the application platforms that run their own Dynamo versions.

You'd have to also hook into Civil 3D separately from the sandbox, but I know it's doable: the Generative Design workflows available in Revit aren't available in Civil 3D, yet I've seen this same connectivity approach used to bring those workflows across to Civil 3D drawings.

Hydraflow extensions

Looks like this is for ensuring licensing, but it isn't something I have ever used before; thanks for educating me! Sadly it's not something those of us on the Dynamo forum are able to address. Best to post to the official forum on the Autodesk domain instead.


I'm going to start by picking apart the reality of your statement here. Not to attack, but just to cast a few bits of reality into the "AI will make it all better" mindset, as it should really be "LLMs might make some stuff better in the future, but they don't really do so today".

I was on the phone with an engineer last week who found that OpenAI and their custom MCP pushed a TON of incorrect data into a project; it certainly looked good, but it was wrong when examined in detail. They were stuck 'undoing' all the work ahead of the Easter weekend - not a great way to spend the Thursday before a holiday. At the time of my writing this, there isn't anything LLM powered on the market which meets the historic expectations for accuracy and consistency that the AEC industry demands. I think back to all the times someone has complained about a catchment area being off due to not having perfectly vertical lines on curbs, or when rounding dimensions to the nearest mm and 0.000001 degrees on a foundation plan caused the polycurve to be unclosed by 0.00000037 millimeters (God, I wish I was making that last one up - two weeks of my life went to explaining that and doing the trig by hand).

Longer term, there's also the cost of repeatedly running LLM calls through an MCP instead of building a deterministic workflow. OpenAI is losing $83,000,000 each day at the moment while processing 6 million tokens a minute, so they're only going to have to raise your cost by ~67% to break even with their current expenses (that figure implies daily revenue of roughly $124M, since 83/124 ≈ 67%) - never mind when they have to start paying back the investors instead of getting yet another funding round (each funding round delays the inevitable burst while growing the bubble).

Now with that out of the way, we can get back to helping you out, as this tech is absolutely worth exploring. Just proceed with caution, as what looks to be a good deal today might not be tomorrow (the horror stories around token-consumption jumps on Reddit, other social media platforms, and GitHub from about a week ago are one such example).


As far as the Dynamo interaction goes… part of this sounds as if you're not well informed on the advanced uses of the Python node - not entirely uncommon, as this stuff pushes the limits of the supported use cases.

Dynamo falls short because its interpreter is packaged inside AutoCAD

This is the only way to access the AutoCAD .NET API; otherwise you're limited to the COM API, which is unstable and not recommended from a stability standpoint even by Microsoft (who produce both environments).

I can’t install third-party libraries that are required for even basic workflows

You can customize your Python 3 library - how will vary by your development environment, but this is a good place to start: Customizing Dynamo's Python 3 installation · DynamoDS/Dynamo Wiki · GitHub.

It also doesn’t allow scripts to run outside of the Dynamo environment, which makes it nearly impossible to interface with external LLM APIs or build anything that operates as part of a larger system.

Entirely doable - you just have to build the external tool as a server and then have the Python node in Dynamo make the call to it. That said, this likely wants to be an extension, like the agentic AI alpha project which has been provided by the Dynamo team, and there is also a lot of work underway across the various development teams at Autodesk to allow such connections between tools. However, the really easy route would be to have the external tool write the code to a file on disk, set up a 'read text' node, and leave Dynamo in Automatic run mode. Each execution of the external tool overwrites the file, which triggers Dynamo to re-execute the 'read text' node; if you push that into the Python node, you can trigger re-execution from outside Dynamo. The data can then be returned to the LLM by writing the output of the Python node to disk and telling the LLM to 'go fetch the results when ready'.


Sadly it isn't really the case for Civil 3D. The .NET API requires your application to be hosted (launched from the Civil 3D thread). The COM API isn't hosted, but it (1) is slow as heck, (2) doesn't have wrappers, (3) requires using the interop processes, which have their own limitations, (4) is WAY less stable, and (5) is only included for legacy support, so we can assume it will eventually die off (then again, AutoLISP still exists, so perhaps not).

The Revit API also requires that calls are hosted by the application itself, not made externally.

The way Generative Design works is actually by collecting the data which Dynamo will need and serializing it into the file by way of the Remember node. The serialized data can then be used at will via Dynamo Sandbox execution (well, it's headless rather than Sandbox, so technically Dynamo CLI, but that's a story for another day).


I completely agree with the limitations of AI you pointed out. But most of that comes from trying to use a generic chatbot for professional work—that’s just not the right tool.

If you build a proper “agent”—with system prompts, defined skills, tools, and the right context—most of those issues go away. Frameworks like OpenClaw make that pretty straightforward.

To give a real example, here’s how I see AI fitting into a typical hydrology workflow:

The hydrology data used in models is typically presented in Excel tables and basin maps. Our basin maps (usually an AutoCAD drawing) include almost everything needed for modeling—land uses, catchments/basin delineations, time of concentration paths, and labels with all the key inputs (Basin ID, CNs, areas, Tc, etc.).

The only things not typically included are stage-storage tables, outlet configurations, and rainfall data.

From there, we use Hydraflow to model and generate hydrographs, stage/storage, peak flow, pipe capacity, and so on.

I already have parts of this workflow automated. For example, I built a Python .exe using Shapely to calculate composite curve numbers. Dynamo pulls the geometry from AutoCAD, serializes it to JSON, and passes it into the executable. Sometimes I still have to clean up geometry in QGIS (using “fix geometry”), but it still saves a lot of time and reduces errors.
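For reference, the core of that composite curve number step can be sketched with Shapely roughly as follows. The JSON field names (`geometry`, `cn`) are my assumption here - match them to whatever your Dynamo export actually emits:

```python
from shapely.geometry import shape

def composite_cn(basin_geojson, landuse_features):
    """Area-weighted composite curve number for one basin.

    basin_geojson: GeoJSON geometry dict for the basin delineation.
    landuse_features: list of {"geometry": <GeoJSON dict>, "cn": <number>} entries.
    """
    basin = shape(basin_geojson).buffer(0)   # buffer(0) repairs minor self-intersections
    weighted = 0.0
    covered = 0.0
    for feat in landuse_features:
        piece = basin.intersection(shape(feat["geometry"]).buffer(0))
        if not piece.is_empty:
            weighted += feat["cn"] * piece.area
            covered += piece.area
    if covered == 0.0:
        raise ValueError("no land-use coverage inside basin")
    return weighted / covered
```

The `buffer(0)` calls are a common Shapely trick for cleaning slightly invalid polygons, which lines up with the "fix geometry" cleanup described above in QGIS.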

Where I would like to use AI is in taking the basin map and automatically building the model from that data.

With the right context, LLM-based agents are capable of interpreting a basin map and constructing a hydrology model correctly a large percentage of the time. I’ve tested this—when the process is clearly defined, the results are surprisingly consistent.

I think the ideal workflow is for the engineer to create the basin maps and all the linework, with all the calculations handled through scripts. Then AI agents step in to take that data, build the model, run it, and generate reports for the engineer of record to review.

The engineering doesn’t go away—the agent just handles the translation from data to model to report.


I think you're partially right in that an LLM will help here, but ideally it will be used to build the deterministic outcome. Said another way: use the LLM to generate the DLLs which Dynamo and/or Civil 3D need to extract the outcome you need.

Benefits:

  • You get the same output every time. Even the best, most tailored agents in the world cannot do that today, as the underlying models change too often.
  • Your LLM costs are limited to the time it takes to build the tool. Thereafter all use is ‘free’ in the truest sense.
  • Your execution speed will be many orders of magnitude faster, as there is no need to tokenize anything - there is a reason LLMs are not used in real-time applications.
  • The output data will always be correct, as the tool can only do the math on the static and consistent input types given.

The simplified analogy is that using an AI tool as a calculator to solve 1+1 is nowhere near as effective as the calculator built into Windows. Unlike the calculator, the AI tools are good at extracting the right numbers from a word problem, but your use case doesn't seem to involve that. So I recommend you use the AI tool to build the calculator.

The possible exception to the calculator analogy is if you will only ever need to do the math this way once, and you don’t mind manually reviewing the process produced and the outcomes thereof.


I would highly suggest you work with your IT team on this, because things may go the way of the changes happening within the IT departments of many firms, where additional security locks prevent any application from being loaded unless it has been approved and/or properly vetted (e.g. no running of any .exe unless it is on the approved list within Microsoft AppLocker).

This can also include never allowing an "unsigned" file to be loaded, which is the category your .exe will fall into.

This could scupper the scalability of anything you are developing for your company if you do not adapt accordingly and build adequate QA into your process before getting it signed internally.

I agree. I think it could be made completely deterministic with strict adherence to naming conventions. It’s still annoying that I have to reverse engineer a Hydraflow .gpw file just to integrate this workflow.

Might be that you could work out a deterministic method without the naming conventions - ask the user to select the objects which make up selection 1, then selection 2, etc. Or tell them to put stuff onto the layer named ____ before they run the automation ("Couldn't find layer called ___. Layers were generated - move objects to their associated layer as per the help document.").
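That layer-based contract can be enforced with a tiny deterministic pre-flight check before the automation runs. The layer names below are placeholders of my own, not a convention from this thread:

```python
# Hypothetical required-layer list; substitute your office's actual naming convention.
REQUIRED_LAYERS = ["BASIN-DELIN", "TC-PATHS", "LANDUSE-RES"]

def check_layers(drawing_layers, required=REQUIRED_LAYERS):
    """Return None if all required layers exist, else a user-facing error message."""
    missing = [name for name in required if name not in drawing_layers]
    if not missing:
        return None
    return ("Couldn't find layer(s): " + ", ".join(missing)
            + ". Move objects to their associated layer as per the help document.")
```

Running this at the top of the graph turns a silent naming mismatch into an explicit, repeatable failure message.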

As far as having to reverse engineer a GPW file… yeah, I can imagine that would suck. Ideally this wouldn't have to happen - have you tried calling the Python library from Dynamo's Python node directly instead? Might be a bit late now, but you can give it a shot the next time you need to review this (see the link above on customizing Dynamo's Python 3 instance) and start a new thread here if you get stuck.


That’s similar to how I’m approaching it. Currently, I have a separate layer for each land use type, but I keep all basin delineations on a single layer. The basin IDs are stored as Object Data attributes.

This setup works well when exporting to a shapefile using MapExport and bringing it into QGIS, but it becomes difficult to work with in Autodesk Dynamo.

That’s the main reason I’ve started using catchment objects for basin delineations instead of standard polylines. Still, I find polylines easier to work with overall and would prefer to stick with them if possible. I just don’t like putting each basin delineation on its own layer—it creates other issues that make it not worth it.

Do you know if there’s a way to read Object Data directly with Dynamo?

The image below shows the Object Data attribute I’m talking about.

Yes - but the methods for doing so are not readily exposed. I believe it requires a private API (meaning not intended for use by people outside the project, and not documented), which means you will have to do some more extensive testing. I will try to dig it up tomorrow, but I have a long day ahead of me.

Edit: Found this which looks like it’ll get you going - Accessing Object Data from AutoCAD Map 3D within the Civil 3D API. The link I really wanted was lost in the great TypePad apocalypse from last fall (they closed down all blogs with ~4 weeks worth of notice) - I’ll keep digging though.


Jacob, thank you so much for your help! Using this article as context, Opus 4.6 was able to crack the code in about 1.5 hours. I tried to get this to work a few months back with no luck! Here is the working code, hopefully it can help someone else.

import sys
import clr
import System

clr.AddReference('AcMgd')
clr.AddReference('AcCoreMgd')
clr.AddReference('AcDbMgd')

from Autodesk.AutoCAD.ApplicationServices import *
from Autodesk.AutoCAD.DatabaseServices import *

clr.AddReference('Autodesk.Map.Platform')
clr.AddReference('ManagedMapApi')

from Autodesk.Gis.Map import HostMapApplicationServices
from Autodesk.Gis.Map.ObjectData import *
from Autodesk.Gis.Map import Constants as MapConstants

project = HostMapApplicationServices.Application.ActiveProject
odTables = project.ODTables
table_names = list(odTables.GetTableNames())

objects = IN[0] if isinstance(IN[0], list) else [IN[0]]

def get_map_value(mv):
    """Read a MapValue based on its Type property.
    Type 0 = Int16, 1 = Int32, 2 = Double, 3 = String, 4 = Point
    """
    t = mv.Type
    if t == 3:
        return mv.StrValue
    elif t == 2:
        return mv.DoubleValue
    elif t == 1:
        return mv.Int32Value
    elif t == 0:
        return mv.Int16Value
    else:
        return str(mv.StrValue)

results = []

for obj in objects:
    try:
        oid = obj.InternalObjectId
        obj_data = {}

        for tableName in table_names:
            try:
                tbl = odTables[tableName]
                defs = tbl.FieldDefinitions

                recs = tbl.GetObjectTableRecords(
                    System.UInt32(0),
                    oid,
                    MapConstants.OpenMode.OpenForRead,
                    True
                )

                enum = recs.GetEnumerator()
                while enum.MoveNext():
                    rec = enum.Current
                    for i in range(defs.Count):
                        fd = defs[i]
                        mv = rec[i]
                        key = "{}.{}".format(tableName, fd.Name)
                        obj_data[key] = get_map_value(mv)

                recs.Dispose()

            except System.Exception:
                pass

        results.append(obj_data)

    except System.Exception:
        results.append({})

OUT = results

I think I understand what you’re saying now.

To be honest, I don’t really know any programming languages besides Python, which is why I’ve always tried to do everything through the Python node. But after trying to understand your comment, I’m starting to see that there may be a lot more functionality available by writing custom DLLs.

If I understand correctly, I can create custom DLLs and then use those within Dynamo to expose features that aren’t currently available through the built-in nodes. That sounds like exactly what I’ve been looking for.

That said, the learning curve seems pretty steep. Where would you recommend starting?

It's shockingly not - the process is about as straightforward as I have seen for something that offers this level of functionality, and the calls used by Dynamo can become the basis of calls made from your own apps and add-ins (just don't use any Dynamo calls in that code).

Start, as with so many learning exercises in Dynamo, with the primer. Specifically here: Developing for Dynamo | Dynamo

Skip nothing, and do that particular exercise first - not something for Civil 3D or another use case yet.

After that I recommend looking over this very old but still VERY useful tutorial from Matteo Cominetti: Dynamo Unchained 1: Learn how to develop Zero Touch Nodes in C#

The methods for generating the pkg and associating the debugger are the root for what I do in my own development today, albeit with some added tweaks for help documentation and such.


There does not appear to be an online source for the Map API, but you can download the API reference here.
As with AI coding, sometimes it gets things just so… not quite right.

You probably don't need to import any of the AutoCAD assemblies, or Autodesk.Map.Platform either.

It is better to use the built-in Python Exception class and to try without importing System altogether, and without converting unsigned long integers via System.UInt32(0) first, as IronPython / Python.NET is reasonably good at type conversions (although there are exceptions).

DataType is an Enum with the following values:

0 UnknownType
1 Integer
2 Real
3 Character
4 Point

So if you are on a more recent version of Python (3.10+):

from Autodesk.Gis.Map.Constants import DataType, OpenMode
...
def get_map_value(map_value):
    match map_value.Type:
        case DataType.UnknownType:
            return map_value.Int16Value
        case DataType.Integer:
            return map_value.Int32Value
        case DataType.Real:
            return map_value.DoubleValue
        case _:
            return map_value.StrValue

Or before 3.10:

def get_map_value(map_value):
    if map_value.Type == DataType.UnknownType:
        return map_value.Int16Value
    if map_value.Type == DataType.Integer:
        return map_value.Int32Value
    if map_value.Type == DataType.Real:
        return map_value.DoubleValue
    
    return map_value.StrValue

The easy way is to use the Civil Nodes package, which has nodes to read/write Object Data.
