Thanks
You mean how do we use it?
Well ChatGPT is a bit rubbish for Dynamo. It makes up nodes.
It's a lot better at the Python stuff in Dynamo than it was a year ago though.
A year ago I found it unusable for anything Dynamo related, but recently I've tried again and (with the Python) it's pretty good now.
TL;DR >> I get it to write little chunks of Python (not more than 100 lines at a time though).
Now in DS (for geometry)
I never really liked DesignScript.
I only use a tiny, tiny bit of it.
So my job can't be done by AI
yet
But how can we be sure that your posts are not written by a chatbot?
ChatGPT (Grok is better for Python code in Dynamo, imo) has hindered my ability to learn or create code myself. I know Python as a language, but I want to be able to create code solutions like some legends here (Mike, Cyril, Soren). Generally, when I'm creating code it is on an urgent requirement, so I end up using AI.
The only approach I believe that works for abstract code is to write specific data for the AI to use as reference, otherwise as others mentioned it just makes stuff up, and rather badly at that.
Visual programming is not as literal as written code, which comprises most of its effective learning data.
I recently tested (experimentally) with an AI agent that uses DynamoPythonNet3 (with WPF), which includes:
- OpenAI REST API;
- an MCP server on n8n (with Supabase for RAG).
For geometry generation by LLMs, I find it lacks precision. On the other hand, object classification by geometry gives good results (as does machine learning).
I've had really good luck using CoPilot to write Python code for Python Script nodes. The key is being very descriptive about what you are trying to do. One of our users needed to know which Revit links were visible in all of the views on sheets, and to export that data to Excel. I was able to get close using Dynamo nodes, but the results weren't accurate. With the help of CoPilot, I was able to use a single Python Script node to accomplish this task. I honestly question if most Dynamo nodes and custom package nodes are needed any more.
Ok… So I said I would drop some thoughts if @Marcel_Rijsmus would start a thread, so lesson learned: don't commit to topics when something is too deep to cover in a quick go around!
How it works
The easiest way to get value from technology is to learn how it works, so I'll start there. This is a VERY simplified summary based on what I know about the tools today - I encourage everyone to do their own research though.
All the tools getting buzz today are Generative Pretrained Transformers (GPT). As a simplified statement on how they manage to do what they do: they work by taking MASSIVE datasets, converting them to a database of vectors, and then pulling common vector sequences. When you ask a question, each part of it is converted to matching vectors, and the sequence is passed into the database to pull a series of response vectors, which are mixed with some randomization and then returned as the root words that assemble the phrase. The conversion to vectors is called "tokenization" and is how most of the tools charge for their costs. It's not a direct 1:1 word-to-token value either - syllable count is closer, but even that is off. OpenAI has a good demo tool you can find on your own if you're interested.
Each tool (Claude, ChatGPT, CoPilot, etc.) has slightly different databases (some even allow changing which database you want to query). This means that ChatGPT might be best for Marcel due to how he types his thoughts, and Claude might be better for Gavin based on how he types his. Similarly, CoPilot might be better suited for getting action items out of video calls than for reviewing your construction drawings and producing a valid specification. Knowing which tool works best for you and the task at hand is almost a must.
The databases aren't static either - information is added and removed all the time, resulting in different results day to day. My favorite model from November is now all but useless for me on a project which I had already used it on for a while, meanwhile a colleague who previously hated me using it on his project thinks it is the best by a large margin.
The last thing to note is that the results always return an answer, as the input vectors always land somewhere in the DB, so a chain is always returned. The GPT doesn't know if it is a good result, and there is no "validate" method here; it just gives you the result, good, bad, or otherwise. Sometimes the model's gaps or randomness will push things into the area of make-believe - often called hallucinations (a term I hate).
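To make the "always returns an answer" point concrete, here is a toy, pure-Python sketch of nearest-vector retrieval. The phrases and 3-D vectors are invented for illustration; real models use learned embeddings with thousands of dimensions.

```python
import math

# Toy "vector database": each phrase maps to a hand-made embedding.
db = {
    "select all walls": [0.9, 0.1, 0.0],
    "export to excel": [0.1, 0.9, 0.0],
    "place family instance": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query_vec):
    # Note: this ALWAYS returns something - there is no
    # "no good match" branch, just like the GPT behavior described above.
    return max(db, key=lambda k: cosine(db[k], query_vec))

print(nearest([0.8, 0.2, 0.1]))  # lands on "select all walls"
print(nearest([0.5, 0.4, 0.4]))  # vague input still returns a confident answer
```

Even the second, ambiguous query gets a match; nothing in the mechanism signals "I don't actually know."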
Working inside the limits
Now that we have some background we can start to look at some guidelines for using these tools well.
- Select a model which is aimed at your use case. Writing C# code? Copilot in Visual Studio is a pretty good choice as it's powered mostly by the TONS of working C# projects on GitHub. But I wouldn't use it to identify whether the submitted paint is equivalent to the one you specified - ChatGPT is likely a better fit there, and even that may come up short, as this is an area of expertise that isn't well encoded in the data sources used for these models. For Dynamo graph authoring, Node Autocomplete and the Autodesk Assistant are your best bets.
- Because each input vector has a random factor applied, the more inputs, the more likely the randomization reinforces itself, and thereby the more likely you get into the land of wrong answers pretending to be right. Keep prompts small and use reinforcing language as you go. Instead of asking for a complete program, ask for a small segment, or a single method, which you can then chain into the larger tool.
- If you can't find a good model for your use, consider building your own. This isn't rocket science, and it doesn't take a supercomputer or a PhD-level computer scientist either. You will need to spend a good bit of time, and you might not have enough data initially, but you can get good results here.
- Common needs are more likely to have good results, so sway towards the masses. Image generation is needed in ALL industries; spatial relationships are quite niche by comparison. So rather than trying to generate floor plans with the image-producing GPTs, have them stylize and flesh out your massing models, take the time savings, and spend more time on the planning or on training a custom planning model. Similarly, user interfaces are easy; scaling anything else is hard, as scaled code bases are not publicly exposed and therefore not in the models.
- Don't fall into the overuse trap. These tools all cost significant sums to run, and NONE of them have turned anything close to a profit. OpenAI reportedly burned through something like 20 BILLION last year, and some analysts are warning they could be entirely out of cash for operating expenses next year. Operating at a loss is only viable for so long before an investor forces your hand, the token costs jump from the current $1 price to $10, and suddenly hiring an intern makes more sense than footing your GPT bill.
- Leverage the common basis of the AI to explain results you don't understand. They are all pretty good at base language, and as such it can be like asking an expert to "explain the previous reply like I'm five". The result can be like having a teacher who's an expert in the content to help you make sense of the outcome.
- Remember that in the end all of these systems are just aggregated information (what AI likely should have stood for in 2025), and that the systems return results even when the mesh is insufficient to have complete information. The systems are also not capable of being held accountable. As such any negative consequences from running the result will fall squarely on you. This means you should proceed with caution, work in detached datasets, and implement in slow increments.
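On the "build your own" point above: a usable custom model can start out embarrassingly small. A hypothetical sketch (the element data below is invented) of a 1-nearest-neighbour classifier over your own labelled examples:

```python
# A "model" can be as small as nearest-neighbour lookup over your own
# labelled examples: invented wall/door elements described by
# (height_m, width_m) feature pairs.
training = [
    ((3.0, 5.0), "wall"),
    ((2.8, 4.2), "wall"),
    ((2.1, 0.9), "door"),
    ((2.0, 1.0), "door"),
]

def classify(sample):
    # Pick the label of the closest training point (1-NN).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training, key=lambda t: dist(t[0], sample))
    return label

print(classify((2.9, 4.8)))   # -> wall
print(classify((2.05, 0.95))) # -> door
```

No GPU, no training loop - just your own data and a distance function. More data and better features get you surprisingly far before you need anything fancier.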
How convenient, there is an AI summarize button in this topic
[quote]
… jacob.small provides a detailed breakdown of how GPTs work, emphasizing tokenization, model variability, and randomness. He advises users to: 1) use AI for common tasks like UIs or image stylization, not niche geometry; 2) keep prompts small and iterative; 3) consider building custom models if needed; 4) avoid overreliance due to high costs and unsustainable business models; and 5) use AI to explain complex outputs in simpler terms.
[/quote]
@jacob.small nailed it on the technicals. The context window is our most precious resource, and we waste so much of it trying to explain things the model should already know.
@c.poupin is spot on with the MCP approach. Copy-pasting snippets into a chat window is already feeling obsolete.
For Dynamo specifically, this is critical because every node is essentially a tool. We donât just need the AI to write code; we need it to understand the graph logic itself. MCP gives it that direct line to the toolset/schema rather than us trying to describe geometry with text.
That is the difference between a novelty act and a production-ready workflow. We need to stop hand-holding the AI and start integrating it.
Users often expect a direct answer from AI to exactly what they are looking for. However, no model in the world can give a "definitive" answer to your question unless it has been specifically trained on that exact problem. All it can do is interpret the question and offer a perspective. And in my opinion, that is precisely where the real value lies: in making interpretations.
I'd like to show what I mean by simplifying things as much as possible.
Take K-Means: you simply provide it with a "data list," and it asks you, "How many groups should I create?" But how many groups should there be, and why?
Preprocessing:
Data is the machine's eyesight. First, we need to clearly explain, or even show, what we actually want.
Feature Extractor: Creates a representation from geometries. It looks at geometric data and generates a list of features for each geometry.
Scaler: The Feature Extractor can sometimes produce numerically very large values, so the Scaler simply normalizes (rescales) this data.
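A minimal sketch of what the Scaler step might do, assuming simple min-max normalization (real pipelines may use z-scores or other schemes; the area values are invented):

```python
# Min-max scaling: squeeze each feature column into the 0..1 range so
# that numerically large features (e.g. volumes) don't drown out small
# ones (e.g. edge counts) when distances are computed later.
def min_max_scale(column):
    lo, hi = min(column), max(column)
    if hi == lo:  # constant column - nothing to scale
        return [0.0 for _ in column]
    return [(v - lo) / (hi - lo) for v in column]

areas = [12.0, 450.0, 8800.0, 95.0]  # invented example values
print(min_max_scale(areas))
```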
At this point, the machine now understands the size of the geometries we have, the area they occupy, and what they look like. However, it has no idea about their identities. In Revit, on the other hand, each piece of data has an associated identity: name, category, mark, labels…
For this, we use an
Encoder: It converts string-based data into a numerical representation.
In this way, the model has now learned both the geometries themselves and their identities.
Dimensionality Reduction:
It reduces the complex numerical data we have (geometry and identity data) into a format we can understand: (X, Y, Z). In this way, we can "simply" see what all this complex numerical data actually looks like, and by the end of the day, we have an answer to the question the machine asks us: "How many groups should I create?"
Our answer: "Create 3 Groups."
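The grouping step itself fits in a few lines of plain Python. This is a toy K-Means, assuming the features are already extracted, scaled, and encoded as points; the data and the evenly-spaced initialization are simplifications (real implementations use random or k-means++ starts):

```python
# Minimal K-Means over already-preprocessed 2-D feature points.
def k_means(points, k, iterations=20):
    # Evenly spaced starting centroids keep this toy run deterministic.
    centroids = [list(points[i * len(points) // k]) for i in range(k)]
    assign = [0] * len(points)
    for _ in range(iterations):
        # 1. Assign each point to its nearest centroid.
        for i, p in enumerate(points):
            assign[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
        # 2. Move each centroid to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return assign

# Three obvious clumps of invented "geometry features":
pts = [(0.1, 0.1), (0.2, 0.1), (5.0, 5.1), (5.2, 4.9), (9.8, 0.2), (10.0, 0.1)]
print(k_means(pts, 3))  # -> [0, 0, 1, 1, 2, 2]
```

Answering "Create 3 Groups" is just passing k=3; the hard part, as described above, is all the preprocessing that makes the distances meaningful.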
Graph:





