When people think of use cases for Large Language Models (LLMs), they usually focus on producing free-form text, or maybe code. Yet in my experience, by far the most useful application of LLMs is their ability, with the right scaffolding, to take squishy data, such as natural language or images, and generate highly complex objects with a predefined structure. Recent examples from my practice include producing complex chart configs from plain images, and precisely structured semantic layer queries from users' natural language questions. Having the model return an exact, validated Pydantic object, perhaps several levels deep, allows a degree of control and reliability that naive approaches such as direct SQL generation cannot match - so much so that prompt tuning and the exact choice of model hardly even matter. I’ll explain how to do this reliably and easily with open source tools, using a combination of structured outputs and an agentic feedback loop.
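To make the idea concrete, here is a minimal sketch of the pattern, assuming Pydantic v2. The schema itself (`ChartConfig`, `Axis`) is a hypothetical stand-in for the kind of nested chart config mentioned above; the raw JSON string stands in for what a structured-output LLM call would return.

```python
from pydantic import BaseModel

# Hypothetical nested schema the model is asked to fill in.
class Axis(BaseModel):
    label: str
    log_scale: bool = False  # defaults let the model omit fields safely

class ChartConfig(BaseModel):
    title: str
    x: Axis
    y: Axis

# Stand-in for the JSON an LLM returns under structured-output constraints.
raw = '{"title": "Revenue", "x": {"label": "Month"}, "y": {"label": "USD", "log_scale": true}}'

# Parsing validates types, required fields, and nesting in one step;
# a malformed response raises a ValidationError instead of silently passing through.
config = ChartConfig.model_validate_json(raw)
print(config.y.log_scale)
```

The payoff is that downstream code works with a typed, validated object rather than a loosely parsed dict, and a `ValidationError` gives you a precise, machine-readable description of what the model got wrong.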

