Meetings are prompts now

AI / 6 min read


Meetings aren't about efficiency anymore. They're the richest context you'll ever give an AI. The question is what happens after.

Harry Traynor

Introduction

There's a familiar rhythm to how most teams think about meetings and AI. First came the transcripts - useful in theory, rarely touched in practice. Then the AI summarisers arrived, promising to extract actions and decisions so you didn't have to. And now we're in the era of AI-generated agendas, auto-formatted outputs, and tasks that raise themselves in your project board before anyone's left the room.

All of this is real progress. But I think it misses something bigger about what's actually happening to the meeting itself.

The transcript era solved the wrong problem

For years, transcripts sat in a strange no man's land. They captured everything, but nobody used them at scale. The data was there - every decision, every caveat, every "let's come back to that" - but it was unstructured, long, and rarely connected to anything downstream.

The problem wasn't capture. It was processing. We had the raw material but no pipeline to turn it into anything useful. Most transcripts were insurance policies: you probably wouldn't read them, but you felt better knowing they existed.

When AI meeting tools started showing up, they targeted the obvious pain points first. Summarise the transcript. Pull out the action items. Generate a follow-up email. These are genuinely useful features, and they've saved people real time. But they're still operating on the assumption that the meeting's job is to produce a tidy set of outputs - and that "better meeting" means "more efficient meeting."

I think that assumption is worth questioning.

Efficiency is the wrong goal

The instinct to make meetings shorter and tighter comes from a good place. Nobody wants to waste time. But there's a subtle trap in optimising for speed: you start cutting the very things that make meetings valuable.

A meeting isn't just a decision-making machine. It's a space where people think out loud, surface context that doesn't fit neatly into a document, challenge assumptions in real time, and reach shared understanding through the messy process of talking things through. The tangents, the "actually, let me reframe that," the five-minute detour where someone explains why the last time they tried this approach it fell apart - that's not waste. That's context.

And context, it turns out, is the most valuable thing you can give an AI.

The meeting is the prompt

Here's the shift I keep coming back to: a well-run, one-hour meeting is probably the richest, most complete piece of context you'll produce all week. It has the problem statement, the constraints, the disagreements, the decisions, the edge cases somebody raised at minute 43, and the reasoning behind why option B was chosen over option A.

That's not a transcript. That's a prompt.

Not in the literal sense of typing it into a chat window, but in the functional sense. It's a dense, contextual input that - if processed well - can drive genuinely high-quality outputs. Structured agendas. Detailed specifications. Task breakdowns. Risk registers. Decision logs. All the things we currently write by hand after the meeting, badly, three days later when we've already forgotten half the nuance.

Once you see meetings this way, the priorities flip. The goal isn't to keep things tight and efficient. The goal is to make sure every topic gets proper airtime, every perspective surfaces, every decision gets its reasoning stated out loud. You're not trying to minimise the meeting — you're trying to maximise the quality of the input, because you know the output pipeline can handle the rest.

We're essentially speaking the prompt out loud. The discussion itself is the specification.

The post-processing quality gap

This is where things get interesting - and where most teams are currently stuck.

The AI tools that process meetings today are good at extraction: pull out the actions, summarise the key points, list who said what. But extraction isn't the same as transformation. Turning a rich, hour-long discussion into a properly structured design specification, a set of weighted decisions with rationale, or a technically sound requirements document - that's a fundamentally harder problem.

And it's a problem that matters, because the quality ceiling of the post-processing determines whether the meeting-as-prompt idea actually works. If your AI just spits out a bulleted list of next steps, you've traded one kind of manual work for another. You still need someone to sit down and turn those bullets into something an engineer, a designer, or another AI can actually act on.

The curious question is: could we build that post-processing layer to match the standard a capable human would set? Not the standard we typically achieve - half-remembered notes turned into a loose spec on a Friday afternoon - but the standard we're actually capable of when we sit down and do it properly. The complete decision log. The structured specification. The task breakdown with context and acceptance criteria.

Humans can do that work. We just rarely do, because it takes hours and the next meeting started ten minutes ago.

Meetings as inputs to agents

This is where the current generation of AI agents changes the equation.

Think about what's possible now with a well-configured agent - whether that's a GitHub Copilot agent, a Copilot Studio agent, a Claude project, or whatever your stack looks like. You can give these agents complex system prompts that define exactly what kind of artefact you want, how it should be structured, what sections it needs, what level of detail is expected, and what standards it should meet. You can give them reference documents, templates, and examples of what good looks like. You can build agents whose entire job is to take a rich, unstructured input and produce a specific, structured output.

Now pair that with a meeting transcript.

Run a design review and feed the full discussion into an agent configured to produce design specifications - complete with decisions, constraints, open questions, and technical rationale. Hold a requirements workshop and let an agent transform two hours of stakeholder conversation into a structured requirements document, properly categorised and cross-referenced. Do a sprint retrospective and have an agent produce not just a list of actions but a proper improvement plan with owners, timelines, and links to the specific moments in the discussion where each issue was raised.

The meeting becomes the input. The agent becomes the pipeline. The output is the artefact you'd have written yourself - if you'd had three uninterrupted hours and perfect recall of everything that was said.
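A sketch of that input-pipeline-artefact shape in Python, hedged heavily: `call_model` stands in for whatever LLM client your stack actually uses, and the system prompt text is illustrative, not a recipe:

```python
# Illustrative system prompt for a transcript-to-spec agent. The real
# investment goes into refining this text, not the surrounding code.
SPEC_AGENT_SYSTEM_PROMPT = """\
You turn meeting transcripts into design specifications.
Output these sections, in order: Summary, Decisions (each with its
rationale), Constraints, Open Questions, Task Breakdown (each task
with acceptance criteria).
If the transcript does not answer something, list it under Open
Questions - never fill gaps with assumptions.
"""

def build_spec_request(transcript: str) -> list[dict]:
    """Assemble the chat messages for the spec-writing agent."""
    return [
        {"role": "system", "content": SPEC_AGENT_SYSTEM_PROMPT},
        {"role": "user", "content": f"Transcript:\n\n{transcript}"},
    ]

# messages = build_spec_request(open("design_review.txt").read())
# spec = call_model(messages)  # your LLM client of choice goes here
```

The code is deliberately thin: the meeting supplies the user message, and all the craft lives in the system prompt.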

What makes this different from a simple "summarise my meeting" feature is the depth of the transformation. A summary compresses. An agent configured with the right prompt, the right structure, and the right context translates - from the loose, human language of a conversation into the precise, structured language of a deliverable. The difference between a meeting summary and a design spec generated from a meeting transcript is the difference between a paragraph description and a proper architectural document. They contain the same information. One of them is actually useful.

The prompt behind the prompt

There's an irony here worth naming. To make this work - to turn meetings into high-quality artefacts through AI agents - you need to invest serious effort in the agent's system prompt. The instructions that tell the agent what a good design spec looks like, what sections to include, how to handle ambiguity, when to flag gaps rather than fill them with assumptions. That prompt engineering is real, skilled work.

So the chain looks like this: a carefully designed agent prompt processes the meeting (which is itself a prompt) and produces an artefact that might feed into yet another agent downstream. It's prompts all the way down. But the human effort has shifted. Instead of spending hours after every meeting manually writing specs and chasing actions, you invest once in building the agent properly - and then every meeting that follows benefits from that investment.
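The "prompts all the way down" chain can be sketched as simple composition, where each stage's output becomes the next stage's input. This is a toy illustration, not a framework - in practice each agent would wrap an LLM call with its own system prompt:

```python
from typing import Callable

Agent = Callable[[str], str]  # takes an artefact, returns a new one

def run_chain(transcript: str, agents: list[Agent]) -> str:
    """Feed a meeting transcript through a sequence of agents:
    transcript -> spec -> task breakdown -> ... Each stage's
    output is the next stage's prompt."""
    artefact = transcript
    for agent in agents:
        artefact = agent(artefact)
    return artefact

# Two plain string functions stand in for LLM-backed agents here.
```

The one-off human investment lives inside each agent's prompt; the chain itself is trivial, which is rather the point.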

This is why I think the role of meetings in an AI-native workflow is fundamentally different from what most people assume. The meeting isn't overhead. It isn't the thing you tolerate so you can get back to real work. It's the primary context-generation mechanism for everything that follows. And the agents you build around it are what determine whether that context gets wasted or transformed into something genuinely valuable.

What this changes about how we meet

If you take this seriously, it changes how you think about facilitation, agendas, and what "a good meeting" even means.

A good meeting, in this framing, is one that produces a complete, high-quality input for downstream processing. That means making sure decisions are stated clearly, not just implied. It means naming constraints out loud, even when everyone in the room already knows them - because the agent processing the transcript doesn't have that shared context. It means circling back to unresolved points rather than letting them drift, because a gap in the discussion becomes a gap in the output.

It doesn't mean meetings need to be longer. It means they need to be thorough. Every topic covered. Every decision landed. Every piece of reasoning spoken, not assumed.

The AI can't compensate for a thin input. A poor discussion processed through a brilliant agent still produces a poor specification. The meeting is the prompt - and like any prompt, the quality of the output depends entirely on the quality of the input.

So the next time you're tempted to cut a meeting short for the sake of everyone's calendar, consider what you're actually optimising for. The hour you save might cost you days of rework downstream - not because the AI failed, but because it never had the context it needed in the first place.