AI automation for media is what I do in my day job: specifically audience trends and research, with the goal of enhancing editorial through higher quality and cadence. I've built a lot of agentic workflows this year, and the best ones consist of three interlocking components.

When I first started building workflows and agents, everything was horrendously unscalable and a mess of multitasking. I relied on OpenAI Assistants - which are now near obsolete, having been replaced by simple API calls and native agents in platforms like n8n and Make.

It was roughly March when I actually built a workflow, but it took until June to get something operational. That one worked for a single brand in a single company - so it didn't prove much beyond a productivity gain in a very specific context.

The penny that dropped around this time is that workflows are best managed by variables stored in some sort of database. Hard-writing every prompt to instruct multiple agents is a huge time sap. Worse still is creating numerous copies of a workflow to satisfy mildly variant use cases.

That is the easy way at first, but it becomes the hard way very quickly. It led to many days of leaving the office with my mind in a stew.

So instead, we built a no-code system that could be used for one company, across many brands. So long as those brands largely needed the same tasks performed via a workflow, the system could scale.

The key components of AI workflows

The three key components of building an AI system are:

A workflow platform: We tend to use Make, as it's flexible and presentable. n8n tends to be the preference for developers due to its lower costs. We've also used Power Automate because it's native to the Microsoft ecosystem, and OpenAI have also just released their own agent platform (note: I found this uninspiring, as it's quite a technical platform).

Prompts for agents: A workflow platform lets you run lots of different API calls in sequence, some of which can be AI. In the core workflows we've built, AI tends to account for about 20% of the steps - i.e., you're getting an agent to decide on some data or fulfil a task. These steps rely on system prompts (instructions) and user prompts (which function like chat).

A variable database: The key to unlocking the workflow + agent combination is storing the variables of the automation in a database like Airtable. The workflow can call these variables and insert them into its prompts, meaning you can have one workflow for multiple use cases - there's a rough code sketch of this below.
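To make that concrete, here's a rough Python sketch of what a variable look-up step does behind the scenes. It uses Airtable's standard REST API, but the base ID, table, field and brand names are placeholders for illustration - in practice this all happens through Make or n8n modules rather than code.

```python
# Illustrative only: the Make/n8n "look up variables" step, written out as code.
# Table, field and brand names below are hypothetical.
import os
import requests

AIRTABLE_TOKEN = os.environ["AIRTABLE_TOKEN"]
BASE_ID = "appXXXXXXXXXXXXXX"    # your Airtable base
TABLE_NAME = "PromptVariables"   # hypothetical table holding per-brand variables


def fetch_brand_variables(brand: str) -> dict:
    """Return the stored prompt variables for one brand as a dict of fields."""
    url = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE_NAME}"
    headers = {"Authorization": f"Bearer {AIRTABLE_TOKEN}"}
    params = {"filterByFormula": f"{{Brand}} = '{brand}'", "maxRecords": 1}
    resp = requests.get(url, headers=headers, params=params, timeout=30)
    resp.raise_for_status()
    records = resp.json()["records"]
    if not records:
        raise ValueError(f"No variable record found for brand: {brand}")
    # e.g. {"BrandInstructions": "...", "KeyPoints": "...", "BrandGuidelines": "..."}
    return records[0]["fields"]


# One workflow serves many brands: only the lookup key changes.
variables = fetch_brand_variables("Brand A")
```

The point is that the workflow itself never changes; only the record it pulls from the database does.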

This workflow contains a series of variable look-ups before the AI phase (in green) writes key parts to a document. The final part handles some extra scraping that's required.

If I click into an AI step, the prompts are made up entirely of variables, nothing hard-written in. These variables are stored in Airtable, where team members can edit them:

The system prompt contains the variables BrandInstructions, KeyPoints and BrandGuidelines, which are all long strings of text that combine to form a prompt.

The user prompt then includes variable data from an input form (blue - which triggers the automation) and some content other AI steps have written (green). I also inserted some small variables into the workflow to handle the outputs better (purple - also controlled by the purple spanner icon in the diagram).
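For the AI step itself, the same idea looks roughly like this in code: the system prompt is assembled entirely from the stored variables, and the user prompt from the form inputs plus whatever an earlier AI step produced. The form field names and the model choice are assumptions for the sake of the example - the real thing sits inside a Make AI module, not code.

```python
# Illustrative sketch of the AI step: every piece of the prompt is a variable.
# Field names (topic, notes) and the model choice are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def run_ai_step(variables: dict, form_inputs: dict, earlier_output: str) -> str:
    # System prompt: instructions pulled from the variable library in Airtable.
    system_prompt = "\n\n".join([
        variables["BrandInstructions"],
        variables["KeyPoints"],
        variables["BrandGuidelines"],
    ])
    # User prompt: the form submission that triggered the run, plus prior output.
    user_prompt = (
        f"Topic from the input form: {form_inputs['topic']}\n"
        f"Notes from the input form: {form_inputs['notes']}\n\n"
        f"Content drafted by the previous AI step:\n{earlier_output}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # whichever model the workflow is configured to use
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content
```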

This system is expected to have a total of 126 different input variables across a company, with the majority of those coming from the system prompt library. And because of the diversity of possible inputs on the form, the variation it can handle is effectively unlimited.

So basically, once you're running your workflows via variables stored in a database, everything becomes much more scalable. Of course, if you've done much development before, this will be completely obvious. But for the majority who haven't, it's an incredibly useful thing to know.
