Bootcamp

From idea to product, one lesson at a time. To submit your story: https://tinyurl.com/bootspub1


Content Strategy for AI: Three Things I Got Wrong

Collage of robotic hands writing on sticky notes, symbolizing AI’s role in creative and planning tasks. (Alt text written by Bing’s COPILOT AI, checked by a human).
Midjourney’s attempts at a headline image for this article. The prompt: “/imagine a robot hand writing out sticky notes in blue, yellow, and pink with a sharpie marker in the style of a 3d cinematic animation. use this image: https://s.mj.run/rlRvrXeLEuI to get colors and layout”

Podcast transcripts. Articles. Newsletters. Video transcripts. Email templates. Years of them.

My first thought after OpenAI introduced Custom GPTs was: “IMMORTALITY!!!! I’ll shove everything I’ve ever created into a Custom GPT and create a smarter-than-me digital twin.”

And within a few weeks of going down the AI rabbit hole, I realized that it is not so simple. And although that’s frustrating to me as a content creator, as an OOUXer (an information architect / content strategist / business analyst / UXer) — this not-so-simpleness points to a massive opportunity.

If you are a UX designer or digital product person and have not yet been tasked with designing some sort of customized generative AI solution that “sits on top” (still figuring out the lingo of this space, feel free to correct me) of a large language model — it’s just a matter of time. More and more of us will start to take on a Doctor Frankenstein-like role: cobbling together inanimate parts to design a simulation of a human entity. And those who know how to wrangle content, data, and words are uniquely positioned to do so. Hello, Object-Oriented UX.

For the past two months (admittedly not a long time), I’ve been taking on this role as my full-time job, exploring how my OOUX skills translate to “AI Doctor-Frankensteining” — specifically in creating custom GPTs on OpenAI’s platform.

Here are three big mistakes I’ve made and what I’ve learned.

(I’ll be talking in more depth on all of these points and more on May 16th, 2024. Register for the event or catch the recording.)

ONE: I’m not “training” the model. So what am I doing?

Before we go further, let me clear up some lingo that I was getting wrong for a few weeks and that I still hear designers and folks on YouTube get wrong.

If you are customizing a chatbot that is based on an LLM like GPT-4, you are not training it, as apt as that word feels. Training an LLM costs many millions of dollars and happens rarely, perhaps once a year or every couple of years, depending on the company and the LLM. Pre-training, training, and then refining LLMs are activities for machine-learning engineers and data scientists.

If we stick to the Dr. Frankenstein metaphor, we might say that the LLM is the lifeblood of an AI agent (or what I will call a “bot”). So while the language training and background information flow through a bot, we designers are the ones who determine the shape of the bot. And to do that, we figure out the goals of the users and the business, model the problem domain, and design the interface to be as valuable as possible to both. Sound familiar?

When it comes to determining the shape of the bot, it took me a while to wrap my head around not just the lingo, but what the hell I am actually doing. Seriously, I was that lost. So, I’m writing this because there’s a chance you might be, too.

As of this writing, I can simplify “what the hell I am doing” to two main parts:

1.1: Knowledge Base Design

What information will the bot “know” and prioritize over the LLM? Designers will need to answer this question by crafting a custom, curated knowledge base. The practice of doing this, according to ChatGPT itself, is called Knowledge Engineering and/or…

wait for it…Information Architecture.

But, really, what it feels like is Content Strategy. And Content Strategy for AI is a whole new ballgame, just as content strategy for a marketing website needs to be approached differently from content strategy for an enterprise SaaS application. More on this later.

1.2: AI UI Design

How will the bot prompt the user into an effective prompt? What are the format, voice, tone, and requirements of that output? How can users react to the output of the prompt?

In other words…what’s the user interface???

As of now, in the custom bots that we are building atop existing LLMs, that user-interface is mostly text-based. Crafting that user-interface involves writing really clear instructions on how the bot should deliver the fruits of the knowledge base:

  • How the bot should prompt the user to prompt the interface.
  • How the bot should respond to prompts (with answers, follow-up questions, or handoffs to other resources).
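For instance, a chunk of those written instructions might look something like this (a hypothetical snippet I made up for illustration, not the actual instructions behind any of my bots):

```
When a session starts, greet the user and offer three starter prompts,
e.g. "Audit my object map," "Explain core content," "Draft my noun foraging plan."
If the user's request is ambiguous, ask one clarifying question before answering.
When presenting an object map, always mark objects in blue, core content
in yellow, and metadata in pink.
If a question is out of scope, hand off to the relevant human resource.
```

Notice that this is pure writing: the “interface” is a set of behavioral rules delivered as prose.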

“Prompt engineering” is being touted as the new power skill of the future, a skill required of anyone using generative AI. Right now, without a good prompt, a user of AI won’t get great results. Thus, prompt engineering is a thing. For now. My theory is that this is only emerging as a fad skill because most AI agents haven’t yet been designed for easy user interfacing. Users currently have to contort themselves with prompt formulas to get good output.

So what happens when UXers come in and design a great prompt for the user’s prompt? 🤯 Stay with me for an example.

One of the most interesting “prompt designs” I’ve seen is from Krea.AI, a narrow AI for image generation. The user crafts their prompt via an interface of shapes, colors, squiggles, and text.

A split-screen image from KREA AI contrasting a rough, digital illustration and a more realistic, AI-generated version of the same illustration: on the left, a stylized pink mug with blue contents beside a green plant on a yellow background; on the right, a realistic pink mug with blue liquid on a white table with a green plant, reflecting the first, rough illustration. (Alt-text written by Bing COPILOT AI and edited by a human).

My prompt consists of:

  • the sketch on the left panel
  • The text at the bottom: “a pink retro coffee mug with blue liquid inside a yellow room on a white table with a bright green plant in the background”
  • The style selection on the right for “photo”

And the prompt design (input design?) that Krea came up with is a canvas UI, text field, and a bank of fancy radio buttons for style selection.

🙄Side note for the OOUXers: As amazing as this is, do you see the interface problem with this? Scroll to the bottom to hear my OOUXy griping. 🙄

I predict we designers will start designing more and more intuitive prompt-and-feedback interfaces.

We will design modular building blocks of interface that the bot can deliver to users as needed, to help prompt them into amazing prompts for the bot. We’ve already seen this in non-LLM chatbots with “quick response buttons” — and Krea is a great example of the kind of UI creativity that we will be seeing more and more of.

My hot take? Prompt “engineering” placed on the shoulders of the AI user is a stopgap while UX catches up with figuring out what it means to design for AI.

But coming back to the here and now (May 2024) and the act of UXing a custom text-based GPT, designing the user interface means a ton of writing.

Writing instructions. And currently, at least in ChatGPT’s interface, adding these instructions into the Create panel means those instructions disappear session-to-session.

A screenshot of a chat interface where a user configures and previews a digital assistant’s behavior. The left side shows the configuration with ‘GPT Builder,’ and the right side displays the assistant ‘Zoe’ responding with album, playlist, and track details, all highlighted with emojis to indicate objects, core content, and metadata. (Alt-text written by Bing COPILOT AI and edited by a human).

For example, while testing Zoe, my soon-to-be available OOUX Assistant (sign up to get notified), I noticed that she was getting OOUX-color-coding wrong. So, I reiterated the importance of this color-coding with some instructions on the Create panel. Unfortunately, when I go back to this interface later, those instructions won’t be there. If I want to reuse those instructions (say for another version of Zoe on a different platform or for another bot), I need to save those instructions somewhere else.

A screenshot of an Airtable entry highlighting an AI instruction called ‘Core Content Clarification’ with details on using a yellow emoji to mark unique identifiers such as usernames and phone numbers, alongside tags showing which AI bots this instruction has been used on. In this example, the tag reads ‘Zoe-instructed.’ (Alt-text written by Bing COPILOT AI and edited by a human.)

Hence, a database of instructions housed in Notion. Mapped to the various bots that need those instructions. And status tags.

Interestingly, because of the written nature of this kind of UI design…the saving, managing, and maintaining of these instructional snippets sure does feel like content strategy. Almost — in a way — a new flavor of content governance.

So in summary, I’m not “training” a Custom GPT. It took me a beat to figure out what I am actually doing, which is knowledge base design, plus some flavor of instructional design that results in the AI’s UI. Together, I’m heavily leveraging an OOUXy Content Strategy skill set. Phew.

TWO: Database over PDFs

In the beginning, my mental model of a custom GPT’s knowledge base was…naive. I basically thought that I was creating a mini language model atop the large language model. I understood that LLMs were trained on massive raw data sets — basically “the internet” and with that massive amount of information, an LLM could get uncannily good at predicting the next-best word in a series of words. So…I thought I just needed to create, like… a cute little custom language model…with my content. 🤣

So, I set out to download TXT files from podcast and video transcripts. I started curating my newsletters and articles and outputting them as PDFs, and then combining those into larger PDFs (to get around ChatGPT’s 20-file limit). I created a little system for myself in Notion to track my progress in exporting and combining and uploading my bot’s knowledge base. But the results were not great. And after spending a week with some AI experts based in Mexico City (Hi Caro and Micho!) — I finally started to understand why this is no bueno.

A custom knowledge base works differently from an LLM. A bot uses a custom knowledge base via RAG (retrieval-augmented generation). So, unlike the fodder for an LLM, which can be large and messy and redundant, you want your knowledge base to be succinct, tidy, and as non-duplicative as possible.

Instead of exporting all my old newsletters and podcast transcripts to a massive PDF…I ended up creating a Notion database, giving structure to unstructured content.

Yes, I am OOUXing my OOUX content to create a better OOUX bot.

This structuring of concepts into tidy databases is critical for a bot to “retrieve” quality information for its “augmented generation.”
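To make the retrieval side concrete, here is a deliberately toy sketch of the RAG pattern in Python. The knowledge base is structured records (think rows of a database, not one giant PDF), the “retrieval” is a crude word-overlap score standing in for real semantic search, and the retrieved entry is injected into the prompt before generation. All entries and names are hypothetical; this is not how OpenAI implements retrieval under the hood.

```python
def score(query, text):
    """Crude relevance score: count of shared lowercase words.
    (Real RAG systems use embeddings; this stands in for the idea.)"""
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve(query, knowledge_base, top_n=1):
    """Return the top_n most relevant knowledge-base entries."""
    ranked = sorted(knowledge_base,
                    key=lambda entry: score(query, entry["content"]),
                    reverse=True)
    return ranked[:top_n]

def augment_prompt(query, knowledge_base):
    """Build the prompt the LLM actually sees: retrieved context + question."""
    context = "\n".join(e["content"] for e in retrieve(query, knowledge_base))
    return f"Use this context:\n{context}\n\nQuestion: {query}"

# A tidy, non-duplicative knowledge base: one succinct record per concept.
knowledge_base = [
    {"concept": "Junction object",
     "content": "A junction object connects two objects in a many-to-many relationship."},
    {"concept": "Core content",
     "content": "Core content is the essential content that defines an object instance."},
]

print(augment_prompt("What is a junction object?", knowledge_base))
```

If two records say nearly the same thing, they compete for retrieval and can crowd out better context, which is exactly why succinct, non-duplicative records beat a redundant PDF dump.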

Screenshot of a digital interface titled ‘KNOWLEDGE BASE IDEAS’ with a list of topics about OOUX design, including challenges, applications, and comparisons with UX approaches. (Alt-text written by Bing COPILOT AI and edited by a human).
A succinct database of concepts is taking the place of this “curated” PDF which took the place of hundreds of uncurated articles and newsletters.

Check out the event on May 16th for a deeper look under the hood of this structure (and get my Notion template for planning your AI ecosystem!).

THREE: Planning a Bot ecosystem requires serious OOUXing (and a very important junction object)

Most organizations will require several bots, or “Narrow AIs”, designed for specific tasks and audiences.

Rewired is a small business, and I’m working on four bots. FOUR!

😀 Christine: A free Q&A plus sales agent

🤓 Zoe: A paid OOUX assistant

💪 Vana: An internal virtual assistant

📚 Libra: An OOUX Masterclass librarian

These bots will live in different places, have various levels of custom development, and serve different audiences. They will have different instructions and their knowledge bases will consist of different sources.

If we begin to OOUX this (objects in all caps!), we have BOT as an object (with instances of Christine, Zoe, etc). We also have KNOWLEDGE SOURCE as an object (with instances like the OOUX Newsletter, the OOUX Podcast, the OOUX Masterclass, etc).

At first I was trying to track status, craft the maintenance plan, and even assign tasks on the KNOWLEDGE SOURCES. But the problem was: all of these attributes are specific to the BOT. The maintenance plan of the “OOUX Podcast” is different for Zoe and Libra.

The status is different for each bot… I spent hours in a Notion system that was inherently flawed: I needed a (quite literal) junction object that represented the CONNECTION between a SOURCE and a BOT.
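Sketched as data, that three-object fix looks something like the following (a loose illustration of the model, with hypothetical field names; it mirrors my Notion setup only roughly). The point is that status and maintenance plan hang off the CONNECTION, not the SOURCE:

```python
from dataclasses import dataclass

@dataclass
class Bot:
    name: str  # e.g. "Zoe", "Libra"

@dataclass
class KnowledgeSource:
    name: str  # e.g. "OOUX Podcast"

@dataclass
class Connection:
    """The junction object: one row per bot-source pairing.
    Bot-specific attributes live here, not on the source."""
    bot: Bot
    source: KnowledgeSource
    status: str            # e.g. "uploaded", "needs curation"
    maintenance_plan: str  # how THIS bot keeps THIS source fresh

zoe = Bot("Zoe")
libra = Bot("Libra")
podcast = KnowledgeSource("OOUX Podcast")

# The same source carries a different status and plan per bot:
connections = [
    Connection(zoe, podcast, "uploaded", "re-curate transcript summaries quarterly"),
    Connection(libra, podcast, "needs curation", "index full transcripts monthly"),
]
```

Trying to store `status` directly on `KnowledgeSource` would force one value to serve every bot, which is the flaw that cost me those hours in Notion.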

On the left, my original single-object object map. On the right is the 3-object system that evolved through lots of trial and error. Objects in blue, core content in yellow, metadata in pink.

Oversimplifying the organization and tracking of KNOWLEDGE SOURCES and their CONNECTIONS to BOTS was only one of my many mistakes in planning my AI content strategy. Currently, my Bot Ecosystem Notion Hub has grown to nine objects! Like I said at the beginning…customizing a really effective, narrow AI turned out to be not so simple. A headache for me as a business owner, but a massive opportunity for me as an OOUX-nerd who can help business owners alleviate this headache.

Relinquishing control and leveling-up

As information went from printed documents to the internet, designers had to relinquish control. And we also gained power — our ‘designs’ could be accessed all over the world and updated instantly for everyone.

In 2012, we had to relinquish more control of pixel-perfection with the rise of responsive design. And yet again, we gained power — the ability for our designs to adapt across devices.

Recently we relinquished even more control with the popularity of design systems. And again, we gained power in the ability to move our efforts from repetitive interface design to higher-order problem solving.

With AI, we will need to relinquish more control. Every user interaction with our designs will be personalized to the end user and their unique prompts. Predicting every output will be impossible. But like every loss of control before, this coming evolution of our practice will also give us exciting new power and help us move onto even more exciting problems to solve.

Let me know in the comments how these initial thoughts on design and content strategy for AI landed with you. And I hope to see you on May 16th for OOUX for AI and AI for OOUX! Remember, if you are reading this after the event, the recording will be available!



Written by Sophia V Prater

UX designer, OOUX Instructor, and Chief Evangelist for Object-Oriented UX | Download the OOUX Launch Guide! OOUX.com/resources/launchguide
