
From Vibes to Results: What AI Prompting Can Teach PMs About Better Requirements

  • Writer: Andrew A. Rosado Hartline
  • Aug 5
  • 9 min read

Image created using Imgflip.com. From The Devil Wears Prada (2006). Satirical use for commentary under fair use.


Because yes, someone will ask ChatGPT to “make a full product roadmap” in one sentence. And they’ll be completely shocked when it doesn’t go well!



Why Data Feels Familiar

The reason I’m writing this isn’t because I’m an AI expert or some Oracle of Requirements. It’s because I keep seeing the same thing happen—whether it’s a PMO rollout or a ChatGPT experiment. People want and expect magic to happen every single day.

But they send me a Slack message and say:

“Build the thing.”

Sound familiar? The evolution we’re witnessing in AI, from vibe coding to context engineering, mirrors exactly what we’ve been trying to fix in project management for years: people don’t know what they want, they don’t communicate it clearly, and then they get frustrated when the results don’t match the fantasy, or what they saw someone do on LinkedIn.

This isn’t just an abstract parallel—it’s a straight-up reality check. Let's get into it.


What the Heck is “Vibe Coding”?

Coined by Andrej Karpathy, vibe coding is the practice of giving vague prompts like:


“Build me an app that makes a million dollars.”

…and expecting a full-stack miracle while everybody claps.


It’s fast. It’s dirty. It’s kind of thrilling. And sometimes it works, just like duct tape on a burst pipe.


We’ve all done it. I’ve done it. I vibe-coded my way into fun HTML mini games for my Dungeons & Dragons campaigns (Shocking, I know, I'm a Game Master). It worked. It also broke when I asked it to do something slightly outside the first prompt. It’s all vibes until you want structure. (Shameless plug: Check out my GitHub to see what I’ve been vibe coding)


As Karpathy himself put it:

“Prompting is becoming the new programming. And right now, people are hacking their way through with vibes.” —Andrej Karpathy, 2023

I've ridden those vibes and it's been a learning curve. The most important lesson came back in March 2023, when I wrote my first Python script for work to clean up data and concatenate multiple .csv files. While it's neither groundbreaking nor rocket science, it saved me many hours of work and a lot of frustration. And the lesson is this: the AI doesn't know what you don't know, but if you give it the right context, you can figure it out by asking the right questions.
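For the curious, a script like the one I'm describing can be surprisingly small. This is a minimal sketch (the column cleanup and file layout are assumptions, not my original script) of cleaning and concatenating a folder of .csv files with pandas:

```python
from pathlib import Path

import pandas as pd


def concat_csvs(folder: str, pattern: str = "*.csv") -> pd.DataFrame:
    """Read every CSV in `folder`, tidy it, and stack the rows."""
    frames = []
    for path in sorted(Path(folder).glob(pattern)):
        df = pd.read_csv(path)
        df.columns = df.columns.str.strip().str.lower()  # normalize headers
        df = df.dropna(how="all")                        # drop fully empty rows
        df["source_file"] = path.name                    # keep provenance
        frames.append(df)
    return pd.concat(frames, ignore_index=True)
```

Nothing fancy, and that's the point: a few explicit cleanup steps beat hours of manual copy-paste.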


The Shift to Context Engineering

Nowadays, the approach is slightly different but better.


Context engineering means layering structured, specific, and intentional information into your prompt. It’s about setting the scene, providing expectations, and naming what matters.

One of the best AI prompting guides out there is Anthropic’s Prompting 101 because it's essentially a requirements gathering template in disguise. They just use AI-native terms. But if you squint, you’ll recognize them:


Prompting Technique → PM Equivalent

  • Role + Task Description → Project Charter / Use Case

  • Input Context → Stakeholder Notes / Background Docs

  • Detailed Instructions → Work Breakdown Structure / User Stories

  • Output Examples → Wireframes / Mockups / Past Projects

  • XML or Markdown Delimiters → Traceability Matrix / Tagging

  • Step-by-Step Reasoning → Implementation Plan

  • Output Format Specs → Deliverable Requirements

  • Final Reminder / QA Check → Acceptance Criteria
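To make the mapping above concrete, here's a minimal sketch of a prompt assembled section by section, with each piece labeled by its PM equivalent. Every project detail in it is a hypothetical placeholder:

```python
# Each section corresponds to a row in the mapping above.
# All project details are hypothetical placeholders.
prompt = "\n\n".join([
    # Role + Task Description -> Project Charter / Use Case
    "You are a senior project coordinator. Draft a Q3 status summary.",
    # Input Context -> Stakeholder Notes / Background Docs
    "<notes>Stakeholders flagged two schedule risks last sprint.</notes>",
    # Detailed Instructions -> Work Breakdown Structure / User Stories
    "1. Summarize progress.\n2. List open risks.\n3. Recommend next steps.",
    # Output Format Specs -> Deliverable Requirements
    "Return a three-section report in plain prose, under 200 words.",
    # Final Reminder / QA Check -> Acceptance Criteria
    "Before answering, confirm every risk you list appears in <notes>.",
])
```

Read it back as a PM and it's just a tiny requirements doc: charter, background, WBS, deliverable spec, acceptance criteria.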

As PMs, we've been doing this all along; we just call it something else. So here's my invitation to you, dear reader:


  • If you struggle with requirements gathering, try viewing it from a different angle: like training an LLM.

  • If you use AI in your day-to-day work and struggle to get the outcomes you want, try treating it like a project.


This isn’t just about AI. If you’ve ever had to write a requirements doc, define an MVP, or gather stakeholder needs, you’ve already been "prompt engineering". This article is about making those practices more repeatable, more structured, and more outcomes-focused.


Real Talk—The Project With No Data

Let me tell you about one of my projects.


The client wanted a project management platform configured with dashboards, custom workflows, integrations—the works. They’d seen our demo, fallen in love, and signed the contract. The typical life cycle.


Then they handed me… five projects. Just five. No metadata. No status updates. The templates were broken (and incomplete) Excel sheets. And they asked why their dashboards didn’t look like the one we showed them.


My response?

“Your dashboard is empty because your work is undocumented.”

That was a hard conversation. But it was necessary. At times, it seems like the expectation is that dashboards (or most projects) are built from slide decks and optimism, not actual data. And this is where your coaching skills as a PM are put to the test. You need to bring people along with you so they understand what must be done, what you need from them, and what they need in order to "do the thing".


If your source data looks like it was cobbled together during a fire drill, don’t expect polished insight. You can’t optimize what doesn’t exist.


Same goes for AI. People prompt a chatbot, get a bad answer, and blame the model. But like with dashboards, what you feed the system matters. Garbage in, garbage out. And yet we're surprised when the insights don't sparkle ✨


XML for Humans (Yes, You Too)

Let’s get technical but just for a moment.


In Prompting 101, Anthropic recommends using XML or markdown to clearly structure inputs. Before you panic: you’re not coding. You’re labeling.


Stay with me. I know ‘XML’ makes most non-devs want to crawl under their desks and weep into their Asana boards. But this isn’t coding, it’s just telling the model what’s what.

Think of it like naming your folders so your desktop doesn’t turn into an archaeological dig site. So when you're prompting your AI:


Instead of saying: “What did it say?”


Try instead: What did <q3summary> say?


Because you already provided the necessary context. For example, here is what the Q3 marketing report would look like in the chat if you had copy-pasted it:

<q3summary>
The Q3 marketing report...
</q3summary>

Now the model knows which part is the summary. You can reference it later without pasting it again, and it won’t get confused and start summarizing the wrong thing. This dramatically reduces hallucinations, boosts precision, and saves tokens, especially when you have status updates, meeting notes, action logs, and other reports in the same chat, each labeled accordingly.


This works because explicitly labeling content anchors that information within the model’s working context. That clarity lets the model recognize and reliably reference the correct portion of the conversation in subsequent steps, without you needing to repeat it, which explains the benefits above. Check out the Further Reading section at the end for more on what I read to write these last few sentences.
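If you ever assemble context programmatically, the labeling step is one tiny helper. This is a sketch (the tag names and document text are hypothetical):

```python
def tag(name: str, text: str) -> str:
    """Wrap a document in XML-style delimiters so the model can
    reference it later by name (e.g. 'What did <q3summary> say?')."""
    return f"<{name}>\n{text.strip()}\n</{name}>"


# Label each artifact once, up front; refer to it by tag afterwards.
context = "\n\n".join([
    tag("q3summary", "The Q3 marketing report shows..."),      # hypothetical text
    tag("meeting_notes", "Action items from Tuesday's sync:"),  # hypothetical text
])
```

One function, and every artifact in the chat has a name you and the model both agree on.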


You don't have to code, but you do have to structure your information, your context, and your data. You're not a developer, but you are an architect of context. And by providing a structured prompt, you're increasing clarity, much like you would with a Requirements Document or a User Story.


The Prompting Checklist for PMs


Prompting, like scoping a deliverable, works best when you follow a repeatable process. Here’s how I now treat prompting, exactly like building a requirements doc:


✅ Project Manager’s Prompting Checklist


  • Define the Goal: What do you want the AI to do?

  • Include Inputs: Attach the relevant files or text.

  • Provide Instructions: Bullet or number them.

  • Add Examples: What should the output look like?

  • Use Delimiters: Wrap key sections in XML or markdown.

  • Request Step-by-Step Output: Make the logic transparent.

  • Specify Format: Table? JSON? Paragraph?

  • Confirm Completion: Add a final instruction like “Confirm when done.”


If you wouldn’t send it to your dev team without clarity, don’t send it to your LLM with vibes. Structure sets the tone, and that’s how you turn vibes into method.


When we talk about context, it’s not just about background info or setting the AI’s “role.” True contextualization includes how the output will be used. It’s like designing for a persona in Agile: you're not just building software, you're tailoring it for someone’s specific experience and goals. Context engineering works the same way.


So beyond data and instructions, you should also include usage expectations. Are you asking for research, a draft, a presentation-ready slide, or a data transformation? Templates and examples aren’t fluff; they’re usage signals. They help the model understand what you're trying to build, not just what you're asking for.


And that design lens, that product thinking, is what reduces hallucinations and improves relevance. You're not just prompting a chatbot, you're scaffolding a result that’s going to live somewhere real.


The Rise of Data Debt

Just like we talk about technical debt, we need to start addressing data debt.

All of this missing context? That's data debt: what you accrue when you rely on vague assumptions, incomplete data, or “we’ll figure it out later.” It kills velocity. It breeds scope creep. And in AI, it leads to hallucinations and failures.


You clear data debt by:


  • Clarifying the purpose of your prompt or deliverable

  • Structuring and labeling inputs

  • Reiterating goals midstream

  • Updating and refining templates over time


Data debt is sneaky. You don’t feel it until the fourth meeting where everyone’s pretending they understand the slide deck no one contributed to, or when the "shiny new tool" effect begins to lose its luster. Worse yet, systemic issues often discourage people from engaging at all, so they retreat to their own spreadsheets, their old tools, or simpler habits that ‘just work.’ That's why context and process must be shared, supported, and sustained.


While Organizational Change Management and People Management are not strictly the subject of this post, it all comes back to them. When there is no shared vision that people have bought into, or when people don't have the tools, skills, or support to do their job, the last thing they will do is input data; they'll simply get by and make sure things don't fall apart.


Local LLMs and the PM’s Playground

Of course, all this talk of clean inputs and structured context isn’t just theory. Here’s what it looks like in my world: Yes, I’m that guy using an offline GPT to refactor requirement templates on the weekend. Nerds contain multitudes. And no, I still don’t know how to code.


Let’s say I need to draft a work breakdown structure for a feature rollout. I’ll use my local assistant trained on our SOPs, templates, and past docs to generate a first draft. Then I’ll refine. This is the Human in the Loop concept that many experts suggest. It keeps me in control of quality and ensures the output aligns with actual team norms and expectations.

No data leaks. Far fewer hallucinations. Just structured, iterative thinking. And for the record, it’s rescued more than one Monday morning from turning into a status report disaster.
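The workflow above can be sketched in a few lines. This is a hypothetical sketch, not my actual setup: the SOP excerpt, template, and feature name are placeholders, and the commented-out call assumes an Ollama-style local server on localhost:11434:

```python
import json
import urllib.request


def draft_wbs_prompt(feature: str, sop_excerpt: str, template: str) -> str:
    """Assemble a structured prompt for a first-draft WBS.
    All three inputs are hypothetical stand-ins for real docs."""
    return "\n\n".join([
        "You draft work breakdown structures for our PMO.",
        f"<sop>{sop_excerpt}</sop>",
        f"<template>{template}</template>",
        f"Draft a WBS for: {feature}. Follow <template>; cite <sop> where relevant.",
    ])


prompt = draft_wbs_prompt(
    "self-serve reporting rollout",
    "All rollouts require a pilot phase.",
    "Phases: Discover, Build, Pilot, Launch.",
)

# Send it to a local model (assumes an Ollama-style endpoint; uncomment to run):
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=json.dumps({"model": "llama3", "prompt": prompt,
#                      "stream": False}).encode(),
#     headers={"Content-Type": "application/json"})
# draft = json.loads(urllib.request.urlopen(req).read())["response"]
# Then the human-in-the-loop part: you review and refine `draft` yourself.
```

The model produces the first draft; the human-in-the-loop review is what keeps the output aligned with real team norms.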


AI is a Project. Prompt Like a PM

No PM would tell their team ‘Just make it intuitive’ and expect usable results, so why prompt that way with AI? If you do… Please reconsider.


When you prompt like a PM, you unlock far more than clever answers—you build toward reliable, reusable outputs. Treat your prompt like a backlog item. Refine it. Groom it. Test it.

You wouldn’t let a team build a product from a single vague sentence. So don’t prompt like that either.


Instead:

  • Think like a requirements analyst

  • Prompt like a product owner

  • Build like an engineer who knows bad data means broken systems


The future isn’t just AI-first. It’s context-first. And if you can master that, whether in a meeting or a prompt, you’ll get results that actually make sense.


From Prompts to Process to Success

There’s a YouTube Short by albertatech that reminded me why AI tools fail and users get frustrated. The answer:


“Because you used it like a magic wand instead of a framework.”

There is no AI model that will save you from your own bad documentation. And no, threatening your AI is not the solution, no matter how many times people suggest it. There are more constructive ways to achieve results! Which brings us back to the real point: tools only work as well as the process they serve.


Even the most advanced AI tools can’t overcome the limitations of rigid workflows or outdated formats. If the final deliverable has to follow a pre-existing structure, like a mandatory wall-of-text slide, a screenshot from a spreadsheet, or the poorly formatted PowerPoint template your manager forces everyone to use (or else!), I can assure you AI is not equipped to output like that. (And it shouldn't be; we mustn't go back!) But that doesn’t mean AI can’t help. In fact, this is where it shines: helping you work faster and think more clearly, even inside legacy constraints.


The value of AI isn’t in bypassing these constraints, but in helping you work more efficiently within them, while still elevating the clarity, consistency, and quality of your message.

The truth is: the same muscle you use to gather project requirements is the one you use to prompt an LLM. Train it.


Use your frameworks. Label your inputs. Give examples. Create context.

Great results don’t come from vibes. They come from context, structure, and clarity.


Because in tech and in teamwork, success isn’t about vibes. It’s about the data you shaped, the questions you asked, and the clarity you created along the way.

📚 Further Reading



© 2025 PM Lifehacks. All rights reserved. Planned and executed with passion.
