Getting Good at AI: A Marketer’s Guide to Prompt Engineering

Summary

AI is everywhere—and marketers are moving beyond public tools like ChatGPT and Gemini. Martech platforms are now embedding generative AI directly into their systems, enabling more advanced analysis using a brand’s own data. But great results still depend on asking the right questions. That’s where prompt engineering comes in—helping marketers guide AI with clarity, structure, and purpose to unlock insights that actually drive decisions.

More than 57% of advertisers now trust AI for tasks like ad investment and optimization—up from just 33% last year. The shift is happening fast. Teams aren’t just testing the waters anymore—they’re deploying generative tools across content, media, and measurement.

But flipping the AI switch doesn’t mean it all works out of the box.

It’s one thing to trust AI. It’s another to make it useful.

That gap between AI optimism and AI output? It’s where most marketers are sitting right now. The tools are in place. The ambition is there. But the muscle memory? Not yet. And while AI fluency is going to require a mix of skills—data literacy, workflow integration, creative augmentation—the first one that actually gets used day to day is prompting.

If you can’t ask it well, you won’t get anything worth using.

And we’re already seeing this in action. As marketers start adopting decision-support tools like Celeste AI—Skai’s insight-focused agent designed to help diagnose performance shifts and surface campaign learnings—they’re quickly learning that even the smartest systems still rely on clear direction. Prompting isn’t just helpful. It’s required.

This post is about getting good at that part.

What prompt engineering really is—and why it matters

Prompt engineering isn’t some new technical specialty. It’s just the practice of writing clear, specific, and structured instructions to help AI produce useful, accurate, and relevant outputs. Sounds simple enough. But as anyone who’s tried can tell you—it’s a craft.

Only 13% of marketing teams feel fully equipped with the skills needed to operate AI tools effectively. Meanwhile, 96% of marketers say they have generative AI in place or plan to roll it out within 18 months. There’s a wide gap between adoption and impact—and that gap often comes down to how the AI is being used.

What makes prompting especially tricky is that it doesn’t feel like a technical problem. It feels like a communication one. Which is exactly what it is.

Done well, prompting saves time, sharpens focus, and helps teams get to better outputs faster. Done poorly, it leads to vague analysis, regurgitated responses, or—worse—confidently wrong conclusions. And that’s not just an annoyance. It’s a real risk.

So how do you get better?

Let’s walk through two proven approaches that help.

The TRIM method: structure your ask

Let’s be honest: when marketers first start using gen AI, most of their prompts sound like small talk with a smart intern.

“Can you give me some insights on my campaigns?”
“What’s going on with Sponsored Products?”
“Help me figure out what to do next.”

Those kinds of prompts might work if you’re lucky. But they’re not clear. They’re not structured. And more often than not, they’ll return an avalanche of vaguely relevant information that sounds like a regurgitated dashboard.

And this is exactly what Skai clients are discovering as they explore tools like Celeste. It was built to support marketers with one of the hardest, most overloaded parts of the job: decision-making. Campaign diagnostics, performance investigation, insight workflows—those are the things it’s really good at. 

But even with a purpose-built system like Celeste, great results don’t come from “just asking.” They come from guiding the AI with clarity and intent.

That’s where the TRIM method comes in. It helps transform vague queries into structured requests that actually match what you’re trying to get done.

Here’s the breakdown:

  • Task-oriented. What are you trying to accomplish? Analysis? Prioritization? Suggestions? Be explicit.
  • Relevant context. Don’t assume the AI knows what matters. Add brand names, date ranges, engine types, or dimensions that narrow the scope.
  • Intent explicit. Are you trying to investigate a drop? Flag top performers? Set up a next-step plan? Say that.
  • Measurable criteria. What’s the threshold for action? A 10% drop? Below average ROAS? Share of Voice = 0? Be specific.

Here’s how that plays out:

❌ “Give me campaign insights”
✅ “Summarize Sponsored Products performance for the past 30 days by product category. Highlight campaigns where ROAS dropped more than 15% compared to the prior 30 days.”

Now we’re getting somewhere. That kind of clarity allows an AI tool like Celeste to dig into relevant dimensions, frame comparisons properly, and structure the output with actual decision value.

And it’s not just helpful for you: it reduces hallucinations and ambiguity in the model’s response. As Grewal notes in Harvard Business Review, one of the most effective ways to improve generative AI output is to use structured prompts and to augment models with clearly framed instructions.
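To make that structure concrete, here’s a minimal Python sketch of the idea. Nothing below is a Skai or Celeste API; TrimPrompt and its fields are names we made up purely to show the four TRIM parts being composed into one instruction.

```python
from dataclasses import dataclass

@dataclass
class TrimPrompt:
    """The four TRIM ingredients of a structured prompt (illustrative only)."""
    task: str                 # T: what you want done (summarize, compare, flag)
    relevant_context: str     # R: scope (brand, date range, engine, dimensions)
    intent: str               # I: why you're asking
    measurable_criteria: str  # M: the threshold that makes a result actionable

    def compose(self) -> str:
        """Join the four parts into one clearly framed instruction."""
        return (
            f"{self.task} {self.relevant_context} "
            f"The goal is to {self.intent}. "
            f"Flag anything where {self.measurable_criteria}."
        )

# Rebuilds the Sponsored Products example from above:
prompt = TrimPrompt(
    task="Summarize Sponsored Products performance",
    relevant_context="for the past 30 days by product category.",
    intent="spot categories losing efficiency",
    measurable_criteria="ROAS dropped more than 15% versus the prior 30 days",
)
print(prompt.compose())
```

Paste the composed string into whichever tool you use. The point isn’t the code; it’s that each TRIM ingredient gets filled in deliberately instead of being left for the model to guess.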

Putting TRIM into practice

Here are three ways to refine prompts with the TRIM method. They’re drawn from popular analyses Skai clients run with Celeste, but they apply just as well to any marketing AI tool you use:

Do this: “Review CTR trends for Amazon campaigns in the vitamins category over the past 30 days. Highlight any campaigns with 20%+ week-over-week growth.”
Not this: “What’s performing best right now?”

Do this: “Compare ROAS and spend for my top five Walmart campaigns tagged ‘Back to School’ versus the previous 30 days.”
Not this: “How are Walmart campaigns doing?”

Do this: “Look at CVR for Sponsored Products across Amazon. Call out anything more than 10% below our brand average.”
Not this: “Anything weird going on with my conversion rate?”

Bottom line: If your goal is precision, the TRIM method gives you a map. Without it, you’re just hoping the AI fills in the blanks the way you would—which it won’t.
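One way to build that habit is to lint your own prompt before you send it. The checks below are hypothetical heuristics invented for illustration, not a Celeste feature: they simply flag prompts that never name a timeframe, a metric, or a numeric threshold.

```python
import re

# Hypothetical heuristics for the R and M in TRIM: a prompt should at
# least name a timeframe, a metric, and a numeric threshold.
CHECKS = {
    "timeframe": re.compile(r"\b(last|past|previous)\s+\d+\s+days\b|\bQ[1-4]\b", re.I),
    "metric": re.compile(r"\b(ROAS|CTR|CVR|spend|revenue|conversion)\b", re.I),
    "threshold": re.compile(r"\d+\s*%", re.I),
}

def trim_gaps(prompt: str) -> list[str]:
    """Return the ingredients this prompt appears to be missing."""
    return [name for name, pattern in CHECKS.items() if not pattern.search(prompt)]

print(trim_gaps("What's performing best right now?"))
# ['timeframe', 'metric', 'threshold']

print(trim_gaps(
    "Review CTR trends for Amazon campaigns in the vitamins category "
    "over the past 30 days. Highlight any campaigns with 20%+ "
    "week-over-week growth."
))
# []
```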

The Pyramid method: build the context layer by layer

Here’s something marketers know intuitively, but often forget when prompting: how you ask matters just as much as what you ask.

Most AI tools will do their best to respond to a broad request. But without specificity, you’ll end up with results that are either too obvious or too chaotic. You’ll get a recitation of averages, or an unfiltered dump of data trends that may or may not help.

The Pyramid method helps you fix that.

Instead of trying to write the perfect prompt in one go, the Pyramid encourages you to build up to it: start with a broad idea, then layer in the details that guide the AI toward a meaningful response. It’s especially useful for investigative work: diagnosing what changed, understanding outliers, and surfacing why things aren’t behaving the way they used to.

That’s exactly the kind of work Skai designed Celeste to support. Think of campaign managers digging into grids, spotting anomalies, and asking things like:

“Why did ROAS dip in Q4 but bounce back in January?”
“Why is this product’s ad spend down 86% even though no one paused it?”
“Why is conversion rate dropping when strategy hasn’t changed?”

These aren’t hypothetical questions: they’re straight from prompt examples shared in Celeste enablement training. But they only work well because they’re built on the Pyramid.

Let’s break that down:

1. Start broad: “Show me performance trends”
2. Add timeframe: “Show me performance trends for the last 30 days”
3. Add key metrics: “Show me revenue and ROAS trends for the last 30 days”
4. Add breakdowns: “…broken down by campaign and product category”
5. Add comparisons and thresholds: “…and highlight campaigns where ROAS dropped more than 20% compared to the previous 30 days”

Each layer makes the response more targeted, and more actionable.
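If you treat each layer as a string you append, the climb is easy to automate. Here’s a minimal sketch, with no particular tool’s API assumed. (One caveat: real refinement sometimes rewrites a layer instead of appending one, as step 3 above swaps “performance” for specific metrics, so the sketch starts from that stage.)

```python
def climb_pyramid(base: str, layers: list[str]) -> list[str]:
    """Return every stage of the prompt, from broad base to full detail."""
    stages = [base]
    for layer in layers:
        stages.append(f"{stages[-1]} {layer}")
    return stages

# The layers from the example above:
for i, stage in enumerate(
    climb_pyramid(
        "Show me revenue and ROAS trends",
        [
            "for the last 30 days,",
            "broken down by campaign and product category,",
            "and highlight campaigns where ROAS dropped more than 20% "
            "compared to the previous 30 days.",
        ],
    ),
    start=1,
):
    print(f"Layer {i}: {stage}")
```

Run the final stage when you want the full answer; run the earlier stages when you want to sanity-check scope before piling on detail.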

And this isn’t just theory. As the same HBR piece explains, companies like Colgate-Palmolive have adopted this layered, prompt-driven approach to guide AI tools with more control and reduce error-prone outputs. It works. And it scales.

Putting the Pyramid into practice

Here are three examples that show how to climb the Pyramid:

Do this: “Show Sponsored Products ROAS trends for the last 30 days for Amazon campaigns in the beauty category. Flag any campaigns that dropped more than 10% vs. the prior 30 days.”
Not this: “What’s going on in beauty?”

Do this: “Compare CVR and spend for Walmart campaigns tagged ‘Holiday’ between November and December. Highlight anything with a sharp increase or drop.”
Not this: “Did any holiday stuff change?”

Do this: “Summarize performance for my branded campaigns on Amazon over the past 30 days. Segment by campaign objective and call out any sub-2% CVR.”
Not this: “Why is performance down?”

Bottom line: The Pyramid method helps you guide the AI into smarter territory, one layer at a time. It’s not about complexity. It’s about building context that leads to clarity.
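Once the final layer is in place, sending the prompt is the easy part. Here’s a sketch that assumes you’re calling a general-purpose model through the OpenAI Python client (openai 1.x); purpose-built tools like Celeste have their own interfaces, so treat this purely as illustration:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The fully layered prompt from the example above:
final_prompt = (
    "Summarize performance for my branded campaigns on Amazon over the "
    "past 30 days. Segment by campaign objective and call out any "
    "sub-2% CVR."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; swap in whatever you use
    messages=[{"role": "user", "content": final_prompt}],
)
print(response.choices[0].message.content)
```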

Final thoughts: the only way out is through

If there’s one truth emerging from marketers using Celeste, or any other AI tool meant to move past generic output, it’s this: you can’t outsource the thinking. You can only support it.

Frameworks like TRIM and Pyramid don’t just help you get better answers. They help you ask better questions. And that’s what separates marketers who are actually accelerating with AI from those still stuck in testing mode.

Prompting isn’t a phase. It’s a foundational capability. And the sooner you start practicing it with purpose, the sooner tools like Celeste start returning the kind of insights you actually want to act on.

Because the best prompts don’t sound like magic. They sound like strategy.