Introduction: Why this guide & how it helps you

AI makes building fast—and copying even quicker. This guide helps you pick and defend a wedge that customers love and competitors struggle to clone. We’ll translate expert advice into plain actions: where to hunt for ideas that aren’t “GPT wrappers,” what moats actually work for AI apps, and a simple framework to choose the right moats for your business stage. You’ll ship faster now and compound defensibility as traction grows.

The problem: Competing in AI is brutal

  • “Wrappers” vs reality. Dismissing products as “just GPT wrappers” misses the point: great UX and tight “jobs-to-be-done” still win—models are components, not the product.
  • Tarpit ideas. Shiny but vague categories (e.g., “build an AI copilot for everything”) attract crowds and stall; usage lags because buyers don’t know what the job is.
  • Where the wins are. Mundane, back-office workflows—document triage, form filling, searching—map perfectly to LLMs and are oddly underserved.
  • UI matters. Chat UIs can be the wrong default. Often, the win is embedding LLM capability into familiar workflows so the software quietly does the work.

The main AI moats (and how they help)

Hamilton Helmer’s Seven Powers still applies—just updated for agents, data, and eval loops. Add one more that AI founders keep proving: speed.

  1. Speed (execution) – Not in the book, but the day-one moat. Early on, speed to ship and iterate beats everything: “one-day sprints” outmanoeuvre big companies encumbered by process. Use it to find PMF before incumbents mobilise.
  2. Counter-positioning – Win by doing what incumbents won’t. Example: incumbents priced per seat; automation reduces seats, so you price by work completed. You’ll be aligned with customer value while they’re stuck cannibalising their model.
  3. Switching costs – Your product becomes hard to rip out. In the enterprise, forward-deployed engineers tailor agents to messy, proprietary workflows during long pilots; once embedded, no one wants to re-run the bake-off. In consumer products, memory & personalisation build attachment over time.
  4. Network economies (data flywheel) – More usage → better evals → smarter prompts/tuning → better outcomes → more usage. Make evals a first-class system to compound quality (see the flywheel sketch after this list).
  5. Process power – Years of edge-case handling and safety rails that actually work in production are hard to copy. Weekend demos ≠ mission-critical agents that banks or courts can trust.
  6. Cornered resources – Exclusive or hard-to-obtain assets: private datasets, privileged channels (e.g., regulated/government environments), or bespoke models that hit 10× cost/perf for your domain.
  7. Scale economies – Some wins require significant fixed costs (training frontier models; crawling a large slice of the web). If you can amortise that cost over many customers, your per-unit cost sinks as you grow.
  8. Brand – Slow to build but powerful. Even with comparable models, the brand that becomes “the default app” captures demand (see consumer AI). Invest in brand as you scale.
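
To make the flywheel in power 4 concrete, here is a minimal Python sketch of that loop. Every name in it (EvalCase, run_agent, score_output) is a hypothetical placeholder rather than any particular library’s API: reviewed production traffic becomes a golden set, and a prompt or tuning change ships only when it measurably beats the baseline.

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    input: str       # real user input captured from production traffic
    expected: str    # ground truth labelled during human review

def run_agent(prompt_version: str, case_input: str) -> str:
    """Stand-in for your agent; replace with your real pipeline."""
    return f"[{prompt_version}] answer for: {case_input}"

def score_output(output: str, expected: str) -> float:
    """Stand-in scorer; in practice: exact match, rubric, or LLM-as-judge."""
    return 1.0 if expected.lower() in output.lower() else 0.0

def mean_score(version: str, golden_set: list[EvalCase]) -> float:
    return sum(score_output(run_agent(version, c.input), c.expected)
               for c in golden_set) / len(golden_set)

def should_ship(candidate: str, baseline: str, golden_set: list[EvalCase]) -> bool:
    """Ship a prompt/tuning change only if it beats the baseline on the golden set."""
    return mean_score(candidate, golden_set) > mean_score(baseline, golden_set)
```

The design choice worth copying is the gate itself: quality improvements are claimed only when the golden set says so, which is what lets usage compound into better outcomes.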

A practical framework to choose your essential moats

Stage A — 0→1 (first 3–12 months): “Find the painful job.”

  • Pick a specific, high-stakes workflow (not an all-purpose copilot). Reimagine today’s software with AI doing the work behind familiar UI.
  • Optimise for speed: ship thin slices weekly (or daily), instrument outcomes, and talk to users constantly. Don’t over-optimise for theoretical moats yet.

Stage B — Wedge→Product (first paying logos): “Embed & learn.”

  • Run forward-deployed pilots to wire agents into real systems; capture ground truth and failure modes (a sketch follows this list). This builds switching costs and seeds your evals library.
  • Choose value-aligned pricing (tasks/outcomes) to counter-position against per-seat incumbents.
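
A minimal sketch of the “capture ground truth and failure modes” step above, assuming a simple JSONL evals library; the field names and failure-mode tags are illustrative, not a prescribed schema.

```python
import json
from datetime import datetime, timezone

def record_case(path: str, task_input: str, agent_output: str,
                human_corrected: str, failure_mode: str | None) -> None:
    """Append one reviewed pilot interaction to an evals library (JSONL).

    Every correction a forward-deployed engineer makes becomes a future
    regression test; tagged failure modes let you verify which ones the
    next release actually fixes.
    """
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "input": task_input,
        "agent_output": agent_output,
        "expected": human_corrected,    # the ground truth
        "failure_mode": failure_mode,   # e.g. "missed_field"; None if correct
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```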

Stage C — Product→Moat: “Lock in the flywheel.”

  • Treat evals as an internal product (coverage, thresholds, pass/fail dashboards); see the release-gate sketch after this list. Tie roadmap to eval-proven deltas, not vibes.
  • Systematically accrue cornered resources: exclusive data partnerships, certifications, or domain-specific models that are “good enough” but 10× cheaper/faster for your niche.
  • Decide where scale helps (e.g., your own crawl or retrieval corpus) and invest once the reuse is clear.
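
Here is one way the “evals as an internal product” idea might look as a release gate; the workflow names and thresholds are made-up examples, and in practice this report would feed the pass/fail dashboards mentioned above.

```python
# Per-workflow minimum pass rates; numbers and names are illustrative.
THRESHOLDS = {
    "invoice_triage": 0.95,
    "form_filling": 0.90,
}

def release_gate(results: dict[str, list[bool]]) -> dict[str, str]:
    """results maps each workflow to its per-case pass/fail booleans."""
    report = {}
    for workflow, threshold in THRESHOLDS.items():
        cases = results.get(workflow, [])
        if not cases:
            report[workflow] = "BLOCKED: no coverage"  # untested means unshippable
            continue
        pass_rate = sum(cases) / len(cases)
        verdict = "PASS" if pass_rate >= threshold else "FAIL"
        report[workflow] = f"{verdict} ({pass_rate:.0%} over {len(cases)} cases)"
    return report
```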

Stage D — Defend & expand:

  • Build brand via reliability (SLAs, audits), not just demos. Keep the UI familiar and the work invisible.

Extra guidelines you’ll thank yourself for (and your investors will too)

Avoid tarpit ideas. If dozens of teams pitch the same “AI copilot” and can’t articulate daily active use, move on or niche down to a painful, measurable task.

“Not a chatbox.” Investors will ask how you avoid being a commodity. Show how your UI bakes AI into the workflow and how your evals and data loops raise quality over time.

Cheaper isn’t a moat. “We fine-tune open-source for less” won’t hold as model costs fall. Win on outcomes, privacy, or domain specialisation (smaller purpose-trained models, on-prem/local).

Security & privacy as a product. Enterprise buyers care who sees what. Control model/data permissions and guard against fine-tuning leakage; this is an emerging category.
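
One common pattern here is permission-aware retrieval: filter by the caller’s ACL before anything reaches the model. A minimal sketch, with types and fields that are assumptions rather than any product’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    text: str
    allowed_groups: set[str] = field(default_factory=set)

def build_context(docs: list[Doc], user_groups: set[str]) -> str:
    """Only documents the caller may read can enter the prompt, so the
    model cannot leak content across permission boundaries."""
    visible = [d.text for d in docs if d.allowed_groups & user_groups]
    return "\n---\n".join(visible)
```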

Pricing that signals value. Use work completed or cases resolved to align incentives and exploit counter-positioning vs. per-seat incumbents.
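
A back-of-envelope example of why this counter-positions per-seat incumbents; all numbers are illustrative assumptions:

```python
seats = 10                 # customer's team size today
seat_price = 100           # incumbent's monthly price per seat ($)
tasks_per_month = 5_000    # work your agent completes
price_per_task = 0.40      # your price per completed task ($)

incumbent_revenue = seats * seat_price            # $1,000/month
your_revenue = tasks_per_month * price_per_task   # $2,000/month

# If automation shrinks the team from 10 seats to 2, the incumbent's
# revenue falls 80% while yours tracks the work completed; that alignment
# is exactly what the incumbent cannot copy without cannibalising itself.
print(f"incumbent: ${incumbent_revenue:,.0f}/mo   you: ${your_revenue:,.0f}/mo")
```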

Answering the classic investor questions (short, crisp, defensible):

  • “Why won’t the labs crush you?” Because we win on counter-positioning (value-aligned pricing), embedded workflows (switching costs), and a proprietary eval/data flywheel that compounds in this niche.
  • “Isn’t this a wrapper?” The advantage is UX + outcomes; models are replaceable, our eval-proven workflow quality and data loops aren’t.
  • “What’s your moat today?” Speed; next come switching costs from embedded pilots and outcome-based contracts.

  • “How defensible in 24 months?” Exclusive data/process integrations (cornered resources), plus domain models or corpora that reduce unit costs (scale) as we grow.

Your 7-day action plan

  • Day 1–2: Shadow a target user; write the “boring but painful” SOP your agent will replace.
  • Day 3: Ship a thin vertical slice without chat—make the UI do the work.
  • Day 4: Stand up a minimal evals harness (golden sets, pass/fail gates) and wire it to CI (see the sketch after this list).
  • Day 5: Offer a work-delivered pilot to one design partner; embed a forward-deployed engineer.
  • Day 6–7: Instrument usage, measure outcomes, and iterate—speed is your moat this week.
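
For Day 4, a minimal harness might look like the sketch below, wired to CI via pytest; golden.jsonl and run_agent are assumptions to adapt to your stack.

```python
import json
import pytest

def run_agent(task_input: str) -> str:
    """Replace with a call into your real pipeline."""
    raise NotImplementedError

def load_golden(path: str = "golden.jsonl"):
    """One JSON object per line: {"input": ..., "expected": ...}."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

@pytest.mark.parametrize("case", load_golden())
def test_golden_case(case):
    # Exact match is the strictest gate; loosen to rubric scoring later.
    assert run_agent(case["input"]) == case["expected"]
```

Any change that breaks a golden case now fails the build, which is the cheapest version of “tie roadmap to eval-proven deltas”.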

References:

  • Hamilton Helmer, 7 Powers: The Foundations of Business Strategy – https://www.amazon.ie/7-Powers-Foundations-Business-Strategy/dp/0998116319
  • Y Combinator, AI companies directory – https://www.ycombinator.com/companies/industry/ai
  • Allen Institute for AI – https://allenai.org