What investors are actually looking for when you say "we use AI"
Category: Business owner / Investor attraction / AI strategy
"We use AI" has become the startup equivalent of "we have a website." It signals nothing. Investors at every level, angels through Series B, have been trained by the last few years of overpromising to hear "we use AI" and immediately think: it's a wrapper. A thin layer on top of someone else's model, with no real defensibility and a margin profile that looks worse the more you grow.
If you're a founder who uses AI seriously, that assumption is costing you. Here's how to change it.
Three types of AI claims investors see
Every pitch that mentions AI falls into one of three categories, whether the founder knows it or not.
Cosmetic. AI is used somewhere in the product or operation, but removing it wouldn't materially change the business. A marketing agency that runs copy through ChatGPT before sending. A SaaS tool with an AI-generated summary in the corner nobody reads. Investors who ask two questions will find this out. They always ask two questions.
Operational. AI meaningfully speeds up or reduces the cost of something the business already does. Customer support with a 60% deflection rate. Sales prospecting that takes two hours instead of twenty. This is real value, and investors will credit it, but they'll also note that any competitor can replicate it in a quarter, because the inputs (a model, a prompt, some integrations) are now commodities.
Structural. AI is embedded in the product in a way that gets harder to replicate over time, not easier. The model improves and evolves because of data your company generates that nobody else has. The workflow is so embedded in customer operations that ripping it out has real cost. The AI makes decisions that improve measurably with every customer added. This is what investors are actually looking for. Most pitches claim structural when they're operational. Some don't even know the difference.
What makes AI defensible: the moat question
Every serious investor at the table is running some version of this check: if a well-funded competitor pointed the same models at this problem tomorrow, how long before they caught up?
For cosmetic or operational AI usage, the answer is often three to six months. That's not a moat; that's a head start.
Structural defensibility comes from a handful of things that are genuinely hard to replicate fast.
Proprietary data. Not data you bought or licensed. Data your product generates that nobody else has, because nobody else has your customers doing your workflows. Legora, the Stockholm-founded legal AI platform that recently hit a $5.55B valuation after its Series D, is an instructive case. The company built its platform directly into how lawyers work on complex matters: research, review, drafting, client communication. The longer a law firm uses it, the more the platform learns about that firm's clients, preferred styles, risk tolerances, and precedents. That layer of firm-specific knowledge doesn't exist anywhere else. Competitors can build a general legal AI, but they cannot replicate that firm-specific history without the firm starting over. That's a data moat.
Note: I've tried to visit Legora's site twice now, and both times it showed a blank page. Do not do that to your venture.
An accumulating feedback loop. Every output the system produces, corrected or approved by a human, becomes training signal. If you've built a process where human review genuinely improves the model's performance on your specific domain, you have a flywheel. If the AI just runs and outputs and nothing feeds back, you have a feature. Sana Labs, the Stockholm-founded enterprise knowledge platform acquired by Workday for $1.1B in late 2025, built exactly this: an AI-native learning system that got demonstrably better at surfacing the right knowledge to the right person as more of an organisation's documents, conversations, and workflows flowed through it. The data that made Sana valuable was the data its customers generated inside it.
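If "review signal" sounds abstract, here is a minimal sketch of what capturing it means in practice. All names, fields, and file paths are hypothetical, not a reference implementation:

```python
from dataclasses import dataclass, field, asdict
import json
import time

@dataclass
class ReviewEvent:
    """One human correction of a model output: the raw material of the flywheel."""
    customer_id: str
    model_output: str
    human_correction: str
    accepted: bool          # True if the reviewer approved the output as-is
    timestamp: float = field(default_factory=time.time)

def log_review(event: ReviewEvent, path: str = "review_signal.jsonl") -> None:
    """Append the event to a JSONL file a later fine-tuning or evaluation
    job can consume. If nothing ever reads this file back, you have a
    feature, not a flywheel."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

def acceptance_rate(path: str = "review_signal.jsonl") -> float:
    """A first-pass flywheel metric: share of outputs approved unchanged.
    If this climbs as usage grows, the loop is working."""
    with open(path) as f:
        events = [json.loads(line) for line in f]
    return sum(e["accepted"] for e in events) / len(events)
```

The part that matters is the read path: something downstream (a fine-tuning job, an eval suite, a routing rule) has to consume the logged signal, or the loop never closes.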
Workflow lock-in. The most underrated moat is switching cost. Not contractual switching cost, the real kind, where the AI has become so embedded in a team's actual daily process that removing it would require retraining the humans, not just replacing the software. This happens when the AI is in the flow of work, not adjacent to it.
The pitch mistakes that flag you as cosmetic
These are the specific things that lower an investor's estimate of your AI's defensibility, even if they don't say it out loud:
Leading with the model name. "We're built on Claude / GPT-4o / Gemini" tells an investor what you're renting, not what you own. The model name belongs in the technical appendix, not the pitch narrative. If you run models in your own infrastructure, that's worth a sentence; the API you rent is not.
No answer to "what happens if the API gets 10x more expensive?" If the only honest answer is "our margins collapse," you've described a cost structure, not a business.
"We prompt-engineer for quality." Prompt engineering is a skill, not a barrier. If that's where the differentiation lives, a competitor with the same skill set is three months behind.
No mention of data provenance. Where does your training or fine-tuning data come from? Who generates it? Does it accumulate? How? If you haven't thought through this, investors will notice.
The AI does something to data you got from somewhere else, versus the AI generating data that only you now have. Data you can sell, and then sell again.
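The margin question above is worth doing as literal arithmetic before an investor does it for you. A toy calculation, with every figure invented for illustration:

```python
def gross_margin(revenue: float, api_cost: float, other_cogs: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - api_cost - other_cogs) / revenue

# Hypothetical unit economics per $100 of revenue:
# $8 of API calls, $12 of other delivery costs.
base = gross_margin(100, 8, 12)          # 0.80 -- a software margin
shocked = gross_margin(100, 8 * 10, 12)  # 0.08 -- after a 10x API price rise
```

If the second number is the honest answer, the pitch needs a story about caching, distillation, or self-hosting before the question gets asked.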
Questions investors should actually ask
Not the softballs. The ones worth preparing for:
"Walk me through what happens when a new customer starts using the product. What data do you capture, where does it go, and how does it change the model's behaviour for that customer?"
"If your top competitor had unlimited budget and access to the same foundation models, what would it take them to get to feature parity?"
"What percentage of your AI outputs get reviewed by a human, and what happens to that review signal?"
"Show me a customer who's been on the platform for twelve months. What can the AI do for them now that it couldn't do on day one?"
"What's your position on GDPR and EU AI Act compliance?" This one should come up in European pitches, and having a clear answer signals operational maturity.
That last point matters more here than it does in a US-only pitch. European investors and enterprise customers are factoring regulatory compliance in earlier than they were 18 months ago. Founders who have thought through data residency, model provenance, and AI Act obligations before they're required to come across as significantly more credible.
The credibility signal of building vs. buying
There's a less obvious factor that experienced investors pick up on quickly: whether your team builds AI infrastructure or consumes it.
A team that has fine-tuned a model, built a RAG pipeline on proprietary documents, or designed an evaluation framework for their domain reads very differently from a team that has assembled third-party API calls in n8n. Both can produce good products, but the first team has demonstrated that it understands the machinery it depends on. They can improve it, debug it, and make sound architecture decisions when the landscape shifts.
This doesn't mean you need a research team per se. It means that having at least one person on the founding team who can speak credibly about embeddings, retrieval quality, context-window trade-offs, and evaluation methodology changes the room. Investors betting on AI-native companies want to know that the team could adapt if a model they rely on changes its pricing, degrades in quality, or gets superseded.
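What "speaking credibly about retrieval quality and evaluation methodology" can look like, in about ten lines: a hand-labelled set of queries, a recall@k metric, and a number you can track release over release. Names and data here are hypothetical:

```python
def recall_at_k(retrieved: list[list[str]],
                relevant: list[set[str]],
                k: int = 5) -> float:
    """Fraction of queries where at least one relevant document
    shows up in the top-k retrieved results."""
    hits = sum(1 for docs, gold in zip(retrieved, relevant)
               if gold & set(docs[:k]))
    return hits / len(retrieved)

# Two labelled queries: what the retriever returned vs. which doc IDs
# a human marked as actually relevant.
retrieved = [["doc_3", "doc_7", "doc_1"], ["doc_2", "doc_9", "doc_4"]]
relevant = [{"doc_7"}, {"doc_5"}]
print(recall_at_k(retrieved, relevant, k=3))  # 0.5 -- the second query missed
```

A founder who can show a chart of this number over the last six releases has answered the "could you adapt?" question before it's asked.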
The positioning that actually lands
The founders who move investors with AI aren't the ones who list AI features. They're the ones who can answer this clearly:
Our AI gets better the longer a customer uses us, because [specific mechanism]. A competitor starting today would need [specific time or asset] to get to the same point, because [specific reason they can't shortcut it].
That's not a product statement. It's a strategy statement. And it's the difference between a pitch that gets nodded at politely and one that gets a term sheet.
The EU AI landscape is moving fast: €21.7B invested in European AI startups in 2025 alone, with another €9B+ in the first two months of 2026. Investors in this market are not naive. They've seen the full cycle from hype to scrutiny once already. The founders who show up prepared for scrutiny are the ones who stand out.
We write our articles on this platform, which we own. Not on social media littered with ads we cannot control. You're welcome.