Your AI intern lies

Last week, I caught my AI intern in a lie.

I was doing some business development planning—asking for a list of likely companies in San Diego I could pitch consulting or project work to. That’s it. No special parameters. Just a general, reasonable prompt.

And one entry jumped out:

  • Terra

  • San Diego-based

  • Climate tech SaaS

  • Recently raised $15M Series A

  • CEO active on LinkedIn

  • No CMO listed

I hadn’t asked for any of that detail. But it was so specific. It sounded legit. Like the model had gone above and beyond.

So I flagged it to investigate further.

But when I looked?

Nothing.

No Terra. No funding news. No Crunchbase listing. No domain. No trace of a climate tech SaaS company with that name or profile. I double-checked spelling, variations, founders. Still nothing.

So I asked the model directly:

“Does this company actually exist?”

And it responded—without hesitation:

“No. Sorry. I made it up.”

Not a glitch. Not a fluke. Just a system doing what it was designed to do: sound right, whether or not it is.

Yes, This Is a Real AI Term

The word for this is hallucination—and no, I’m not being clever. That’s the actual technical term. In AI, a hallucination is when a large language model (LLM) generates output that sounds factual but isn’t grounded in any real data.

It doesn’t mean the system is broken. It means it’s doing what it’s built to do: predict the next likely word or phrase based on patterns, not truth.

The problem is, those patterns can be incredibly convincing. Especially when you didn’t ask for specifics—but the model gives them to you anyway, confidently and without hesitation.

Why Hallucinations Happen

On their own, LLMs don’t search the internet. They don’t fact-check. They don’t flag uncertainty unless prompted.

They’re not thinking. They’re completing.

So when you ask a general question—like “who should I reach out to for consulting work in San Diego?”—the model will respond with what sounds right based on patterns it’s seen:

  • Climate tech is hot

  • San Diego has a startup scene

  • $15M is a believable Series A

  • Founders are often vocal on LinkedIn

  • Early-stage teams might not have a CMO yet

Put those elements together and boom—Terra. A company that doesn’t exist, but should, according to startup logic. It sounded helpful. It was pure fiction.

Your Prompts Might Be the Problem

Hallucinations don’t just happen because the model is flawed. They happen because the prompt leaves room for invention.

Common trigger scenarios:

  • Overly broad asks. “Give me a list of ideal companies” = wide-open fiction zone.

  • No source requirement. If you don’t ask for citations or verification, it won’t give any.

  • Confident tone. The model mirrors you—ask with certainty, and it responds with authority, even when guessing.

How to reduce hallucinations:

  • Ask for sources. “Where did this come from?” is always fair game.

  • Set tighter constraints. Add timeframes, locations, or known databases.

  • Use retrieval tools. When truth matters, use a model with live search or connect your own data.

  • Prompt the model to verify, not just generate.
    Use follow-ups like “Is this accurate?”, “Can you verify this?”, or “Does this actually exist?” These phrases often trigger a self-check the model won’t initiate on its own. (A quick sketch of this pattern follows the list.)
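If you work through code instead of a chat window, that last tip is easy to script. Here’s a minimal sketch of the generate-then-verify pattern, assuming the OpenAI Python SDK; the model name and the exact prompt wording are just illustrations, and the second call is still only the model checking itself.

    # A minimal "generate, then verify" sketch. It assumes the OpenAI Python SDK
    # (pip install openai) and an OPENAI_API_KEY in your environment; the model
    # name is illustrative. The same two-step pattern works in a plain chat window.
    from openai import OpenAI

    client = OpenAI()

    # Step 1: the broad ask, exactly the kind of prompt that invites invention.
    draft = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whatever model you actually have
        messages=[{
            "role": "user",
            "content": "List five San Diego companies I could pitch consulting work to.",
        }],
    )
    candidates = draft.choices[0].message.content

    # Step 2: feed the answer back and explicitly ask for a self-check.
    check = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                "For each company below, say whether you are confident it actually "
                "exists, and flag anything you may have invented:\n\n" + candidates
            ),
        }],
    )
    print(check.choices[0].message.content)

The second call often surfaces inventions like Terra, but it isn’t proof. Anything that survives the self-check still gets verified outside the model before you spend real time on it.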

The Intern Metaphor (But Let’s Keep It Real)

I like to think of an LLM as an intern with a 1600 SAT:

  • Brilliant

  • Overconfident

  • Fast

  • And totally unbothered by whether something is true

The intern metaphor works—but only if you remember they’re not a person. They’re code. Predictive text. A system trained to sound right, not be right.

So don’t personify them. Supervise them.

Because here’s the thing: Terra didn’t look like a hallucination. It looked like a lead. It looked like progress. It looked like a good day’s work—until I wasted real time on something that didn’t exist.

And the model never would have told me—unless I asked.

A Final Cheat Sheet: How to Spot a Hallucination

Before you act on anything an LLM gives you, run it through this filter:

  • Does it sound too perfect?
    If it checks every box with suspicious precision, dig deeper.

  • Did you ask for sources?
    If not, you won’t get them.

  • Is it presenting plausible detail you didn’t request?
    That’s a red flag. That detail may be filler, not fact.

  • Use trigger phrases like “Is this accurate?”
    Asking the model to assess its own output often reveals hallucinations you’d otherwise miss.

  • Can you verify it outside the model?
    If it matters—always double-check.

Hallucinations aren’t a bug. They’re a side effect of how this technology works.

So if you’re using AI to support strategy, research, or business development, don’t fall for the voice of authority. Train yourself to ask one more question, verify one more detail, and remember what you’re really working with:

Not an oracle.
Not an expert.
Just code.

Smart, fast, and occasionally full of it.
