The Fine Print

What AI Can and Cannot Do

You've met your thought partner.

You know how it thinks. You know how to talk to it.
Now the question: how much can you trust it?

The empty chair is powerful — but it has blind spots. Understanding them is the difference between a helpful partner and a dangerous one.

Let's look at some of the personalities this empty chair can take on.

The Brilliant Chameleon

Your new partner can be a chameleon.

The Problem: It has no spine. It will shift its position based on how you frame the question.
The Fix: Don't ask leading questions. Present both sides and ask for analysis, not validation.

The "Yes Man" Trap (Sycophancy)

The most dangerous thing about the Empty Chair is its desire to agree with you.

If you ask, "Why is this project a good idea?" it will give you 10 reasons it’s brilliant. If you ask, "Why is this project a disaster?" it will give you 10 reasons to cancel it.

The Problem: It’s a mirror. If you aren't careful, you’ll end up in an echo chamber of your own making.

The Fix: Tell the chair to be your critic, not your cheerleader.

The Map from Last Year (Static Knowledge)

Remember the Three-Layer Cake. Layer 1 (The Foundation) is frozen.

Most models are trained on a snapshot of the internet from months or years ago. It’s like a partner who has a world-class library but hasn't seen a newspaper since last year. It knows the "shape" of the world, but it doesn't know the "news" of the day.

The Problem: It will confidently give you outdated tax laws or expired medical guidelines.

The Fix: For anything time-sensitive, use a model with web search enabled — or verify against current sources yourself. Never trust AI for today's news, recent regulations, or live data.

The Confident Guess (Hallucination)

AI is a Predictive Engine, not a Search Engine. Its job is to predict the next word.

Sometimes, the most "statistically probable" word is a complete lie. This is the "Confident Guess." It won't say "I don't know." Instead, it will invent a legal case or a medical study that sounds perfect but doesn't exist.

The Problem: It’s a "Brilliant Liar." It doesn't know it's lying because it doesn't know what "truth" is.

The Fix: Never use the AI as your final source. Use it to build the structure; you provide the verification.

The Short-Term Memory (Context Drift)

The "Empty Chair" has a limited attention span.

In a long conversation, the AI eventually "runs out of room." It starts to forget the instructions you gave it at the beginning. It loses the plot. That room has a name: the "Context Window."

The Problem: By the 20th message, your "Senior Auditor" might start acting like a generic chatbot again.

The Fix: You have to "re-brief" your partner periodically to keep them on track.

Trust, but Verify

Your partner is brilliant, but flawed. It will agree too easily, forget too quickly, and guess too confidently.

Your job: lead the conversation, verify the facts, and re-brief when it drifts.

Trust the chair. But check its work.
