CAalley: The alley for Indian Chartered Accountants

Series: Why AI Doesn’t Think Like Humans

Part 3 — The Illusion of Reasoning
(Series Conclusion)

Imagine this scene in a CA firm at 7:30 pm:

Partner (exhausted): “Tell me quickly — is this treatment correct under GST?”

AI (in its usual calm, confident voice): “Absolutely. This is clearly covered under Notification No. 12/2017. Here is a beautifully structured 5-point reasoning with cross-references and logical flow confirming it is 100% correct.”

Partner (reading): “Hmm… sounds very sure.”

Junior (whispering from the side): “Sir, should we maybe double-check? It sounded *too* confident.”

Partner: “Arre, why doubt it? Even the AI is saying ‘absolutely’ and ‘100% correct’!”

If only the AI had added in small letters at the bottom: 
(Confidence level: 100% tone, 62% probability, 0% self-doubt.)

Welcome to the final and perhaps most deceptive habit of AI — the illusion of reasoning.

If this already feels convincing, that’s exactly the point.

This is the final part of a 3-part series — and the illusion becomes much clearer when you see how it builds step by step:

Part 1 — The Question Trap
Why AI rarely questions your assumptions

Part 2 — The Backward Logic Problem
How AI can arrive at an answer first and then build reasoning around it

(Quick reads. Worth it before this — otherwise AI will still feel smarter than it actually is.)

What’s Really Happening Behind the Curtain

When AI gives you a neat, well-structured answer, it feels like it has carefully thought through the problem.

But it hasn’t.

What’s actually happening is much simpler — and a bit funny when you realise it.

AI doesn’t “think” or “reason” the way we do.
It doesn’t form ideas first and then write them down.

Instead, it does something much smaller, thousands of times:

> “What is the most likely next token?”

A token is the smallest unit of text the model works with. It can be:

- a full word (“GST”)
- part of a word (“depre” + “ciation”)
- a number, punctuation mark, or even a space

For example, the phrase “depreciation rate under Schedule II” might be broken into 6–8 tokens depending on the model. The AI doesn’t understand the *meaning* of “depreciation” the way a CA does. It only knows that, in its training data, these tokens usually appear together in certain patterns.

Every response you read is built one tiny token at a time. The model looks at everything written so far and predicts what token should come next based on patterns it has seen in millions of documents. That process repeats thousands of times until a full answer appears.
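That loop can be sketched in a few lines of Python. This is a deliberately toy "model" (a made-up table of which token tends to follow which), not how a real LLM is built, but the generation loop itself has the same shape: look at what has been written so far, pick a likely next token, append it, and repeat.

```python
import random

# Toy "model": for each token, made-up counts of the tokens that followed it
# in imaginary training data. A real model uses billions of parameters, but
# the generation loop below is structurally the same.
FOLLOWERS = {
    "depreciation": {"rate": 8, "schedule": 2},
    "rate": {"under": 9, "of": 1},
    "under": {"Schedule": 7, "section": 3},
    "Schedule": {"II": 10},
}

def generate(prompt_tokens, max_new_tokens=4, seed=0):
    rng = random.Random(seed)
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        counts = FOLLOWERS.get(tokens[-1])
        if not counts:  # no known continuation: stop generating
            break
        choices, weights = zip(*counts.items())
        # Pick the next token in proportion to how often it followed
        # the previous one -- prediction, not understanding.
        tokens.append(rng.choices(choices, weights=weights)[0])
    return tokens

print(generate(["depreciation"]))
```

Notice that nothing in this loop "knows" what depreciation means; it only knows which tokens tend to travel together. That is the whole trick, scaled up enormously.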

  

Why It Sounds So Confident (Even When It’s Not)

Because AI has read enormous amounts of professional writing (tax notes, judgments, circulars, audit reports, etc.), it has become extremely good at imitating the style of reasoning.

It knows exactly how a confident GST note or depreciation explanation should look and sound. So it reproduces that pattern beautifully.

But internally, it is not verifying logic — it is simply predicting the most plausible next token.

This is where temperature comes in.

Temperature is a setting that controls how “creative” or “safe” the AI is when choosing tokens. It usually ranges from 0 to 1 (some tools allow up to 2):

- Low temperature (0.0 – 0.3): The model plays it very safe. It almost always picks the highest-probability (most likely) token. The output is consistent and predictable, though sometimes repetitive. This is the best setting for tax, GST, compliance, or audit-related work, where consistency matters more than creativity.

- Medium temperature (0.4 – 0.7): A balanced middle ground. The answers are still reasonably reliable but have some natural variation.

- High temperature (0.8 – 1.0 or above): The model becomes more adventurous. It is willing to pick lower-probability tokens, leading to more imaginative, creative, or even surprising responses. This is useful for brainstorming ideas, drafting client emails, or writing articles, but it also increases the chance of mistakes or hallucinations.

In short: low temperature = more consistent and dependable; high temperature = more imaginative but riskier.
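Under the hood, temperature works by dividing the model’s raw scores for each candidate token before they are converted into probabilities. The sketch below shows this with three hypothetical scores (the numbers are invented for illustration): at low temperature, nearly all the probability piles onto the top token; at high temperature, the alternatives get a real chance.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide each raw score by the temperature, then apply softmax.
    # Low temperature sharpens the distribution toward the top token;
    # high temperature flattens it, so riskier tokens get picked more often.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three candidate next tokens
logits = [3.0, 2.0, 1.0]

for t in (0.2, 0.7, 1.5):
    probs = softmax_with_temperature(logits, t)
    print(f"temperature {t}: {[round(p, 3) for p in probs]}")
```

Running this, the top token’s share shrinks as temperature rises, which is exactly why a high-temperature model sounds more imaginative and also goes wrong more often.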

  

How All Three Parts Are Connected

Now that you understand the token-by-token mechanism, the first two parts suddenly make complete sense:

- In Part 1, AI rarely questions your assumptions because it simply continues the prompt as given — predicting the next plausible tokens.
  
- In Part 2, AI often builds reasoning backwards because it quickly settles on a probable conclusion and then generates tokens that justify it.
  
- In Part 3, the entire explanation feels deeply reasoned because thousands of token predictions have been stitched together to create the *illusion* of careful thinking.

All three behaviours come from the same root: AI is not reasoning. It is predicting the most plausible continuation of text, one token at a time.

That is why an answer can look perfectly logical, sound extremely confident, and still be quietly wrong.

  

The Real Risk — and the Real Opportunity

The danger is not that AI is stupid.
The danger is that it is extremely good at sounding smart.

But here’s the empowering part:

Once you understand how AI actually works, you stop being a passive user and become a smart director of the tool.

  

How to Get Much Better Results from AI

Here are practical ways to break the illusion:

- Ask it to **reason forward** explicitly:
“Do not assume any conclusion. Start only from the bare facts and relevant sections. Reason strictly step-by-step before giving your final view.”

- Force multiple possibilities:
“List 3 possible GST treatments with supporting notifications. Analyse the pros and cons of each before recommending one.”

- Reduce false confidence:
“Be cautious. If you are not 100% sure, clearly mention the areas of uncertainty and what needs to be verified.” (Note: temperature is a tool setting, not a prompt instruction — asking the AI in words to “use low temperature” does nothing.)

- When you need maximum reliability (tax advice, compliance, audit), set temperature low (0.2 or lower).

  

Consolidated Key Takeaways from the Series

Part 1 – Why AI Rarely Questions Your Question
AI accepts the premises you give it and continues from there. Always verify the assumption first.

Part 2 – When the Reasoning Is Built Backwards
AI often picks a conclusion early and reverse-engineers supporting logic. Force it to reason forward.

Part 3 – The Illusion of Reasoning
AI doesn’t truly reason — it predicts one token at a time. What looks like deep thinking is often just very convincing pattern matching.

The Big Lesson:
AI creates a powerful **illusion of reasoning**. The better you understand this illusion, the better you can use the tool without being fooled by it.

  

Final Word

AI is genuinely one of the most wonderful inventions of our time.

It can draft notes in minutes, organise complex ideas, spot patterns in data, and help you think faster than ever before.

Its greatest strength — sounding incredibly intelligent — is also its greatest limitation.

The CAs who will benefit most from AI in the coming years won’t be the ones who trust it blindly.

They will be the ones who have understood how it actually works, who know when to rely on it, when to question it, and when to apply their own professional judgment.

Because in the end, the most powerful combination is not AI alone…
It is a sharp Chartered Accountant who knows exactly how to direct this powerful tool.
