
Series: Why AI Doesn’t Think Like Humans

 Read: Part 1 — Why AI Rarely Questions Your Question

Part 2 — When the Reasoning Is Built Backwards

Imagine this scene in a CA firm:

Partner: “Why have we applied 18% GST on this service?”

Junior (very confidently): “Because it’s an intermediary service. Here’s a four-page note with notifications, SAC codes, and step-by-step logic proving why 18% is correct.”

Partner: “Okay… but did you first check whether 18% is actually the right rate?”

Junior: “Of course! That’s why I wrote four pages explaining why it’s correct.”

Partner (sighing): “…Beta, you decided the answer was 18% first and then built the entire reasoning backwards, didn’t you?”

 

If this happened in real life, someone would eventually joke: “Arre, you fixed the answer first and are building the reasoning afterwards!”

Welcome to one of AI’s strangest habits.

 

What Reverse-Engineering Means in AI

Reverse-engineering typically means starting from a known outcome and working backwards to construct a path.

AI can sometimes show a similar pattern — not in the strict technical sense, but in an “answer-first, reasoning-later” way.

It can appear to settle on a conclusion early in the response and then build reasoning that aligns with it. What looks like careful step-by-step logic may not always be true discovery — it can be a structured justification built around an early assumption.

The output sounds professional.
But the logic may have been shaped to fit the answer.

 

How This Appears in CA Practice

This tendency becomes risky in tax, accounting, and compliance work:

You ask why a service should attract 18% GST → AI leans toward 18% early and builds supporting logic

You ask why depreciation should be 15% → AI favours 15% and constructs a justification from useful lives under Schedule II

You ask for reasons why Form 15CA is mandatory → AI assumes it is and builds a polished explanation without first checking the exemptions (such as the list under Rule 37BB)

In each case, the explanation looks convincing — but it may be defending a pre-aligned answer rather than objectively arriving at one.

 

A Simple Way to Detect It

A useful red flag:

If the answer is stated immediately and the rest of the response only supports that one position — without exploring alternatives or exceptions — it may indicate backward reasoning.

This doesn’t always mean the answer is wrong.
But it does mean the reasoning deserves closer scrutiny.

 

How to Make AI Reason Forward (and Get Much Better Results)

You can significantly reduce this behaviour with better prompts:

• “Do not assume any conclusion. Start only from the facts and relevant legal provisions. Reason strictly forward step-by-step and then give me the correct position.”

• “First list 3 possible outcomes. Analyse each one objectively with supporting rules and notifications before reaching any conclusion.”

• “Reason forward only. Do not work backwards from any assumed answer. Flag it if you catch yourself justifying a pre-decided position.”

These small changes often make a dramatic difference in the quality and reliability of AI’s output.
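
If you call a model through an API rather than a chat window, the same guard-rails can be written once into a standing system message instead of being retyped with every query. Here is a minimal sketch assuming the OpenAI Python SDK; the model name, the wording of the rules, and the sample question are illustrative placeholders, not recommendations:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Standing instruction that pushes the model towards forward reasoning
FORWARD_REASONING_RULES = (
    "Do not assume any conclusion. Start only from the stated facts and "
    "the relevant legal provisions. First list the plausible positions, "
    "analyse each one objectively, and only then state a conclusion. "
    "Flag it explicitly if you find yourself justifying a pre-decided answer."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model your firm has access to
    messages=[
        {"role": "system", "content": FORWARD_REASONING_RULES},
        # Note that the question does not embed an assumed rate
        {"role": "user", "content": "What GST rate applies to this service, and why? Facts: ..."},
    ],
)
print(response.choices[0].message.content)

The detail that matters here is not the specific SDK but the placement: because the forward-reasoning rules sit in the system message, every question sent through this channel inherits them automatically.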

 

Key Takeaway

AI is very good at producing convincing explanations.

It is not always good at discovering the right answer through genuine forward reasoning.

It can appear to pick a direction early and then shape the reasoning around it.

Once you understand this, you stop blindly trusting AI’s explanations and start guiding them. That shift turns AI from a risky shortcut into a far more dependable tool for professional work.

 
