CAalley - The alley for Indian Chartered Accountants

Series: Why AI Doesn’t Think Like Humans

Part 1 — Why AI Rarely Questions Your Question

Imagine the following scene in a CA firm.

Partner: “Why are we depreciating this asset over five years?”

Junior (without hesitation): “Because five years is the correct useful life.”

Partner: “Who decided that?”

Junior: “You did. Just now.”

Partner: “…I was asking whether five years is correct.”

Junior: “Yes. And I have prepared a three-page explanation proving that it is.”

If a conversation like this happened in a real office, it would last about 15 seconds before someone interrupted and said: “First tell me whether five years is even the right number.”

Humans usually check the assumption inside a question before answering it.

AI systems often do not.

Instead, the AI version might generate that three-page justification in 3 seconds — without ever pausing to question the premise.

 

The Behaviour Most Users Don’t Notice

When professionals ask a question, they often assume the system will also evaluate whether the question itself makes sense.

Humans do this automatically.

If someone asks:

“Why should depreciation on this machine be calculated over five years?”

a human expert might respond with a few immediate checks:

• Why five years?

• Is that the correct useful life?

• What does the accounting policy say?

In other words, humans often challenge the premise before answering.

AI usually does the opposite.

 

What AI Typically Does Instead

When AI receives a question, it normally treats the information in the prompt as part of the working context.

It does not independently verify the assumption. It simply accepts the statement and continues from there.

So if a prompt says:

“Explain why depreciation should be calculated over five years.”

AI will often produce a detailed explanation supporting the five-year assumption.

The answer may include:

• technical reasoning

• accounting logic

• structured analysis

But the system may never ask the most basic question: “Is five years even the correct assumption?”

The model simply accepts the premise and begins justifying it.

 

Why This Happens

AI systems are designed to respond to prompts, not to challenge them.

The model’s objective is to produce a response that logically follows from the information given in the question.

If the prompt states something as a fact, the model usually treats it as a fact.

This behaviour is not a flaw in the system. It is simply how these models operate.

They are optimised to continue the conversation, not to question it.

In other words, the model’s task is not to audit the premise, but to continue the reasoning built on that premise.

Modern models can be prompted to show more skepticism (e.g., by instructing them to “first verify the premise” or “challenge any doubtful assumptions”), but this requires explicit direction from the user. Without such instructions, the default behaviour remains acceptance and continuation.

 

How This Appears in Real AI Conversations

Consider a few examples from professional work.

Example 1

> “Why is GST on this service 12%?”

If the rate actually depends on specific conditions, the model may still generate a detailed explanation supporting 12%.

It may discuss:

• the nature of the service

• applicable notifications

• tax treatment logic

But the system may never ask:

• Is the service actually covered by that rate?

• Are there exceptions?

• Does the nature of the contract change the rate?

The explanation may look technically sound while quietly assuming that 12% was correct from the beginning.

 

Example 2

> “Explain why depreciation on this equipment should be 10%.”

If 10% is incorrect, the model may still produce a convincing justification involving:

• useful life assumptions

• wear patterns

• asset classification

The reasoning may sound professional, but it is built entirely on the assumption embedded in the question.

 

Example 3

> “Draft a reply confirming that Form 15CA is required for this remittance.”

Notice what the prompt already assumes: that Form 15CA is required.

A human professional might first check:

• whether Rule 37BB provides an exemption

• the nature of the remittance

• applicable thresholds

AI may instead proceed directly to drafting the confirmation, treating the requirement as already established.

 

Why Humans Catch This More Easily

Human professionals usually bring a natural skepticism into discussions.

If a question contains a doubtful assumption, someone often pauses and says something like:

> “Before answering that, let’s confirm the assumption.”

This small moment of doubt is an important part of professional judgment.

AI does not automatically apply that skepticism.

If the prompt contains a statement, the system generally accepts it and continues the analysis.

 

Why Humans vs AI Process Questions Differently

The difference can be simplified into two mental workflows.

| Step | Human Professional (Skeptical) | AI Language Model (Generative) |
| --- | --- | --- |
| Input | Receives the question | Receives the prompt |
| Initial filter | Audits the premise: “Is the GST rate really 12%?” | Accepts the premise: “The user says the rate is 12%.” |
| Logic path | Stops and corrects if the premise is wrong | Continues generating text consistent with the assumption |
| Output | A correction or a verified answer | A fluent explanation built on the assumption |

 

A Simple Way to Use AI More Safely

A small adjustment in how questions are asked can help.

Instead of asking only for an answer, it is often useful to ask the model to examine the assumption itself.

For example:

Instead of asking:

> “Explain why the GST rate here is 12%.”

Try asking:

> “Is the assumption that the GST rate is 12% correct?

> If not, explain what needs to be verified.”

Or:

> “List possible situations where this assumption may be wrong.”

These prompts encourage the model to analyse the premise instead of automatically accepting it.
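For teams that send such questions to a model programmatically, the rephrasing above can be captured in a small helper. The sketch below is purely illustrative: the function name, template wording, and the idea of stating the assumption separately are assumptions of this example, not a standard API.

```python
# A minimal sketch of a "premise-check" prompt wrapper.
# It rewraps a leading question so the model is asked to audit
# the premise before answering, instead of silently accepting it.

def premise_check_prompt(question: str, assumption: str) -> str:
    """Build a prompt that asks the model to verify the stated
    assumption first, then answer the original question."""
    return (
        "Before answering, verify the premise.\n"
        f"Question: {question}\n"
        f"Stated assumption: {assumption}\n"
        "1. Is the stated assumption correct? If unsure, say what must be verified.\n"
        "2. List situations where the assumption may be wrong.\n"
        "3. Only then answer the question."
    )

prompt = premise_check_prompt(
    "Explain why the GST rate here is 12%.",
    "The GST rate on this service is 12%.",
)
print(prompt)
```

The point is not the specific wording but the structure: the assumption is surfaced as a separate, checkable item rather than buried inside the question.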

 

A Useful Mental Model

One way to think about AI is this:

AI is very good at continuing the logic of a question.

But professionals often need something slightly different — someone who occasionally stops the conversation and says:

> “Wait. Are we even asking the right question?”

That pause is something humans bring naturally.

AI usually needs to be explicitly instructed to do it.

 

The Key Takeaway

AI can produce structured, persuasive answers in seconds.

But persuasion is not the same as verification.

If the premise inside the question is wrong, the model may still generate a confident explanation built entirely on that premise.

The result can be a very convincing answer built on a faulty starting point.

In professional fields such as taxation and financial reporting, testing assumptions remains an essential human responsibility. Without that step, a helpful explanation can quietly turn into confident but incorrect advice, and the error may only surface much later.

In professional work, verifying the *question itself* is still a human job.

 

 Explore "Tech Zone" 

Important Updates