AI is not the God button

People across all industry sectors question the integrity of AI based on their own experience and use cases. However, these opinions rest on so many different variables that comparing them is, quite honestly, disturbing.

For starters, AI is not one instance sitting in the hardware clouds like a mega all-encompassing brain. There are hundreds, if not thousands, of models – some private, some commercial and, no doubt, some state-run – all tools, with astonishing capabilities. We’ve all experienced the images, indistinguishable from reality, produced by Midjourney, GPT and Gemini, and now movie-like video from Higgsfield AI. But the word ‘Tool’ so often gets forgotten when discussing AI.

Most people can be forgiven for this because when AI was first unleashed on the public shortly after Covid back in November 2022, it was said to be the final answer on all matters. We all remember hearing ‘AI said this’ or ‘AI said that’ and therefore ‘it must be true’.

It wasn’t long before the active users who gave the AI models a thoroughly good query bashing revealed that AI makes mistakes. ‘It hallucinates’ is the official line. But there are several sound reasons why these ‘hallucinations’ occurred. As usual, it comes down to human error: not technical misuse, but conceptual misuse.

Even today, people still treat AI models like all-seeing Gods and Oracles, rather than the probabilistic language systems they actually are. Here are several incorrect use cases:

Broad, open-ended prompts

Users ask questions such as:

  • “Summarise everything about Company X”
  • “Give me all risks in Southeast Asia”
  • “Explain the geopolitical situation in the Middle East”.

In these examples, the model has no defined scope (think scope creep), timeframe or source base to draw on, beyond its own training data or, if enabled, the open internet. In most cases the AI will simply fill gaps in its findings with plausible language, and that is where hallucinations emerge. This is a by-product of ambiguity which, for the record, is AI’s worst enemy.
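To make the point concrete, here is a minimal illustration of the difference, using the first prompt above. Company X, the timeframe and the mention of attached documents are all placeholders for this sketch:

```python
# Two versions of the same request. The scoped version pins down
# timeframe, scope and source base, leaving far less room for the
# model to fill gaps with plausible-sounding language.
vague_prompt = "Summarise everything about Company X"

scoped_prompt = (
    "Summarise Company X's publicly reported financial results "
    "for 2022-2024, using only the annual reports I have attached. "
    "If a figure is not in the attached documents, say so explicitly "
    "rather than estimating."
)
```

The scoped version does three things the vague one does not: it bounds the timeframe, restricts the source base, and gives the model explicit permission to say ‘I don’t know’ instead of inventing.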

Remember, AI models predict the most likely next token based on their training data; they do not retrieve verified facts UNLESS EXPLICITLY CONSTRAINED to do so.
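That mechanism can be sketched in a few lines. This is a deliberately toy illustration – the tokens and probabilities below are invented – but it shows why an answer is a weighted draw, not a fact lookup:

```python
import random

# Toy next-token distribution: a language model assigns a probability
# to each candidate token given the context so far. These tokens and
# probabilities are invented purely for illustration.
next_token_probs = {
    "Paris": 0.62,
    "Lyon": 0.21,
    "Marseille": 0.12,
    "Berlin": 0.05,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# The model does not 'know' the answer; it emits whichever token the
# weighted draw selects. Low-probability tokens still get chosen
# sometimes, which is one intuition for how fluent errors appear.
token = sample_next_token(next_token_probs)
```

Real models operate over vast vocabularies and contexts, but the principle is the same: fluency comes from probability, not verification.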

Requesting information that may not exist

Major triggers for this type of hallucination are prompts like:

  • “List all lawsuits involving [individual/company]”
  • “Provide internal strategy documents”
  • “Give sources for undisclosed transactions”.

If the model doesn’t know (or cannot verify), it will more than likely generate an answer that merely looks correct. The result is convincing but fabricated citations, case names or entities, referred to in AI speak as ‘fabricated specificity’.

The world witnessed Deloitte Australia running into this very issue in late 2025, when it was forced to refund part of a AU$440,000 fee to the Australian government. The global accounting and advisory firm admitted that a government report it had produced contained fabricated academic references and false court quotes generated by AI. Deloitte’s human specialists later revised the document.

Delegating judgement

This is ‘God Button’ behaviour at its purest. Examples include:

  • “Decide if this company is high risk”
  • “Tell me if this person is credible”
  • “Make an investment recommendation”.

When users lazily outsource an entire task, front to back, to an AI, the model again fills in the results with reasoning that looks the part but lacks any sound evidential grounding. These are not information tasks; they are analytical judgements, and asking a model to make them is simply crystal-ball gazing.

Prompting for authority

This final example is where a user quite rightly asks for validation, using terms such as:

  • “Give me sources”
  • “Cite references”.

In most cases – as with GPT – the AI model will return genuine sources. However, models have been known to fabricate these. If a selection of sources has not been presented to the AI for it to work from, users run the risk that the references and citations produced will not be real. The lesson here is to check and validate all AI-produced material.
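One lightweight way to act on that lesson – sketched here under the assumption that you maintain your own trusted source list; every name and URL below is hypothetical – is to cross-check each AI-returned citation against that list and flag anything unrecognised for manual verification:

```python
# Hypothetical example: flag AI-returned citations whose domain does
# not appear in a trusted, human-curated source list. Anything flagged
# must be verified by hand before the material is used.
TRUSTED_SOURCES = {
    "companies-house.gov.uk",
    "courtlistener.com",
    "sec.gov",
}

def flag_unverified(citations: list[str]) -> list[str]:
    """Return citations that match no entry in the trusted list."""
    flagged = []
    for cite in citations:
        if not any(src in cite for src in TRUSTED_SOURCES):
            flagged.append(cite)
    return flagged

ai_citations = [
    "https://www.sec.gov/cgi-bin/browse-edgar?company=x",
    "https://totally-real-law-reports.example/case/123",  # fabricated
]
suspect = flag_unverified(ai_citations)  # flags the second entry
```

A string match like this proves nothing on its own, of course; it only narrows down which citations a human must chase first.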

Many people have abandoned traditional browsing (Google, Internet Explorer or DuckDuckGo) altogether, opting to dive straight into the nearest free or paid AI platform. The idea is to save time and all the hassle of sifting through article links to find information. All well and good if you want to know how long to boil an egg, or need help with a work-related task, such as working out which formula an Excel spreadsheet cell needs to produce a desired result. However, the issue all industries now face is that much of the general public sits like a rabbit caught in the AI headlights. They confidently believe that with AI they can produce anything, and they are right. Whether they can produce results of value is the very thread of this article.

The word ‘Tool’ was mentioned earlier. This is important because AI is meant to augment existing skills, to enable, to facilitate and to make more efficient. When used in this manner, we see exponential growth.

AI companies are, to some degree, responsible for pushing the ‘God button’ AI product, typically seen in marketing for phone applications and website builders. The message is ‘just tell it what you want and it will build it’. These are, of course, ‘off-the-shelf’ products that don’t stand up in terms of quality. Yet people in their droves are lured into the more technical ‘no-code required’ platforms, and some interesting observations have been made here. Many comments complain about the AI not doing what users – with no coding experience – have asked. They have ploughed on and dug themselves into deep holes before finally realising irreparable mistakes have been made, because they don’t have the ability to recognise when the AI has got it wrong.

The thought of greenhorn open-source intelligence (OSINT) outfits delivering business-critical due diligence reports to clients should be ringing alarm bells.

Conclusion

The examples above can be applied across all industries, and the evidence is resounding: there is no substitute for skillsets and experience. It is those who know their industry and subject who will be able to identify AI error, flag it with said AI, and move forward knowing progress is being made. This is where the value of AI, the ‘Tool’, truly lies.

