There’s a lot of excitement at the moment about what AI can do with financial information. Feed it a set of accounts and, in seconds, it can summarise performance, highlight trends, flag anomalies and even generate a credit narrative. It works pretty well, to be fair – but don’t automatically assume that because it can read the numbers, it understands your business.
It doesn’t.
AI is very good at analysing what’s in front of it, but it’s much less good at spotting what doesn’t quite make sense. And in commercial finance, that distinction matters more than most people realise.
One of the things I’m seeing more often is lenders and intermediaries relying heavily on AI-generated outputs without really questioning them. The system pulls ratios, compares year-on-year movement and produces a view of risk that looks credible on paper. But credibility isn’t the same as accuracy – because the numbers rarely, if ever, tell the full story on their own.
I’ve lost count of the number of times I’ve looked at a set of accounts where everything technically stacks up, but something just feels off. Not dramatically wrong – just… odd. Margins that don’t quite align with the narrative, growth that’s too smooth or costs that behave in a way they shouldn’t for that type of business.
You need an experienced pair of human eyes to spot that stuff – and a computer is no match for me – at least not yet!
AI won’t tell you that the story doesn’t quite ring true; it’ll tell you the figures reconcile and the trends are positive. It might even tell you the risk score is acceptable. What it won’t do is pause and say, ‘hang on – why would a business like this behave like that?’ The problem isn’t in the data; it’s with the judgement of the bot. And like I said, it’s no match for me.
There was a situation recently where, on paper, a borrower looked fine. The accounts were clean and the numbers worked – an automated system would have had no issue progressing it. But once you started asking simple, human questions – like ‘talk me through this’ – the cracks appeared. The explanation didn’t quite match the numbers, the timing didn’t make sense, and the story shifted slightly each time it was told. AI would never have picked that up – but I did.
What worries me more is the growing tendency for less experienced people to trust the output without challenge. When you’re new to lending or finance, it’s very tempting to assume the system knows better than you do. After all, it’s clever, it’s fast, and it’s confident.
But confidence without judgement is dangerous.
AI is brilliant at pattern recognition, but it’s not great at understanding why a pattern exists, or whether it’s logical in the real world. It doesn’t know how businesses actually behave when they’re under pressure, growing quickly, or dealing with imperfect information. And in my experience, most businesses operate with imperfect information.
The other side of this is just as important. I also see good businesses being unfairly penalised because their numbers don’t look tidy enough for an algorithm. Real businesses are messy – their cash flow moves around, and one-off events happen. Growth definitely isn’t linear.
To an experienced eye, that’s normal, but to an automated system, it can look like risk.
This is where human credit judgement has quietly been eroded – not deliberately, but gradually. As systems get better, people are encouraged to rely on them more. Over time, the instinct to question, probe and challenge gets dulled – and that’s a big mistake.
AI should be a tool, not an arbiter, and it should help surface questions, not replace the act of asking them. Used properly, it makes good people better, but used blindly, it creates false confidence.
At Able, we’re not anti-AI – far from it – and we use it where it adds value. But we don’t outsource judgement to it, because judgement is built from pattern recognition over time: from seeing what works, what fails and where things usually go wrong. AI can analyse numbers, but it can’t tell when a story doesn’t add up.
And in commercial finance, that difference is often the one that matters most.