AI opportunities — whether classical ML or GenAI — don’t announce themselves. Processes are entrenched, defaults are entrenched, and “this is how we’ve always done it” carries weight. A team that’s been running monthly cost accounting the same way for fifteen years isn’t waiting to hear that a model could help.
The opportunities are there. They’re hidden in places people don’t think to look.
Three places worth looking:
The human bottleneck. Where is someone reading, classifying, mapping, or translating by hand at scale? Six hundred new IT activities a year, each categorised against a complex rulebook by a finance analyst — that’s a classification problem hiding inside what looks like a finance task. The use case had to be hunted down, because nobody was looking for ML in Finance in 2022.
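To make the reframing concrete: once you see the analyst's historical decisions as labelled training data, the task collapses into ordinary supervised text classification. The sketch below is a toy illustration of that framing only — the category names, sample activities, and the from-scratch Naive Bayes classifier are all hypothetical, not the team's actual pipeline.

```python
# Hypothetical sketch: IT activity descriptions as bag-of-words input,
# past analyst categorisations as labels, Naive Bayes as the model.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> number of examples
        self.vocab = set()

    def fit(self, examples):
        for text, label in examples:
            self.label_counts[label] += 1
            for w in tokenize(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def predict(self, text):
        def score(label):
            total = sum(self.word_counts[label].values())
            s = math.log(self.label_counts[label])
            for w in tokenize(text):
                # Laplace smoothing so unseen words don't zero out a label
                s += math.log((self.word_counts[label][w] + 1)
                              / (total + len(self.vocab)))
            return s
        return max(self.label_counts, key=score)

# Invented stand-ins for historical analyst decisions:
history = [
    ("server patching and maintenance", "run"),
    ("database licence renewal", "run"),
    ("new mobile app development", "change"),
    ("migration to cloud platform", "change"),
]
clf = NaiveBayes()
clf.fit(history)
print(clf.predict("annual licence renewal for monitoring tools"))  # → run
```

In practice a real rulebook with hundreds of categories would call for a stronger model and far more training data, but the shape of the problem — text in, category out, labels already sitting in last year's spreadsheets — is exactly this.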
The “always skipped” work. What never gets done because no one has time? Cross-referencing every document against the rest of the corpus when a new one lands. Surfacing the data point that would strengthen an analysis. Quality-checking commentary before it goes to the next reviewer. Tasks that are valuable but expensive enough in time that they get deprioritised — those are AI candidates.
The conservative gap. In any environment with operational risk, people default to the safe answer — they avoid edge cases, skip the harder analysis, leave value on the table. Where caution is the dominant instinct, AI can do the analysis the human chose not to risk.
In every case, the question is the same: where are humans doing repeatable judgement work that would benefit from a draft, a check, or a suggestion? Chase the non-obvious answers. The obvious ones are already crowded.