AI is useful, but not yet as useful as people think, and it's hard

We do need to use it, but the widespread assumption in organisations that computers will always do what they are told when given a good set of instructions will no longer hold – and that requires a different way of thinking about the services we build.

But we are perhaps helped in this by the fact that the majority of users do not share that mistaken impression, finding that computers do the wrong thing, behave in unexpected ways and make unreasonably complicated demands all the time.

I think, and hope, that AI deployed well will reduce the extent to which users have that experience. But it will also increase the frequency with which organisations have to deal with unexpected things: honouring wrong advice given by a chatbot, or rapidly admitting that an AI has made a bad decision and putting it right, with all the implications of fallibility that carries. Our current generation of AI tools (and probably all of them, forever) are fundamentally probabilistic, and will therefore always be wrong some of the time.

This isn’t a killer problem, though, because people are frequently wrong too. The same standard should apply as does to self-driving cars (coming any day now I’m sure 🙄): not “can AIs be wrong?”, but “are AIs less wrong, less often, than people doing the same job?” Or perhaps even just wrong at the same rate, but at less expense.

One of the complexities at the moment is that it’s a real mixed bag. In some cases, the AIs are pretty good. In others, they are terrible. And you can’t always tell without trying – and once you’ve tried and failed, it’s not always clear whether the AI just couldn’t do it, or whether the implementation was bad. And so the hype vs realism debate rumbles on, with both sides equally able to cite good examples for their cases.

Time will tell, but I’m betting on the robots to clinch it. Eventually.