Cloud-based chatbots are almost old technology now; they’ve been around for four years. And reception has been decidedly mixed.
Last week, researchers at several collaborating institutions unveiled the formal write-up of the “Second Conversational Intelligence Challenge,” a competition among chatbots held at the NeurIPS artificial intelligence conference in December 2018. During the competition, each bot engaged a person in a casual conversation about what they both like.
If you’re going to develop chatbots people won’t hate, here are a few rules to follow.
First, you really need to have a use for them. For example, voice-driven systems built into cars and motorcycles are a great use case, because drivers’ hands and eyes are occupied and conversation is the only practical interface.
But generally, talking to a chatbot is less productive than using a keyboard and screen, which is how we typically interact with applications. It may make us feel very advanced, but productivity gets tossed out the window.
Second, chatbots don’t always get things right. I would hesitate to tie some vital function such as braking to a chatbot. I’d live in fear that it would brake at the wrong time if I said something it misinterpreted, such as, “I need a break.”
Third, chatbots are costly to build and deploy, so they increase the budget for most application development projects.
Bottom line: If chatbots aren’t really needed, do not use them.
We’re at the hangover stage for chatbots, something we experience with almost any new technology. The initial promise led to overuse, and users reacted negatively, creating a backlash. That’s a natural part of the adoption cycle, and it means we’re now coming to grips with the technology’s real capabilities, as well as its limitations.