As artificial intelligence allows machines to talk more and more like humans, companies are increasingly turning to robots to answer calls from their customers.

It was only after Ruaraidh Menzies’ friend had left the club that Saturday night in Glasgow that he realised his jacket and wallet were missing. Menzies tried calling the venue on his phone, but could not get through to anyone.

“Then we tried Facebook,” he recalls.

The following morning, Menzies sent a message to the club’s page explaining that his friend had lost some things. He asked if they had been found. The response was immediate: “We’ll have a look for your wallet.”

After several minutes, Menzies messaged again to see if they had had any luck – the reply came back, to his surprise, notifying him that he was actually chatting to a bot. In the end, Menzies’ friend found his possessions elsewhere, but Menzies still gets contacted by the bot today.

“They now just message me pretty much once a week giving me information about a nightclub that I will never go to ever again,” he says, pointing out he lives in a different part of the country. “It’s kind of amusing.”


While Menzies was initially impressed by the quick response, and the apparent comprehension of his query, he was disappointed that the conversation didn’t actually seem to go anywhere. To his knowledge, no human ever followed it up.

Chatbots just like this, however, are popping up everywhere. Social media sites like Facebook allow companies to set up automated chat programs on their platforms, acting as a kind of robotic frontline for customer service. It means more and more of us are encountering artificial intelligence in our daily lives.

It is estimated that more than 2.5 billion mobile phone users have at least one messaging app installed. That number is expected to grow to 3.6 billion by 2018.

“The rise of messaging apps like Facebook Messenger, WhatsApp, WeChat in China – people are just used to that type of conversation, so extending that to business makes sense,” explains Jack Kent, an analyst at IHS Technology. “It’s the type of communication that people expect to have now.”

Constant cost-cutting

Businesses have been trying to streamline customer service – and cut its costs – for decades. The first large call centres arrived in the 1960s, but automation soon followed thanks to the rise of touch-tone phones and interactive voice systems that could play spoken messages to callers.

In the 1990s, companies began to look for new ways to cut costs, choosing to outsource call centres to countries where wages – and therefore costs – were far lower. This led to a widespread backlash from consumers, who grew frustrated at dealing with agents who did not work for the company they were trying to reach, did not understand many of their problems and often did not share their native language.


While some companies have sought to turn their customer service into a key part of their product by bringing it back in-house, others have looked for new ways to save money, turning to social media, apps – and now bots. These new technologies allow companies to react to customers immediately, but they can also feel alienating and inefficient.

Learning like a machine

Crafting a bot that works well is not easy. Paul Gibbins, a co-founder at chatbot start-up Twyla, says one of the major challenges is to script conversations that make sense in the given context.

“You want to have as human-like a conversation as possible,” he explains. “But you can’t escape the fact that customers have certain expectations.”

To do this, engineers at Twyla use machine-learning algorithms to analyse conversations between customers and human agents over live chat systems. The most common interactions and exchanges can then be selected and packaged into an automated, rule-based system that allows the bot to reply efficiently, without confusing or misunderstanding the customer. In other words, the bot is fairly constrained in what it can do, but in theory that helps it respond more effectively.

Of course, it is possible to simply train a chatbot to respond to almost any enquiry, but that leads to unpredictable results, says Gibbins. If a bot responds to key words only, for example, it might misinterpret a statement. A bot used by clothing retailer ASOS appeared to stumble into this pitfall recently, responding to bemused posts from Facebook users on its page with complex instructions about when orders would be delivered.
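To make that pitfall concrete, here is a minimal sketch of a purely keyword-driven, rule-based responder of the kind discussed above. It is a hypothetical illustration – the keywords, canned replies and the reply function are invented here and do not represent Twyla’s, ASOS’s or any other company’s actual system. Because it matches words rather than meaning, an unrelated message that happens to contain a trigger word receives the same canned script.

```python
# A hypothetical, minimal keyword-based responder: the rules, keywords and
# replies are invented for illustration and are not any real company's system.
import re

RULES = [
    # (trigger keywords, canned reply)
    ({"order", "delivery", "delivered"},
     "Orders placed before 5pm are usually delivered within 3-5 working days."),
    ({"refund", "return"},
     "You can return items within 28 days for a full refund."),
    ({"price", "cost", "charge"},
     "Prices for each service are listed on our website."),
]

FALLBACK = "Sorry, I didn't understand that. A member of our team will follow up."


def reply(message: str) -> str:
    """Return the first canned reply whose trigger keywords appear in the message."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    for keywords, canned_reply in RULES:
        if words & keywords:  # keyword overlap only -- no real grasp of context
            return canned_reply
    return FALLBACK


if __name__ == "__main__":
    # A genuine delivery query is handled sensibly...
    print(reply("When will my order be delivered?"))
    # ...but an unrelated complaint that merely mentions 'order' gets the same
    # delivery script -- roughly the kind of mismatch described above.
    print(reply("The bouncers were well out of order throwing us out like that!"))
```

The constraint Gibbins describes works in the other direction too: by limiting the bot to rules mined from common, real exchanges, the designers reduce how often a stray keyword drags the conversation somewhere irrelevant.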


There can also be problems getting companies to make the most of the conversations their chatbots have with customers. Following up is often a key part of the process because, as in Ruaraidh Menzies’ case, many queries cannot be resolved by the bot alone.

This could be why the proliferation of chatbots has been met with much criticism. There are countless stories of irrelevant responses and even some downright inappropriate dialogue from chatbots. Microsoft’s chatbot Tay landed the software company in hot water last year when it posted a series of racist messages.

But there are some clear benefits to using chatbots over the human-led customer service we have come to expect from most companies. An exchange can happen instantly, for example, without requiring customers to spend a long time navigating a phone menu tree. And, of course, some very simple queries – “how much do you charge for X service?” – can be answered quickly by a bot.

Teething issues

As a conduit for frequently sought information, chatbots have proven their worth. One helpful chatbot, armed with a legal queries database, has been credited with overturning 160,000 parking tickets in London and New York after drivers used it to check if they really were in breach of local rules.

But they may have more trouble when asked to take part in more complex exchanges, where a human follow-up action is required. For instance, journalists who tested Air New Zealand’s chatbot found that it handled some queries well, such as how to get an upgrade, but others – like asking for help with a lost suitcase – left it confused.

That’s part of the danger with a chatbot – giving the breezy impression that it can handle any request when in fact it can’t. There is, indeed, a strange ambiguity at work with bots. As Menzies and many others have experienced, it’s often not clear at the beginning whether you’re talking to a human or not. That has led some to question whether they should be explicitly labelled.


“I think it would certainly be clarifying and helpful to people,” says Luke Stark, a researcher in sociology at Dartmouth College. “But I also suspect it would drive up the number of people who say, ‘Oh I want to talk to a person instead’.”

There is a lot of work being done that will make spotting chatbots harder in the future. Currently, chatbots are not great at spotting linguistic subtleties like metaphor and sarcasm, and they struggle badly with empathy. Keen to bring creativity to these conversations, the industry is now hiring human copywriters to script bots and inject some personality into them.

Take Poncho, for example, an intentionally humorous bot that provides weather updates. The idea with services like this is to package information in more digestible, entertaining ways – in theory increasing its impact.

Amazon, which developed the voice-controlled AI personal assistant Alexa, has even offered a $2.5 million prize to the team whose chatbot can “converse coherently and engagingly with humans on popular topics for 20 minutes”.


And Facebook’s research division recently announced the results of an experiment in which chatbots were taught the art of negotiation and bargaining. The results showed there was a trade-off between mimicking human speech and trying a more aggressive, but less comprehensible approach. Such insights could lead to the development of savvier bots, able to strike deals with humans while pandering just enough to our linguistic expectations.

But as chatbots get better at conversing with us, other questions are being raised – what if they get too good?

Liesl Yearsley, an entrepreneur who has worked on chatbots, recently raised the possibility of automated programs manipulating humans who become emotionally attached to them. She believes this is already something that is becoming surprisingly common – people often share some of their deepest secrets with chatbots. After all, it is something – if not someone – to talk to.

“Systems specifically designed to form relationships with a human will have much more power,” she wrote. “AI will influence how we think, and how we treat others.”

For many purposes, automated programs may be a clever and efficient option. Some, though, will still yearn for the human touch over the chatbot, website or structured phone menu – and that’s the desire all of these technologies have yet to satisfy.

