In an era where artificial intelligence increasingly powers customer interactions, a seemingly friendly text message from a restaurant reservation bot recently exposed a growing tension between convenience and transparency. The message, sent through the popular booking platform Resy, introduced itself as 'Theo' and inquired about dietary restrictions and celebrations for an upcoming reservation. The recipient, a regular diner, responded warmly, sharing that the meal was for Mother’s Day. After a few back-and-forths discussing location preferences, the exchange took an abrupt turn. The voice behind 'Theo' asked mechanically, 'Would you like me to save that it’s Mother’s Day for your future visits too, or just for this one?' That robotic phrasing shattered the illusion. The customer realized they had been conversing with an AI chatbot, not a human employee. The trust built over a handful of messages evaporated, replaced by irritation and a sense of being duped.
This scenario is not isolated. According to an October 2025 survey, approximately half of small businesses in the United States now deploy AI to elevate their customer service operations. That figure is likely higher today, as cost pressures and technological advancements accelerate adoption. AI can handle straightforward tasks like appointment scheduling, reservation changes, and basic FAQs efficiently. Many customers appreciate the speed and availability. However, the lack of upfront disclosure poses significant ethical and practical problems.
The Broader Trend of Stealth AI
The restaurant incident mirrors a broader trend across industries. Medical providers, telecom companies, e-commerce platforms, and even government agencies are integrating AI into customer-facing roles. In some cases, the AI is obvious from the start — a robotic voice on a phone call or a chatbot labeled 'Assistant.' Yet many businesses choose not to announce that the interlocutor is not human. They use human names, emojis, and conversational styles to mimic personal service. The intent is to build rapport, but the effect can be the opposite when the truth emerges.
One medical provider, for instance, uses an AI system for inbound calls. The author of the original article noted that the AI responded too quickly and its tone became repetitive over time. It stayed within its designated scope — handling only appointment scheduling — and passed sensitive matters to human staff. Still, the lack of an initial identification left a sour aftertaste. The customer felt tricked, even though the interaction was efficient.
Why does this matter? Trust is the bedrock of any business relationship. Customer trust is hard to earn and easy to lose. When a company permits an AI to impersonate a human without disclosure, it signals that it values cost savings over honesty. The moment the customer discovers the deception, the relationship is damaged. This is especially harmful in service industries like hospitality and healthcare, where personal connection is part of the value proposition.
The History of AI in Customer Service
Automated customer service is not new. Early interactive voice response (IVR) systems from the 1970s offered limited menu-driven options. ELIZA, a pioneering chatbot built in the 1960s, simulated conversation using simple pattern matching, and the rule-based web chatbots of the 1990s followed a similar approach. The rise of natural language processing in the 2010s brought more sophisticated virtual assistants, such as Apple’s Siri and Amazon’s Alexa, into homes. Businesses began deploying chatbots on websites to handle common queries. However, these early systems were typically labeled as bots or 'virtual agents.' The current generation of AI, powered by large language models, can generate remarkably human-like responses. This blurs the line between human and machine, creating new opportunities for misuse.
The ethical guidelines for AI disclosure are still evolving. Some jurisdictions have enacted laws requiring that AI systems identify themselves during interactions. For example, the European Union’s AI Act includes transparency obligations for certain systems. In the United States, the Federal Trade Commission has warned about deceptive practices involving AI. Yet enforcement remains inconsistent, and many companies operate in a gray zone. The restaurant in question may have considered the use of a human name a sales technique rather than deception. But for the customer, it was a breach of trust.
What Businesses Should Do
For businesses deploying AI in customer service, the solution is straightforward and inexpensive: explicitly disclose that the interaction is with an AI chatbot. This can be done at the start of the conversation, in a subtle but clear manner. For example, 'Hi, I’m Theo, an AI assistant from Restaurant XYZ. How can I help with your reservation?' Or a persistent banner: 'You are chatting with an AI.' This simple step preserves trust and sets accurate expectations. Customers who object can ask for a human agent. Those who are comfortable can proceed with full awareness.
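The disclose-first pattern described above is simple enough to sketch in a few lines of code. The following is a minimal, illustrative example, not the implementation of any real booking platform; the names ('Theo', 'Restaurant XYZ') echo the article's anecdote, and the `Conversation` class and its keyword trigger for requesting a human are assumptions made for the sake of the sketch.

```python
from dataclasses import dataclass, field


@dataclass
class Conversation:
    """A toy chat session that discloses its AI identity up front.

    Illustrative only -- no real reservation platform's API is implied.
    """
    business_name: str
    bot_name: str
    messages: list = field(default_factory=list)
    human_requested: bool = False

    def start(self) -> str:
        # Disclose the AI identity in the very first message,
        # before any rapport-building questions.
        greeting = (
            f"Hi, I'm {self.bot_name}, an AI assistant from "
            f"{self.business_name}. How can I help with your reservation?"
        )
        self.messages.append(("bot", greeting))
        return greeting

    def receive(self, text: str) -> str:
        self.messages.append(("customer", text))
        # Let customers opt out of the bot at any point.
        if "human" in text.lower() or "agent" in text.lower():
            self.human_requested = True
            reply = "Of course -- connecting you with a member of our staff."
        else:
            reply = "Got it. Anything else I can note for your visit?"
        self.messages.append(("bot", reply))
        return reply


convo = Conversation(business_name="Restaurant XYZ", bot_name="Theo")
print(convo.start())
print(convo.receive("Can I talk to a human instead?"))
```

The point of the sketch is where the disclosure lives: in the first message the customer ever sees, not buried in a terms-of-service page.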
Even when AI is disclosed, the experience can be positive. Many users appreciate the convenience of automated reminders, quick answers, and 24/7 availability. The problem arises when disclosure is withheld. A 2023 study by the MIT Sloan School of Management found that customers react negatively to undisclosed AI, rating the service as less trustworthy and less satisfying than when the AI is transparent. The study also noted that transparency does not reduce usage; if anything, it helps customers calibrate their expectations.
Companies should also design their AI systems to handle handoffs to human agents seamlessly. If the AI cannot resolve an issue, it should transfer the customer without forcing them to repeat information. This respects the customer’s time and reduces frustration. Additionally, businesses must train staff to oversee AI interactions, ensuring that the bot’s tone remains appropriate and that it does not make promises it cannot keep.
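A seamless handoff mostly comes down to carrying the session's collected context forward instead of dropping the customer into a fresh queue. The sketch below is a hedged illustration of that idea, assuming a simple session dictionary; the `escalate` helper and the ticket fields are invented for this example and do not reflect any particular vendor's system.

```python
def escalate(session: dict, reason: str) -> dict:
    """Package everything the bot already knows into a ticket for a
    human agent, so the customer never has to repeat themselves.

    Illustrative sketch -- the session and ticket shapes are assumptions.
    """
    return {
        "customer": session.get("customer"),
        "intent": session.get("intent"),
        "details": dict(session.get("details", {})),
        "transcript": list(session.get("transcript", [])),
        "escalation_reason": reason,
    }


session = {
    "customer": "j.doe@example.com",
    "intent": "change_reservation",
    "details": {"date": "May 11", "party_size": 3, "occasion": "Mother's Day"},
    "transcript": ["Customer: Can we move to the patio?"],
}

# The bot hits the edge of its scope, so it escalates with full context.
ticket = escalate(session, reason="request outside bot scope")

# The human agent sees the occasion and the transcript immediately --
# no "please repeat your request" step for the customer.
print(ticket["escalation_reason"])
```

The design choice worth noting is that escalation copies the session data rather than merely signaling a transfer, which is what prevents the frustrating restart the article warns about.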
The Future of AI Customer Service
As AI technology continues to advance, the line between human and machine will become even harder to detect. Voice cloning, realistic avatars, and emotional AI are on the horizon. These tools could offer extraordinary convenience — imagine a virtual concierge that remembers your preferences across multiple restaurants. But they also amplify ethical risks. The same technology that can mimic empathy can manipulate consumers. The only sustainable approach is a commitment to transparency.
Some businesses have already adopted best practices. For instance, several hotel chains use AI chatbots for booking inquiries but clearly label them as bots and provide an option to speak with a human. Airlines often disclose when a social media response is generated by AI. These examples show that it is possible to balance efficiency with honesty.
The author of the original article did not cancel their Mother’s Day reservation despite feeling deceived — a testament to the practical constraints of life. But the sting of the interaction lingered. Trust, once broken, is difficult to restore. In a competitive marketplace, businesses that prioritize transparency will win customer loyalty, while those that hide behind undisclosed AI personas risk long-term damage.
The lesson is clear: AI can be a powerful tool for customer service, but it must be used ethically. Disclosure is not a nice-to-have; it is a fundamental requirement for maintaining trust. As the technology becomes ubiquitous, regulators and consumers alike will demand that the machine always announces itself. Those who ignore this call may find that the cost savings come at the expense of their most valuable asset: customer relationships.
Source: PCWorld News