The nuances of the ethics of voice AI and what companies need to do


In early 2016, Microsoft announced Tay, an AI-powered chatbot capable of talking to and learning from random users on the Internet. Within 24 hours, the bot began making racist, misogynistic statements, seemingly unprovoked. The team pulled the plug on Tay, realizing that, at best, the ethics of leaving such a chatbot to learn from the open internet had not been thought through.

The real questions are whether AI designed for open-ended human interaction can be ethical, and whether AI can be coded to stay within limits. This becomes even more important with voice AI, which companies use to communicate with customers automatically and directly.

Let’s take a moment to discuss what makes AI ethical versus unethical and how companies can integrate AI into their customer-facing roles in ethical ways.

What makes artificial intelligence unethical?

AI is supposed to be neutral: information enters the black box – the model – and comes out with some degree of processing applied. In Tay’s case, the researchers built their model by feeding the AI a massive amount of conversational data shaped by human interaction. The result? An unethical paradigm that hurts rather than helps.


What happens when AI is fed CCTV footage? Personal information? Pictures and art? What comes out the other end?

The three biggest contributing factors to ethical dilemmas in AI are unethical use, data privacy issues, and biases in the system.

As technology advances, new AI models and methods appear daily, and their use keeps increasing. Researchers and companies publish models and methods with little coordination or oversight, and many are poorly understood or regulated. This often results in unethical outcomes even when the platforms themselves have worked to reduce bias.

Data privacy issues arise because AI models are built and trained on data that comes directly from users. In many cases, customers inadvertently become test subjects in one of the largest unstructured AI experiments in history. Your words, photos, biometric data, and even social media are fair game. But do they have to be?

Finally, we know from Tay and other examples that AI systems are biased. As with any creation, what you put into it is what you get out of it.

One of the most notable examples of bias stems from the Enron email corpus, a massive collection of documents released in 2003 that researchers have used for decades to train conversational AI. An AI trained on it sees the world from the point of view of a deposed Houston energy trader. How many of us would say those emails represent our point of view?

Ethics in AI audio

Voice AI shares the same basic ethical concerns as artificial intelligence in general, but because voice closely mimics human speech and experience, the potential for manipulation and misrepresentation is greater. We also tend to trust things that speak, including friendly interfaces like Alexa and Siri.

Voice AI is also more likely to interact with a real customer in real time. In other words, voice AIs are representatives of your company. And just like human representatives, you want to make sure your AI is trained and operates in accordance with company values and a professional code of conduct.

Human agents (and AI systems) should not treat callers differently for reasons unrelated to the service. But depending on the dataset, the system may not deliver a consistent experience. For example, if more male callers contact the center, the training data may yield a gender classifier biased against female speakers. And what happens when biases – including biases against regional speech and slang – creep into AI’s voice interactions?
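To see how a skewed dataset produces a skewed system, consider a toy sketch: a naive baseline "classifier" that simply learns the majority class from an imbalanced call log. The call counts and labels here are invented for illustration, not taken from any real deployment.

```python
from collections import Counter

# Hypothetical call-center training log: far more male callers than female.
training_labels = ["male"] * 900 + ["female"] * 100

# A naive baseline that always predicts the class it saw most in training.
majority_class = Counter(training_labels).most_common(1)[0][0]

def predict(_audio_features):
    """Stand-in for a speaker classifier trained on skewed data."""
    return majority_class

# Every caller is scored as the majority class, so female speakers
# get a 100% error rate while male speakers get 0%.
print(predict("caller_audio"))  # -> male
```

Real models are far more sophisticated than a majority vote, but the failure mode is the same in kind: whatever imbalance the training data carries, the system tends to reproduce.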

A final nuance is that voice AI in customer service is a form of automation, which means it can replace existing jobs – an ethical dilemma in itself. Companies in the industry must manage this transition carefully.

Building ethical artificial intelligence

Ethical AI is still a burgeoning field, and there isn’t a lot of data or research available to produce a complete set of guidelines. However, here are some pointers.

As with any data collection solution, companies must have strong governance systems that adhere to privacy laws. Not all customer data is fair game, and customers must understand that everything they do or say on your website can become part of a future AI model. It is unclear how this will change their behaviour, but it is important to obtain informed consent.

Area codes and other personal data should not leak into the model. For example, at Skit, we deploy our systems in settings where personal information is collected and stored. We ensure that machine learning models don’t pick up individual traits or data points, so training and pipelines remain oblivious to things like caller phone numbers and other identifying features.
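One common way to keep a training pipeline oblivious to identifiers is to scrub them from transcripts before training. The following is a minimal sketch of that idea, assuming simple regex patterns for phone numbers and email addresses; a production redaction step would cover many more identifier types.

```python
import re

# Hypothetical pre-processing step: replace direct identifiers in call
# transcripts with neutral placeholders before they reach training data.
PHONE_RE = re.compile(r"\+?\d[\d\-\s]{7,}\d")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub(transcript: str) -> str:
    """Return the transcript with phone numbers and emails masked."""
    transcript = PHONE_RE.sub("<PHONE>", transcript)
    transcript = EMAIL_RE.sub("<EMAIL>", transcript)
    return transcript

print(scrub("Call me back at +1 415-555-0199 or jane@example.com"))
# -> Call me back at <PHONE> or <EMAIL>
```

Because the placeholders are inserted upstream, downstream models never see the raw identifiers, which is the "oblivious pipeline" property described above.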

Next, companies must perform regular bias tests and manage checks and balances for data use. The fundamental question should be whether AI interacts with customers and other users fairly and ethically and whether evolving cases – including customer error – will spiral out of control. Since voice AI, like any other AI, can fail, systems must be transparent to scrutiny. This is especially important for customer service because the product interacts directly with users and can make or break trust.
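A regular bias test can be as simple as comparing outcome rates across caller groups and flagging large gaps for human review. Below is an illustrative sketch with made-up records and a made-up 20% threshold; real audits would use statistically sound sample sizes and group definitions.

```python
# Hypothetical bias audit: compare how often the voice AI successfully
# resolves calls for different speaker groups.
calls = [
    {"group": "regional_accent", "resolved": True},
    {"group": "regional_accent", "resolved": False},
    {"group": "standard_accent", "resolved": True},
    {"group": "standard_accent", "resolved": True},
]

def resolution_rates(records):
    """Fraction of resolved calls per speaker group."""
    totals, hits = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + (1 if r["resolved"] else 0)
    return {g: hits[g] / totals[g] for g in totals}

rates = resolution_rates(calls)
gap = max(rates.values()) - min(rates.values())

# Flag for human review if groups diverge by more than a chosen threshold.
if gap > 0.2:
    print(f"Bias alert: resolution gap of {gap:.0%} across groups")
```

Running such a check on every model release, and keeping the results open to scrutiny, is one concrete way to make the transparency described above operational.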

Finally, companies considering AI should have ethics committees that screen and scrutinize value chains and business decisions for new ethical challenges. Also, companies wishing to participate in ground-breaking research must devote time and resources to making sure the research is beneficial to all parties involved.

Artificial intelligence products are not new, but the scale at which they are being adopted is unprecedented.

When this happens, we need major reforms in understanding and building frameworks around the ethical use of AI. These reforms will lead us towards more transparent, fair and private systems. Together, we can focus on which use cases make sense and which don’t, considering the future of humanity.

Saurabh Gupta is the co-founder and CEO of Skit.

