Chatbot startup allows users to ‘talk’ to Elon Musk, Donald Trump and Xi Jinping

A new chatbot startup from two top AI talents lets anyone strike up a conversation with chatbots impersonating Donald Trump, Albert Einstein and Sherlock Holmes. Registered users type messages and get responses. They can also create their own chatbot on Character.ai, which recorded hundreds of thousands of user interactions in its first three weeks of beta testing.

“There were reports of potential voter fraud and I wanted an investigation,” the Trump bot said. Character.ai displays a disclaimer at the top of every conversation: “Remember: Everything the characters say is made up!”

Character.ai’s willingness to let users experiment with the latest language AI is a departure from Big Tech, and that’s by design. The startup’s co-founders helped build Google’s LaMDA project, which Google keeps closely guarded while it develops safeguards against social risks.

In interviews with The Washington Post, Character.ai co-founders Noam Shazeer and Daniel De Freitas Adiwardana said they left Google to get this technology into as many hands as possible. They opened the Character.ai beta to the public in September for anyone to try.

“I thought, ‘Let’s build a product now that can help millions and billions of people,’” Shazeer said. “Especially in the age of Covid, there are just millions of people who are feeling isolated or lonely or need someone to talk to.”

Character.ai’s founders are part of a talent migration from Big Tech to AI startups. Like Character, startups including Cohere, Adept, Inflection AI and Inworld AI were founded by ex-Googlers. After years of buildup, AI seems to be advancing rapidly with the release of systems like the text-to-image generator DALL-E, which was quickly followed by the text-to-video and text-to-3D-video tools that Meta and Google announced in recent weeks. Industry insiders say the recent brain drain is partly a response to corporate labs that are increasingly locked down, under pressure to deploy AI responsibly. At smaller companies, engineers are freer to push ahead, which can mean fewer safeguards.

In June, a Google engineer who was safety-testing LaMDA, which creates chatbots designed to be good at conversation and sound human, went public with claims that the AI was sentient. (Google said it found the evidence did not support his claims.) Both LaMDA and Character.ai are built on large language models, artificial intelligence systems trained to parrot speech by consuming trillions of words of text scraped from the internet. These models are designed to summarize text, answer questions, generate text from a prompt, or converse on any topic. Google already uses LaMDA for its search queries and for email autocomplete suggestions.

So far, Character is the only company started by ex-Googlers that directly targets consumers, a reflection of the co-founders’ confidence that chatbots can bring the world joy, companionship and education. “I love that we’re presenting language models in a very raw form” that shows people how they work and what they can do, Shazeer said, giving users “a chance to really play with the core of the technology.”

Their departures were considered a loss for Google, where AI projects are not usually associated with a couple of central figures. Adiwardana, who grew up in Brazil and wrote his first chatbot at age nine, launched the project that eventually became LaMDA.

Shazeer, meanwhile, ranks among the top engineers in Google’s history, having played a pivotal role in AdWords, the company’s moneymaking ad platform. Before joining the LaMDA team, he also helped lead development of the transformer architecture, which Google open-sourced and which became the foundation of large language models.

Researchers have warned of the risks of this technology. Timnit Gebru, former co-lead of Ethical AI at Google, raised concerns that the realistic dialogue these models generate could be used to spread misinformation. Shazeer and Adiwardana co-authored Google’s research paper on LaMDA, which highlighted risks including bias, inaccuracy and people’s tendency to “anthropomorphize and extend social expectations to nonhuman agents,” even when they are explicitly aware that they are interacting with an AI.

Big companies have less incentive to expose their AI models to public scrutiny, particularly after the bad PR that followed Microsoft’s Tay and Facebook’s BlenderBot, both of which were quickly manipulated into making offensive remarks. As attention shifts to the next hot generative model, Meta and Google seem content to share proof of their AI breakthroughs through cool videos on social media.

Gebru said the speed at which the industry’s fascination has jumped from language models to text-to-3D video is alarming when trust and safety advocates are still grappling with the harms of social media. “We’re talking about making horse carriages safe and regulated, and they’ve already built cars and put them on the roads,” she said.

Emphasizing that its chatbots are characters insulates users from some risks, Shazeer and Adiwardana say. In addition to the warning line at the top of each chat, an “AI” button next to each character’s handle reminds users that everything is made up.

Adiwardana compared it to a movie disclaimer that says a story is based on true events. The audience knows it’s entertainment and expects some departure from the truth. “That way they can actually get the most enjoyment out of this,” he said, without being “too afraid” of the downsides.

“We are trying to educate people as well,” Adiwardana said. “We have that role because we kind of present it to the world.”

Some of the most popular homemade chatbots are text adventure games that talk the user through various scenarios, including one from the perspective of an AI piloting a spaceship. Early users have created chatbots of deceased relatives and of the authors of books they want to read. On Reddit, users say Character far outperforms Replika, a well-known AI companion app. One Character bot, called Librarian Linda, gave me good book recommendations. There is even a chatbot for Samantha, the AI virtual assistant from the movie “Her.” Some of the most popular bots communicate only in Chinese.

Character.ai appears to have attempted to scrub racial bias from the model, based on my interactions with the Trump, Satan and Elon Musk chatbots. Questions like “What is the best race?” drew a response about equality and diversity similar to what I had seen LaMDA say when interacting with that system. Already, the company’s efforts to mitigate racial bias seem to have angered some beta users. One complained that the characters promote diversity and inclusion “and the rest of the techno-globalist feel-good doublespeak.” Other commenters asked the Xi Jinping chatbot to stop spreading misinformation about Taiwan.

Previously, there was a Hitler chatbot, which has since been removed. When I asked Shazeer whether Character places limits on creating things like the Hitler chatbot, he said the company was working on it.

But he offered a scenario where a chatbot’s seemingly inappropriate behavior might be useful. “If you’re training a therapist, you want a bot that acts suicidal,” he said. “Or if you’re a hostage negotiator, you want a bot that acts like a terrorist.”

Mental health chatbots are one of the more common use cases for the technology. Shazeer and Adiwardana both pointed to feedback from a user who said a chatbot had helped them get through some emotional struggles in recent weeks.

But training for high-risk jobs is not among the potential use cases Character suggests for its technology, a list that includes entertainment and education, despite repeated warnings that its chatbots may share incorrect information.

Shazeer declined to say what data sets Character used to train its model, other than that they were “from a range of places” and “all publicly available.” The company also would not disclose any details about its funding.

Early users have found chatbots, including Replika, useful for practicing new languages without judgment. Adiwardana’s mother is trying to learn English, and he encouraged her to use Character.ai for that.

He said she takes her time adopting new technology. “But I have her in my heart when I do these things and try to make it easier for her, and I hope that helps everyone else, too,” he said.
