Meta is putting its latest AI chatbot on the web to talk to the masses

Meta’s AI research labs have created a new state-of-the-art chatbot and are letting members of the public talk to the system to gather feedback on its capabilities.

The bot is called BlenderBot 3 and can be accessed on the web. (Though, right now, it seems only residents in the US can do so.) BlenderBot 3 is able to engage in general chitchat, says Meta, but can also answer the kind of questions you might ask a digital assistant, “from talking about healthy food recipes to finding kid-friendly amenities in town.”

The bot is a prototype built on Meta’s previous work with large language models, or LLMs – the powerful but flawed text-generation software of which OpenAI’s GPT-3 is the most widely known example. Like all LLMs, BlenderBot is initially trained on a huge dataset of text, which it then mines for statistical patterns in order to generate language. Such systems have proven to be extremely flexible and have been put to a range of uses, from generating code for programmers to helping authors write their next bestseller. However, these models also have serious flaws: they reproduce biases in their training data and often invent answers to users’ questions (a big problem if they are going to be useful as digital assistants).
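To make the idea of “mining statistical patterns” concrete, here is a deliberately tiny sketch of the principle: count which words tend to follow which in a training text, then sample from those counts to generate new text. This toy bigram model is nothing like BlenderBot’s actual neural architecture; the corpus and function names are invented for illustration.

```python
import random
from collections import defaultdict

# Toy illustration of "mining statistical patterns from text": a bigram
# model counts which word follows which, then samples from those counts.
# Real LLMs like BlenderBot 3 are vastly more complex (neural networks
# with billions of parameters), but the core idea of learning patterns
# from a text corpus is the same.

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": count word -> next-word occurrences.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# "Generation": repeatedly sample a plausible next word.
def generate(start="the", length=8):
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the cat sat on the rug"
```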

This latter issue is something Meta specifically wants to test with BlenderBot. A big feature of the chatbot is that it is capable of searching the internet in order to talk about specific topics. Even more importantly, users can click on its responses to see where it got its information from. BlenderBot 3, in other words, can cite its sources.
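Meta hasn’t detailed BlenderBot 3’s internals here, but conceptually an answer-with-citations pipeline can be sketched as below: retrieve documents for the query, generate a reply grounded in their contents, and return the source URLs alongside the answer so the interface can make them clickable. The search_web and summarize helpers are hypothetical stand-ins, not real BlenderBot APIs.

```python
from dataclasses import dataclass

# Hypothetical sketch of a "cite your sources" chatbot reply.
# search_web() and summarize() are stand-ins, not Meta's code.

@dataclass
class CitedReply:
    text: str           # the chatbot's answer
    sources: list[str]  # URLs the answer was drawn from, shown on click

def search_web(query: str) -> list[dict]:
    """Stand-in for the internet search step (assumed, not Meta's API)."""
    return [{"url": "https://example.com/recipes", "snippet": "Healthy recipes ..."}]

def summarize(snippets: list[str], query: str) -> str:
    """Stand-in for the language model conditioning on retrieved text."""
    return f"Here's what I found about '{query}': " + " ".join(snippets)

def answer_with_citations(query: str) -> CitedReply:
    results = search_web(query)
    reply = summarize([r["snippet"] for r in results], query)
    return CitedReply(text=reply, sources=[r["url"] for r in results])

print(answer_with_citations("healthy food recipes"))
```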

By releasing the chatbot to the general public, Meta wants to collect feedback on the various problems facing large language models. Users who chat with BlenderBot will be able to flag any suspect responses from the system, and Meta says it has “worked hard to reduce the bots’ use of obscene language, slurs, and culturally insensitive comments.” Users must opt in to have their data collected, and if they do, their conversations and feedback will be stored and later published by Meta to be used by the general AI research community.

“We are committed to publicly releasing all of the data collected in the demo in the hope that we can improve conversational AI,” Kurt Shuster, a research engineer at Meta who helped create BlenderBot 3, told The Verge.

An example conversation with BlenderBot 3 on the web. Users can respond and give feedback on specific answers. Image: Meta

Releasing prototype AI chatbots to the public has, historically, been a risky move for tech companies. In 2016, Microsoft released a chatbot named Tay on Twitter that learned from its interactions with the public. Somewhat predictably, Twitter users soon coached Tay into repeating a range of racist, antisemitic, and misogynistic statements. In response, Microsoft pulled the bot offline less than 24 hours later.

Meta says the world of AI has changed a lot since Tay’s malfunction and that BlenderBot has all sorts of safety rails that should keep Meta from repeating Microsoft’s mistakes.

Crucially, says Mary Williamson, a research engineering manager at Facebook AI Research (FAIR), while Tay was designed to learn in real time from user interactions, BlenderBot is a static model. That means it is capable of remembering what users say within a conversation (and will even retain this information via browser cookies if a user exits the program and comes back later), but this data will only be used to improve the system further down the line.
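A rough sketch of that distinction, assuming nothing about Meta’s actual implementation: the model’s weights stay frozen during chat, each session keeps its own short-term memory, and flagged feedback is only logged for later offline use.

```python
# Rough sketch of a static model with per-conversation memory (an assumed
# design for illustration, not Meta's implementation). The model's weights
# never change during chat; only the session history grows, and flagged
# feedback is logged for later offline research.

class EchoModel:
    """Dummy stand-in for a frozen language model."""
    def generate(self, history):
        last_user = next(msg for role, msg in reversed(history) if role == "user")
        return f"You mentioned: {last_user}"

class StaticChatSession:
    def __init__(self, model):
        self.model = model        # frozen: never updated by conversations
        self.history = []         # remembered only within this conversation
        self.feedback_log = []    # stored for researchers, applied offline later

    def reply(self, user_message: str) -> str:
        self.history.append(("user", user_message))
        response = self.model.generate(self.history)  # read-only use of the model
        self.history.append(("bot", response))
        return response

    def flag(self, response: str, reason: str) -> None:
        # Unlike Tay, flagged responses don't change the model in real time.
        self.feedback_log.append({"response": response, "reason": reason})

session = StaticChatSession(EchoModel())
print(session.reply("I'm looking for kid-friendly amenities in town"))
session.flag("example response", "off-topic")
```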

“It’s just my personal opinion, but that [Tay] episode is relatively unfortunate, because it created this chatbot winter where every institution was afraid to put out public chatbots for research,” Williamson tells The Verge.

Williamson says most chatbots in use today are narrow and task-oriented. Think of customer service bots, for example, which often just present users with a preprogrammed dialogue tree, narrowing down their query before handing them off to a human agent who can actually get the job done. The real prize is building a system that can conduct a conversation as free-ranging and natural as a human’s, and Meta says the only way to achieve this is to let bots have free-ranging and natural conversations.
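For contrast, the kind of narrow, task-oriented bot Williamson describes often amounts to little more than a hand-written dialogue tree that funnels users toward a handoff. The menu wording below is invented purely for illustration.

```python
# A minimal hand-written dialogue tree, the narrow, task-oriented style of
# bot contrasted with free-ranging chatbots like BlenderBot 3.
# The options and wording are invented for illustration.

DIALOG_TREE = {
    "start": {
        "prompt": "Do you need help with (1) billing or (2) shipping?",
        "options": {"1": "billing", "2": "shipping"},
    },
    "billing": {
        "prompt": "Is this about (1) a refund or (2) a wrong charge?",
        "options": {"1": "human_agent", "2": "human_agent"},
    },
    "shipping": {
        "prompt": "Is your package (1) late or (2) damaged?",
        "options": {"1": "human_agent", "2": "human_agent"},
    },
    "human_agent": {"prompt": "Connecting you to a human agent...", "options": {}},
}

def run_bot():
    node = "start"
    while DIALOG_TREE[node]["options"]:
        choice = input(DIALOG_TREE[node]["prompt"] + " ")
        node = DIALOG_TREE[node]["options"].get(choice, node)  # re-ask on bad input
    print(DIALOG_TREE[node]["prompt"])

if __name__ == "__main__":
    run_bot()
```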

“The lack of tolerance for bots to say inappropriate things is, in its broadest sense, unfortunate,” Williamson says. “And what we’re trying to do is release it very responsibly and advance the research.”

In addition to putting BlenderBot 3 on the web, Meta is also publishing the underlying code, training dataset, and smaller model variants. Researchers can request access to the largest model, which has 175 billion parameters, through a form here.
