Meta Releases Llama 3.2—and Gives Its AI a Voice

Mark Zuckerberg announced today that Meta, the social-media-turned-metaverse-turned-artificial-intelligence conglomerate, will upgrade its AI assistants to give them a range of celebrity voices, including those of Dame Judi Dench and John Cena. The most important development for Meta's long-term ambitions, however, is its models' new ability to interpret users' images and other visual information.
Meta today also announced Llama 3.2, the first version of its free AI models to have visual capabilities, broadening their usefulness and their relevance to robotics, virtual reality, and so-called AI agents. Some versions of Llama 3.2 are also the first built to run on mobile devices. This could help developers create AI-powered apps that run on a smartphone and tap into its camera, or watch the screen in order to operate apps on a user's behalf.
With Meta AI widely accessible through Facebook, Instagram, WhatsApp, and Messenger, the upgraded assistant could give many people their first taste of a new generation of highly verbal and visually aware AI assistants. Meta says that more than 180 million people already use Meta AI, as the company's AI assistant is called, every week.
Meta has recently given its AI a more prominent place in its apps, for example making it part of the search bar in Instagram and Messenger. The new celebrity voice options available to users will include Awkwafina, Keegan-Michael Key, and Kristen Bell.
In the past, Meta has given celebrity personas to its text-based assistants, but these characters failed to gain much traction. In July the company launched a tool called AI Studio that allows users to create chatbots with any persona they choose. Meta says the new voices will be available to users in the US, Canada, Australia, and New Zealand over the next month. Meta AI's photo capabilities will roll out in the US, but the company didn't say when the features might appear in other markets.
The new version of Meta AI will also be able to provide feedback on and information about users' photos; if you're not sure what bird you've photographed, for example, it can tell you the species. And it will be able to help edit photos, for instance by adding a new background or other details on demand. Google released similar tools for its Pixel smartphones and Google Photos in April.
Powering the new Meta AI is an improved version of Llama, Meta's large language model. The free model announced today could have a wide impact, given that the Llama family has already been broadly adopted by developers.
Unlike OpenAI's models, Llama can be downloaded and run locally at no charge, though there are some restrictions on large-scale commercial use. Llama can also be fine-tuned, or adapted with additional training, to perform specific tasks.
Patrick Wendell, cofounder and VP of engineering at Databricks, a company that hosts AI models including Llama, says many companies like open models because they allow them to better protect their own data.