Meta Platforms is releasing its latest large language model, Llama 3, along with an image generator, in an effort to catch up to generative AI market leader OpenAI. The new models will be integrated into the company's virtual assistant, Meta AI, which Meta claims is the most sophisticated of its kind compared with offerings from competitors such as Google and Mistral AI. The enhancements will appear across Meta's various apps as well as on a new standalone website, putting the assistant in more direct competition with OpenAI's ChatGPT.

To challenge OpenAI's leading position in generative AI, Meta has been working to push these products out to its billions of users, an effort that has required an overhaul of its computing infrastructure and a consolidation of research and product teams within the company. Meta has taken the approach of openly releasing its Llama models for developers to use in building AI apps, a strategy that raises safety concerns about potential misuse by bad actors.

Meta equipped Llama 3 with new computer coding capabilities and, during training, fed it images in addition to text. The model currently outputs only text, but future versions are expected to add more advanced reasoning and support multimodality, generating both text and images. The goal is to make users' lives easier by assisting with tasks such as interacting with businesses, writing, and trip planning. Because images were included in Llama 3's training, a coming update to Meta AI on the company's smart glasses will enable the assistant to identify objects.

Meta also announced a partnership with Google to include real-time search results in the assistant's responses, complementing an existing partnership with Microsoft's Bing search engine. The update expands the Meta AI assistant beyond the US to markets including Australia, Canada, Singapore, Nigeria, and Pakistan; plans for a European rollout remain in progress because of the region's stricter privacy rules. Mark Zuckerberg emphasized Meta AI's intelligence relative to other AI assistants, especially given that it is freely accessible to users.

The release of Llama 3 includes versions with 8 billion and 70 billion parameters, which showed promising performance metrics compared with other free models; a larger version with 400 billion parameters is still being trained. Meta addressed issues with the previous Llama 2 model by using higher-quality data to improve context recognition and avoid misunderstandings. The company has not disclosed the specific datasets used but says it fed significantly more data into Llama 3 than into the previous version, aiming to improve nuance recognition and reduce errors in responses.
