With the latest version of Meta AI, now powered by Llama 3, its new open-source AI model, Meta is stepping up its game in the AI race. The virtual assistant is accessible across all of Meta's platforms.
Here is everything you need to know about Meta's newest large language model (LLM) and AI assistant.
What is Llama 3?
Llama 3 is the most recent model in Meta's Llama series of open-source AI models. It comes in two sizes: one with 8 billion parameters and the other with 70 billion.
Parameters are essentially the "knowledge" a model gains during training, and more of them usually translates into greater performance, since they boost the model's contextual awareness.
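For a sense of scale, here is a minimal sketch in PyTorch (the framework Llama models are built with) showing that parameters are simply counted as trainable weights; the layer size is purely illustrative and unrelated to Llama 3's actual architecture.

```python
import torch.nn as nn

# A single 4,096-by-4,096 linear layer already holds ~16.8M trainable weights:
# 4096 * 4096 weight entries plus 4096 biases. Large models stack many such
# layers (plus attention blocks) to reach totals like 8B or 70B parameters.
layer = nn.Linear(4096, 4096)
print(sum(p.numel() for p in layer.parameters()))  # 16781312
```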
According to Meta, Llama 3 raises the bar for large language models at these parameter scales. Improved pretraining and post-training procedures have led to lower false-rejection rates, better alignment, and more diverse responses from the model. Notably, Llama 3 has improved reasoning, code generation, and instruction-following capabilities.
Technical specs and training
Llama 3 employs a tokenizer with a vocabulary of 128,000 tokens that, according to Meta, increases model performance and improves language-encoding efficiency. For both the 8B and 70B parameter models, Meta introduced grouped query attention (GQA) to improve inference speed. The models are trained on sequences of 8,192 tokens, with a mask ensuring that self-attention does not cross document boundaries.
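As a rough illustration of the idea behind GQA, the sketch below implements it in plain PyTorch: several query heads share a single key/value head, which shrinks the key/value cache that dominates inference memory. The function name and head counts are hypothetical and far smaller than Llama 3's real configuration.

```python
import torch

# Illustrative sketch of grouped-query attention (GQA). The memory win comes
# from storing only n_kv_heads key/value heads in the inference cache; they
# are expanded to match the query heads only at compute time.
def grouped_query_attention(q, k, v, n_groups):
    # q: (batch, n_q_heads, seq, head_dim)
    # k, v: (batch, n_kv_heads, seq, head_dim), n_q_heads = n_kv_heads * n_groups
    k = k.repeat_interleave(n_groups, dim=1)  # each KV head serves a group of query heads
    v = v.repeat_interleave(n_groups, dim=1)
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ v

# Toy shapes, not Llama 3's actual configuration.
batch, seq, head_dim = 1, 8, 64
n_q_heads, n_kv_heads = 8, 2  # 4 query heads share each KV head
q = torch.randn(batch, n_q_heads, seq, head_dim)
k = torch.randn(batch, n_kv_heads, seq, head_dim)
v = torch.randn(batch, n_kv_heads, seq, head_dim)
out = grouped_query_attention(q, k, v, n_q_heads // n_kv_heads)
print(out.shape)  # torch.Size([1, 8, 8, 64])
```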
The training set for Llama 3 contains more than 15 trillion tokens, seven times more than the dataset used for Llama 2, all drawn from publicly available sources. This enlarged collection includes four times as much code and more than 5% high-quality non-English data spanning over 30 languages.
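To make "tokens" concrete, this sketch encodes a sentence with the Llama 3 tokenizer via Hugging Face's transformers library. The model id is the published checkpoint name, but the repository is gated, so this assumes you have accepted Meta's license and authenticated first.

```python
from transformers import AutoTokenizer

# Assumes access to the gated "meta-llama/Meta-Llama-3-8B" repo on Hugging Face
# (accept Meta's license, then run `huggingface-cli login`).
tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

ids = tok.encode("Meta AI is now powered by Llama 3.")
print(ids)              # a short sentence becomes roughly a dozen integer token ids
print(tok.decode(ids))  # decoding the ids recovers the original text
print(len(tok))         # total vocabulary size, on the order of 128,000 entries
```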
What’s next for Llama 3?
The 8 billion and 70 billion parameter models are only the beginning for Llama 3. Meta intends to release more models with larger context windows, multilingual support, and multimodal capabilities, enabling the model to process many formats at once, including text, code, audio, images, and video.
Furthermore, Meta is training an even larger, dense model with more than 400 billion parameters.
Meta AI: Powered by Llama 3
At the moment, Meta AI is available through the search bar in all of Meta's apps, including Facebook, Instagram, WhatsApp, and Messenger, in over a dozen countries: the US, Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia, and Zimbabwe.
It is notably unavailable in India, even though Meta previously pilot-tested the assistant with a small number of Indian users across its platforms.
As a general-purpose assistant, Meta AI can answer queries using up-to-date information from Bing and Google. It can also compose many forms of creative content, translate languages, generate text and images, and summarize information.
The chatbot will also be available to users online via a brand-new meta.ai website.
What sets Meta AI apart?
Meta AI’s Imagine capability, which enables real-time image generation, is one of its most notable features.
As they type, users can watch images emerge and change dynamically with each keystroke.
In the US, this feature is currently available in beta on WhatsApp and the Meta AI online experience.
Users can also instruct the chatbot to animate an image, refine it in a different style, or turn it into a GIF to share with friends.