Tech News. Meta has now also geared up in the Artificial Intelligence race. The company, led by Mark Zuckerberg, has launched its Llama 4 series of generative AI models. It includes two major models, 'Llama 4 Maverick' and 'Llama 4 Scout', which can understand not only text but also images and video. These models aim to compete with giants like OpenAI and Google.
Meta's new models can now be used in WhatsApp, Instagram, Messenger, and other apps. Notably, Meta has built them to be natively multimodal, so they can process images and video alongside text more effectively.
Meta has so far unveiled three models in the Llama 4 series:
- Llama 4 Maverick
- Llama 4 Scout
- Llama 4 Behemoth
Meta claims that the Llama 4 Behemoth is "one of the smartest LLMs in the world" and will act as a "teacher" for future models.
These models are built with a mixture-of-experts architecture, a machine learning technique whose use here was reportedly inspired by DeepSeek (a Chinese AI startup). In this approach, different parts of the model are trained for specific tasks, improving both performance and efficiency. However, these Meta models are not yet reasoning models like OpenAI's o3-mini or DeepSeek R1, which can work through complex questions with human-like step-by-step thinking.
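The mixture-of-experts idea can be sketched roughly as follows. This is a minimal illustration only, not Meta's actual implementation: a gating network scores a set of small "expert" networks for each input, and only the top-scoring experts are run, which is where the efficiency gain comes from.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, NUM_EXPERTS, TOP_K = 8, 4, 2

# Each "expert" is a tiny linear layer; the gate scores experts per input.
experts = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]
gate = rng.normal(size=(DIM, NUM_EXPERTS))

def moe_forward(x):
    """Route input x to the top-k experts and mix their outputs."""
    scores = x @ gate                         # one score per expert
    top = np.argsort(scores)[-TOP_K:]         # indices of the k best experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                  # softmax over the chosen experts
    # Only the selected experts compute; the rest stay idle.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.normal(size=DIM)
y = moe_forward(x)
print(y.shape)  # same dimensionality as the input
```

Because only TOP_K of the NUM_EXPERTS experts run per input, a model can hold far more total parameters than it actually uses for any single query.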
Meta has made the Llama 4 models available across its platforms – WhatsApp, Instagram, Messenger, and the Meta AI website. These features have currently been rolled out in more than 40 countries. However, the multimodal features (such as image generation) are available only to English-language users in the US for now, which means not everyone will be able to try creating "Ghibli-style" images just yet.
"We pre-trained these models on massive amounts of unlabeled text, image, and video data, making them naturally multimodal," Meta said.
Copyright © 2025 Top Indian News