Meta Launches New Llama 4 AI Models ‘Scout’ and ‘Maverick’ to Power Innovation

Meta is stepping on the gas in the AI race with the launch of its newest open-source large language models — Llama 4 Scout and Llama 4 Maverick. Announced over the weekend, these are not just iterative updates, but a bold new leap in the company's AI roadmap. Described by Meta as its “most advanced models yet,” the Llama 4 duo aims to set a benchmark for multimodal AI systems that can understand and generate text, images, audio, and video seamlessly.

Focus on multimodal capabilities and open access

Unlike standard LLMs, which concentrate mostly on text, the Llama 4 models are designed to handle a variety of input and output formats, a crucial prerequisite for AI applications in enterprise, education, and creative sectors. Meta claims that Scout and Maverick are best-in-class for multimodality, which, if accurate, could position them as strong open-source rivals to commercial giants such as GPT-4 (OpenAI) and Gemini (Google DeepMind).
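
For developers, open access typically means the released weights can be pulled into standard tooling. The snippet below is a minimal sketch of what that might look like with the Hugging Face transformers library for text-only use; the model identifier, access terms, and exact loading classes (multimodal use would go through the corresponding image-text classes) are assumptions for illustration, not confirmed details of this release.

```python
# Minimal sketch: loading an openly released Llama checkpoint with Hugging Face
# transformers. The model ID below is an assumed placeholder; the actual
# repository name and access requirements depend on Meta's release.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # hypothetical ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

messages = [
    {"role": "user",
     "content": "Summarize the key risks of deploying multimodal models in production."}
]

# Format the conversation with the model's chat template and generate a reply.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```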

These models also integrate enhanced safety protocols and dataset transparency, aiming to address ongoing concerns about AI bias, ethical risks, and real-world deployment reliability from day one. Additionally, Meta emphasizes scalability across cloud and on-prem environments, ensuring businesses can flexibly adopt and customize these tools according to specific operational needs and compliance requirements.

In a market where proprietary models dominate, Meta’s decision to open-source both Scout and Maverick is a strategic one, intended not only to promote research and collaboration but also to encourage developers, startups, and academic institutions to build on Meta’s infrastructure and tooling.

Meta previews Llama 4 Behemoth for research and model training

While Scout and Maverick are now publicly available, Meta also previewed another model under development: Llama 4 Behemoth. Described internally as a “teacher model,” Behemoth is expected to support reinforcement learning, fine-tuning, and benchmarking for future iterations. This aligns with Meta’s broader goal of shaping LLM development standards and leading research in the space.
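
Meta has not published details of how Behemoth would be used, but the “teacher model” framing usually refers to knowledge distillation, in which a large model’s soft predictions supervise a smaller student alongside the ground-truth labels. The sketch below illustrates that general pattern in PyTorch; the loss weighting and temperature are illustrative assumptions, not details of Meta’s training pipeline.

```python
# Generic knowledge-distillation loss: a large "teacher" model's soft predictions
# guide a smaller "student". This is a standard pattern, not Meta's actual setup
# for Llama 4 Behemoth.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft-label KL term against the teacher with hard-label cross-entropy."""
    # Soften both distributions with the temperature, then match them via KL divergence.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * (temperature ** 2)

    # Standard next-token cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                         labels.view(-1))

    return alpha * kd + (1 - alpha) * ce

# Toy usage with random tensors standing in for real model outputs.
vocab, batch, seq = 32, 4, 16
teacher_logits = torch.randn(batch, seq, vocab)
student_logits = torch.randn(batch, seq, vocab, requires_grad=True)
labels = torch.randint(0, vocab, (batch, seq))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```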

Meta addressed key shortcomings before release

The release of Llama 4 wasn't without its difficulties. According to The Information, Meta initially delayed the launch over performance issues: early versions of the models reportedly fell short on mathematics and reasoning and lagged behind OpenAI's offerings in voice-based conversational fluency.

This gap is particularly noticeable as the AI sector shifts toward more interactive, real-time use cases involving multimodal inputs and outputs, where fluency, context retention, and natural voice interaction have become essential.

Significant investment backing Meta’s AI strategy

Meta's plans to invest up to $65 billion in AI infrastructure in 2025 underscore how central LLMs and AI platforms are to its long-term strategy. These investments are expected to drive projects ranging from enterprise automation on platforms like WhatsApp, Instagram, and Messenger to generative AI tools, AR/VR systems, and metaverse applications.

Whether open-source models such as Scout and Maverick can match or surpass their proprietary counterparts remains to be seen. However, Meta's approach of leading through transparency and accessibility is likely to resonate with developers and researchers looking to advance AI outside of closed ecosystems.
