Groq, the AI inference platform not to be confused with xAI's Grok, has made Meta's Llama 4 Scout and Maverick models available to developers and enterprises via GroqCloud™.

“We built Groq to drive the cost of compute to zero,” said Jonathan Ross, CEO and founder of Groq. “Our chips are designed for inference, which means developers can run models like Llama 4 faster, cheaper, and without compromise.”
On Groq, developers can run Llama 4's multimodal workloads efficiently and cost-effectively. Llama 4 Scout is the more affordable option at $0.13 per million tokens (blended rate), while Llama 4 Maverick offers enhanced capabilities at $0.53 per million tokens (blended rate). Detailed pricing is available on the Groq website.
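A blended rate makes cost estimation a one-line calculation. The sketch below, using the per-million-token rates quoted above, is illustrative only; the model keys and function are hypothetical, not part of any Groq SDK.

```python
# Estimate inference cost from the blended rates quoted in the article
# (USD per million tokens). Model keys here are illustrative labels.
LLAMA_4_RATES = {
    "llama-4-scout": 0.13,     # $/1M tokens, blended
    "llama-4-maverick": 0.53,  # $/1M tokens, blended
}

def estimate_cost(model: str, total_tokens: int) -> float:
    """Return the estimated USD cost for a given token volume."""
    rate = LLAMA_4_RATES[model]
    return total_tokens / 1_000_000 * rate

# e.g. 10 million tokens through Scout:
print(f"${estimate_cost('llama-4-scout', 10_000_000):.2f}")  # → $1.30
```

At these rates, a 10-million-token workload costs roughly $1.30 on Scout versus $5.30 on Maverick, which is the kind of comparison the blended rate is meant to make easy.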
Meta’s latest open-source model family, Llama 4, features state-of-the-art native multimodality and a Mixture of Experts (MoE) architecture. Llama 4 Scout is a strong general-purpose model, while Llama 4 Maverick is a more capable model for multilingual and multimodal work.
The models are accessible through GroqChat, the Groq Console, and the Groq API (model IDs are available in-console). To start building, visit console.groq.com; free access is available.
Discover more from BITVoxy Digest