Just months after Google DeepMind unveiled Gemini — its most capable AI model ever — the London-based lab has released its compact offspring: Gemma.

Named after the Latin word for “precious stone,” Gemma is a new family of open models for developers and researchers, designed by Google for building apps and software cost-efficiently.

“Demonstrating strong performance across benchmarks for language understanding and reasoning, Gemma is available worldwide starting today,” Sundar Pichai, the company’s CEO, said on Twitter.

Gemma comes in two sizes: 2 billion and 7 billion parameters. Each size is released in pre-trained and instruction-tuned variants.
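
For developers who want to experiment, the weights are distributed through common channels such as Kaggle and Hugging Face. The following is a minimal sketch, not taken from the article, of how one might load the instruction-tuned 2B variant with the Hugging Face Transformers library; the model ID and distribution details are assumptions.

```python
# Minimal sketch (assumptions: the checkpoint is published on Hugging Face
# under an ID like "google/gemma-2b-it", the Transformers library is
# installed, and the model's terms of use have been accepted).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"  # instruction-tuned 2B variant (assumed ID)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode a prompt and generate a short completion.
inputs = tokenizer("Explain what an open model is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```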

The lightweight models are descendants of Gemini, inheriting technical and infrastructure components from their larger parent. As a result, Google said, they offer “best-in-class performance.”

As evidence, the tech titan revealed eye-catching comparisons with Llama 2, a family of large language models (LLMs) that Meta released in July 2023.