With Gemma 3n, Google has introduced a lightweight yet powerful AI model designed to run directly on phones. Announced at Google I/O 2025, Gemma 3n offers offline AI capabilities without relying on cloud servers, supporting text, audio, images, and video.
Gemma 3n can run efficiently on devices with less than 2GB of RAM, making it one of the most accessible AI models for developers and users. Because processing happens entirely on-device, it is cheaper, faster, and more private.
Alongside Gemma 3n, Google also introduced two specialized models:
- MedGemma: Designed for analyzing health-related text and images, MedGemma is part of Google’s Health AI Developer Foundations and aims to power next-gen medical apps.
- SignGemma: A breakthrough model that translates sign language to spoken-language text, with a focus on American Sign Language (ASL). It aims to empower developers to build inclusive apps for the deaf and hard-of-hearing community.
Despite concerns over Gemma’s non-standard licensing, the models have seen tens of millions of downloads, signaling strong interest from developers.
Google’s Gemma 3n, MedGemma, and SignGemma set a new standard for AI models in 2025, pushing the boundaries of mobile AI, healthcare innovation, and accessibility.
Stay tuned with Jeffkom Story for more updates on the latest in AI, tech, and innovation.