March 17, 2026
Rakuten Group, Inc.

Rakuten AI 3.0 Now Available, Japan’s Largest High-Performance AI Model Developed as Part of the GENIAC Project

- Latest LLM released to accelerate Japan’s AI development, with excellent scores across multiple Japanese benchmarks

Tokyo, March 17, 2026 – Rakuten Group, Inc. released its latest Japanese large language model (LLM), Rakuten AI 3.0, developed as part of the Generative AI Accelerator Challenge (GENIAC) project promoted by the Ministry of Economy, Trade and Industry (METI) and the New Energy and Industrial Technology Development Organization (NEDO). Unveiled in December 2025 and now fine-tuned, Rakuten AI 3.0 is Japan’s largest high-performance AI model*1, designed to empower companies and professionals developing AI applications.

Available free under the Apache 2.0 license*2 from the official Rakuten Group Hugging Face repository*3, Rakuten AI 3.0 is optimized for the Japanese language and excels at tasks including writing, code generation, document analysis and extraction. Compared to Rakuten’s earlier models, it achieves significantly higher accuracy and more robust performance.
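For developers who want to try the model, the following is a minimal sketch of loading it with the Hugging Face transformers library (with accelerate installed for multi-GPU loading). The model ID and prompt below are illustrative assumptions, not confirmed names; check the official Rakuten Hugging Face repository*3 for the exact model name and for hardware requirements, which are substantial for a model of this size.

# Minimal sketch: loading Rakuten AI 3.0 via Hugging Face transformers.
# The model ID below is an illustrative assumption; confirm the exact
# repository name at https://huggingface.co/Rakuten before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Rakuten/RakutenAI-3.0"  # hypothetical ID for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # shard across available GPUs (requires accelerate)
)

# Example prompt: "Briefly explain Japan's four seasons." (in Japanese)
prompt = "日本の四季について簡潔に説明してください。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))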

In July 2025, Rakuten was selected for the third term of the GENIAC project to develop Japanese language-optimized AI models. Part of the training cost for Rakuten AI 3.0 was provided by the GENIAC project, which offers support for computing resources necessary for Japan’s generative AI development.

Ting Cai, Chief AI & Data Officer of Rakuten Group, commented, “Rakuten is committed to delivering high-quality, cost-efficient models that empower businesses and users. Rakuten AI 3.0, our largest and most competitive model, is an outstanding combination of data, engineering and innovative architecture at scale. By sharing open models, we aim to accelerate AI development in Japan. We are excited for the opportunity to work with the Ministry of Economy, Trade and Industry to foster a collaborative AI development community that drives progress for all.”

Best-in-class Japanese performance

[Chart: Rakuten AI 3.0 (LLM) compared to leading models focusing on the Japanese language]

*MMLU-ProX and MATH-100 scores for GPT-OSS-Swallow-120B-RL-v0.1 and ABEJA-QwQ32b-Reasoning-Japanese-v1.0 were updated at 10:00 a.m. on March 18, 2026.

Rakuten evaluated the model across multiple Japanese benchmarks, assessing capabilities in Japan-specific cultural knowledge, history, graduate-level reasoning, competitive mathematics and instruction following*4,5,6,7. Scores of the new model checkpoint were compared with those of leading models.

Rakuten is continuously pushing the boundaries of innovation to develop best-in-class LLMs for R&D and deliver best-in-class AI services to its customers. By making the models open to all, Rakuten aims to contribute to the open-source community and accelerate the development of local AI applications and Japanese language LLMs.

About Rakuten AI 3.0
Rakuten AI 3.0 is an approximately 700-billion-parameter Mixture of Experts (MoE)*8 model optimized for Japanese. Developed by leveraging the best from the open-source community and building on Rakuten’s high-quality original bilingual data, engineering and research, it offers a superior grasp of Japanese language and culture.

*1 Based on publicly disclosed information as of March 17, 2026, according to Rakuten’s research. Rakuten’s previously developed models include Rakuten AI 7B, with approximately 7 billion parameters, and Rakuten AI 2.0, an 8x7B Mixture of Experts model with approximately 47 billion parameters in total.
*2 About the Apache 2.0 License: https://www.apache.org/licenses/LICENSE-2.0
*3 Rakuten Group Official Hugging Face repository: https://huggingface.co/Rakuten
*4 JamC-QA: https://www.anlp.jp/proceedings/annual_meeting/2025/pdf_dir/Q2-18.pdf (*Japanese page)
*5 MMLU-ProX: https://arxiv.org/abs/2503.10497
*6 MCLM MATH-100: https://arxiv.org/html/2502.17407v2
*7 M-IFEval: https://arxiv.org/abs/2502.04688
*8 Mixture of Experts is an AI model architecture in which the model is divided into multiple sub-models, known as experts. During inference and training, only a subset of the experts is activated and used to process each input (see the sketch below).
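As a rough illustration of the routing idea described in note 8, the following toy sketch in Python/PyTorch shows a generic top-k MoE layer. It is not Rakuten AI 3.0’s actual architecture: a small router scores the experts for each token, and only the two highest-scoring experts are run.

import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    # Generic illustration of Mixture of Experts routing; not Rakuten's code.
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # scores every expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, dim)
        gates = self.router(x).softmax(dim=-1)           # (num_tokens, num_experts)
        top_w, top_idx = gates.topk(self.top_k, dim=-1)  # keep only the best experts
        top_w = top_w / top_w.sum(dim=-1, keepdim=True)  # renormalize their weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e             # tokens routed to expert e
                if mask.any():
                    out[mask] += top_w[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = ToyMoELayer()
print(layer(torch.randn(5, 64)).shape)  # torch.Size([5, 64])

Because each token activates only top_k of the num_experts sub-models, an MoE model can hold a very large total parameter count, such as the roughly 700 billion cited above, while using only a fraction of it per input.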


*Please note that the information contained in press releases is current as of the date of release.
