
GitHub Leak Hints OpenAI Will Release 120B- and 20B-Parameter Open-Source GPT Models in Hours

DATE: 8/2/2025 · STATUS: LIVE

OpenAI’s next open-source AI model could arrive any moment, with leaked traces pointing to public GPT-OSS releases primed for…

A recent leak suggests OpenAI is about to release a powerful open-source AI model, possibly within hours. Developers across the community followed a trail of digital breadcrumbs that appeared on public code hosting platforms and sparked intense discussion among engineers and researchers.

Screenshots showed repositories labeled yofo-deepcurrent/gpt-oss-120b and yofo-wildflower/gpt-oss-20b, among a handful of other entries. The listings vanished shortly after they surfaced, but observers noted that the associated user profiles belonged to members of OpenAI’s team.

The tag gpt-oss appears to stand for 'GPT Open Source Software', a nod to the organization’s early commitment to sharing key models. For a group that has moved many of its flagship systems behind gated APIs, the move would mark a step back toward greater transparency.

Evidence points to multiple editions under development rather than a single flagship release. Along with the suspected 120-billion-parameter version, smaller editions may carry their own unique codenames and specialized settings, suggesting a coordinated rollout slate.

A leaked configuration file sheds light on the internal design of the largest model in this lineup. According to the document, the system leverages a Mixture of Experts (MoE) framework that distributes training and inference across expert sub-models.

Under that scheme, the pool includes 128 specialist experts. As text is processed, a lightweight gating mechanism selects four of those experts for each token. This arrangement lets the model draw on the capacity of a giant network while running at speeds closer to those of much smaller footprints.
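
For readers unfamiliar with the approach, the sketch below shows what this kind of top-k routing looks like in PyTorch. The 128-expert and 4-per-token figures come from the leaked configuration; the hidden sizes and layer shapes are placeholders, not details from the leak.

```python
# A minimal, illustrative top-k Mixture-of-Experts layer, assuming the leaked
# figures of 128 experts with 4 active per token. Hidden sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=1024, d_ff=4096, n_experts=128, top_k=4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                          # x: (num_tokens, d_model)
        scores = self.router(x)                    # one score per expert per token
        weights, chosen = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)       # normalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):             # only the selected experts ever run
            for expert_id in chosen[:, slot].unique().tolist():
                mask = chosen[:, slot] == expert_id
                w = weights[mask, slot].unsqueeze(-1)
                out[mask] += w * self.experts[expert_id](x[mask])
        return out

# Example: route 16 token embeddings through the layer; output shape matches input.
moe = TopKMoE()
y = moe(torch.randn(16, 1024))
```

Because only four experts fire per token, the compute per token stays close to that of a dense model a fraction of the size, even though all 128 experts contribute to total capacity.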

By embracing open licensing, OpenAI would place this family in direct competition with offerings such as Mistral AI’s Mixtral and Meta’s Llama models. Both have become popular in academic and enterprise environments thanks to generous weight releases and strong community support.

Technical specifications attributed to the new open-source model extend beyond expert routing. Key features reportedly include:

  • A vocabulary spanning hundreds of thousands of tokens for broad language coverage
  • Sliding Window Attention to maintain coherence over extended text sequences (see the sketch after this list)
  • Optimized memory usage that keeps hardware requirements within the reach of research labs
  • Compatibility with dozens of human languages, covering major global dialects
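
To illustrate the sliding-window idea named above, the following sketch builds the kind of attention mask such a scheme uses: each position attends only to itself and a fixed number of preceding tokens. The window size shown is an arbitrary placeholder, not a figure from the leaked files.

```python
# Illustrative sliding-window attention mask; keeps attention cost roughly
# linear in sequence length by limiting how far back each token can look.
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    i = torch.arange(seq_len).unsqueeze(1)   # query positions (rows)
    j = torch.arange(seq_len).unsqueeze(0)   # key positions (columns)
    # allowed when the key is not in the future and lies inside the window
    return (j <= i) & (i - j < window)

# Example: an 8-token sequence with a 4-token window.
print(sliding_window_mask(seq_len=8, window=4).int())
```
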

Early tests hinted at training efficiency and inference speeds that could match or exceed similarly sized closed models. Adoption in research settings may accelerate benchmarking and fine-tuning efforts, as users can inspect and adapt weights without request limits.
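
If the weights do appear on a public hub, loading them would likely follow the familiar open-weight workflow. The snippet below is a speculative sketch using the Hugging Face transformers library; the repository id is a guess derived from the leaked names, not a confirmed listing.

```python
# Speculative loading sketch; "openai/gpt-oss-20b" is a hypothetical repo id
# based on the leaked names, and the weights were not public at time of writing.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "openai/gpt-oss-20b"   # hypothetical identifier
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

prompt = "Open-weight models let researchers"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```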

A robust gpt-oss lineup would arrive amid criticism of recent releases that prioritized exclusivity. By granting open access to high-parameter systems, the company could mend fences with developers and institutions that have clamored for more direct involvement.

Observers note that rival labs captured developer mindshare by opening their code and models for public experimentation. A timely introduction of a top-tier open-source release from OpenAI has the potential to redirect attention and set new standards in collaborative AI development.

No formal announcement is on record, leaving final details unconfirmed. Nevertheless, the presence of genuine code samples and config metadata adds weight to this report, making it one of the most convincing leaks in recent memory.

If OpenAI does proceed, the introduction of a 120B-parameter MoE model under an open license would represent a defining moment in AI history. That release may arrive at any moment, offering the community a chance to deploy and explore a system from one of the field’s most prominent pioneers.

Keep building