| Specification | Value |
|---|---|
| Total Parameters | 117B |
| Context Length | 128K |
| Modality | Text |
| Architecture | Mixture of Experts (MoE) |
| License | Apache 2.0 |
| Release Date | 5 Aug 2025 |
| Knowledge Cutoff | Jun 2024 |
| Active Parameters | 5.1B |
| Number of Experts | 128 |
| Active Experts | 4 |
| Attention Structure | Grouped-Query Attention (GQA) |
| Hidden Dimension Size | 2880 |
| Number of Layers | 36 |
| Attention Heads | - |
| Key-Value Heads | - |
| Activation Function | SwiGLU |
| Normalization | RMS Normalization |
| Position Embedding | Rotary Position Embedding (RoPE) |
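The gap between total and active parameters follows directly from the MoE routing: each token passes through only 4 of the 128 experts in every layer, while the attention and embedding weights are always active. The sketch below is a back-of-the-envelope check of that arithmetic; the expert intermediate size, vocabulary size, and attention head layout are assumed values that do not appear in the table above, so the totals are approximate.

```python
# Rough parameter accounting for a 128-expert, top-4 MoE with the
# dimensions listed above. Values marked "assumed" are NOT in the spec
# table and are estimates for illustration only.

d_model   = 2_880        # hidden dimension (from the table)
n_layers  = 36           # transformer layers (from the table)
n_experts = 128          # experts per MoE layer (from the table)
top_k     = 4            # experts active per token (from the table)

d_ff   = 2_880           # assumed expert intermediate size
vocab  = 201_000         # assumed vocabulary size
d_attn = 64 * 64         # assumed: 64 query heads x 64-dim heads
d_kv   = 8 * 64          # assumed: 8 key/value heads x 64-dim heads

# SwiGLU expert: gate + up + down projections
params_per_expert = 3 * d_model * d_ff

# Attention block (GQA): Q and O projections plus smaller K and V projections
attn_per_layer = 2 * d_model * d_attn + 2 * d_model * d_kv

expert_total  = n_layers * n_experts * params_per_expert
expert_active = n_layers * top_k * params_per_expert
attn_total    = n_layers * attn_per_layer
embed_params  = vocab * d_model          # embedding matrix, counted once here

total_params  = expert_total + attn_total + embed_params
active_params = expert_active + attn_total + embed_params

print(f"approx. total parameters:  {total_params / 1e9:.1f}B")   # ~116B, close to the 117B above
print(f"approx. active parameters: {active_params / 1e9:.1f}B")  # ~5.1B, matching the figure above
```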
GPT-OSS 120B is a large open-weight model from OpenAI designed to run in data centers as well as on high-end desktops and laptops. It targets advanced reasoning, agentic tasks, and a wide range of developer use cases, and it is text-only in both input and output.
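Because the weights are released under Apache 2.0, the model can be run locally with standard open-source tooling. A minimal text-generation sketch using the Hugging Face transformers pipeline follows; the repository id openai/gpt-oss-120b is assumed here and should be verified, and in practice the 120B variant needs an 80 GB-class GPU or aggressive quantization.

```python
# Minimal text-generation sketch with Hugging Face transformers.
# Assumptions: the weights are published under the repo id "openai/gpt-oss-120b"
# and enough GPU memory is available (device_map="auto" requires accelerate).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-120b",   # assumed repository id
    torch_dtype="auto",            # load in the checkpoint's native precision
    device_map="auto",             # shard layers across available GPUs/CPU
)

messages = [
    {"role": "user", "content": "Explain mixture-of-experts routing in two sentences."},
]

result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1])  # the appended assistant turn
```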
Rankings are relative to other local LLMs.
| Category | Benchmark | Score | Rank |
|---|---|---|---|
| Coding | Aider Coding | 0.79 | 🥇 1 |
| StackUnseen | ProLLM Stack Unseen | 0.93 | 🥇 1 |
| Reasoning | LiveBench Reasoning | 0.78 | 6 |
| Web Development | WebDev Arena | 1081.54 | 6 |
| Agentic Coding | LiveBench Agentic | 0.10 | 10 |
| Coding | LiveBench Coding | 0.59 | 11 |
| Mathematics | LiveBench Mathematics | 0.70 | 11 |
| Data Analysis | LiveBench Data Analysis | 0.57 | 13 |
Overall Rank: #5
Coding Rank: #1 🥇
VRAM requirements vary with the quantization method chosen for the model weights and with the context size (for example, a 1,024-token baseline).
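A first-order estimate of those requirements is straightforward: weight memory is roughly the parameter count times the bytes per parameter of the chosen quantization, plus a KV cache that grows linearly with context length. The sketch below applies that rule of thumb; the KV-head count, head dimension, and fixed overhead are assumptions rather than published figures, so treat the outputs as ballpark values only.

```python
# Rough VRAM estimate: quantized weights + KV cache + fixed overhead.
# Head/layer figures and overhead are assumptions, not published values.

TOTAL_PARAMS = 117e9          # total parameters (from the spec above)
N_LAYERS     = 36             # transformer layers (from the spec above)
KV_HEADS     = 8              # assumed number of key/value heads
HEAD_DIM     = 64             # assumed per-head dimension

BYTES_PER_PARAM = {           # common quantization choices
    "fp16/bf16": 2.0,
    "int8":      1.0,
    "mxfp4":     0.5,         # ~4-bit microscaling format
}

def estimate_vram_gb(quant: str, context_tokens: int, overhead_gb: float = 2.0) -> float:
    """Ballpark VRAM in GB for a given quantization and context size."""
    weights_gb = TOTAL_PARAMS * BYTES_PER_PARAM[quant] / 1e9
    # KV cache: 2 (K and V) x layers x kv_heads x head_dim x 2 bytes (fp16) per token.
    # Assumes full attention in every layer; sliding-window layers would need less.
    kv_bytes_per_token = 2 * N_LAYERS * KV_HEADS * HEAD_DIM * 2
    kv_gb = context_tokens * kv_bytes_per_token / 1e9
    return weights_gb + kv_gb + overhead_gb

for quant in BYTES_PER_PARAM:
    for ctx in (1_024, 32_768, 131_072):
        print(f"{quant:>9} @ {ctx:>7} tokens: ~{estimate_vram_gb(quant, ctx):6.1f} GB")
```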