
GLM-4.5-Air

Total Parameters

106B

Context Length

128K

Modality

Multimodal

Architecture

Mixture of Experts (MoE)

License

MIT License

Release Date

28 Jul 2025

Knowledge Cutoff

-

Technical Specifications

Active Parameters

12.0B

Number of Experts

-

Active Experts

-

Attention Structure

Multi-Head Attention

Hidden Dimension Size

-

Number of Layers

-

Attention Heads

96

Key-Value Heads

-

Activation Function

-

Normalization

-

Position Embedding

Absolute Position Embedding

System Requirements

VRAM requirements for different quantization methods and context sizes
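As a rough back-of-envelope (model weights only, with an assumed ~20% runtime overhead; real usage also grows with context length via the KV cache), VRAM need can be estimated from parameter count and quantization bit width:

```python
def estimate_vram_gib(total_params: float, bits_per_weight: float,
                      overhead: float = 1.2) -> float:
    """Estimate VRAM (GiB) needed to hold model weights at a given
    quantization level.

    The 20% overhead factor is a rough assumption covering runtime
    buffers; actual usage also depends on KV cache and batch size.
    """
    weight_bytes = total_params * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

# GLM-4.5-Air's 106B total parameters at common quantization levels:
for bits, name in [(16, "FP16"), (8, "Q8"), (4, "Q4")]:
    print(f"{name}: ~{estimate_vram_gib(106e9, bits):.0f} GiB")
```

Note that for an MoE model, all 106B parameters must be resident in memory even though only 12B are active per token; sparsity saves compute, not weight storage.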

GLM-4.5-Air

The GLM-4.5-Air model, developed by Z.ai, is a member of the GLM-4.5 series, designed as a lightweight and efficient large language model. This variant is specifically optimized for on-device and smaller-scale cloud inference, aiming to deliver robust capabilities while minimizing hardware and computational requirements. It integrates core functionalities such as reasoning, coding, and agentic behaviors, making it suitable for a range of advanced AI applications.

Architecturally, GLM-4.5-Air leverages a Mixture-of-Experts (MoE) design. This allows the model to selectively activate a subset of its parameters during inference, enhancing computational efficiency compared to dense architectures. While the full GLM-4.5 model employs 355 billion total parameters with 32 billion active, GLM-4.5-Air features 106 billion total parameters with 12 billion active parameters. The model also incorporates a Multi-Token Prediction (MTP) layer to facilitate speculative decoding, which significantly boosts inference speed, potentially achieving generation rates of over 100 tokens per second.
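The selective activation described above comes from top-k expert routing. The sketch below illustrates the mechanism in miniature; the expert count, gating function, and dimensions are toy values for illustration, not GLM-4.5-Air's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_layer(x, gate_w, experts, k=2):
    """Route input x to the top-k experts chosen by a learned gate.

    Only k of len(experts) expert networks run per token, which is why
    an MoE model's active parameter count is far below its total count.
    """
    logits = gate_w @ x                    # one gating score per expert
    top = np.argsort(logits)[-k:]          # indices of the k best experts
    w = np.exp(logits[top])
    w /= w.sum()                           # softmax over the selected k
    return sum(wi * experts[i](x) for wi, i in zip(w, top))

# Toy setup: 8 experts, each a random linear map on a 4-dim input.
d, n_experts = 4, 8
expert_mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, M=M: M @ x for M in expert_mats]
gate_w = rng.normal(size=(n_experts, d))

y = moe_layer(rng.normal(size=d), gate_w, experts, k=2)
```

With k=1 the layer reduces to running the single highest-scoring expert, which makes the compute savings explicit: per-token cost scales with k, not with the total number of experts.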

GLM-4.5-Air supports a hybrid reasoning approach, offering both a 'thinking mode' for intricate, multi-step problem-solving and a 'non-thinking mode' for immediate, rapid responses. This dual-mode operation allows for dynamic adaptation to query complexity, optimizing resource utilization. The model is also engineered for advanced agentic applications, including native function calling, tool use, web browsing, and comprehensive software development tasks, such as full-stack web application creation.
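In practice the two modes are typically toggled per request. The payload below is a hypothetical sketch in the OpenAI-compatible chat-completions style; the exact name and shape of the reasoning toggle are assumptions for illustration, so consult the provider's API reference before relying on them:

```python
def build_chat_request(prompt: str, thinking: bool) -> dict:
    """Build a chat-completions payload with a reasoning-mode toggle.

    The "thinking" field here is an assumed parameter name used for
    illustration only; verify the real field against the API docs.
    """
    return {
        "model": "glm-4.5-air",
        "messages": [{"role": "user", "content": prompt}],
        "thinking": {"type": "enabled" if thinking else "disabled"},
    }

# Rapid response for a trivial query; full reasoning for a complex one.
fast = build_chat_request("What is 2 + 2?", thinking=False)
deep = build_chat_request("Plan a full-stack web app.", thinking=True)
```

Routing simple queries to non-thinking mode saves both latency and output tokens, which is the resource-utilization benefit the dual-mode design targets.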

About GLM Family

General Language Models from Z.ai



Evaluation Benchmarks

Rankings are relative to other local LLMs.

Rank

#7

Benchmark / Score / Rank

Web Development

WebDev Arena: 1353.76 (rank 3 🥉)
(benchmark name not captured): 0.78 (rank 4)

Agentic Coding

LiveBench Agentic: 0.15 (rank 5)
(benchmark name not captured): 0.79 (rank 5)
(benchmark name not captured): 0.66 (rank 7)
(benchmark name not captured): 0.58 (rank 13)

Rankings

Overall Rank

#7

Coding Rank

#13
