
Offline Chat

Offline chat means running the full model locally — your conversations never leave your machine. The sweet spot is 7B–14B models for speed, or 32B for noticeably better quality.

Typical model size: 7B–32B
Recommended RAM: 16–64 GB
Key models: Llama 3.1 8B, Llama 3.2 1B
Benchmark rows: 20

Why these models for this use case

Offline chat spans a wide model range. For casual use, a 7B model at Q8 runs at 60–80 tok/s and feels fast. For more thoughtful responses, 14B at Q4 is a good middle ground. If you want GPT-3.5-class quality offline, 32B models are the target — at Q4 the weights alone take ~20 GB, so you need at least 24 GB of RAM. Ollama and LM Studio both run all of these out of the box.
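The RAM figures above follow from simple arithmetic: parameter count times bits per weight, plus some headroom. A minimal sketch — the ~10% overhead factor is an assumption to cover embeddings, KV cache, and runtime buffers, not an exact formula:

```python
def quantized_size_gb(params_b: float, bits_per_weight: float,
                      overhead: float = 1.1) -> float:
    """Rough on-disk/in-memory size of a quantized model.

    params_b: parameter count in billions (e.g. 32 for a 32B model)
    bits_per_weight: effective bits per weight (Q4_K ~ 4.5, Q8 ~ 8.5)
    overhead: fudge factor for embeddings and runtime buffers (assumed)
    """
    bytes_total = params_b * 1e9 * bits_per_weight / 8
    return round(bytes_total * overhead / 1e9, 1)

# A 32B model at Q4 (~4.5 effective bits/weight) lands near the
# ~20 GB figure quoted above:
print(quantized_size_gb(32, 4.5))
```

The same estimate shows why a 7B model at Q8 (~8 GB) fits comfortably on a 16 GB machine while 32B at Q4 does not.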

Benchmark results — fastest rows first

Filtered to models commonly used for offline chat. Sorted by avg tok/s descending.

| Chip | Model | Quant | RAM req. | Avg tok/s | Runtime | Source |
|---|---|---|---|---|---|---|
| M4 Max (40-core GPU, 128 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 182.6 | — | ref |
| M4 Max (40-core GPU, 64 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 180.3 | — | ref |
| M4 Max (40-core GPU, 48 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 179.0 | — | ref |
| M3 Ultra (80-core GPU, 512 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 178.8 | — | ref |
| M3 Ultra (80-core GPU, 256 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 177.9 | — | ref |
| M2 Ultra (60-core GPU, 128 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 176.4 | — | ref |
| M2 Ultra (60-core GPU, 64 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 174.1 | — | ref |
| M2 Ultra (60-core GPU, 192 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 169.8 | — | ref |
| M4 Max (32-core GPU, 36 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 166.5 | — | ref |
| M4 Max (GPU count not published, 128 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 156.3 | — | ref |
| M2 Max (38-core GPU, 32 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 153.0 | — | ref |
| M1 Ultra (64-core GPU, 128 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 151.1 | — | ref |
| M4 Max (40-core GPU, 64 GB) | Qwen 3 4B | Q4_G32 | 2.78 GB | 149.1 | MLX | ref |
| M3 Max (40-core GPU, 48 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 149.0 | — | ref |
| M4 Max (40-core GPU, 64 GB) | Qwen 3 4B | Q4 | 2.54 GB | 148.1 | MLX | ref |
| M3 Max (40-core GPU, 128 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 146.3 | — | ref |
| M4 Max (40-core GPU, 64 GB) | Qwen 3 4B | Q5 | 3.26 GB | 143.2 | MLX | ref |
| M4 Max (40-core GPU, 64 GB) | Qwen 3 4B | Q5_G32 | 3.5 GB | 143.0 | MLX | ref |
| M1 Ultra (48-core GPU, 128 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 138.0 | — | ref |
| M4 Max (40-core GPU, 64 GB) | Qwen 3 4B | Q6 | 3.98 GB | 136.6 | MLX | ref |


benchmarks.json — full dataset  ·  models.json — model summaries  ·  benchmarks.csv
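To slice the full dataset yourself, the "fastest rows first" ordering above is a one-line sort. A minimal sketch using inline sample rows — the field names (`chip`, `model`, `avg_tok_s`) are assumptions and may differ from the actual benchmarks.json schema:

```python
import json

# Hypothetical rows mirroring the table above; the real benchmarks.json
# schema may use different field names.
rows = json.loads("""[
  {"chip": "M4 Max (40-core GPU, 64 GB)",   "model": "Qwen 3 4B",             "avg_tok_s": 149.1},
  {"chip": "M4 Max (40-core GPU, 128 GB)",  "model": "Llama 3.2 1B Instruct", "avg_tok_s": 182.6},
  {"chip": "M3 Ultra (80-core GPU, 512 GB)","model": "Llama 3.2 1B Instruct", "avg_tok_s": 178.8}
]""")

# Reproduce the table's ordering: sort by avg tok/s, descending.
rows.sort(key=lambda r: r["avg_tok_s"], reverse=True)
for r in rows:
    print(f'{r["chip"]:42} {r["model"]:24} {r["avg_tok_s"]:6.1f} tok/s')
```

Swap the inline string for `json.load(open("benchmarks.json"))` once you've checked the real field names.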

Buying guide: best Mac for local LLMs →