
Coding Assistant

Coding assistants need fast token generation and low latency. A 7B–14B model at Q4–Q8 gives you near-instant responses. Here is the benchmark data to guide your hardware choice.

Typical model size: 7B–14B
Recommended RAM: 16–24 GB
Key models: Llama 3.1 8B, Llama 3.2 1B
Benchmark rows: 20
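The recommended-RAM figure follows from a common rule of thumb: a model's weights take roughly parameters × bits-per-weight ÷ 8 bytes, plus some headroom for the KV cache and runtime buffers. A minimal sketch (the 20% overhead factor is an assumption, not a measured value):

```python
def approx_model_ram_gb(params_billions: float, quant_bits: int,
                        overhead: float = 1.2) -> float:
    """Rough memory to run a model: each weight costs quant_bits/8 bytes,
    plus ~20% (assumed) overhead for KV cache and runtime buffers."""
    weights_gb = params_billions * quant_bits / 8
    return weights_gb * overhead

print(approx_model_ram_gb(8, 4))   # Llama 3.1 8B at Q4: ~4.8 GB
print(approx_model_ram_gb(14, 8))  # a 14B model at Q8: ~16.8 GB
```

By this estimate an 8B model at Q4 fits comfortably in 16 GB, while a 14B model at Q8 pushes toward the 24 GB end of the recommended range.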

Why these models for this use case

Coding assistants benefit from speed over size. A 7B model at Q8 on M4 runs at 60–80 tok/s — fast enough that responses feel nearly instant. The 14B tier gives meaningfully better code quality with only a modest speed sacrifice. Models in the Qwen 2.5 and Llama 3 families are popular for coding because they were trained on large code corpora. 32B+ models are overkill for autocomplete but useful for complex refactoring tasks.
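Why speed falls roughly in half from 7B to 14B: on Apple Silicon, token generation is typically memory-bandwidth-bound — every generated token streams the full set of weights once, so the achievable ceiling is about bandwidth ÷ model size. A sketch of that heuristic (the 546 GB/s figure is M4 Max's advertised bandwidth; real runtimes land somewhat below the ceiling):

```python
def approx_decode_tok_s(params_billions: float, quant_bits: int,
                        mem_bw_gb_s: float) -> float:
    """Upper-bound decode speed when generation is memory-bandwidth-bound:
    tok/s <= memory bandwidth / model size in GB."""
    model_gb = params_billions * quant_bits / 8
    return mem_bw_gb_s / model_gb

print(round(approx_decode_tok_s(7, 8, 546)))   # 7B at Q8: ~78 tok/s ceiling
print(round(approx_decode_tok_s(14, 8, 546)))  # 14B at Q8: ~39 tok/s ceiling
```

The 7B ceiling lines up with the observed 60–80 tok/s range; halving the bandwidth per token (doubling the model) roughly halves the ceiling.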

Benchmark results — fastest rows first

Filtered to models commonly used as coding assistants. Sorted by average tok/s, descending.

Chip | Model | Quant | RAM req. | Avg tok/s | Runtime | Source
(— = value not published in the source data)
M4 Max (40-core GPU, 128 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 182.6 | — | ref
M4 Max (40-core GPU, 64 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 180.3 | — | ref
M4 Max (40-core GPU, 48 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 179.0 | — | ref
M3 Ultra (80-core GPU, 512 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 178.8 | — | ref
M3 Ultra (80-core GPU, 256 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 177.9 | — | ref
M2 Ultra (60-core GPU, 128 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 176.4 | — | ref
M2 Ultra (60-core GPU, 64 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 174.1 | — | ref
M2 Ultra (60-core GPU, 192 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 169.8 | — | ref
M4 Max (32-core GPU, 36 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 166.5 | — | ref
M4 Max (GPU count not published, 128 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 156.3 | — | ref
M2 Max (38-core GPU, 32 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 153.0 | — | ref
M1 Ultra (64-core GPU, 128 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 151.1 | — | ref
M4 Max (40-core GPU, 64 GB) | Qwen 3 4B | Q4_G32 | 2.78 GB | 149.1 | MLX | ref
M3 Max (40-core GPU, 48 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 149.0 | — | ref
M4 Max (40-core GPU, 64 GB) | Qwen 3 4B | Q4 | 2.54 GB | 148.1 | MLX | ref
M3 Max (40-core GPU, 128 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 146.3 | — | ref
M4 Max (40-core GPU, 64 GB) | Qwen 3 4B | Q5 | 3.26 GB | 143.2 | MLX | ref
M4 Max (40-core GPU, 64 GB) | Qwen 3 4B | Q5_G32 | 3.5 GB | 143.0 | MLX | ref
M1 Ultra (48-core GPU, 128 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 138.0 | — | ref
M4 Max (40-core GPU, 64 GB) | Qwen 3 4B | Q6 | 3.98 GB | 136.6 | MLX | ref



benchmarks.json — full dataset  ·  models.json — model summaries  ·  benchmarks.csv
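To reproduce a view like the table above from the downloadable dataset, you can filter and sort rows by throughput. A sketch, assuming a hypothetical benchmarks.csv schema with an avg_tok_s column (the actual column names may differ):

```python
import csv
import io

def fastest_rows(csv_text: str, min_tok_s: float = 60.0) -> list[dict]:
    """Parse benchmark CSV text, keep rows at or above min_tok_s,
    and return them fastest first. 'avg_tok_s' is an assumed column name."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    fast = [r for r in rows if float(r["avg_tok_s"]) >= min_tok_s]
    return sorted(fast, key=lambda r: float(r["avg_tok_s"]), reverse=True)

# Tiny inline sample standing in for benchmarks.csv:
sample = (
    "chip,model,avg_tok_s\n"
    "M4 Max,Llama 3.2 1B Instruct,182.6\n"
    "M1,Llama 3.1 8B,20.1\n"
)
for row in fastest_rows(sample):
    print(row["chip"], row["avg_tok_s"])
```

The 60 tok/s default cutoff reflects the "near-instant" threshold discussed above; adjust it to taste.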

Buying guide: best Mac for local LLMs →