
M4 (10-core GPU, 16 GB) — LLM Benchmarks

Measured LLM inference benchmarks for the M4 (10-core GPU, 16 GB): tokens per second across four models and multiple quantizations. All figures come from real runs, not estimates.
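The "tok/s" figures below are simply tokens produced divided by wall-clock decode time. A minimal sketch of that arithmetic (the token count and timing here are hypothetical values, not taken from the dataset):

```python
def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
    """Throughput as generated tokens divided by elapsed seconds."""
    return n_tokens / elapsed_s

# Hypothetical run: 512 tokens decoded in 6.72 s works out to ~76.2 tok/s,
# the same order as the fastest row below.
print(round(tokens_per_second(512, 6.72), 1))  # -> 76.2
```

Prompt tok/s is computed the same way, but over the prompt-processing (prefill) phase rather than the decode phase, which is why it is much higher.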

Benchmark rows: 4
Models tested: 4
Fastest avg tok/s: 76.2 (Llama 3.2 1B Instruct)
Factory-lab verified rows: 0

Benchmark results for M4 (10-core GPU, 16 GB)

Rows are sorted by average tok/s, descending. Each row's source link points to the original measurement page.

| Model | Quant | RAM req. | Context | Avg tok/s | Prompt tok/s | Runtime | Source |
|---|---|---|---|---|---|---|---|
| Llama 3.2 1B Instruct | Q4_K - Medium | | | 76.2 | 1091.1 | | ref |
| Llama 2 7B | Q4_0 | 3.6 GB | 512 | 24.1 | 221.3 | llama.cpp | ref |
| Llama 3.1 8B Instruct | Q4_K - Medium | | | 16.0 | 166.8 | | ref |
| Qwen 2.5 14B Instruct | Q4_K - Medium | | | 8.7 | 83.1 | | ref |

benchmarks.json — full dataset  ·  chips.json — chip summaries  ·  benchmarks.csv — CSV export
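The CSV export can be filtered or re-sorted with the standard library. A sketch using the `csv` module, assuming column names that mirror the table above (the actual headers in benchmarks.csv may differ):

```python
import csv
import io

# Inline sample standing in for benchmarks.csv; column names are an
# assumption based on the table layout, not the real export schema.
sample = """model,quant,avg_tok_s,prompt_tok_s
Llama 3.2 1B Instruct,Q4_K - Medium,76.2,1091.1
Llama 2 7B,Q4_0,24.1,221.3
Qwen 2.5 14B Instruct,Q4_K - Medium,8.7,83.1
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Pick the fastest model by average decode throughput.
fastest = max(rows, key=lambda r: float(r["avg_tok_s"]))
print(fastest["model"])  # -> Llama 3.2 1B Instruct
```

For the real file, replace `io.StringIO(sample)` with `open("benchmarks.csv")` and adjust the field names to match its header row.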

Want to contribute a benchmark for this chip? Data is sourced from factory-lab measurements and community reference runs.