
M1 Pro (16-core GPU) — LLM Benchmarks

Measured LLM inference benchmarks for the M1 Pro (16-core GPU): tokens per second for one model at one quantization. Real runs, not estimates.

Benchmark rows: 1
Models tested: 1
Fastest avg tok/s: 36.4 (Llama 2 7B)
Factory-lab verified rows: 0

Benchmark results for M1 Pro (16-core GPU)

Rows are sorted by average tok/s, descending. The source column links to the original measurement page.

Model      | Quant | RAM req. | Context | Avg tok/s | Prompt tok/s | Runtime   | Source
Llama 2 7B | Q4_0  | 3.6 GB   | 512     | 36.4      | 266.3        | llama.cpp | ref

benchmarks.json — full dataset  ·  chips.json — chip summaries  ·  benchmarks.csv — CSV export
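The CSV export can be processed with standard tooling. A minimal sketch of loading and ranking rows, assuming column names derived from the table headers above (the actual export schema of benchmarks.csv may differ; the inline sample reuses the measured row shown on this page):

```python
import csv
import io

# Inline sample standing in for benchmarks.csv; column names are
# an assumption based on the table headers, not the real schema.
sample = """model,quant,ram_gb,context,avg_tok_s,prompt_tok_s,runtime
Llama 2 7B,Q4_0,3.6,512,36.4,266.3,llama.cpp
"""

# Parse rows into dicts keyed by the header line.
rows = list(csv.DictReader(io.StringIO(sample)))

# Rank by average tokens per second, descending (the sort order
# used by the table on this page).
rows.sort(key=lambda r: float(r["avg_tok_s"]), reverse=True)
fastest = rows[0]
print(fastest["model"], fastest["avg_tok_s"])
```

For the real file, replace `io.StringIO(sample)` with `open("benchmarks.csv", newline="")` and adjust the field names to whatever the export actually uses.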

Want to contribute a benchmark for this chip? Data is sourced from factory-lab measurements and community reference runs.