A deep dive into the MTEB leaderboard showing how SHARC-Embed-Code-001 achieves the #2 ranking.

[Chart: StackOverflowQA dataset scores for SHARC, Cohere, OpenAI, and Voyage. Comparison across embedding providers; higher is better.]
Dimension, price, and rank breakdown for the compared models.
| Model | MTEB Rank | Score | Dimensions | Price |
|---|---|---|---|---|
| SHARC-Embed-Code-001 | #2 | 70.58 | 4096 | $0.05/M |
| Cohere-embed-multilingual-v3.0 | #15 | 61.13 | 1024 | $0.10/M |
| text-embedding-3-large | #18 | 58.96 | 3072 | $0.13/M |
| voyage-3.5 | #24 | 58.46 | 1024 | $0.06/M |
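To make the price column concrete, here is a minimal sketch that turns the per-million-token list prices above into an estimated one-off indexing cost. The 500M-token corpus size is a hypothetical placeholder, not a figure from this post:

```python
# Estimated one-off indexing cost from per-million-token list prices.
# The corpus size below is a hypothetical placeholder, not a benchmark figure.
PRICE_PER_M_TOKENS = {
    "SHARC-Embed-Code-001": 0.05,
    "Cohere-embed-multilingual-v3.0": 0.10,
    "text-embedding-3-large": 0.13,
    "voyage-3.5": 0.06,
}

corpus_tokens = 500_000_000  # hypothetical code corpus

for model, price in PRICE_PER_M_TOKENS.items():
    cost = corpus_tokens / 1_000_000 * price
    print(f"{model}: ${cost:,.2f}")
```

At these prices the gap compounds with corpus size: embedding the same 500M tokens costs $25 with SHARC versus $65 with text-embedding-3-large.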
SHARC leads all listed MTEB task categories.
| Category | SHARC | Cohere | OpenAI | Voyage |
|---|---|---|---|---|
| Retrieval | 88.96 | 80.35 | 76.62 | 79.26 |
| Classification | 88.85 | 74.44 | 73.43 | 70.42 |
| STS | 90.54 | 84.27 | 82.70 | 77.92 |
| BitextMining | 86.34 | 83.68 | 69.95 | 68.93 |
| PairClassification | 85.84 | 80.10 | 80.07 | 76.30 |
| Reranking | 65.63 | 64.07 | 63.89 | 64.16 |
| Clustering | 56.21 | 49.06 | 48.50 | 45.30 |
Of these categories, Retrieval and Reranking are the benchmarks most relevant to semantic retrieval quality in code workflows.
Score-per-dollar view, derived from the MTEB average score and list pricing.
| Model | MTEB Score | Cost / M tokens | Score / $ |
|---|---|---|---|
| SHARC | 70.58 | $0.05/M | 1412 |
| Cohere | 61.13 | $0.10/M | 611 |
| OpenAI | 58.96 | $0.13/M | 454 |
| Voyage | 58.46 | $0.06/M | 974 |
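The Score / $ column is simply the MTEB average score divided by the list price per million tokens. A minimal sketch reproducing it from the figures above:

```python
# Reproduce the Score / $ column: MTEB average score divided by
# list price per million tokens, rounded to the nearest integer.
models = {
    "SHARC":  (70.58, 0.05),
    "Cohere": (61.13, 0.10),
    "OpenAI": (58.96, 0.13),
    "Voyage": (58.46, 0.06),
}

for name, (score, price_per_m) in models.items():
    print(f"{name}: {round(score / price_per_m)}")
# SHARC: 1412, Cohere: 611, OpenAI: 454, Voyage: 974
```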
This combination of category-leading retrieval scores and the lowest list price in the comparison is why SHARC-Embed-Code-001 is positioned for production code retrieval.
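As an illustration of what that production setting looks like, below is a hedged sketch of a cosine-similarity retrieval loop over embedded code snippets. The `embed` function is a deterministic placeholder, not the SHARC API (this post does not document a client); only the 4096-dimension output from the table above is assumed.

```python
import zlib
import numpy as np

DIM = 4096  # output dimension of SHARC-Embed-Code-001, per the table above

def embed(texts):
    """Placeholder embedder: seeded random vectors, NOT a real model.
    Swap in your embedding provider's client here."""
    return np.stack([
        np.random.default_rng(zlib.crc32(t.encode())).standard_normal(DIM)
        for t in texts
    ])

def top_k(query, corpus, corpus_vecs, k=3):
    """Rank code snippets by cosine similarity to the query embedding."""
    q = embed([query])[0]
    q /= np.linalg.norm(q)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    scores = c @ q
    order = np.argsort(scores)[::-1][:k]
    return [(corpus[i], float(scores[i])) for i in order]

corpus = [
    "def add(a, b): return a + b",
    "def read_json(path): ...",
    "class LRUCache: ...",
]
index = embed(corpus)  # embed the corpus once, query many times
print(top_k("function that sums two numbers", corpus, index))
```

Embedding the corpus once and reusing the index is what makes per-token pricing the dominant cost lever: queries are short, so the bulk of spend is the initial indexing pass.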
Metrics are sourced from the official MTEB leaderboard and SHARC internal benchmark snapshots; refer to the official board for the latest standings.
Source: MTEB Leaderboard