Upgrade your retrieval-augmented generation (RAG) stack with Voyage embeddings
High-quality contexts
Enhancing RAG by retrieving more relevant docs.
Less hallucination
LLMs hallucinate far less when grounded in accurate, relevant context.
Modularity
Plug-and-play with any vector DB and LLM.
Multi-purpose
State-of-the-art quality across domains.
Industry customizable
Engineering, finance, legal, healthcare, etc.
Company customizable
Ingesting proprietary data and knowledge.
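The plug-and-play retrieval step above can be sketched in a few lines of plain Python. The `embed` function here is a toy stand-in for any embedding API (it is not Voyage's actual client), and the "vector DB" is just an in-memory list; both are illustrative assumptions:

```python
import math

# Toy stand-in for an embedding API call. Any provider's embeddings
# could be swapped in here; this hash-like "embedding" is purely
# illustrative and carries no real semantics.
def embed(text: str, dim: int = 8) -> list[float]:
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]  # unit-normalize for cosine similarity

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

# "Vector DB": an in-memory list of (document, embedding) pairs.
docs = ["contract indemnification clause",
        "quarterly revenue forecast",
        "post-operative care guidelines"]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]
```

The retrieved documents are then passed to the LLM as context; swapping the embedding model or the vector store only changes `embed` and `index`, which is the modularity point.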
Novel training technology leads to high-quality models
Algorithm and architecture
New self-supervised loss functions and modern architectures at an unprecedented scale.
Systematic and large-scale data processing
Diverse training data from business domains, tailored to RAG and search.
Unlabeled finetuning
Advanced finetuning techniques without human labels.
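The self-supervised training idea above can be illustrated with a contrastive loss. This InfoNCE-style sketch is a generic textbook formulation, not Voyage's actual loss function; the temperature value and embeddings are illustrative:

```python
import math

def info_nce(anchors, positives, temperature=0.05):
    """InfoNCE-style contrastive loss over a batch of embedding pairs.

    anchors[i] and positives[i] embed two views of the same text; every
    other positive in the batch serves as an in-batch negative. No human
    labels are needed -- the pairing itself is the supervision signal.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    losses = []
    for i, a in enumerate(anchors):
        logits = [dot(a, p) / temperature for p in positives]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        # Negative log-softmax probability of the true pair.
        losses.append(log_denom - logits[i])
    return sum(losses) / len(losses)
```

The loss is small when each anchor is closest to its own positive and large when anchors match the wrong texts, which pushes paired embeddings together without any labeled data.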
Voyage sets the state of the art in retrieval accuracy
Read Blog
Retrieval quality by model (higher is better):

Model    | Context length | HuggingFace MTEB | Industry domains
---------|----------------|------------------|-----------------
OpenAI   | 8192           | 49.3%            | 74.8%
Cohere   | 4096 / 512     | 20.0%            | 72.7%
BAAI/bge | 512            | 54.3%            | 70.2%
Voyage   | 4096           | 54.5%            | 77.8%
Excited about Voyage embeddings?
Contact us for fine-tuned models
Fill out the form to send us a message or directly email Tengyu Ma (CEO) at tma@voyageai.com.