How AI Is Built
Germany
Joined Jun 2, 2024
This is about applied AI. If you are looking for the latest news, research, or models, this channel is not for you. How AI Is Built interviews the best engineers in the field who take AI into production.
Inside Vector Database Quantization: Product, Binary, and Scalar | S2 E23
When you store vectors, each number takes up 32 bits.
With 1000 numbers per vector and millions of vectors, costs explode.
A simple chatbot can cost thousands per month just to store and search through vectors.
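To put numbers on it: 1 million vectors × 1,000 dimensions × 4 bytes (32 bits) is about 4 GB of raw floats before any index overhead. At 1 bit per dimension, the same million vectors fit in roughly 125 MB.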
The Fix: Quantization
Think of it like image compression. JPEGs look almost as good as raw photos but take up far less space. Quantization does the same for vectors.
Today we are back continuing our series on search with Zain Hasan, a former ML engineer at Weaviate and now a Senior AI/ML Engineer at Together. We talk about the different types of quantization, when to use them, how to use them, and their tradeoffs.
Three Ways to Quantize (see the code sketch after the list):
1. Binary Quantization
- Turn each number into just 0 or 1
- Ask: "Is this dimension positive or negative?"
- Works great for 1000+ dimensions
- Cuts memory by 97%
- Best for normally distributed data
2. Product Quantization
- Split vector into chunks
- Group similar chunks
- Store cluster IDs instead of full numbers
- Good when binary quantization fails
- More complex but flexible
3. Scalar Quantization
- Use 8 bits instead of 32
- Simple middle ground
- Keeps more precision than binary
- Less savings than binary
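To make the three approaches concrete, here is a minimal NumPy sketch of each. This is an illustration only, not any particular database's implementation; the array shapes, the 256-centroid codebooks (built with scikit-learn's KMeans), and the global min/max scaling in the 8-bit case are assumptions made for the demo.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
vectors = rng.standard_normal((10_000, 1024)).astype(np.float32)  # 10k vectors, 1,024 dims

# 1. Binary quantization: keep only the sign of each dimension.
#    1 bit per dimension instead of 32 -> ~97% less memory.
binary_codes = np.packbits(vectors > 0, axis=1)          # (10_000, 128) uint8

# Searching binary codes: Hamming distance = XOR + popcount.
query = binary_codes[0]
hamming = np.unpackbits(binary_codes ^ query, axis=1).sum(axis=1)

# 2. Product quantization: split each vector into chunks, cluster the
#    chunks at each position, and store only the nearest centroid's ID.
n_chunks, n_centroids = 8, 256                           # 256 IDs fit in one uint8
chunks = vectors.reshape(len(vectors), n_chunks, -1)     # (10_000, 8, 128)
codebooks = []
pq_codes = np.empty((len(vectors), n_chunks), dtype=np.uint8)
for i in range(n_chunks):
    km = KMeans(n_clusters=n_centroids, n_init=1, random_state=0).fit(chunks[:, i, :])
    codebooks.append(km.cluster_centers_)                # one (256, 128) codebook per chunk
    pq_codes[:, i] = km.labels_                          # 8 bytes per vector instead of 4,096

# 3. Scalar quantization: map each float32 onto 256 levels (uint8).
lo, hi = vectors.min(), vectors.max()
scalar_codes = np.round((vectors - lo) / (hi - lo) * 255).astype(np.uint8)
reconstructed = scalar_codes.astype(np.float32) / 255 * (hi - lo) + lo  # lossy round trip
```

At query time a real engine searches against the compact codes (like the Hamming distance above) and can rescore the top candidates with the original float vectors to win back accuracy.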
**Key Quotes:**
- "Vector databases are pretty much the commercialization and the productization of representation learning."
- "I think quantization, it builds on the assumption that there is still noise in the embeddings. And if I'm looking, it's pretty similar as well to the thought of Matryoshka embeddings that I can reduce the dimensionality."
- "Going from text to multimedia in vector databases is really simple."
- "Vector databases allow you to take all the advances that are happening in machine learning and now just simply turn a switch and use them for your application."
**Zain Hasan:**
- [**LinkedIn**](www.linkedin.com/in/zainhas)
- [**X (Twitter)**](x.com/zainhasan6)
- [**Weaviate**](weaviate.io/)
- [**Together**](www.together.ai/)
**Nicolay Gerold:**
- [**LinkedIn**](www.linkedin.com/in/nicolay-gerold/)
- [**X (Twitter)**](x.com/nicolaygerold)
vector databases, quantization, hybrid search, multi-vector support, representation learning, cost reduction, memory optimization, multimodal recommender systems, brain-computer interfaces, weather prediction models, AI applications
Views: 7
Videos
Inside Vector Database Quantization: Product, Binary, and Scalar | S2 E23
13 views · 2 hours ago
When you store vectors, each number takes up 32 bits. With 1000 numbers per vector and millions of vectors, costs explode. A simple chatbot can cost thousands per month just to store and search through vectors. The Fix: Quantization Think of it like image compression. JPEGs look almost as good as raw photos but take up far less space. Quantization does the same for vectors. Today we are back co...
Local-First Search: How to Push Search To End-Devices | S2 E22
122 views · 21 hours ago
Alex Garcia is a developer focused on making vector search accessible and practical. As he puts it: "I'm a SQLite guy. I use SQLite for a lot of projects... I want an easier vector search thing that I don't have to install 10,000 dependencies to use." Core Mantra: "Simple, Local, Scalable" This captures the essence of SQLite Vec's approach to vector search: begin with local-first functionality ...
Local-First Search: How to Push Search To End-Devices | S2 E22
6 views · 21 hours ago
Alex Garcia is a developer focused on making vector search accessible and practical. As he puts it: "I'm a SQLite guy. I use SQLite for a lot of projects... I want an easier vector search thing that I don't have to install 10,000 dependencies to use." Core Mantra: "Simple, Local, Scalable" Why SQLite Vec? "I didn't go along thinking, 'Oh, I want to build vector search, let me find a database fo...
AI-Powered Search: Context Is King, But Your RAG System Ignores Two-Thirds of It | S2 E21
301 views · 21 days ago
Today, I sit down with Trey Grainger, author of the book AI-Powered Search. We discuss the different techniques for search and recommendations and how to combine them. While RAG (Retrieval-Augmented Generation) has become a buzzword in AI, Trey argues that the current understanding of "RAG" is overly simplified - it's actually a bidirectional process he calls "GARRAG," where retrieval and gener...
AI-Powered Search: Context Is King, But Your RAG System Ignores Two-Thirds of It | S2 E21
74 views · 21 days ago
Today, I (Nicolay Gerold) sit down with Trey Grainger, author of the book AI-Powered Search. We discuss the different techniques for search and recommendations and how to combine them. While RAG (Retrieval-Augmented Generation) has become a buzzword in AI, Trey argues that the current understanding of "RAG" is overly simplified - it's actually a bidirectional process he calls "GARRAG," where re...
Chunking for RAG: Stop Breaking Your Documents Into Meaningless Pieces | S2 E20
258 views · 28 days ago
Today we are back continuing our series on search. We are talking to Brandon Smith, about his work for Chroma. He led one of the largest studies in the field on different chunking techniques. So today we will look at how we can unfuck our RAG systems from badly chosen chunking hyperparameters. The biggest lie in RAG is that semantic search is simple. The reality is that it's easy to build, it's...
Chunking for RAG: Stop Breaking Your Documents Into Meaningless Pieces | S2 E20
198 views · 28 days ago
Today we are back continuing our series on search. We are talking to Brandon Smith, about his work for Chroma. He led one of the largest studies in the field on different chunking techniques. So today we will look at how we can unfuck our RAG systems from badly chosen chunking hyperparameters. The biggest lie in RAG is that semantic search is simple. The reality is that it's easy to build, it's...
How AI Can Start Teaching Itself - Synthetic Data Deep Dive | S2 E18
74 views · a month ago
Most LLMs you use today already use synthetic data. It’s not a thing of the future. The large labs use a large model (e.g. gpt-4o) to generate training data for a smaller one (gpt-4o-mini). This lets you build fast, cheap models that do one thing well. This is “distillation”. But the vision for synthetic data is much bigger. Enable people to train specialized AI systems without having a lot of ...
How AI Can Start Teaching Itself
209 views · a month ago
Most LLMs you use today already use synthetic data. It’s not a thing of the future. The large labs use a large model (e.g. gpt-4o) to generate training data for a smaller one (gpt-4o-mini). This lets you build fast, cheap models that do one thing well. This is “distillation”. But the vision for synthetic data is much bigger. Enable people to train specialized AI systems without having a lot of ...
A Search System That Learns As You Use It (Agentic RAG)
102 views · a month ago
Modern RAG systems build on flexibility. At their core, they match each query with the best tool for the job. They know which tool fits each task. When you ask about sales numbers, they reach for SQL. When you need company policies, they use vector search or BM25. The key is switching tools smoothly. A question about sales figures might need SQL, while a search through policy documents works...
A Search System That Learns As You Use It (Agentic RAG) | S2 E18
115 views · a month ago
Modern RAG systems build on flexibility. At their core, they match each query with the best tool for the job. They know which tool fits each task. When you ask about sales numbers, they reach for SQL. When you need company policies, they use vector search or BM25. The key is switching tools smoothly. A question about sales figures might need SQL, while a search through policy documents works...
Rethinking Search Inside Postgres, From Lexemes to BM25
62 views · a month ago
Rethinking Search Inside Postgres, From Lexemes to BM25
Rethinking Search Inside Postgres, From Lexemes to BM25 | S2 E17
45 views · a month ago
Rethinking Search Inside Postgres, From Lexemes to BM25 | S2 E17
RAG's Biggest Problems & How to Fix It (ft. Synthetic Data)
173 views · 2 months ago
RAG's Biggest Problems & How to Fix It (ft. Synthetic Data)
RAG's Biggest Problems & How to Fix It (ft. Synthetic Data) | S2 E16
69 views · 2 months ago
RAG's Biggest Problems & How to Fix It (ft. Synthetic Data) | S2 E16
From Financial Reports to Software Onboarding: Real-World Applications of ColPali
72 views · 2 months ago
From Financial Reports to Software Onboarding: Real-World Applications of ColPali
Text vs Vision: How Late Interaction Models Are Changing AI Search (ColBERT vs ColPali)
53 views · 2 months ago
Text vs Vision: How Late Interaction Models Are Changing AI Search (ColBERT vs ColPali)
Scaling Search: Can ColPali Handle Billions of Documents?
89 views · 2 months ago
Scaling Search: Can ColPali Handle Billions of Documents?
Dense, Sparse, and Everything In Between: AI Representations Explained
32 views · 2 months ago
Dense, Sparse, and Everything In Between: AI Representations Explained
From Ambiguous to AI-Ready: Improving Documentation Quality for RAG Systems | S2 E15
377 views · 2 months ago
From Ambiguous to AI-Ready: Improving Documentation Quality for RAG Systems | S2 E15
From Ambiguous to AI-Ready: Improving Documentation Quality for RAG Systems | S2 E15
48 views · 2 months ago
From Ambiguous to AI-Ready: Improving Documentation Quality for RAG Systems | S2 E15
BM25 is the workhorse of search; vectors are its visionary cousin | S2 E14
201 views · 2 months ago
BM25 is the workhorse of search; vectors are its visionary cousin | S2 E14
BM25 is the workhorse of search; vectors are its visionary cousin | S2 E14
63 views · 2 months ago
BM25 is the workhorse of search; vectors are its visionary cousin | S2 E14
Vector Search at Scale: Why One Size Doesn't Fit All | S2 E13
18 views · 2 months ago
Vector Search at Scale: Why One Size Doesn't Fit All | S2 E13
Vector Search at Scale: Why One Size Doesn't Fit All | S2 E13
54 views · 2 months ago
Vector Search at Scale: Why One Size Doesn't Fit All | S2 E13
Search Systems at Scale: Avoiding Local Maxima and Other Engineering Lessons | S2 E12
28 views · 3 months ago
Search Systems at Scale: Avoiding Local Maxima and Other Engineering Lessons | S2 E12
Search Systems at Scale: Avoiding Local Maxima and Other Engineering Lessons
138 views · 3 months ago
Search Systems at Scale: Avoiding Local Maxima and Other Engineering Lessons
Training Multi-Modal AI: Inside the Jina CLIP Embedding Model | S2 E11
41 views · 3 months ago
Training Multi-Modal AI: Inside the Jina CLIP Embedding Model | S2 E11
Training Multi-Modal AI: Inside the Jina CLIP Embedding Model | S2 E11
214 views · 3 months ago
Training Multi-Modal AI: Inside the Jina CLIP Embedding Model | S2 E11