Vizuara
Joined 23 Nov 2021
We are Team Vizuara, a fast-growing, MIT-backed Indian startup revolutionizing AI education (www.vizuara.ai).
Vizuara is founded by alumni from IIT Madras, MIT, and Purdue University.
For questions, please email hello@vizuara.com.
India's first AI Hackathon in Schools
Vizuara hosted India's first AI Hackathon in Schools at the start of this year. Students from some of the most reputed schools in Pune participated in the hackathon.
The event featured some fabulous projects in Artificial Intelligence, covering healthcare, finance, marketing, agriculture, sustainability, art, and many more fields!
Many of the participating schools were using Vizuara's AI curriculum, which helped their students create impactful AI projects.
The event showed that AI can bring joy and excitement at the school level, create an impact on society, and ignite the spark of creativity and innovation in students.
Views: 66
Videos
Evaluating Retrieval Systems | Mean Average Precision (MAP) | nDCG
Views: 102 · 9 hours ago
Our live courses starting from January 2025 (taught by IITians and MIT PhDs): vizuara.ai/spit/ In this video, you will learn about evaluating retrieval systems. In particular, we learn about the following: (1) Recap of semantic search (2) Framework for evaluating search systems (3) Understanding the Mean Average Precision (MAP) evaluation metric (4) Advanced evaluation metrics such as nDCG (normalized Discounted Cumulative Gain) ...
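For readers skimming the description, here is a minimal sketch (not code from the video) of the MAP idea: average the precision at each rank where a relevant document appears, then average across queries. The relevance labels below are made up for illustration.

    def average_precision(relevances):
        # relevances: 1/0 labels of the ranked results for one query
        hits, precisions = 0, []
        for k, rel in enumerate(relevances, start=1):
            if rel:
                hits += 1
                precisions.append(hits / k)  # precision@k at each relevant hit
        return sum(precisions) / max(hits, 1)

    def mean_average_precision(rankings):
        return sum(average_precision(r) for r in rankings) / len(rankings)

    # Query 1 has relevant docs at ranks 1 and 3; query 2 at rank 2.
    print(mean_average_precision([[1, 0, 1, 0], [0, 1, 0, 0]]))  # ~0.667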
Stochastic Gradient Descent from scratch | Intro to ML Optimization | Foundations for ML
Views: 252 · 12 hours ago
Why Stochastic Gradient Descent (SGD) Is a Game-Changer in Machine Learning. Optimization is the backbone of ML, and one algorithm that stands out for its simplicity and power is Stochastic Gradient Descent (SGD). Let’s explore why it’s so impactful. The challenge with Gradient Descent: Gradient Descent minimizes error by calculating the gradient (or slope) of the loss function over the entire dataset ...
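To make the contrast concrete, here is a from-scratch sketch of SGD on a toy linear-regression problem (the synthetic data and hyperparameters are illustrative, not from the video): each update uses a single sample instead of the full dataset.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 1))
    y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.1, size=200)

    w, b, lr = 0.0, 0.0, 0.05
    for epoch in range(20):
        for i in rng.permutation(len(X)):   # one random sample per update
            err = (w * X[i, 0] + b) - y[i]  # gradient of 0.5 * err**2
            w -= lr * err * X[i, 0]
            b -= lr * err
    print(w, b)  # approaches the true values 3.0 and 1.0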
Git Fork | Basic Git commands you need to know
Views: 85 · 15 hours ago
Adam Optimizer from scratch | Gradient descent made better | Foundations for Machine Learning
Views: 1
Why the Adam Optimizer Is a Game-Changer in Machine Learning. If you’ve trained a machine learning model recently, chances are you’ve used Adam. It’s one of the most popular optimization algorithms, and for good reason: it combines the best features of Momentum and RMSprop to deliver fast, stable, and adaptive learning. But what makes Adam so special? Let’s break it down. The basics of Adam: Adam ...
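For reference, a minimal sketch of a single Adam update using the standard formulation from Kingma & Ba (2015); this is not the video's code, and the toy problem below is just f(x) = x².

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
        m = beta1 * m + (1 - beta1) * grad       # Momentum: moving avg of gradients
        v = beta2 * v + (1 - beta2) * grad ** 2  # RMSprop: moving avg of squared grads
        m_hat = m / (1 - beta1 ** t)             # bias correction for early steps
        v_hat = v / (1 - beta2 ** t)
        return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

    x, m, v = 5.0, 0.0, 0.0
    for t in range(1, 2001):
        x, m, v = adam_step(x, 2 * x, m, v, t)  # gradient of x**2 is 2x
    print(x)  # close to 0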
Git Pull Command - All you need to know
Views: 126 · 2 hours ago
RMSprop Gradient Descent from scratch | Optimization in ML | Foundations for Machine Learning
Views: 2
RMSprop: A Smarter Way to Tame Learning Rates in Machine Learning Training machine learning models often feels like balancing on a tightrope. Use a learning rate that’s too high, and your optimization jumps around without ever converging. Use one that’s too low, and progress crawls. RMSprop is an optimization algorithm designed to address this delicate balancing act by adapting learning rates d...
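A minimal from-scratch sketch of the idea (common default hyperparameters, not necessarily the video's): keep a running average of squared gradients and divide each step by its square root, so steep directions get smaller steps.

    import numpy as np

    x, cache = 5.0, 0.0
    lr, decay, eps = 0.01, 0.9, 1e-8
    for _ in range(2000):
        grad = 2 * x                                     # gradient of f(x) = x**2
        cache = decay * cache + (1 - decay) * grad ** 2  # running avg of squared grads
        x -= lr * grad / (np.sqrt(cache) + eps)          # adaptive step size
    print(x)  # close to 0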
Introduction to Optimization for Machine Learning
Views: 669 · 4 hours ago
Understanding Optimization in Machine Learning: The Key to Smarter Models. Ever wondered what makes machine learning models "learn"? At the core of it all lies optimization, the process of fine-tuning a model to ensure it performs its task accurately, be it predicting outcomes, classifying data, or finding patterns. The journey of optimization starts with the loss function. This is the compass fo...
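The "compass" idea fits in three lines of code: a loss function such as mean squared error scores how far predictions are from targets, and optimization is just the search for parameters that shrink this number. The arrays below are illustrative.

    import numpy as np

    def mse(y_true, y_pred):
        return np.mean((y_true - y_pred) ** 2)

    y_true = np.array([1.0, 2.0, 3.0])
    print(mse(y_true, np.array([1.1, 1.9, 3.2])))  # small loss: good fit
    print(mse(y_true, np.array([3.0, 0.0, 6.0])))  # large loss: poor fit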
Momentum-based gradient descent from scratch | Intro to optimization for ML | Foundations for ML
Momentum Gradient Descent: A Smarter Way to Optimize Machine Learning Models. Optimization is key in machine learning, and while Gradient Descent lays the foundation, it often struggles with inefficiencies like slow convergence and oscillations. Enter Momentum Gradient Descent, an upgrade that accelerates learning and smooths optimization. Let’s break it down. The problem with standard Gradient Descent ...
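A minimal sketch of the update (illustrative hyperparameters, not the video's code): a velocity term accumulates past gradients, which damps oscillations and speeds movement along consistent directions.

    x, velocity = 5.0, 0.0
    lr, beta = 0.01, 0.9
    for _ in range(500):
        grad = 2 * x                            # gradient of f(x) = x**2
        velocity = beta * velocity - lr * grad  # remember past gradients
        x += velocity                           # step along the smoothed direction
    print(x)  # close to 0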
Evaluating RAG Systems
Views: 3
Our live courses starting from January 2025 (taught by IITians and MIT PhDs): vizuara.ai/spit/ In this video, you will learn about evaluating RAG systems. We learn about the different RAG evaluation metrics like: (1) Fluency (2) Utility (3) Citation recall (4) Citation precision (5) Faithfulness Links: (1) RAG evaluation paper: arxiv.org/pdf/2304.09848 (2) Ragas library for evaluating LLMs: doc...
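As one simplified reading of metrics (3) and (4), not the paper's exact definition: citation precision asks how many of the cited passages actually support the answer, and citation recall asks how many of the supporting passages were cited. The sets below are hypothetical.

    cited = {"doc1", "doc3"}        # passages the generated answer cites
    supporting = {"doc1", "doc2"}   # passages that truly support the answer

    precision = len(cited & supporting) / len(cited)
    recall = len(cited & supporting) / len(supporting)
    print(precision, recall)  # 0.5 0.5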
Reranking for Semantic Search
Views: 350 · 7 hours ago
Our live courses starting from January 2025 (taught by IITians and MIT PhDs): vizuara.ai/spit/ In this video, you will learn about reranking for semantic search. In particular, we learn about the following: (1) What is reranking? (2) What is the difference between dense retrieval and reranking? (3) Why can reranking drastically improve semantic search? (4) Coding a reranking system in Python Co...
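For orientation, a minimal two-stage sketch of the pipeline (not the video's code): a cheap dense retriever narrows the candidates, then a stronger scorer reorders them. The embeddings here are random stand-ins, and rerank_score is a word-overlap placeholder for a real cross-encoder.

    import numpy as np

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    def rerank_score(query, doc):
        # Placeholder for a cross-encoder scoring the (query, doc) pair jointly.
        q, d = set(query.lower().split()), set(doc.lower().split())
        return len(q & d) / len(q)

    docs = ["git merge basics", "adam optimizer explained", "gradient descent intro"]
    query = "how does the adam optimizer work"

    rng = np.random.default_rng(0)
    doc_vecs = rng.normal(size=(len(docs), 4))  # stand-in dense embeddings
    q_vec = rng.normal(size=4)

    top = sorted(range(len(docs)), key=lambda i: -cosine(q_vec, doc_vecs[i]))[:2]
    reranked = sorted(top, key=lambda i: -rerank_score(query, docs[i]))
    print([docs[i] for i in reranked])  # top-2 retrieved docs, reordered by the scorer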
What are Git branches? | A lecture for beginners [Lecture 8]
Views: 262 · 7 hours ago
Advanced RAG Techniques
Our live courses starting from January 2025 (taught by IITians and MIT PhDs): vizuara.ai/spit/ In this video, you will learn about Advanced RAG techniques: (1) Query rewriting (2) Multi-query RAG (3) Multi-hop RAG (4) Query routing (5) Agentic RAG Links: - RAG original paper: arxiv.org/abs/2005.11401 (1) Hands on LLM playlist link (this video belongs to this playlist): th-cam.com/play/PLPTV0NXA...
K-means clustering for beginners | Introduction to Machine Learning in Julia [Lecture 12]
Views: 425 · 9 hours ago
K-means clustering is one of the most fascinating techniques in the world of data science and machine learning. It is simple yet powerful, and it offers incredible insights into data patterns that might otherwise go unnoticed. At its core, K-means clustering is a method to group similar data points together into clusters based on their features. Imagine you are looking at a scatterplot of data,...
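The lecture series uses Julia, but the algorithm fits in a few lines of Python too; here is a from-scratch sketch with made-up data (two well-separated blobs): assign points to the nearest center, move each center to the mean of its points, repeat.

    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]  # random initial centers
        for _ in range(iters):
            # Assign every point to its nearest center.
            labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
            # Move each center to the mean of its assigned points.
            centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        return labels, centers

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
    labels, centers = kmeans(X, 2)
    print(centers)  # near (0, 0) and (5, 5)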
Converting your local project into a GitHub project [Lecture 7]
Views: 453 · 9 hours ago
Retrieval Augmented Generation: Introduction and Coding
Views: 1
Basic Git Commands for Total Beginners [Lecture 6]
Views: 643 · 12 hours ago
Chunking Strategies for Large Language Models (LLMs)
Views: 522 · 14 hours ago
Git Installation for Total Beginners [Lecture 5]
Views: 338 · 14 hours ago
Linear Regression | End to end project explained | Pune Housing Data | In Hindi [Lecture 20]
Views: 481 · 16 hours ago
What is a repository in Git/GitHub? [Lecture 4]
Views: 433 · 16 hours ago
What exactly does version control mean in Git? [Lecture 3]
Views: 460 · 19 hours ago
Coding an LLM Dense Retrieval System
Views: 550 · 21 hours ago
Master Linear Regression for Your Next Interview in Just 45 Minutes | In Hindi [Lecture 19]
Views: 306 · 1 day ago
How to make open source contributions in GitHub?
Views: 2
Hi, where can I get the notes from?
Fantabulous
Sir, it would be helpful if you could please share the Miro whiteboard file as well, if possible.
Lectures are great. However, the constant movement on the screen (zoom in, zoom out, slide) makes it quite hard to follow along visually. Minimizing the zooming/sliding will make it easier on the viewer. Just a minor point of feedback on an otherwise great lecture series. Thanks.
response = co.embed(
    texts=texts,
    input_type="search_document",
).embeddings

My question: for embedding generation, does Cohere store the embeddings in memory on its servers, or does it store them in a vector database? I am assuming that these embeddings are computed on Cohere's servers and returned as a list of vectors (likely NumPy arrays). Will the behavior be the same if I send millions of lines as part of the text? Wondering about the capacity of the servers.
Commenting for the algorithm so someone else can be empowered too! Thank you for this quantum-leap deep dive!
Thanks for the superb explanation, Sir.
Wonderful explanation!
Thanks, really enjoying this latest topic. You really have a wonderful teaching method.
Great Series 🔥🔥🔥🔥
How do I know whether a project is open source and whether modifications are welcome?
What if I pick a model and try to optimize it, but there’s a better model out there that I didn’t choose? How can I know if I’ve selected the best model for my problem?
Could you please share the YouTube link?
Can we use the git clone command to pull and merge updates from remote to local instead of using git pull origin main? Is there any difference in the local updates between these two commands?
Where can I find the Jupyter notebook?
Yes, very useful: how to create embedding models from scratch.
Can you please explain when to use Euclidean distance and when to use cosine similarity?
Sir, how can we get answers in case we have any doubts?
Really awesome, Sir 🙏🙏🙏🙏 Thanks for putting in so much effort for us 🙏🙏🙏🙏
I liked the depth of the explanation.
Another truly amazing lecture
Can someone from the Vizuara team answer this, please! Why should I join your paid course? I mean, almost every important thing is here; what would make me prefer your paid platform over this YT channel?
Thank you so much ..... very well explained .... 😊😊😊😅😅
Excited to begin this journey
Please upload the notes too
Thank you so much, I was looking for this type of explanation... You are really awesome
Thank you for the lecture Sir. Can we get the notes from which you are teaching?
I would have also left the USA, but I am unable to visit the USA, so I can leave it 😎
Here are our notes + article based on this lecture: open.substack.com/pub/aivizuara/p/gradient-descent-an-introduction?r=2nt2pq&showWelcomeOnShare=true
Do a video on hybrid search
Kudos to you for actually understanding the need to teach these things from scratch. It gave me the right direction and topics to focus on in AI.
Thank you so much for this detailed explanation.
Excellent explanation. I studied these during my college days without understanding them; now I am eager to see how they are useful in ML. Thanks.
How do I get those Miro notes?
Thanks
Thank you for your support. Much appreciated!
Great lectures, and as a diplomaed adult-education instructor with many years of experience: you are doing it right! Teaching with the brain in mind! Thanks very much.
It's only getting better! Thank you so much
Can you please number the videos in this playlist?
Thank you so much, Dr. Sreedath. Can't wait for the upcoming lectures.
Great lecture. Very helpful.
Sir, can you please upload your handwritten notes?
At around 9:36, when passing the dvalues into the backward pass of the combined softmax and cross-entropy, are the dvalues just the per-sample values of the loss function, as a vector/matrix? Or are they supposed to be the predictions that were generated?
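Not an official reply, but in the standard derivation for the combined softmax + cross-entropy layer, the backward pass consumes the predictions (the softmax outputs), because the gradient with respect to the logits collapses to (predicted probabilities - one-hot targets) / n_samples. A minimal sketch with made-up numbers:

    import numpy as np

    probs = np.array([[0.7, 0.2, 0.1],
                      [0.1, 0.8, 0.1]])  # softmax outputs (predictions)
    y = np.array([0, 1])                 # true class indices

    dlogits = probs.copy()
    dlogits[range(len(y)), y] -= 1       # subtract the one-hot targets
    dlogits /= len(y)                    # average over the batch
    print(dlogits)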
Very good video. I also want to know about the nuts and bolts of LLMs. Thanks for the video, sir; don't stop this series.
This is very valuable ❤
Woww Amazing
Awesome series, Sir. Thanks a lot 🙏🙏
It's a big gift in 2025.
For large negative values GELU will also be very close to zero! Will the dead-neuron problem be solved here?
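For anyone curious, the point is easy to check numerically with the common tanh approximation of GELU: outputs (and hence gradients) shrink toward zero for large negative inputs, so GELU is usually said to soften, rather than fully eliminate, the dead-neuron problem. A quick sketch with illustrative inputs:

    import numpy as np

    def gelu(x):
        # tanh approximation of GELU
        return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x ** 3)))

    for x in [-10.0, -3.0, -1.0, 0.5]:
        print(x, gelu(x))  # outputs shrink toward 0 as inputs become more negative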
Extremely fantastic! Thanks so much.