kmit vista
India
Joined 18 Oct 2021
Videos
LSTM - Exercise - Prediction of Next Word and Introduction to Attention Mechanism (NPS)
423 views · 13 hours ago
Speaker: Ms. Priyanka Saxena
LSTM - Exercise - Prediction of Next Word and Introduction to Attention Mechanism (PS)
394 views · 13 hours ago
Speaker: Ms. Priyanka Saxena
Long Short Term Memory with Word Embedding exercise (NPS)
221 views · 15 hours ago
Speaker: Ms. Priyanka Saxena
Long Short Term Memory with Word Embeddings exercise (PS)
767 views · 15 hours ago
Speaker: Ms. Priyanka Saxena
COMMON BUS SYSTEM, BUS AND MEMORY TRANSFER
27 views · 22 hours ago
DEFINITION OF COMPUTER ORGANIZATION, ARCHITECTURE
29 views · 22 hours ago
REGISTER TRANSFER, REGISTER TRANSFER LANGUAGE
20 views · a day ago
INTRODUCTION TO MICROOPERATIONS, ARITHMETIC MICROOPERATIONS-1
8 views · a day ago
ANN Model Development from Scratch - Revision
690 views · a day ago
INTRODUCTION, BLOCK DIAGRAM OF A DIGITAL COMPUTER
48 views · a day ago
DEFINITION OF COMPUTER ORGANIZATION, DESIGN & COMPUTER ARCHITECTURE
36 views · a day ago
INTRODUCTION, BLOCK DIAGRAM OF A DIGITAL COMPUTER
54 views · a day ago
Very good English and excellent teaching
You are beautiful 🤣
Thanks for these videos!
Superb, ma'am. There are no words to describe ma'am.
Ok, so this is really good; that means I will subscribe to this and unsubscribe from at least 3 channels that have less value.
Next time, try the C language to make an ANN from scratch; that would be a nice exercise.
Another excellent lady in the lecture hall. Great India!
Nice ma'am
Thank you so much madam
Very nicely explained
Thank you
Please provide repo Link.
When will the RNN lecture be released?
Very soon
This lady is simply excellent!
Thank you
super analogy and well prepared teacher!
Thank you
thank you
Thank you
What was that error, ma'am?
code files or git repo?
This ma'am is very knowledgeable, guys; please listen carefully. 😊
Thank you.😊
Thanks, Dr. Aruna, for the explanation. One suggestion: you could try using disappearing lines while explaining the code, to temporarily highlight the part you are explaining, instead of cluttering the entire page with unnecessary lines.
Good knowledge depth!
Happy to see you sir. It’s been 17 years.
Can you please share the GitHub repo for the code notebook? Thanks.
Not sure who his audience is. It seems like he doesn't have any confidence in them understanding what he is presenting. Frankly, this was very well explained. He points out the subtleties of what is happening in the agentic application. A good primer on agentic application development using LangChain. Hope the code can be shared. Thanks.
There is no explanation, just reading the slides in the video!
Very good teacher. No AI can ever replace a good teacher.
Very nice explanation, Sir. Can you please share the Jupyter notebooks too, so that we can try the assignment?
What an informative video 👏 👏 👏
Thank you
Thanks 😊
Excellent explanation 👍
Good explanation
Hahaha, explaining the Zero to Mastery PyTorch program.
Hmmm... I wonder if it's right to say that adding layers improves model performance, unless of course the complexity of the task/data demands it. I often find that a data deep dive, or really understanding the data, is an essential step that gets ignored when determining how to improve model performance.
You're absolutely right! Adding layers can sometimes improve model performance, but only when the task or data complexity justifies it. Blindly increasing model depth without understanding the data can lead to overfitting or inefficiencies. A deep dive into the data is indeed crucial: understanding its nuances, distributions, and inherent patterns often reveals simpler or more targeted ways to enhance performance. ...All this was explained in the next sessions...
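As a rough illustration of that point, here is a minimal sketch (in PyTorch, on a made-up synthetic regression task, not the code from these sessions) of one way to check whether extra depth is actually justified: train a shallow and a deeper network on the same data and compare loss on a held-out split.

```python
# Hypothetical sketch: does adding hidden layers help, or just overfit?
import torch
from torch import nn

torch.manual_seed(0)

# Simple synthetic regression task: y = sin(x) + noise
X = torch.linspace(-3, 3, 200).unsqueeze(1)
y = torch.sin(X) + 0.1 * torch.randn_like(X)
perm = torch.randperm(len(X))
train_idx, val_idx = perm[:150], perm[150:]

def make_mlp(hidden_layers: int, width: int = 32) -> nn.Sequential:
    """Build an MLP with the requested number of hidden layers."""
    layers = [nn.Linear(1, width), nn.ReLU()]
    for _ in range(hidden_layers - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)

def train_and_eval(model: nn.Module, epochs: int = 500) -> tuple[float, float]:
    """Train on the training split, then report train and validation MSE."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X[train_idx]), y[train_idx])
        loss.backward()
        opt.step()
    with torch.no_grad():
        val_loss = loss_fn(model(X[val_idx]), y[val_idx]).item()
    return loss.item(), val_loss

for depth in (1, 4):
    tr, va = train_and_eval(make_mlp(depth))
    print(f"{depth} hidden layer(s): train loss {tr:.4f}, val loss {va:.4f}")
```

On a task this simple, the deeper model usually does no better on the validation split, which is exactly the point above: depth has to be justified by the task and the data.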
We are fortunate to have come across this session on Transformers by Neil sir. Looking forward to more sessions from Neil sir.
Ma'am, there is a lot of disturbance in the recording.
Could you please provide the link to that article?
Great lecture! The only thing I would like to add is that the Transformer LLM architecture being discussed here is a decoder-only architecture, not an encoder. As per Jay Alammar's book Hands-On Large Language Models, which is the book being used here as a reference, any pink block with a chat symbol at the top right corner represents a decoder-only model.
As a professor, you can do a lot of rosy things other than training a transformer
I am really impressed by the way the professor teaches; the depth of understanding he has of the topics he is teaching, as well as the simple way he conveys difficult topics to make people understand, is awesome. Is there a way I can get his contact or any course where he has taught this end to end?
Nice to see you teaching. I was your student 25 years back
Thank you
Electrical transformers don't convert AC to DC 😢
amazing
Thank you so much Prof. ❤
👍🏻
Superb Explanation
Great explanation, Sir!
Sir ultimately your explanation is awesome
Clear explanation
Thank you..😊
@32:53 Am I getting this right? I visualise ChatGPT producing multiple answers for the same question in parallel windows.