ML Collective
United States
Joined Mar 9, 2021
ML Collective (MLC) is an independent nonprofit organization with a mission to make research opportunities accessible and free by supporting open collaboration in machine learning (ML) research.
Subscribe to our channel to support our efforts, and follow our events, especially the "Request for Plot" events where people pitch their project ideas and recruit collaborators!
More on our website: mlcollective.org/
Banner Image: "A Violet and Light Pink Tapestry representing the Collective Researcher Brain. Tessellation by M.C. Escher", generated by Nicholas Bardy.
Research Jam #23
MLC: Open Collab is a 100% open community for independent researchers. We held our twenty-third Research Jam on September 18, 2024, where two presenters signed up to share updates on their ongoing research. Access slides and read more about it at mlcollective.org/research-jam-23/
0:00 Event starts
0:57 Can LLMs Power Product Recommendations?
23:50 Webstudent
155 views
Videos
The AI Scientist @ DLCT
1K views · 2 months ago
This is a talk delivered at the (usually not recorded) weekly journal club "Deep Learning: Classics and Trends" (mlcollective.org/dlct ). Speaker: Robert Tjarko Lange Title: The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery Abstract: One of the grand challenges of artificial general intelligence is developing agents capable of conducting scientific research and discoveri...
Privacy in LLMs @ DLCT
159 views · 2 months ago
This is a talk delivered at the (usually not recorded) weekly journal club "Deep Learning: Classics and Trends" (mlcollective.org/dlct ). Speaker: Niloofar Mireshghallah Title: Privacy in LLMs: Understanding how data is imprinted in language models, what data is imprinted and how it might surface! Abstract: Often when talking about privacy of chatbots and large language models, I get the questi...
Research Jam #22
365 views · 3 months ago
MLC: Open Collab is a 100% open community for independent researchers. We held our twenty-second Research Jam on July 24, 2024, where four presenters signed up to share updates on their ongoing research. Access slides and read more about it at mlcollective.org/events/research-jam-22/ 0:00 Event starts 00:28 Generative CAD 24:58 RFP: Carecost 43:25 Rising Mortal Machines 58:07 Multimodal Multili...
Multi-Agent RL @ DLCT
177 views · 4 months ago
This is a talk delivered at the (usually not recorded) weekly journal club "Deep Learning: Classics and Trends" (mlcollective.org/dlct ). Speaker: MARL Research Team, InstaDeep Title: Growing the MARL software ecosystem in JAX Abstract: We present JAX-based libraries and tools to support an ecosystem for Multi-Agent Reinforcement Learning research. Our ecosystem contributions include stable and...
Self-generated data @ DLCT
423 views · 5 months ago
This is a talk delivered at the (usually not recorded) weekly journal club "Deep Learning: Classics and Trends" (mlcollective.org/dlct ). Speaker: Rishabh Agarwal Title: Improving LLMs using self-generated data Abstract: This talk would be about some of our recent work on improving LLMs using their self-generated data with access to external feedback. I would cover how we can go beyond human da...
Research Jam #21
194 views · 5 months ago
MLC: Open Collab is a 100% open community for independent researchers. We held our twenty-first Research Jam on May 29, 2024, where three presenters signed up to share updates on their ongoing research. Access slides and read more about it at mlcollective.org/events/research-jam-21/ 0:00 Event starts 1:10 Mapping human transcriptomes to cellular proportions 17:30 On Search for Locally Activatin...
Efficient Pre-training @ DLCT
377 views · 5 months ago
This is a talk delivered at the (usually not recorded) weekly journal club "Deep Learning: Classics and Trends" (mlcollective.org/dlct ). Speaker: Sunny Sanyal Title: Pre-training with a little less data and compute Abstract: Pre-training LLMs is all about extremely large data and compute. For instance the newly released LLaMA-3 models are trained with 15 trillion tokens with 16K GPUs. In this ...
Synthetic Data @ DLCT
287 views · 6 months ago
This is a talk delivered at the (usually not recorded) weekly journal club "Deep Learning: Classics and Trends" (mlcollective.org/dlct ). Speaker: Diganta Misra Title: Synthetic Data: The New Frontier Abstract: In real-world scenarios, extensive manual annotation for continual learning is impractical due to prohibitive costs. Although prior arts, influenced by large-scale webly supervised traini...
Research Jam #20
494 views · 6 months ago
MLC: Open Collab is a 100% open community for independent researchers. We held our twentieth Research Jam on April 3, 2024, where three presenters signed up to share updates on their ongoing research. Access slides and read more about it at mlcollective.org/events/research-jam-20/ 0:00 Event starts 00:52 Neural Work Loss Landscapes Stochastic Narrowing Valleys 28:10 Webstudent 47:51 Alljoined
LLM Reasoning @ DLCT
561 views · 7 months ago
This is a talk delivered at the (usually not recorded) weekly journal club "Deep Learning: Classics and Trends" (mlcollective.org/dlct ). Speaker: Muhammad Khalifa Title: Discriminator-Guided Chain-of-Thought Reasoning Abstract: During this talk, we'll explore the challenges Large Language Models (LLMs) face with chain-of-thought (multi-step) reasoning, often leading them to invalid solutions w...
Research Jam #19
271 views · 8 months ago
MLC: Open Collab is a 100% open community for independent researchers. We held our nineteenth Research Jam on February 7, 2024, where four presenters signed up to share updates on their ongoing research. Access slides and read more about it at mlcollective.org/events/research-jam-19/ 0:02 Event starts 0:38 Multi-crop LLaVA 14:40 (Modern) CNNs with random filters 29:13 Meta-learning Approach for...
Training dynamics @ DLCT
536 views · 9 months ago
This is a talk delivered at the (usually not recorded) weekly journal club "Deep Learning: Classics and Trends" (mlcollective.org/dlct ). Speaker: Elan Rosenfeld Title: Outliers with Opposing Signals Have an Outsized Effect on Neural Network Optimization Abstract: There is a growing list of intriguing properties of neural network optimization, including specific patterns in their training dynam...
Research Jam #18
365 views · 9 months ago
MLC: Open Collab is a 100% open community for independent researchers. We held our eighteenth Research Jam on December 6, 2023, where two presenters signed up (but one opted out of being recorded) to share updates on their ongoing research. Access slides and read more about it at mlcollective.org/events/research-jam-18/ 0:02 Event starts 2:05 Learning Backgammon: TD-Gammon review and AlphaZero ...
Research Jam #17
122 views · 9 months ago
MLC: Open Collab is a 100% open community for independent researchers. We held our seventeenth Research Jam on October 4, 2023, where five presenters signed up to share updates on their ongoing research. Access slides and read more about it at mlcollective.org/events/research-jam-17/ 00:01 Event starts 00:21 Scalable game benchmarks for RL/IL via crowdsourced human written code 06:51 U-Turn Di...
Pre-training in Transfer Learning @ DLCT
272 views · a year ago
Brutal
This is so insightful!
Love this and want to get involved, but it looks like the Discord link is broken!
Loved the guy's explanation of DNA
Quick note: @3:10 when I discuss the step size stability threshold, I mistakenly say that the maximum stable step size is 2/η. I meant to say 2/sharpness! Equivalently, if the step size is fixed at η then the stability requirement is sharpness <= 2/η.
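The corrected claim in this note can be checked numerically on a 1-D quadratic f(x) = (s/2)x², whose sharpness (second derivative) is s: gradient descent with step size η is stable exactly when sharpness <= 2/η. A minimal sketch (the quadratic and the numbers are illustrative, not taken from the talk):

```python
# Gradient descent on f(x) = (s/2) * x**2, where sharpness = f''(x) = s.
# The update x <- x - eta * s * x = (1 - eta * s) * x converges iff
# |1 - eta * s| < 1, i.e. eta < 2 / s (equivalently, sharpness < 2 / eta).

def gd_converges(sharpness, eta, x0=1.0, steps=100):
    x = x0
    for _ in range(steps):
        x = x - eta * sharpness * x  # plain gradient descent step
    return abs(x) < abs(x0)  # did the iterate shrink toward the minimum?

s = 10.0  # sharpness, so the stability threshold is eta = 2/s = 0.2
print(gd_converges(s, eta=0.19))  # just below threshold -> True (stable)
print(gd_converges(s, eta=0.21))  # just above threshold -> False (diverges)
```

Just above the threshold the iterates oscillate with growing amplitude (the factor 1 - ηs is below -1), which matches the "2/sharpness" form of the correction.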
Discovering this channel is a source of joy for me as I delve into the fundamentals and connect with a supportive community that will offer insights for my projects. It feels like a dream come true!
please share slide
You should provide GitHub link for this work.
How can I join these meetups?
This video is really helpful. I hope more videos are to come.
I want to know about Aditya's educational background. What did he study, and where?
Undergrad at NYU (CAS/Courant School)
@@dfmrrd Source? I didn't find any of his social media except Instagram, and he's not even on LinkedIn, I guess.
At a startup, would a generalist have greater value?
These are great insights.
Thank you for sharing this! One's personal schedule can often make opportunities like this slip out of reach. Having it made available as a recording is most appreciated. I would suggest the viewer also check out Keerthana's website link above. A person to watch and someone who will go very far indeed! :)
Thanks for posting these as videos
very frank and insightful talk, i wish all top industry performers analyzed themselves in public like this. thank you!
interesting work, and helpful for democratizing large RL models~
Great talk! One point is that the argument for why the lambda is seemingly at 0.5 doesn't seem right. Because these cases are chosen with random seeds, all you can expect is that the distribution of lambda is peaked at 0.5 (for lots and lots of seeds) but it doesn't follow by symmetry that it would be exactly 0.5. That seems to warrant an explanation.
This was a great talk! I missed the live talk. Thanks for recording this one.
Excellent
Thank you
Great video!
47:00
Amazing discussion
It was wonderful to present our work in this workshop, keep up the great work.
Is the book available for free?
It is not. We had limited-time access to drafts for the purposes of the reading group. The link to preorder is here: twitter.com/chipro/status/1526049559540944897?s=20&t=MC7VnVXF0evyvIwDdK0kbA
Support it! I believe your work is meaningful.
Nice to see ML Collective has a YouTube channel. Didn't watch the whole vid but I know Rosanne is top notch from Twitter :)
Followup on the "Overfitting a Single Batch" discussion from 31:49 -- I did some experiments to follow up on my claim about Transformers not being able to overfit single batches, and I actually want to weaken it a lot. I spent some time with HF Transformers and I've been able to get them to consistently overfit single batches for simple tasks like sequence classification. The other transformer problem I was working on had a more difficult task -- image-to-text -- and the implementation was not as well-tested. Results are here: wandb.ai/cfrye59/hf-transformers-overfit-glue-mrpc/sweeps/soi1gyw5?workspace=user-cfrye59 Code is here: colab.research.google.com/drive/1pAWd6MsY4yJrjoqknIbPGxW0usiTTAOJ?usp=sharing The issues with the initialization, normalization, and gradient stability of the TF architectures are real. I've seen them in real-world models, e.g. from BigScience @ HF huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard and in Dall-E mini from Boris Dayma twitter.com/charles_irl/status/1506487785783365633?s=20&t=qcNiNoQ9OF6uJmmqFx20SQ. They may still be related to the failure of the other model+task combo, but they're not as bad as I thought.
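The sanity check being discussed is: a model with enough capacity should drive training loss to near zero on a single fixed batch, so failure to do so signals a bug. A toy stand-in (plain logistic regression on random data, not the HF Transformers setup from the links above):

```python
import numpy as np

# "Overfit a single batch" check: repeatedly train on one fixed batch and
# verify the loss collapses toward zero. Model, data, and hyperparameters
# here are illustrative stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))    # one fixed batch of 8 examples
y = rng.integers(0, 2, size=8)  # arbitrary binary labels to memorize
w = np.zeros(16)
b = 0.0

for _ in range(5000):  # plain gradient descent on the single batch
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad = p - y                            # dloss/dlogits for cross-entropy
    w -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

p = np.clip(1.0 / (1.0 + np.exp(-(X @ w + b))), 1e-12, 1 - 1e-12)
loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
print(f"final single-batch loss: {loss:.4f}")  # should be close to 0
```

Here 8 random points in 16 dimensions are almost surely separable, so the loss collapses; if an analogous loop on a real model plateaus instead, that points to an implementation problem (initialization, normalization, gradient flow) rather than the task.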
Will session 4 be uploaded? Or do you leave the chapter to the participants :)
Actually, Session 4 covers Chapter 5! The book is still being edited, and the numbering of the chapters changed mid-stream. So the next session is this one: th-cam.com/video/ad_HzOeuwpo/w-d-xo.html
@@charles_irl Thank you Charles, you're my best teacher ever in ML. 🔥
Your glasses remind me of adversarial attacks on images. But they're really colorful and nice @Charles
Bummed I missed this one. I’ll have to come do a quick share on progress
The area of the circular cross section perpendicular to the white-pole/black-pole axis shrinks as you get closer to the poles. This means that you have fewer shades to choose from. Isn't this invalid, and shouldn't the number of shades remain the same?
Good initiative, keep up the great work.
This is one genuine talk.
Regarding a minor point around 8:45 mark -- I don't think that conference paper decisions are *that* correlated. Sure, strong papers get in, terrible papers get rejected. But for the mid-tier papers, re-submitting to different conferences is the action based on the belief that the reviewing process from one to the other is more independent (in a probabilistic sense) than correlated. Otherwise, if the reviewing processes are extremely correlated, a rejection from one conference is enough evidence that you shouldn't submit to somewhere else because they are all correlated.
Being open about personal experiences and vulnerabilities is still much too rare in tech. Thank you, Rosanne.
Hearing one of the ML community's rockstars share such an honest perspective on the struggles we likely all recognize is refreshing and motivating. Thank you for sharing this!!
i’m new to her work and need a bit of context - what are you referencing when saying she is a rockstar? (ie what must we know about her?)
With due respect, I don't buy the generalist argument for hiring. Aren't there already so many people who know a little about everything (like RL, vision, gradient descent, conv nets, etc.)? Even any fresh graduate who has worked on ML should know a bit about these. Isn't it that, as a research community, we want to understand why deep learning works at the fundamental level rather than treating it as a black box, and that is where we need depth more than ever?
I think she meant being a jack of all trades, master of one, but with your 'jack' being equivalent to others' 'master'. Also, I do agree with your point on the interpretability of AI!
Realistic, open, and brave! Thanks a lot for this brilliant talk.
Nice video, thanks :)
Simply fabulous presentation! I love the thematic connection between the career advice of changing approach to alter outcomes, and the clever tweaking of the model to significantly change its output!
Here fully watching from Jamaica 🇯🇲👍
Incredibly brave and intelligent points to make. I hope it starts a lasting conversation, thanks for starting it.
Fantastic!!! Quite relatable, inspiring, and very helpful. Thanks a lot, Rosanne :)
It is narrow when ... all of them are trying to hire the same kind of people, with the same rigid rubric. Cannot agree more on this; we call this "内卷" (involution, a zero-sum rat race) in Chinese.
I’m glad that you are an extremely petty person because I am just the same. Thanks for bringing up this topic.
Great talk! Your story almost brings tears to my eyes. 一定要成功呀! (You must succeed!)
great topic.
Amazing work, everyone!