DeepMind's AlphaFold 2 Explained! AI Breakthrough in Protein Folding! What we know (& what we don't)

  • Published Sep 20, 2024

Comments • 369

  • @TMtheScratcher
    @TMtheScratcher 3 ปีที่แล้ว +61

    Great video, but one small mistake around 19:55: you do not have two torsion angles because you are in 3D. The thing is that atoms can rotate around a single covalent bond. The amino acid backbones, however, are connected in such a way that a double bond is created (strictly speaking it is only a partial double bond, but that does not matter for the point). Double bonds create a plane of atoms, in which the atoms are fixed and can no longer rotate individually. This is the case for connected amino acids. In detail, the central carbon atom, called C_alpha, is connected to two such planes: one is the connection to the previous amino acid and the other is the connection to the next one. The rotation angles of these planes relative to the C_alpha are the torsion angles. They are a direct result of the underlying chemistry and have been used to describe structures since the birth of structural biology. If there weren't a partial double bond, we would have a huge problem, since each protein would have even more angles to consider (the side-chain angles are freely rotatable in most amino acids and lead to many, many more possible combinations, but the backbone is more important and thankfully there are just two angles per residue).
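
(Editor's aside: a minimal sketch, not from the video or the paper, of how a backbone torsion angle like the ones described above can be computed from four atom positions; the coordinates here are hypothetical.)

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Torsion (dihedral) angle in degrees defined by four points.

    For backbone residue i, phi uses C(i-1), N(i), CA(i), C(i) and
    psi uses N(i), CA(i), C(i), N(i+1). Sign convention may differ from other tools.
    """
    b0, b1, b2 = -(p1 - p0), p2 - p1, p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1   # b0 projected onto the plane normal to b1
    w = b2 - np.dot(b2, b1) * b1   # b2 projected onto the same plane
    return np.degrees(np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w)))

# Four coplanar points with both outer atoms on the same side -> 0 degrees (cis).
pts = [np.array(p, dtype=float) for p in [(0, 1, 0), (0, 0, 0), (1, 0, 0), (1, 1, 0)]]
print(dihedral(*pts))  # 0.0
```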

  • @carlos24497
    @carlos24497 3 ปีที่แล้ว +455

    Yannic Kilcher is all you need

    • @Guytron95
      @Guytron95 3 ปีที่แล้ว +5

      lol

    • @sehbanomer8151
      @sehbanomer8151 3 ปีที่แล้ว +44

      The unreasonable efficiency of Yannic Kilcher

    • @jg9193
      @jg9193 3 ปีที่แล้ว +24

      Learning to summarize from Yannic Kilcher

    • @jg9193
      @jg9193 3 ปีที่แล้ว +21

      Self-training with Noisy Yannic Kilcher

    • @anotherplatypus
      @anotherplatypus 3 ปีที่แล้ว +5

      Machine Learning Research Paper Summarization Models are Yannic Kilchers!

  • @FrakCylon
    @FrakCylon 3 ปีที่แล้ว +24

    I've done my bachelor thesis in structural proteomics and your introduction was very very good!
    Looking forward to the explanation on the paper on AlphaFold2!

  • @surajmath3527
    @surajmath3527 3 ปีที่แล้ว +59

    Yannic:"If youre watching this youre a machine learning person,and dont know about proteins"
    Me:"Actually...........quite the opposite"

    • @5602KK
      @5602KK 2 ปีที่แล้ว

      Same 😂😂

  • @NextFuckingLevel
    @NextFuckingLevel 3 ปีที่แล้ว +287

    Friendship ended with CNN
    Now, Transformer is my best friend

    • @scottmiller2591
      @scottmiller2591 3 ปีที่แล้ว +4

      Is it because of the butt-wiping feature?

    • @alefratat4018
      @alefratat4018 3 ปีที่แล้ว

      Yeah, well transformers are not the universal answer, CNNs won't go anywhere soon.

    • @harsh9558
      @harsh9558 3 ปีที่แล้ว

      Hello comrade

    • @Supreme_Lobster
      @Supreme_Lobster 3 ปีที่แล้ว +1

      good reference

    • @okagbasuna246
      @okagbasuna246 3 ปีที่แล้ว +1

      My friendship has ended with every media outlet like FOX and CNN, they all give us the illusion of a competition.

  • @16876
    @16876 3 ปีที่แล้ว +12

    Note that at ~10:00 we get the impression that 'shape is all you need', but while some alternative AAs that replace common ones at given positions of a particular protein can retain the energetically favored structure, the functionality might be altered drastically: shape != functionality. You can have two different proteins with the same shape where only one is functional in the examined spatiotemporal context, or functions as expected.
    Further, the AA composition of the primary chain and its many intrinsic properties are not the sole determinants of the final 3D structure, since this also depends largely on the environment (acidity, temperature, etc.). Finally, different shapes (different proteins) can have similar functions.
    Overall top effort and overview of AlphaFold, thanks Yannic!

    • @justfoundit
      @justfoundit 3 ปีที่แล้ว +3

      And there are quantum mechanical effects. Even a slight change in the atomic structure - like deuterium instead of hydrogen - can alter the total energy of an electron that tries to tunnel through the molecule. And the whole protein "machine" falls apart.
      But I guess shape is still VERY important. So good job DeepMind, CASP and of course Yannic! :)

  • @scatteredvideos1
    @scatteredvideos1 3 ปีที่แล้ว +10

    Great job explaining everything. I'm a protein engineering PhD student, and all of the other videos I've watched have played into the hype and not explained anything well.
    Based on their CASP results they haven't solved anything yet, but if they keep up this rate of innovation they will in the next 2-4 years. They are absolutely killing the other big player in the field (Rosetta), though; it is truly amazing what they have been able to accomplish.

    • @wdai03
      @wdai03 3 ปีที่แล้ว +2

      Could you explain exactly why it can't be considered solved? Based on their blog they basically say their predictive error is close to what you would observe if you tried to determine the structure experimentally, which seems to be pretty close to being solved. I'm a ml student with limited knowledge of proteins, although I took a bioinformatics course and pretty much just coasted lol

    • @firecatflameking
      @firecatflameking 2 ปีที่แล้ว

      Would love to know why you don't consider it solved as well!

    • @scatteredvideos1
      @scatteredvideos1 2 ปีที่แล้ว

      @@firecatflameking To be considered 'solved' in my mind, the model should be able to predict structures at ~90-95% of crystal-structure resolution, or roughly cryo-EM resolution, in >80% of cases. That would give me enough confidence in the structures to begin engineering proteins with this software and then only express the protein to validate changes periodically throughout the design process.

    • @firecatflameking
      @firecatflameking 2 ปีที่แล้ว

      @@scatteredvideos1 Makes sense! I'm guessing we're gonna get there within a few years

    • @scatteredvideos1
      @scatteredvideos1 2 ปีที่แล้ว

      @@firecatflameking if they keep up at the same rate that they are we should be nearly there next year! But that's yet to be seen. I'm excited to see what they do

  • @sinkler123
    @sinkler123 3 ปีที่แล้ว +7

    Thank you, finally, someone providing a longer more detailed presentation about AlphaFold.
    Just found your channel and will definitely check out more content. Great job!

  • @machinelearningdojowithtim2898
    @machinelearningdojowithtim2898 3 ปีที่แล้ว +111

    Lightspeed Kilcher strikes again. He's faster than Usain Bolt. ✨

    • @quebono100
      @quebono100 3 ปีที่แล้ว +1

      first 🤣

  • @misteratoz
    @misteratoz 3 ปีที่แล้ว +13

    @17:20 the second line has "NERDS" in it.
    That's it. That's my contribution to this discussion.

  • @chrisavery5397
    @chrisavery5397 3 ปีที่แล้ว +15

    There are a few amino acids with rings (they are called aromatic): phenylalanine, tryptophan, tyrosine (and histidine). Proline also has a ring in its structure :) I love these videos man!

  • @henpark
    @henpark 3 ปีที่แล้ว +1

    My comments as a computational biophysics student:
    1. What about proteins that need extra helper proteins, such as chaperones, to fold? Anfinsen's dogma (i.e. the AA sequence encodes the 3D structure) does not apply so well there.
    2. The Nature paper on this AlphaFold mentioned that complex structures (probably meaning homo-/hetero-n-meric proteins) are yet to be predicted with high accuracy, due to intermolecular interactions distorting the structures.
    3. Most importantly, at least to me: what about the correct folding PATHWAY? Deep learning, MC-based methods, homology modeling and the like are all about the END structure. Molecular dynamics can perhaps (depending on the force field) predict folding pathways (described by a reaction coordinate or collective variables).

  • @joppo758
    @joppo758 3 ปีที่แล้ว +13

    I study biochemistry and the explanation about folding proteins is actually really good!

  • @veedrac
    @veedrac 3 ปีที่แล้ว +46

    You can see Yannic's brain breaking in realtime, not able to cope without there being *something* to be grumpy about.

    • @liesdamnlies3372
      @liesdamnlies3372 3 ปีที่แล้ว +1

      You can be grumpy about who controls it.

  • @Ronnypetson
    @Ronnypetson 3 ปีที่แล้ว +112

    Plot twist: the intern at CASP wrote buggy code for the score computation

    • @nsubedi451
      @nsubedi451 3 ปีที่แล้ว +30

      if "DeepMind" score = 2 * highest score

    • @saanvisharma2081
      @saanvisharma2081 3 ปีที่แล้ว

      Turns out you're true

    • @israelRaizer
      @israelRaizer 3 ปีที่แล้ว

      @@saanvisharma2081 Wait, what do you mean by "you're true"?

    • @Kage1128
      @Kage1128 3 ปีที่แล้ว +1

      Nah fam

    • @Ronnypetson
      @Ronnypetson 3 ปีที่แล้ว +5

      @@BR-fu9px that would be a second-order plot twist

  • @ashwhall
    @ashwhall 3 ปีที่แล้ว +3

    You say that the 64x64 conv can only see 64 amino acids at a time, but that's not true. While it is the case for a single layer conv net, when you stack convolution layers the effective receptive field grows with each successive layer.
    Their model with "220 residual convolution blocks" is deep enough for a receptive field of at least thousands of amino acids.
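
(Editor's aside: a rough back-of-the-envelope sketch of how the receptive field of stacked stride-1 convolutions grows; the layer count and dilation schedule below are illustrative assumptions, not the exact AlphaFold 1 architecture.)

```python
def receptive_field(kernel_sizes, dilations=None):
    """Receptive field of a stack of stride-1 convolution layers."""
    dilations = dilations or [1] * len(kernel_sizes)
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += d * (k - 1)   # each layer widens the field by dilation * (kernel - 1)
    return rf

print(receptive_field([3] * 220))                     # 441 for 220 plain 3x3 layers
print(receptive_field([3] * 220, [1, 2, 4, 8] * 55))  # 1651 with cycled dilations
```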

    • @yevhendiachenko3703
      @yevhendiachenko3703 3 ปีที่แล้ว

      They have a deep convolutional model with 220 residual convolution layers that takes a 64x64 input. But the whole distance map has size LxL, where L > 64, so they must run the network several times on separate crops of the input and aggregate the predictions. So a single pass really does see only 64 amino acids at a time.
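
(Editor's aside: one simple way to implement the crop-and-aggregate scheme the reply above describes, purely as an illustration; the 50% overlap and the stand-in crop model are assumptions.)

```python
import numpy as np

def predict_crop(crop):
    # Stand-in for the 64x64 distance-prediction network.
    return np.zeros_like(crop)

def tile_predict(features, crop=64):
    """Run a crop-sized model over an LxL feature map and average overlapping tiles."""
    L = features.shape[0]
    out = np.zeros((L, L))
    counts = np.zeros((L, L))
    step = crop // 2                      # 50% overlap between neighbouring tiles
    for i in range(0, L, step):
        for j in range(0, L, step):
            si = slice(i, min(i + crop, L))
            sj = slice(j, min(j + crop, L))
            out[si, sj] += predict_crop(features[si, sj])
            counts[si, sj] += 1
    return out / counts

print(tile_predict(np.zeros((150, 150))).shape)  # (150, 150)
```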

  • @littlebigphil
    @littlebigphil 3 ปีที่แล้ว +2

    The 2 step process reminds me of the symbolic regression on physical systems paper. Use deep learning to generate some intermediate representation, and then use that representation as the model to approximate for a different algorithm that has nicer properties.

  • @tristanridley1601
    @tristanridley1601 3 ปีที่แล้ว +1

    DNA is your compressed source code.
    RNA is your decompressed source code.
    Proteins are your binaries.
    Each group of three bases (a codon) in the base-4 DNA or RNA code represents one amino acid, with some redundancy.
    We are slowly learning the exact compiler code. It was not that long ago that we found the codes for "start" and "stop".
    This folding puzzle is one of the last big steps before we can program life like we program computers.
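
(Editor's aside: a toy illustration of the codon mapping described above, using a few entries from the standard genetic code; this is not from the video.)

```python
# A few entries from the standard codon table; the full table has 64 codons.
CODON_TABLE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe", "UUC": "Phe",                      # redundancy: two codons, one amino acid
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Translate an mRNA string into amino acids, stopping at a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i + 3], "?")
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

print(translate("AUGUUUGGAUAA"))  # ['Met', 'Phe', 'Gly']
```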

  • @quebono100
    @quebono100 3 ปีที่แล้ว +11

    Wow Yannic you have such amazing teaching skills.

  • @TheGroundskeeper
    @TheGroundskeeper 3 ปีที่แล้ว +2

    A big issue with protein folding is that there are structures at many different scales. Small local structures wrap into larger complexes, which fold into large knots. A CNN would in itself struggle to make those long-range associations; AlphaFold 2 has to be a transformer with attention in order to draw a relationship between a protein segment at one position and its nearby neighbour in 3D space that may sit thousands of residues away along the chain.
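
(Editor's aside: a minimal single-head self-attention sketch over toy residue embeddings, only to illustrate that attention relates all pairs of positions in one step; the shapes and random weights are placeholders, not AlphaFold 2's actual architecture.)

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head self-attention: every residue attends to every other residue."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])          # (L, L) pairwise interaction scores
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over all positions
    return weights @ v                               # mix information across the whole chain

rng = np.random.default_rng(0)
L, d = 128, 32                                       # toy chain length and embedding size
x = rng.normal(size=(L, d))                          # hypothetical residue embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)           # (128, 32)
```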

  • @alexmorehead6723
    @alexmorehead6723 3 ปีที่แล้ว +1

    John Jumper mentioned at CASP14 on Tuesday that their structure prediction system uses "equivariant" transformers and, most importantly, is end-to-end, meaning they can backpropagate errors through the entire prediction system. Just FYI.

  • @nano7586
    @nano7586 3 ปีที่แล้ว +3

    You're 1) smart and 2) a great teacher. Reeeally good video. Super entertaining and rich of information.

  • @Hovane5
    @Hovane5 3 ปีที่แล้ว +5

    That alligator drawing though... 👌🤩

  • @scatteredvideos1
    @scatteredvideos1 3 ปีที่แล้ว +3

    So the part after the references is basically the supplementary material. I'm not sure if that is common in CS papers, but it just goes into detail on exactly how everything was done.

  • @gs2271
    @gs2271 3 ปีที่แล้ว +3

    Nicely explained (even the biochemistry)!! Loved it when you compared DNA to source code, protein to a binary, and the whole process to compilation.
    I am a biologist interested in machine learning and AI and it is great to see this explanation.
    BTW, the amino acids with rings do exist. But especially proline, whose ring includes the backbone amine group, makes protein folding even more complicated.

  • @michaelnurse9089
    @michaelnurse9089 3 ปีที่แล้ว +1

    You are the best explainer I have encountered - and the population in question is large.

  • @banjerism7281
    @banjerism7281 3 ปีที่แล้ว +22

    Biology has come a long way since 1995

    • @poksnee
      @poksnee 3 ปีที่แล้ว +1

      This also involves chemistry

    • @kellyjackson7889
      @kellyjackson7889 3 ปีที่แล้ว

      @@poksnee and snacks don't forget the snax

  • @russelldicken9930
    @russelldicken9930 3 ปีที่แล้ว +4

    Thanks for your effort in shedding light on this development

  • @johnpapiewski8232
    @johnpapiewski8232 3 ปีที่แล้ว +9

    How about going backwards? Start with a shape you want, and from that generate the sequence needed to produce it.

    • @npm1811
      @npm1811 3 ปีที่แล้ว +2

      I think the problem with this approach is defining the "shape you want" in the first place. It's difficult for scientists to accurately estimate the shapes of proteins based on their function. I mean, to an extent you can: if your protein serves function x, then it might contain structural components a, b and c. But estimating even crudely accurate xyz coordinates for all the residues would be an exercise in futility. Also, the nature of the protein folding problem is that we know the sequence but we don't know the 3D structure. Since sequence is the defining factor in determining tertiary structure, we should start with the sequence and go from there.

    • @EMSV66
      @EMSV66 3 ปีที่แล้ว +1

      it's been done for antibodies

    • @npm1811
      @npm1811 3 ปีที่แล้ว +1

      @@EMSV66 is this because their structure is very well known/conserved. Perhaps you shed light on this “going backwards method” more than I can, it’s not my field but I’m v interested!

    • @EMSV66
      @EMSV66 3 ปีที่แล้ว +1

      @@npm1811 Yes, that is part of the reason. You only need to model the variable region. With antibodies you are trying to create a complementary surface to the antigen's so you can start from the structure of the antigen. Look up the work of Costas Maranas at PSU.

    • @npm1811
      @npm1811 3 ปีที่แล้ว

      @@EMSV66 that makes a lot of sense, thanks

  • @fuhaoda
    @fuhaoda 3 ปีที่แล้ว +2

    Very good explanation, are you going to explain Alpha Fold 2 paper and RoseTTAFold?

  • @rbain16
    @rbain16 3 ปีที่แล้ว +8

    I was pre-med, got a bachelor's in Biology and am in an MS for ML. I cried reading this news yesterday. I don't want to rain on the skeptics' parade, as I sympathize with that view a lot (since I'm usually "that guy"). Assuming they didn't train on the test samples inferred during the competition, this is big progress and a big deal.
    Is it worth pointing out that this is the same group that a couple of years ago proved they can beat everyone on the planet at Go, Chess, etc.?
    Pharmaceuticals are the obvious beneficiary of this tech, but we could imagine some next-level stuff too. Like a CRISPR that we didn't just steal from bacteria and reverse engineer. We could, in the future, create that kind of cellular machinery on our own terms. Everyone knows DNA, but it is just the blueprint for the proteins; the proteins are the really important part that does the enzymatic work.

    • @herp_derpingson
      @herp_derpingson 3 ปีที่แล้ว +12

      One small step for man. One giant leap towards genetically engineered cat girls.

    • @GyroCoder
      @GyroCoder 3 ปีที่แล้ว

      Will this speed up the progress of biotechnology / synthetic biology in general, or just specific things?

    • @rbain16
      @rbain16 3 ปีที่แล้ว

      @@GyroCoder given that all the important bits (the cellular machinery) of us are proteins, this will have large impacts across biology

    • @alvaromendoza4406
      @alvaromendoza4406 3 ปีที่แล้ว

      @@rbain16 True, but not all biotech applications / workflows need to know the structure of a protein. It sure will help on many things, many of which we will probably not see coming; but I'd say that for now it'll mainly impact those applications where the spatial configuration of the protein can be exploited.

    • @rbain16
      @rbain16 3 ปีที่แล้ว

      @@alvaromendoza4406 The structure and the function go hand in hand, maybe you can give me an example?

  • @EMSV66
    @EMSV66 3 ปีที่แล้ว

    A structural biologist here. Decent explanation of protein folding. What is the best way to jump into neural networks for a newbie. Also, a comment on Nature papers. They have a print version that is shorter and an extended online version that contains the Methods section. So the short print version contains all the data with little explanation of how it was obtained. A deeper explanation of the methods used can be found in the extended online version. I hope this helps.

  • @hanyanglee9018
    @hanyanglee9018 3 ปีที่แล้ว +1

    20:12 Up to this moment: the idea is to train a network that reads the sequence of amino acids and predicts their pairwise distances. Stage 2 does gradient descent in order to both generate actual spatial coordinates (3D vectors) and check whether the stage-1 prediction is even possible. If it's not possible (say distance(1,2) == 1, d(2,3) == 1, but d(1,3) == 3), stage 2 has to deal with that and give out a result that fulfils the distance predictions as well as possible.
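
(Editor's aside: a minimal sketch of that second stage, fitting 3D coordinates to a set of predicted pairwise distances by gradient descent; the target distances reuse the inconsistent 1/1/3 example from the comment above and are otherwise made up.)

```python
import numpy as np

# Predicted pairwise distances for three points; 1, 1, 3 violates the triangle
# inequality, so no 3D arrangement can satisfy all of them exactly.
D = np.array([[0., 1., 3.],
              [1., 0., 1.],
              [3., 1., 0.]])

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 3))                           # random initial 3D coordinates

lr = 0.01
for _ in range(5000):
    diff = X[:, None, :] - X[None, :, :]              # pairwise displacement vectors
    dist = np.linalg.norm(diff, axis=-1) + np.eye(3)  # + eye avoids divide-by-zero on diagonal
    err = dist - D                                    # mismatch against predicted distances
    grad = 4.0 * np.sum((err / dist)[:, :, None] * diff, axis=1)
    X -= lr * grad                                    # move coordinates to better match D

final = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(np.round(final, 2))                             # the best compromise the geometry allows
```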

  • @VladimirBrasil
    @VladimirBrasil 3 ปีที่แล้ว +1

    Beau-ti-ful explanation. One of the best explanations of any subject I've ever seen.
    Brilliant turn from a complex matter into an understandable subject.
    Ge-ni-us.
    Congrats and, above all, Thank You Very, Very Much. Beau-ti-ful.

  • @wenhanzhou5826
    @wenhanzhou5826 2 ปีที่แล้ว

    This is super cool, glad that I found you!

  • @Davourflave
    @Davourflave 3 ปีที่แล้ว

    The paper states that they used dilated convolutions, which made it possible to also model long-range interactions. That is crucial, since protein folding depends heavily on the long-range interactions that determine the 3D structure of a protein.

  • @cupajoesir
    @cupajoesir ปีที่แล้ว

    I love the on the fly real human to human explanation and the fun that ensues. @2033s "My drawing skills are to be criticized in another video " 🙂 . Technically accurate and compact and relevant. Enjoyed it immensely in many ways. Thanks!

  • @scatteredvideos1
    @scatteredvideos1 3 ปีที่แล้ว +4

    The iterative process is probably the folding step. Typically, when using one of these algorithms you will fold the protein thousands of times and build the final structure from a weighted average of all the folded structures.

  • @Soundslikelife13
    @Soundslikelife13 3 ปีที่แล้ว +1

    I wonder if the results would be improved further if they used a tetryonic / quantum-field-based model. Similar to prior Alpha projects, human assumptions and training on some human data actually held the project back in the final stretches of improvement.

  • @rohanbhatia3013
    @rohanbhatia3013 3 ปีที่แล้ว +4

    When is the new video coming out for the recent Nature paper?

  • @cisy
    @cisy 3 ปีที่แล้ว +3

    Please do an update about the Alphafold database

  • @ashmitharajendran1130
    @ashmitharajendran1130 2 ปีที่แล้ว +3

    Hi! Thank you for such a great explanation. This has been so helpful. Would love an update with the published Alphafold2 paper!

  • @ricodelta1
    @ricodelta1 3 ปีที่แล้ว +2

    I'm actually here by suggested videos, where the first video was about pimple popping.

  • @petrleiman475
    @petrleiman475 3 ปีที่แล้ว +8

    OK, the algorithm of AlphaFold 2 is different from AlphaFold version 1.0. They replaced EVERY part of the AlphaFold version 1 pipeline. There is no point in discussing AlphaFold v1.0 at all; it's apples vs. oranges: different neural network, different physical constraints, different everything. The co-evolution matrices and the final 3D minimization function were separate routines, but they are part of a single neural network now. The network adjusts and rebuilds itself (its type changes) depending on the sequence. Etc., etc., etc. Once again: there is no point in reading that Nature paper any more, the new algorithm is really different.

    • @sevret313
      @sevret313 3 ปีที่แล้ว +3

      Even if it is not relevant, it is still interesting to hear how the old Alphafold worked.

  • @PetrGladkikh
    @PetrGladkikh 3 ปีที่แล้ว +1

    25:23 It says "gradient descent on a _protein-specific potential_". I believe at that stage the initial predictions are not used anymore (only as the initial state).

  • @gmcenroe
    @gmcenroe 3 ปีที่แล้ว +3

    Really, there are only 20 amino acids found in proteins encoded by the genetic code. Selenocysteine, which is found in about 50 eukaryotic proteins, replaces cysteine by incorporating Se in place of S, which is not a huge structural change. There can also be post-translational modifications. I bet that some more highly ordered proteins, such as those with a large number of alpha helices and/or beta sheets connected by more disordered strands, are easier to predict, as are those where a larger data set from X-ray crystallography and protein homology aid the prediction. X-ray crystallography will still be important when looking at how small molecules bind to proteins or how protein-protein interactions work.

  • @mikhailfranco
    @mikhailfranco 3 ปีที่แล้ว +1

    Very nice summary
    Good visual explanations.
    Enjoyed the alligator.
    Thanks.

  • @gollumdiefee2189
    @gollumdiefee2189 3 ปีที่แล้ว +2

    Does anyone know if there is a good source for explanations of convolutional neural networks?
    I am a biochemistry bachelor's student, so I basically have no experience with computer science at all... but it would be super helpful :)
    And of course: thanks for the great explanation, it was really entertaining to watch this walkthrough of the paper and it really helped me understand the concepts of the underlying math and computer science.

  • @nels6991
    @nels6991 3 ปีที่แล้ว +1

    Update: This video is great.
    I watched up until 3:30 when Yannic says this video is for a "Machine learning person" rather than a protein engineering type person. I know quite a bit about protein folding and protein engineering and fairly little about the technical side of machine learning. Is watching this hour long video going to be a waste of time for me?

  • @Omnifarious0
    @Omnifarious0 3 ปีที่แล้ว +3

    Also, how does this account for how the environment affects how a protein folds? For example, don't some proteins misfold in the presence of other misfolded proteins of the same type?

  • @markdonatelli8611
    @markdonatelli8611 3 ปีที่แล้ว +2

    D.R.N.A. and the R.G.B. of the albelian ring heliocase of light Theora

  • @pastrop2003
    @pastrop2003 3 ปีที่แล้ว +5

    As I remember, there was a paper by the Salesforce team about 6 months ago on using BERT to predict binding points on protein chains. Do you think the Google folks had roughly the same idea?

    • @shahikkhan
      @shahikkhan 3 ปีที่แล้ว

      Bertology meets biology scored better than AlphaFold1? @yannic

  • @ericcodes
    @ericcodes 3 ปีที่แล้ว +2

    1st bar: Yannic
    2nd bar: Next best AI TH-camr

  • @RedShipsofSpainAgain
    @RedShipsofSpainAgain 3 ปีที่แล้ว +21

    10:58 Yes they're called "beta sheets"

  • @matasuki
    @matasuki 3 ปีที่แล้ว +1

    So instead of using the chemical properties of each amino acid, they took the data science / ML approach and built a model based on all published folding data?

  • @andrewwilliam2209
    @andrewwilliam2209 3 ปีที่แล้ว +48

    I'm learning to be more skeptical of these breakthroughs, they might turn out to be overhyped

    • @quebono100
      @quebono100 3 ปีที่แล้ว +14

      Yeah I have the same feeling after ML street talk about GPT-3.

    • @andrewwilliam2209
      @andrewwilliam2209 3 ปีที่แล้ว +4

      @@quebono100 Another point I got from talking with an expert, is that with tech like GPT-3, you still need a way to properly mitigate and handle the bias it can produce, otherwise imagine all the incidents that will arise from "surprising insensitivity" in a gpt-3 application. :/

    • @quebono100
      @quebono100 3 ปีที่แล้ว +1

      @@andrewwilliam2209 I'm not an expert, but I think you can do it like they discussed: you can be a prompt engineer :) and simply discover prompts and pin them with a seed. (They discussed just the prompt-engineer part; I experimented with GPT-2 and there you can have a random seed.)

    • @mitchellty
      @mitchellty 3 ปีที่แล้ว +4

      You should always be slightly skeptical

    • @quebono100
      @quebono100 3 ปีที่แล้ว +4

      @@mitchellty Yeah, you're right. It doesn't mean that they didn't build something amazing. But nowadays such companies try to fake it until they make it. Like UNU AI predicted three correct Kentucky Derby winners in a row. I thought, wow, that's amazing, how did they do such a difficult task? They truly predicted the Kentucky Derby, but not only with their software but also with experts who cherry-picked the most likely winners. It's all marketing, to push the product.

  • @ShusenWang
    @ShusenWang 3 ปีที่แล้ว

    As for AlphaFold 2: I guess the pairwise distances are just used to train part of the model. They may not be used when predicting the structure. Directly predicting the structure may be better than using the pairwise distances as a middleman.

  • @pt3931
    @pt3931 3 ปีที่แล้ว +3

    A new paper :
    Autoencoder Variationnal Auto encoding

  • @hypegt6885
    @hypegt6885 3 ปีที่แล้ว +1

    I can't wait for you to dissect their second paper when it's published!

  • @asifdomo500
    @asifdomo500 3 ปีที่แล้ว +1

    Thank you for explaining research papers the way you do.
    I find them very hard to understand just by reading; I am a 3rd-year BSc Computer Science student.
    I love the fields and the papers you talk about, so it definitely feels great understanding a bit more about the papers you explain!

  • @rinku4532
    @rinku4532 3 ปีที่แล้ว +1

    It's amazing how people are explaining things they don't know about

    • @MR-uk7iy
      @MR-uk7iy 3 ปีที่แล้ว

      ooof

  • @AnimeshSharma1977
    @AnimeshSharma1977 3 ปีที่แล้ว +1

    Convolutions => Transformers, now that's a cool insight, Yannic! Wondering if the paper is out?

  • @markdonatelli8611
    @markdonatelli8611 3 ปีที่แล้ว +1

    Quantum light generators and the duality of the double helix heliocase transfer of the R.G.B.z to rod's connectivity

  • @gregmattson2238
    @gregmattson2238 3 ปีที่แล้ว +2

    Wow, that was deja vu. I worked on a project that had the same basic approach as AlphaFold 1 (distance matrix + neural network + gradient descent) in the 90s. Of course the neural net was a simple backprop net and the sample set was much tinier, but yeah, same basic approach.
    I've got to think that the DeepMind group is infuriating a lot of pure science teams out there: they descend into their territory, clean their clocks on specific problems and then go 'hey, look how great we are'. It's GOT to drive the original scientists absolutely nuts. But then again, I don't really mind as long as I get flying cars out of it. :)

    • @dionbridger5944
      @dionbridger5944 3 ปีที่แล้ว +1

      "infuriating a lot of pure science teams"
      Diddums. I'd rather have better medicines than scientists without injured egos.

    • @gregmattson2238
      @gregmattson2238 3 ปีที่แล้ว

      @@dionbridger5944 well, yes that is what I was getting at with my "not really minding as long as I get flying cars out if it" comment.
      Yet still there is something about the tone of the deepmind folks that reminds me of the masters' attitude from monty python's 'meaning of life' skit about rugby at boarding school: th-cam.com/video/HKv6o7YqHnE/w-d-xo.html

  • @subashinikennedy5032
    @subashinikennedy5032 2 ปีที่แล้ว +3

    Thank you for this informative video. Can you do a similar one for alpha fold 2 as they have now published the paper?

  • @harisbournas6600
    @harisbournas6600 3 ปีที่แล้ว +2

    Hey Yannic, great work on your videos, I really appreciate it. Could you cover the Lovász loss and the respective paper "The Lovasz-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks" in a future video? I read that this specific Jaccard approximation is used as the loss function in many image segmentation tasks; even with U-Net it has been observed to give better results. However, it is not straightforward and I still don't fully get it. It would be awesome if you could create a video for it, breaking down the concept as you have done so amazingly many times. Thank you :)

  • @JackSPk
    @JackSPk 3 ปีที่แล้ว +1

    Today I was informed that AlphaFold 2 presentation for CASP14 is online, and it contains more information than DeepMind's blogpost (a bit more info about the model on slide #10):
    predictioncenter.org/casp14/doc/presentations/2020_12_01_TS_predictor_AlphaFold2.pdf
    Presentations of all participants:
    predictioncenter.org/casp14/doc/presentations/
    For AlphaFold 1 and all other participants (CASP13 - 2018)
    predictioncenter.org/casp13/doc/presentations/

  • @oisiaa
    @oisiaa 3 ปีที่แล้ว

    This is huge. I can't wait to see where machine learning and AI takes us in the 2020s and 2030s.

  • @cycman98
    @cycman98 3 ปีที่แล้ว +29

    5 seconds into the video and I can already tell that you're going to throw a bucket of cold water on this paper XD

    • @herp_derpingson
      @herp_derpingson 3 ปีที่แล้ว +4

      There is no paper to throw water on.

    • @cycman98
      @cycman98 3 ปีที่แล้ว +2

      @@herp_derpingson unfortunately yes, that was a quick guess before even watching the video

    • @lolerie
      @lolerie 3 ปีที่แล้ว

      @@herp_derpingson there is. www.nature.com/articles/s41586-019-1923-7.epdf?author_access_token=Z_KaZKDqtKzbE7Wd5HtwI9RgN0jAjWel9jnR3ZoTv0MCcgAwHMgRx9mvLjNQdB2TlQQaa7l420UCtGo8vYQ39gg8lFWR9mAZtvsN_1PrccXfIbc6e-tGSgazNL_XdtQzn1PHfy21qdcxV7Pw-k3htw%3D%3D

    • @acerld519
      @acerld519 3 ปีที่แล้ว +1

      @@lolerie That paper addresses an earlier model of theirs at CASP13, two years ago.

  • @blender_wiki
    @blender_wiki 3 ปีที่แล้ว

    First good analysis of AlphaFold 2 I've seen; compared to the other clickbait news it is really refreshing.

  • @guillaumewenzek4210
    @guillaumewenzek4210 3 ปีที่แล้ว +1

    I feel like the predicted torsions are more important than you say. To me, the (torsions + distances) will be somewhat inconsistent, and the gradient descent enforces consistency. But given the size of the search space, I'd bet you need the initial guess to be good to not get stuck in a local minimum. Just my intuition though.

  • @michaelmuller136
    @michaelmuller136 2 ปีที่แล้ว

    Good overview, well presented, thank you!

  • @toussaid5340
    @toussaid5340 2 ปีที่แล้ว

    Superb video. What application do you use to create these videos? I'd like to learn to draw and scroll through images while recording a voice-over, just like your format.

  • @maverick9300
    @maverick9300 3 ปีที่แล้ว +2

    I'm going to be sceptical because this type of problem seems incredibly similar to solving conway's game of life, but way more complicated. We haven't even figured out the super-simple version yet.

  • @JonathanBreiter
    @JonathanBreiter 3 ปีที่แล้ว +1

    Thanks for the video. Understandable for non-AI people too!

  • @sarvagyagupta1744
    @sarvagyagupta1744 3 ปีที่แล้ว +1

    Great video like always. So it seems like DeepMind went directly from predicting something similar to adjacency matrix to transformers. I was wondering if they ever implemented spectral graph analysis here.

  • @levydasilvacruz2407
    @levydasilvacruz2407 3 ปีที่แล้ว +1

    Could Chladni's plate experiments help in some way?

  • @InfiniteUniverse88
    @InfiniteUniverse88 3 ปีที่แล้ว

    'Pyrene ring' isn't a thing; it's called an aromatic ring. Beta strands are joined by hydrogen bonds to form a beta-pleated sheet. The beta sheet is a big reason why protein folding took so long to solve: the hydrogen bonds of alpha helices are local, whereas beta sheets involve long-range bonds.

  • @morkovija
    @morkovija 3 ปีที่แล้ว

    33:54 - finally we're on the same playing field! )
    Thanks for the break down, I hope i'll get to your other videos soon as well

  • @alperenkantarci3503
    @alperenkantarci3503 3 ปีที่แล้ว +2

    It's like after GANs era. Transformers are everywhere.

  • @innate-videos
    @innate-videos 2 ปีที่แล้ว

    Great vid, highly informative, very interesting and you are unquestionably the 'Richard Burton, Morgan Freeman and Jeremy Irons' hybrid voice of science vids! 😃

  • @robm838
    @robm838 3 ปีที่แล้ว +1

    Thank you. What stocks and sectors will benefit from this?

  • @arhainofulthuan
    @arhainofulthuan 3 ปีที่แล้ว +1

    Watched one AlphaFold video and I'm now getting advertisements for pre-weighed biochemical research sample blisters.

  • @G12GilbertProduction
    @G12GilbertProduction 3 ปีที่แล้ว

    Loss function is like L²u for a Lagrangian progress error counting, but these Austrian mathematics school is really neat for angle repair in the aminase codification.

  • @alfcnz
    @alfcnz 3 ปีที่แล้ว +12

    Hahaha, awesome! Thanks!

  • @burakkaya7287
    @burakkaya7287 ปีที่แล้ว

    Awesome explanation, please do a video about AlphaLink 🙏🙏

  • @williamm8069
    @williamm8069 3 ปีที่แล้ว

    Thanks for the video. I studied biology and love tech. The ribosomes produce the amino acid chains which then go to the Endoplasmic Reticulum (ER) and then for further modification in the Golgi Body. Metals such as iron or magnesium along with atoms such as N are added. The question is what is conducting this process? It is more than residue attraction/repulsion and torsion angles. Possibly there are other proteins guiding the folding. What about 2 identical amino acid chains producing different multiple outcomes?

  • @lucast2212
    @lucast2212 3 ปีที่แล้ว +1

    The explanation of torsion angles is wrong. What is described and drawn at 20:49 are bond angles (3-body terms); torsion angles are dihedral angles between the two planes spanned by 4 neighbouring atoms (or any four points).

  • @jeffhow_alboran
    @jeffhow_alboran 3 ปีที่แล้ว

    This video is amazing! Agree with the comment "Yannic Kilcher is all you need".

  • @amiman23
    @amiman23 2 ปีที่แล้ว

    I wonder if there is another new discovery theologic layer never imagined that surprises teams.

  • @Talpham
    @Talpham 3 ปีที่แล้ว

    This is revolutionary!

  • @shaktivaderdristi
    @shaktivaderdristi 3 ปีที่แล้ว +1

    Two different structures will always have different pairwise distances? That seems to be assumption of 2nd stage. Don't know enough math on this.

  • @Ohmriginal722
    @Ohmriginal722 3 ปีที่แล้ว

    Why is the paper STILL NOT RELEASED?! IT'S BEEN 4 MONTHS!

  • @hyiping5926
    @hyiping5926 3 ปีที่แล้ว +1

    Im not a machine learning person, i barely even know what a protein is, how to flip it, twist it, fold it to a paper plane or similar is far beyond my knowledge.. i am however a HTML programmer, made everything from divs to s, to websites build completely out of tables or even better canvas and SVG.. im currently trying to make an AI with HTML, marquee is the core

  • @dud3man6969
    @dud3man6969 3 ปีที่แล้ว +1

    Chaos could never produce these things.

  • @thewaysh
    @thewaysh 3 ปีที่แล้ว +4

    Autobots, fold out!

    • @hypegt6885
      @hypegt6885 3 ปีที่แล้ว +1

      underrated.

  • @L-A1640
    @L-A1640 3 ปีที่แล้ว +1

    Very educational video…thank you

  • @tinyentropy
    @tinyentropy 3 ปีที่แล้ว

    thanks for the great video! :) and... since I am coming from the field of bioinformatics, I really enjoyed your confusion about the format of Nature papers - I remember I had the same strange feelings about it when I started to write papers for similar journals.

  • @ishanmistry8479
    @ishanmistry8479 3 ปีที่แล้ว +1

    Honestly I am so fascinated by the papers, but as someone who is new to the domain it feels overwhelming. For example, in this video the transformers were like a prerequisite, and I feel there might be some prerequisite for every video.
    So is there a playlist or some order that someone could suggest?
    Thanks

  • @abdalazizrashid
    @abdalazizrashid 3 ปีที่แล้ว +2

    Great job! By the way, which app are you using?