Vectoring Words (Word Embeddings) - Computerphile

  • Published 22 Oct 2019
  • How do you represent a word in AI? Rob Miles reveals how words can be formed from multi-dimensional vectors - with some unexpected results.
    08:06 - Yes, it's a rubber egg :)
    Unicorn AI:
    EXTRA BITS: • EXTRA BITS: More Word ...
    AI YouTube Comments: • AI YouTube Comments - ...
    More from Rob Miles: bit.ly/Rob_Miles_YouTube
    Thanks to Nottingham Hackspace for providing the filming location: bit.ly/notthack
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottscomputer
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments • 397

  • @VladVladislav790
    @VladVladislav790 4 years ago +316

    "Not in this data set" is my new favorite comeback oneliner

    • @MrAmgadHasan
      @MrAmgadHasan 1 year ago +1

      It's similar to "not in this timeline", which we hear a lot in time-travel sci-fi

    • @Jason-wm5qe
      @Jason-wm5qe 1 year ago +1

      😂

  • @wohdinhel
    @wohdinhel 4 years ago +1059

    “What does the fox say?”
    “Don’t they go ‘ring ding ding’?”
    “Not in this dataset”

    • 4 years ago +51

      Train the same algorithm on songs instead of news articles and I figure you could get some really interesting results. Songs work on feelings, and that should change the connections between the words - I bet the technique could also tell a lot about the perspective people take on things.

    • @argenteus8314
      @argenteus8314 4 years ago +21

      @ Songs also use specific rhythmic structures; assuming most of your data was popular music, I bet that there'd be a strong bias for word sequences that can fit nicely into a 4/4 time signature, and maybe even some consistent rhyming structures.

    • @killedbyLife
      @killedbyLife 4 years ago +1

      @ Train it with only lyrics from Manowar!

    • @ruben307
      @ruben307 4 years ago +3

      @ I wonder how strongly rhymes would show up in that dataset.

      • 4 years ago

      @@killedbyLife That's odd - I listen to Manowar regularly. Nice pick. 😉

  • @kurodashinkei
    @kurodashinkei 4 years ago +299

    Tomorrow's headline:
    "Science proves fox says 'Phoebe'"

  • @xario2007
    @xario2007 4 years ago +290

    Okay, that was amazing. "London + Japan - England = Tokyo"

    • @yshwgth
      @yshwgth 4 years ago +11

      That needs to be a web site

    • @cheaterman49
      @cheaterman49 4 years ago +65

      More impressed by Santa + pig - oink = "ho ho ho"

    • @VoxAcies
      @VoxAcies 4 years ago +28

      This blew my mind. Doing math with meaning is amazing.

    • @erikbrendel3217
      @erikbrendel3217 4 years ago +9

      you mean Toyko!

    • @Dojan5
      @Dojan5 4 years ago +15

      I was actually expecting New York when they added America. As a child I always thought New York was the capital of the U.S., I was at least around eight when I learned that it wasn't. Similarly, when people talk of Australia's cities, Canberra is rarely spoken of, but Sydney comes up a lot.
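
  The arithmetic in this thread is easy to try yourself. A minimal sketch using gensim's downloader API and a pretrained Google News word2vec model (the video doesn't say exactly which pretrained vectors were loaded, so the model name and the exact outputs here are assumptions):

    # Minimal sketch: word-vector arithmetic with pretrained embeddings (gensim).
    # "word2vec-google-news-300" is one of gensim's downloadable models (~1.6 GB).
    import gensim.downloader as api

    wv = api.load("word2vec-google-news-300")  # returns a KeyedVectors object

    # "London + Japan - England" should land near Tokyo
    print(wv.most_similar(positive=["London", "Japan"], negative=["England"], topn=3))

    # "oink - pig + fox": whatever noise the dataset associates with foxes
    print(wv.most_similar(positive=["oink", "fox"], negative=["pig"], topn=3))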

  • @Cr42yguy
    @Cr42yguy 4 years ago +111

    EXTRA BITS NEEDED!

  • @bluecobra95
    @bluecobra95 4 years ago +269

    'fox' + 'says' = 'Phoebe' may be from newspapers quoting English actress Phoebe Fox

    • @skepticmoderate5790
      @skepticmoderate5790 4 years ago +20

      Wow what a pull.

    • @rainbowevil
      @rainbowevil 3 years ago +3

      It was given 'oink' minus 'pig' plus 'fox' though, not fox + says. So we'd expect to see the same results as for cow & cat etc., with it "understanding" that we're looking at the noises the animals make. Obviously it's not understanding, just an encoding of how those words appear near each other, but we end up with something remarkably similar to understanding.

  • @Alche_mist
    @Alche_mist 4 years ago +275

    Fun fact: a lot of the Word2vec concepts come from Tomáš Mikolov, a Czech scientist at Google. The Czech part is kind of important here - Czech, as a Slavic language, is highly inflected: a single word has many different forms depending on its surroundings in a sentence. In an interview I read (in Czech, in a paywalled online newspaper, so I can't give a link), he mentioned that this inspired him a lot - you can see the words clustering by their grammatical properties when running on a Czech dataset, and it's easier to reason about such changes when a significant portion of them is exposed visibly in the language itself (and learned as a child in school, because some basic parts of it are needed in order to write correctly).

    • @JDesrosiers
      @JDesrosiers 1 year ago +8

      very interesting

    • @afriedrich1452
      @afriedrich1452 1 year ago +8

      I keep wondering if I was the one who gave the inventor of Word2vec the idea of vectoring words 15 years ago. Probably not.

    • @notthedroidsyourelookingfo4026
      @notthedroidsyourelookingfo4026 1 year ago +3

      Now I wonder what would've happened if it had been a Chinese speaker, whose language doesn't have that at all!

    • @GuinessOriginal
      @GuinessOriginal 1 year ago +1

      Wonder how this works with Japanese? Their token spaces must be much bigger and more complex

    • @newbie8051
      @newbie8051 1 year ago +1

      Technically you can share the link to the newspaper

  • @adamsvoboda7717
    @adamsvoboda7717 4 years ago +73

    Meanwhile in 2030:
    "human" + "oink oink" - "pig" = "pls let me go skynet"

  • @buzz092
    @buzz092 4 years ago +152

    Always love to see Rob Miles here!

    • @RobertMilesAI
      @RobertMilesAI 4 years ago +40

    • @yondaime500
      @yondaime500 4 years ago +1

      Even when the video doesn't have that "AAAHH" quality to it.

  • @rich1051414
    @rich1051414 4 years ago +165

    This thing would ace the analogy section of the SAT.
    Apple is to tree as grape is to ______.
    model.most_similar_cosmul(positive=['tree', 'grape'], negative=['apple'])  # → "vine"

  • @veggiet2009
    @veggiet2009 4 years ago +84

    Foxes do chitter!
    But primarily they say "Phoebe"

  • @panda4247
    @panda4247 4 years ago +105

    I like this guy and his long sentences. It's nice to see somebody who can muster a coherent sentence of that length.
    So, if you run this (it's absurdly simple, right), but if you run this on a large enough data set and give it enough compute to actually perform really well, it ends up giving you for each word a vector (that's of length however many units you have in your hidden layer), for which the nearby-ness of those vectors expresses something meaningful about how similar the contexts are that those words appear in, and our assumption is that words that appear in similar contexts are similar words.

    • @thesecondislander
      @thesecondislander 1 year ago +34

      His neural network has a very large context, evidently ;)

    • @MrAmgadHasan
      @MrAmgadHasan 1 year ago +2

      Imagine a conversation between him and D Trump.
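
  For anyone who wants the description above as code: a minimal sketch of training word2vec with gensim on a toy corpus, where vector_size plays the role of "however many units you have in your hidden layer". The corpus and hyperparameters are made up for illustration, not taken from the video:

    # Minimal sketch: train word2vec on a toy corpus (gensim).
    from gensim.models import Word2Vec

    sentences = [
        ["the", "cat", "sat", "on", "the", "mat"],
        ["the", "dog", "sat", "on", "the", "rug"],
        ["the", "fox", "jumped", "over", "the", "dog"],
    ]

    model = Word2Vec(
        sentences,
        vector_size=50,  # length of each word vector ("hidden layer" size)
        window=2,        # how many neighbouring words count as context
        min_count=1,     # keep every word, even rare ones (tiny corpus)
        sg=1,            # 1 = skip-gram, 0 = CBOW
        epochs=200,
    )

    print(model.wv["cat"].shape)              # (50,)
    print(model.wv.similarity("cat", "dog"))  # similarity from shared contexts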

  • @Chayat0freak
    @Chayat0freak 4 years ago +251

    I did this for my final project in my BSc. It's amazing. I found cider - apples + grapes = wine. My project attempted to use these relationships to build simulated societies and stories.

    • @Games-mw1wd
      @Games-mw1wd 4 years ago +21

      would you be willing to share a link? This seems really interesting.

    • @TOASTEngineer
      @TOASTEngineer 4 years ago +7

      Yeah, that sounds right up my alley, how well did it work

    • @ZoranRavic
      @ZoranRavic 4 years ago +24

      Dammit Dean, you can't bait people with this kind of a project idea and not tell us how it went

    • @KnakuanaRka
      @KnakuanaRka 4 years ago +2

      You want to give some info as to how that went?

    • @blasttrash
      @blasttrash 1 year ago +1

      You're lying, you did not do it. If you did, then paste the source (paper or code).
      - Cunningham

  • @alexisxander817
    @alexisxander817 3 years ago +220

    I am in love with this man's explanation! makes it so intuitive. I have a special respect for folks who can make a complex piece of science/math/computer_science into an abstract piece of art. RESPECT!

    • @nidavis
      @nidavis 1 year ago +10

      "it's the friends you make along the way" lol

    • @sgttomas
      @sgttomas 11 months ago +2

      I was just thinking this and came to the comments…. Yup. Mr Miles is terrific. 🎉

    • @webgpu
      @webgpu 11 months ago

      "complex" ? 🙂

    • @Commiehunter12
      @Commiehunter12 8 months ago

      He's a twerp. He's afraid to talk about X, Y and XX chromosomes and how we express them in language. Shame on you.

    • @subject8332
      @subject8332 5 months ago +3

      @@Commiehunter12 No, he just didn't want to trigger the priesthood in a video about word embeddings but looks like he wasn't careful enough.

  • @wolfbd5950
    @wolfbd5950 4 years ago +62

    This was weirdly fascinating to me. I'm generally interested by most of the Computerphile videos, but this one really snagged something in my brain. I've got this odd combination of satisfaction and "Wait, really? That works?! Oh, wow!"

  • @muddi900
    @muddi900 4 years ago +97

    'What does it mean for two words to be similar?'
    That is a philosophy lesson I am not ready for bro

    • @williamromero-auila7129
      @williamromero-auila7129 4 years ago +5

      Breau

    • @_adi_dev_
      @_adi_dev_ 4 years ago +3

      How dare you assume my word's meaning, don't you know it's the current era

    • @cerebralm
      @cerebralm 4 years ago +8

      that's kind of the great thing about computer science... you can take philosophical waffling and actually TEST it

    • @youteubakount4449
      @youteubakount4449 4 years ago

      I'm not your bro, pal

    • @carlosemiliano00
      @carlosemiliano00 4 years ago +3

      @@cerebralm "Computer science is the continuation of logic
      by other means"

  • @nemanjajerinic6141
    @nemanjajerinic6141 5 months ago +6

    Today, vector databases are a revolution for AI models. This man was way ahead of his time.

  • @tridunghuynh5573
    @tridunghuynh5573 2 years ago +1

    I love the way he's discussing complicated topics. Thank you very much

  • @Sk4lli
    @Sk4lli 4 years ago +7

    This was soooo interesting to me. I never dug deeper into how these networks work, but this gave me so many "Oh! That's how it is!" moments. When I watched the video about GPT-2 and he said that all the connections are just statistics, I noted it internally as interesting and "makes sense", but I didn't really get it. With this video it clicked!
    So many interesting things, so thanks a lot for that. I love these videos.
    And seeing the math that can be done with these vectors is amazing! Wish I could like this more than once.

  • @arsnakehert
    @arsnakehert 1 year ago

    Love how you guys are just having fun with the model by the end

  • @superjugy
    @superjugy 4 years ago +4

    OMG that ending. Love Robert's videos!

  • @kal9001
    @kal9001 4 years ago +86

    Rather than biggest city, it seems obvious it would be the most written about city, which may or may not be the same thing.

    • @packered
      @packered 4 years ago +12

      Yeah, I was going to say most famous cities. Still a very cool relationship

    • @oldvlognewtricks
      @oldvlognewtricks 4 years ago +12

      Would be interested by the opposite approach: ‘Washington D.C. - America + Australia = Canberra’

    • @Okradoma
      @Okradoma 4 years ago

      Toby Same here...
      I'm surprised they didn't run that.

    • @tolep
      @tolep 2 years ago

      Stock markets

  • @channagirijagadish1201
    @channagirijagadish1201 1 year ago +2

    Very well done. I love the explanation. He obviously has deep insight to explain it so very well. Thanks.

  • @b33thr33kay
    @b33thr33kay 1 year ago +17

    You really have a way with words, Rob. Please never stop what you do. ❤️

  • @abdullahyahya2471
    @abdullahyahya2471 8 months ago +1

    Mind blown, Thanks for the easy explanation. So calm and composed.

  • @Verrisin
    @Verrisin 4 years ago +4

    floats: some of the real numbers
    - Best description and explanation ever! - It encompasses all the problems and everything....

    • @RobertMilesAI
      @RobertMilesAI 3 years ago +8

      "A tastefully curated selection of the real numbers"

  • @LeoStaley
    @LeoStaley 4 years ago +21

    I'm a simple man. I see Rob Miles, I click.

    • @koerel
      @koerel 3 years ago

      I could listen to him all day!

  • @SeanSuggs
    @SeanSuggs 9 days ago

    Rob Miles and Computerphile, thank you... IDK why YouTube gave this gem back to me today (probably because of my incessant searching for the latest LLM news these days), but I am grateful to you even more now than I was 4 years ago... Thank you

  • @lonephantom09
    @lonephantom09 4 years ago +1

    Beautifully simple explanation! Resplendent!

  • @cheeyuanng853
    @cheeyuanng853 1 year ago

    This gotta be one of the best intuitive explanation of word2vec.

  • @PerMortensen
    @PerMortensen 4 years ago +22

    Wow, that is mindblowing.

  • @Alkis05
    @Alkis05 3 years ago +17

    This is basically node embedding from graph neural networks. Each sentence you use to train it can be seen as a random walk in the graph that relates each word to the others, and the number of words in the sentence as how far you walk from the node. Besides "word-vector arithmetic", one interesting thing would be to use this data to generate a graph of all the words and how they relate to each other. Then you could do network analysis with it - see, for example, how many clusters of words there are and figure out their labels. Or label a few of them and let the graph try to predict the rest.
    Another interesting thing would be to embed sentences based on the embeddings of words. For that you would take a sentence and train a function that maps points in the word space to points in the sentence space, by aggregating the word points somehow (see the sketch after this thread). That way you could compare sentences that are close together. Then you can do sentence-vector arithmetic.
    This actually sounds like a cool project. I think I'm gonna give it a try.

    • @jamesjonnes
      @jamesjonnes 9 months ago

      How did it go?
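
  The sentence-embedding idea in the comment above ("aggregating the word points somehow") is easy to prototype; a common, very crude baseline is simply averaging the word vectors. A minimal sketch, assuming a gensim KeyedVectors model named wv is already loaded (see the earlier sketch):

    # Minimal sketch: sentence vectors as the average of word vectors.
    # Assumes `wv` is an already-loaded gensim KeyedVectors model.
    import numpy as np

    def sentence_vector(sentence, wv):
        words = [w for w in sentence.lower().split() if w in wv]
        if not words:
            return np.zeros(wv.vector_size)
        return np.mean([wv[w] for w in words], axis=0)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    s1 = sentence_vector("the fox jumped over the dog", wv)
    s2 = sentence_vector("a fox leapt over a hound", wv)
    s3 = sentence_vector("stock markets fell sharply today", wv)

    print(cosine(s1, s2))  # expected: relatively high
    print(cosine(s1, s3))  # expected: lower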

  • @helifalic
    @helifalic 4 years ago

    This blew my mind. Simply wonderful!

  • @kenkiarie
    @kenkiarie 4 years ago +1

    This is very impressive. This is actually amazing.

  • @tommyhuffman7499
    @tommyhuffman7499 1 year ago +1

    This is by far the best video I've seen on Machine Learning. So cool!!!

  • @tapanbasak1453
    @tapanbasak1453 5 months ago

    This page blows my mind. It takes you through the journey of thinking.

  • @Razzha
    @Razzha 4 years ago +3

    Mind blown, thank you very much for this explanation!

  • @vic2734
    @vic2734 4 years ago +1

    Beautiful concept. Thanks for sharing!

  • @kamandshayegan4824
    @kamandshayegan4824 6 months ago

    I am amazed and in love with his explanations. I just understand it clearly, you know.

  • @joshuar3702
    @joshuar3702 4 years ago +40

    I'm a man of simple tastes. I see Rob Miles, I press the like button.

  • @distrologic2925
    @distrologic2925 4 years ago +10

    I love that I've been thinking about modelling natural language for some time now, and this video basically confirms the direction I was heading. I had never heard of word embeddings, but it's exactly what I was looking for. Thank you Computerphile and YouTube!

  • @rishabhmahajan6607
    @rishabhmahajan6607 3 years ago

    Brilliantly explained! Thank you for this video

  • @helmutzollner5496
    @helmutzollner5496 1 year ago +1

    Very interesting. Would like to see more about these word vectors and how to use them.

  • @crystalsoulslayer
    @crystalsoulslayer 11 months ago +1

    It makes so much more sense to represent words numerically rather than as collections of characters. That may be the way we write them, but the characters are just loose hints at pronunciation, which the model probably doesn't care about for meaning. And what would happen if a language model that relied on characters tried to learn a language that doesn't use that system of writing? Fascinating stuff.

  • @peabnuts123
    @peabnuts123 4 years ago

    16:20 Rob loves it, he's so excited by it 😄

  • @taneliharkonen2463
    @taneliharkonen2463 1 year ago +1

    Mind blown... Able to do arithmetic on the meaning of words... I did not see that one coming :o A killer explanation on the subject thanks!! :D

  • @patricke1362
    @patricke1362 2 months ago

    super nice style of speaking, voice and phrasing. Good work !

  • @Galakyllz
    @Galakyllz 4 years ago

    Amazing video! I appreciate every minute of your effort, really. Think back, wondering "Will anyone notice this? Fine, I'll do it." Yes, and thank you.

  • @alisalloum629
    @alisalloum629 2 years ago

    Damn, that's the most enjoyable, informative video I've seen in a while.

  • @redjr242
    @redjr242 4 years ago +2

    This is fascinating! Might we be able to represent language in the abstract as a vector space? Furthermore, would similar but slightly different words in different languages be represented by similar but slightly different vectors in this vector space?

  • @Noxeus1996
    @Noxeus1996 4 years ago

    This video really deserves more views.

  • @SanderBuruma
    @SanderBuruma 4 years ago +1

    absolutely fascinating

  • @TrevorOFarrell
    @TrevorOFarrell 4 years ago

    Nice ThinkPad, Rob! I'm using the same version of X1 Carbon with the touch bar as my daily machine. Great taste.

  • @Nagria2112
    @Nagria2112 4 years ago +6

    Rob Miles is back :D

  • @bruhe_moment
    @bruhe_moment 4 years ago +2

    Very cool! I didn't know we could do word association to this degree.

  • @MakkusuOtaku
    @MakkusuOtaku 4 years ago +22

    Word embedding is my favorite pastime.

  • @Verrisin
    @Verrisin 4 years ago +89

    Man... when AI realizes we can only imagine 3 dimensions, it will be so puzzled about how we can do anything at all...

    • @overloader7900
      @overloader7900 3 years ago +12

      Actually 2 spatial visual dimensions with projection...
      Then we have time, sounds, smells...

    • @Democracy_Manifest
      @Democracy_Manifest 8 months ago

      The amount of neurons is more important than the experienced dimensions.

  • @simonfitch1120
    @simonfitch1120 4 years ago

    That was fascinating - thanks!

  • @datasciyinfo5133
    @datasciyinfo5133 1 year ago +1

    Thanks for a great explanation of word embeddings. Sometimes I need a review. I think I understand it, then after looking at the abstract, n-dimensional embedding space in ChatGPT and Variational Autoencoders, I forget about the basic word embeddings. At least it’s a simple 300-number vector per word, that describes most of the highest frequency neighboring words.

    • @michaelcharlesthearchangel
      @michaelcharlesthearchangel 1 year ago

      Me too. I loved the review after looking how GPT4 and its code/autoencoder-set looks under the hood. I also had to investigate the keywords being used like "token" when we think about multi vector signifiers and the polysemiology of glyphic memorization made by these massive AI databases.
      Parameters for terms, words went from 300 to 300,000 to 300,000,000 to 1.5 trillion to ♾ infinite. Meaning: Pinecone and those who've reached infinite parameters have created the portal to a true self-learning operating system, self-aware AI.

  • @dzlcrd9519
    @dzlcrd9519 4 years ago

    Awesome explaining

  • @danielroder830
    @danielroder830 4 years ago +2

    You could make a game with that, some kind of Scrabble with random words: add and subtract words to get other words. Maybe with the goal to get long words or specific words, or to get the shortest or longest distance from a specific word.

  • @rafaelzarategalvez6728
    @rafaelzarategalvez6728 4 years ago +2

    It'd have been nice to hear about the research craze around more sophisticated approaches to NLP. It's hard to keep up with the amount of publications lately related to achieving "state-of-the-art" models using GLUE's benchmark.

  • @jackpisso1761
    @jackpisso1761 4 years ago

    That's just... amazing!

  • @mynamesnotsteve
    @mynamesnotsteve 3 years ago +3

    I'm surprised that there's been no mention of Rob's cufflinks in the comments for well over a year after upload

  • @WylliamJudd
    @WylliamJudd 3 years ago

    Wow, that is really impressive!

  • @maksdejna5486
    @maksdejna5486 1 year ago

    Really nice explanation :)

  • @WondrousHello
    @WondrousHello 1 year ago +13

    This has suddenly become massively relevant 😅

  • @wazzzuuupkiwi
    @wazzzuuupkiwi 4 years ago

    This is amazing

  • @debayanpal8107
    @debayanpal8107 1 month ago

    best explanation about word embedding

  • @edoardoschnell
    @edoardoschnell 4 years ago

    This is über amazing. I wonder if you could use that to predict cache hits and misses

  • @phasm42
    @phasm42 4 years ago

    Very informative!

  • @worldaviation4k
    @worldaviation4k 4 years ago +3

    Is the diagram with angles and arrows going off in all directions just for us to visualise it, rather than how computers are looking at it? I didn't think they'd be calculating degrees. I thought it would be more about a number for how close the match is, like 0-100.
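
  In practice it is indeed just a number rather than literal degrees: implementations usually compare word vectors with cosine similarity, which lands in [-1, 1] (closer to 1 = more similar); the angle diagram is only a way to picture that. A minimal sketch with made-up 3-dimensional vectors:

    # Minimal sketch: cosine similarity is a score in [-1, 1], not an angle in degrees.
    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    cat = np.array([0.9, 0.1, 0.3])  # made-up toy vectors
    dog = np.array([0.8, 0.2, 0.4])
    car = np.array([0.1, 0.9, 0.0])

    print(cosine_similarity(cat, dog))  # close to 1.0
    print(cosine_similarity(cat, car))  # much lower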

  • @shourabhpayal1198
    @shourabhpayal1198 2 years ago

    Great explanation

  • @RafaelCouto
    @RafaelCouto 4 years ago +2

    Plz more AI videos, they are awesome!

  • @SpaceChicken
    @SpaceChicken 1 year ago +4

    Phenomenal talk. Surprisingly compelling given the density of the topic.
    I really do hope they let this man out of prison one day.

  • @StevenVanHorn
    @StevenVanHorn 4 years ago +20

    I'm realllly curious about the basis vectors in this. What are the closest few words to each basis vector? (See the sketch at the end of this thread.)

    • @Guztav1337
      @Guztav1337 4 years ago +2

      That. Now I'm really curious.

    • @yugioh8810
      @yugioh8810 4 years ago

      I don't think such a representation captures distance information at all to begin with. The *closest* word would be one at a distance of 1 (Hamming distance in this case - I claim each flipped bit counts as 1), but that isn't a word at all. Whereas in a vector-encoded representation, since the words are mapped to a *vector space*, the closeness or farness of two vectors is conveyed by the representation. Information representation is a fabulous topic; I don't think I understand it yet. Information theory may help us understand information and information representation.

    • @Guztav1337
      @Guztav1337 4 years ago +7

      @worthy null , wtf are you on about? Nobody said anything about Hamming distance.
      He asked: what few words are the closest to the basis vectors [in euclidean distance] in that vector space.

    • @LEZAKKAZ
      @LEZAKKAZ 4 years ago +2

      I see where you're going with your analogy, but embeddings generally don't work like that. At first all the words are given a random vector, and then those vectors change throughout the training process. So the words you're looking for would be meaningless in this case. If you're looking for the centroid word (words that appear in the center of the embeddings), then that would be words with very broad contexts, such as "the".

    • @StevenVanHorn
      @StevenVanHorn 4 years ago

      @Gerben van Straaten something that might be cute would be defining some human-meaningful basis vectors, then rotating/scaling the points to fit them, and seeing what the remaining bases are. You're definitely right that they would not be human-meaningful out of the box, though.
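
  For the curious, gensim can answer the question in this thread directly: KeyedVectors.similar_by_vector accepts an arbitrary vector, so you can probe a basis vector (all zeros with a single 1). As the replies point out, the nearest words are usually not human-meaningful, because training gives the coordinate axes no special status. A minimal sketch, assuming wv is an already-loaded KeyedVectors model:

    # Minimal sketch: which vocabulary words lie closest to a basis vector?
    # Assumes `wv` is an already-loaded gensim KeyedVectors model.
    import numpy as np

    basis_0 = np.zeros(wv.vector_size, dtype=np.float32)
    basis_0[0] = 1.0  # unit vector along the first embedding dimension

    # nearest vocabulary words to that axis, by cosine similarity
    print(wv.similar_by_vector(basis_0, topn=5))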

  • @MrSigmaSharp
    @MrSigmaSharp 4 years ago +3

    Oh yes, an explanation and a concrete example

  • @user-cj2rm3nz7b
    @user-cj2rm3nz7b 3 months ago

    Wonderful explanation

  • @JamieDodgerification
    @JamieDodgerification 4 years ago +23

    Would it be possible for Rob to share his colab notebook / code with us so we can play around with the model for ourselves? :D

    • @jeffreymiller2801
      @jeffreymiller2801 4 years ago

      I'm pretty sure it's just the standard model that comes with gensim

    • @steefvanwinkel
      @steefvanwinkel 4 years ago

      See bdot02's comment above

  • @Gargamelle
    @Gargamelle 3 years ago +1

    If you train 2 networks on different languages, I guess the latent spaces would be similar. And the differences could be really relevant to how we think differently due to using different languages.

  • @matiasbarrios7983
    @matiasbarrios7983 4 years ago

    This is awesome

  • @PMA65537
    @PMA65537 4 years ago

    I wrote some code to extract authors' names from man pages using clues such as capital letters (and no dictionary). I added special cases to exclude Free Software Foundation etc. Vectors would be an interesting way to try the same.

  • @youssefezzeddine923
    @youssefezzeddine923 28 days ago

    This is one of the coolest things I've seen in a while. Just thinking: how small a neighbourhood of one word/vector should we take? Or how does the implementation of context affect the choice of optimal neighbourhoods?

    • @youssefezzeddine923
      @youssefezzeddine923 28 days ago

      And contexts themselves vary from person to person depending on how they experienced life. So it would also be interesting to see a set of optimal contexts, and that would affect the whole thing.

  • @Sanders4069
    @Sanders4069 1 month ago

    So glad they allow this prisoner a conjugal visit to discuss these topics!

  • @RazorbackPT
    @RazorbackPT 4 years ago +48

    I would suspect that this has to be very similar to how our own brains interpret language, but then again evolution has a tendency to go about solving problems in very strange and inefficient ways.

    • @maxid87
      @maxid87 4 years ago +1

      Do you have examples? I am really curious - so far I always assumed nature does it the most efficient way possible.

    • @wkingston1248
      @wkingston1248 4 years ago +22

      @@maxid87 Mammals have a nerve that goes from the brain to the throat, but due to changes in mammals it always loops under a blood vessel near the heart and then back up to the throat. This is so extreme that in a giraffe the nerve is like 9 feet long or something. In general, evolution does a bad job at removing unnecessary features.

    • @Bellenchia
      @Bellenchia 4 years ago

      Clever Hans

    • @maxid87
      @maxid87 4 years ago

      @@wkingston1248 how do you know that this is inefficient? Might seem like that at first glance but maybe there is some deeper reason for it? Are there actual papers on this topic that answer the question?

    • @cmilkau
      @cmilkau 4 years ago

      I doubt there is a lot of evolution at play in human language processing. It seems reasonable to assume that association (cat~dog) and decomposition (Tokyo = Japanese + city) play an important role.

  • @phasm42
    @phasm42 4 years ago +1

    The weights would be per-connection and independent of the input, so is the vector composed of the activation of each hidden layer node for a given input?
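
  For what it's worth, in the word2vec setup the two views coincide: the input is (conceptually) one-hot, so the hidden-layer activation for a word is exactly that word's row of the input weight matrix, and that row is what gets used as the word vector. A minimal sketch with gensim, assuming `model` is a trained Word2Vec model (e.g. from the toy-training sketch earlier):

    # Minimal sketch: the word vector gensim returns is a row of the trained
    # input weight matrix. Assumes `model` is a trained gensim Word2Vec.
    import numpy as np

    row = model.wv.vectors[model.wv.key_to_index["cat"]]  # row of the weight matrix
    vec = model.wv["cat"]                                 # the "word vector" API

    print(np.allclose(row, vec))  # True: they are the same numbers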

  • @UserName________
    @UserName________ 8 months ago +1

    How far we've come only 3 years later.

  • @giraffebutt
    @giraffebutt 4 years ago +27

    What’s with that room? Is this Prisonphiles?

    • @MichaelErskine
      @MichaelErskine 4 years ago +2

      It's Nottinghack - but true it's a bit prison-like

  • @theshuman100
    @theshuman100 4 years ago +1

    word embeddings are the friends we make along the way

  • @unbekannter_Nutzer
    @unbekannter_Nutzer 1 year ago

    @0:56 A set of characters has no repetition and - unless specified otherwise - no ordering either.
    So dom, doom, mod and mood all map to the same set of characters; a set is underspecific.
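
  The point above is easy to check in Python: a set discards both repetition and order, so all four words collapse to the same representation.

    # A set of characters loses repetition and order, so distinct words collide.
    words = ["dom", "doom", "mod", "mood"]
    print({w: set(w) for w in words})          # every value is {'d', 'o', 'm'}
    print(len({frozenset(w) for w in words}))  # 1 -- all four collapse together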

  • @MenacingBanjo
    @MenacingBanjo 2 years ago

    Came back here because I fell in love with the Semantle game that came out a couple of months ago.

  • @petevenuti7355
    @petevenuti7355 1 year ago

    Question for Miles: can you factorise the neural matrix - break it up into smaller models to run on a cluster of machines - and then produce responses by adding vectors from nearby machines?

  • @endogeneticgenetics
    @endogeneticgenetics 1 year ago +1

    Would love sample code in cases like this where there’s a Jupyter notebook already laying about!

  • @cmilkau
    @cmilkau 4 years ago +10

    There are a lot of words that appear similar by context but are very different in meaning. Sometimes they're exact opposites of each other. This doesn't matter too much for word prediction, but it does for tasks that extract semantics. Are there techniques to get better semantic encoding out of the text, particularly separating synonyms from antonyms?

    • @Efogoto
      @Efogoto 10 months ago

      Auto-antonyms, words that mean the exact opposite in different contexts: cleave, sanction, dust ...

  • @nonchip
    @nonchip 4 years ago +5

    3:00 pretty sure that graphic should've been just 2 points on the same line, given what he said a few sentences before that.

    • @panda4247
      @panda4247 4 years ago

      Yep, if the mapping of images is just taking the value of each pixel and making an N-dimensional vector (where N is the number of pixels), then the picture with more brightness would be on the same line (if solid black pixels stayed solid black, depending on the brightness filter applied).

  • @OpreanMircea
    @OpreanMircea 4 years ago

    I love this

  • @carlossegura403
    @carlossegura403 3 years ago +1

    Wow, it is 2020, and I haven't used Gensim and GloVe in years - ever since the release of BERT and GPT.

  • @sebastienmoeller256
    @sebastienmoeller256 4 years ago

    Can we have access to the Google Colab notebook where this model is loaded? Thanks for the content!

  • @arsilvyfish11
    @arsilvyfish11 1 year ago

    Can you share the above Colab notebook? It would be really great as a quick reference with the video.

  • @oneMeVz
    @oneMeVz 4 years ago

    I think this video just gave me a better understanding of neural networks.