An AI Video Like No Other!

  • Published Dec 3, 2024

Comments • 38

  • @nathanstreger3851
    @nathanstreger3851 3 months ago +10

    Really wish more people would think about AI like this. Seemingly everyone has confused AI with ML. Hoping this company can get more publicity and funding. Loved the nine-part series he referenced at the beginning. Learned a ton from that.

    • @FutureAISociety
      @FutureAISociety  3 months ago +4

      Thanks a lot for your vote of support. Be sure to share this video with friends and colleagues to help the YouTube algorithm spread it further.

    • @lobiqpidol818
      @lobiqpidol818 2 months ago

      Do your research. Symbolic AI is responsible for both AI winters in the past. The scam falls apart when people realize the obvious fallacy: you can't write down every single detail of the real world in a computer. They haven't solved anything with neuro-symbolic; they've only attached the problems of both in one package.

  • @wanfuse
    @wanfuse 2 months ago +4

    Statistical confounders meet group theory, meet attribute symbolic encoding! A set of three of those attributes describing a thing!

  • @sgrimm7346
    @sgrimm7346 3 months ago +3

    This is basically an "in a nutshell" video of what these networks do, and I found it to be enlightening as well as something more people interested in this field should watch. I've been watching this site from the sidelines for a while and truly appreciate the thought and work that has gone into this project. I will try to commit more time to studying these networks, as I find them not only logically correct but also just plain common sense. Thank you.

    • @FutureAISociety
      @FutureAISociety  3 months ago +1

      We have an online meeting coming up next week and you are welcome to join. Visit futureaisociety.org for details.

  • @Classicalpianosongs
    @Classicalpianosongs 3 months ago +1

    I'm so glad people are finally doing this. I've been trying to get a similar idea out to the world for a while: mapping language in tree maps with hypernyms to show relationships. As a digitally represented diagram, you could do a lot with it: create more flexible maps over the top of it with relationships (since the base map only holds contexts of categorical distinctions), map multimodal things onto it, cipher it to create a new programming language or universal language, and more. Even the basic map would allow AI to understand context, let alone with all the other maps on top of the base map. It also has far-reaching consequences for adding order to the way the mind maps language in humans, if we could parse the vectors into visual diagrams, and it would also reveal hidden patterns and logic within and behind language and communication. Anyway, I was never going to do it myself, so I'm really glad people with the technical know-how are doing it, because I was sure something like that could basically 10x the capacity of LLMs while also potentially reducing the need for scaling and power to create more intelligence. Thanks for sharing the idea; it's really a relief, you don't even know, haha. I was worried no one else might think of it and yet I couldn't get it out there, so great work.
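
A minimal sketch of the hypernym tree-map idea in the comment above, assuming a made-up HYPERNYMS table; none of these names come from the project, and a real map would be far larger and layered.

```python
# Hypothetical hypernym map: each word points at its broader category (its hypernym).
HYPERNYMS = {
    "poodle": "dog",
    "dog": "mammal",
    "cat": "mammal",
    "mammal": "animal",
    "animal": "entity",
}

def hypernym_chain(word):
    """Walk up the tree, collecting every broader category of a word."""
    chain = []
    while word in HYPERNYMS:
        word = HYPERNYMS[word]
        chain.append(word)
    return chain

def nearest_shared_category(a, b):
    """Two words are related through the first category their chains share."""
    chain_a = hypernym_chain(a)
    shared = set(chain_a) & set(hypernym_chain(b))
    return min(shared, key=chain_a.index) if shared else None

print(hypernym_chain("poodle"))                  # ['dog', 'mammal', 'animal', 'entity']
print(nearest_shared_category("poodle", "cat"))  # 'mammal'
```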

    • @FutureAISociety
      @FutureAISociety  3 months ago +1

      A key observation is that language is fraught with ambiguity but your mind is not. If I say, "Think of a table," I may mean one with legs or one with rows and columns. You don't know which I mean, but you know with 100% confidence which type of table you are thinking of. This means that language is somewhat independent of abstract thought. We have a project to address this which you can read about here: futureaisociety.org/dealing-with-ambiguity-in-language/

    • @Patapom3
      @Patapom3 3 months ago +3

      People are not "finally doing this"; this kind of graph-oriented relationship has been around for decades... It's just not yielding any interesting fruit because it lacks an automatic learning component, an equivalent of the backpropagation algorithm... And rule-based systems are too rigid and demand too much maintenance if there's no self-supervision.

    • @FutureAISociety
      @FutureAISociety  3 months ago

      I agree that knowledge graphs have been around for decades, but more recently they've gotten the short end of the research stick because money has been flowing to ANNs and LLMs. A rule-based system needs the weights (re your other comment), so 1) it's not so rigid and 2) you can implement a learning algorithm to replace backpropagation, as described in the video.
      We've made a slew of other tweaks to the graph representation which make it much more useful and extensible. Go to futureaisociety.org to learn more and try out the software.

    • @Classicalpianosongs
      @Classicalpianosongs 2 months ago +1

      @@FutureAISociety LLMs even now can usually pick which one we are talking about based on context without being explicitly told, and we have to remember the maps would be there to help the LLM understand things, working in collaboration with the datasets and training, so we don't necessarily have to solve every problem.
      Nouns probably need their own map aside from verbs, but even if we could fit them in the same map, verbs would be easier to start with since we are talking about only 9k words and maybe only 2k regularly used verbs. It would be a good test to see if it made a big difference well before we attempted the entire English language. I haven't followed your link yet; I'll check it out. Thanks

    • @sehbanomer8151
      @sehbanomer8151 2 months ago +1

      ​@FutureAISociety language can be "parsed" into abstract thought, which is necessary to learn these abstract relations/patterns without supervision (manual labeling). They can also be learned from videos, or physical interaction with the environment, but I'd imagine that to be much more challenging. Text is already in a nicely abstracted symbolic format. Also, it'd be nice to have a mechanism to translate abstract thoughts into language.

  • @goldeternal
    @goldeternal 3 months ago +3

    The key is that the brain is energy efficient; what is the algorithm that is so powerful yet consumes so little energy?

    • @FutureAISociety
      @FutureAISociety  3 months ago +3

      I'll cover this in an upcoming video. Stay tuned and join futureaisociety.org. Most energy goes into searching, and the key is to minimize the search scope.
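
As a rough illustration of keeping the search scope small, the sketch below only follows links outward from the node of interest instead of scanning the whole graph; the toy graph and function are hypothetical, not the project's actual search algorithm.

```python
from collections import defaultdict

# Hypothetical knowledge graph as an adjacency list.
links = defaultdict(list)
links["fido"] = ["dog"]
links["dog"] = ["mammal", "barks"]
links["mammal"] = ["animal"]

def neighborhood_search(start, target, max_depth=2):
    """Follow links outward from `start` only. The work done is bounded by the
    size of the local neighborhood rather than the size of the whole graph."""
    frontier, visited = [start], {start}
    for _ in range(max_depth):
        next_frontier = []
        for node in frontier:
            for neighbor in links[node]:
                if neighbor == target:
                    return True
                if neighbor not in visited:
                    visited.add(neighbor)
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return False

print(neighborhood_search("fido", "mammal"))  # True: found within two hops
print(neighborhood_search("fido", "animal"))  # False: lies outside the two-hop scope
```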

  • @Buckle-my-shoe12
    @Buckle-my-shoe12 2 months ago

    If we add some consciousness and 3D reality recognition then that could result in an AGI.

  • @chrisminnoy3637
    @chrisminnoy3637 3 months ago +1

    Isn't this what Minsky already discovered in the '70s? I'm wondering how you let it learn from raw data, because that is what backpropagation brought to the scene and frames did not.

    • @FutureAISociety
      @FutureAISociety  3 months ago +1

      The idea of a knowledge graph has been around for a while; now we're adding more to it. Backpropagation's Achilles heel is the number of supervised training samples needed and the massive computing horsepower needed to process them. Contrast this with how a young child learns by exploration and a relatively small number of explanations. We're pursuing the latter approach.

  • @lagrangianomodeloestandar2724
    @lagrangianomodeloestandar2724 3 months ago +1

    Good video 🧠🤖. Perhaps my opinion, and that of others, isn't relevant, but I've had an idea for a long time that isn't about software intelligence but about hardware intelligence: directly using physical properties rather than software abstractions, like distilling software properties into hardware, because the efficiency problem you observed in these networks is ultimately hardware-based, in the limits of silicon. The idea that has been going around in my head is a network that would perhaps be even more ideal than the compute-intensive ML models, or perhaps using ML to scan material properties: an inorganic material could be made to embody the properties of an artificial neural network, or even better, a natural material based on any other element could be converted, by connecting a digital system that uses sensors and physical modifiers to transform an inefficient computational substrate into an efficient one, with the material itself becoming part of the interacting network and encoding knowledge in some way. For example, instead of saving the states of things and their relationships in a program, you pass that to a material by measuring and interacting with its properties; the objective is not to measure the material but for the material to carry out the same process as the digital network, or a different one given its substrate. Think of a chemical element with a tendency to form certain structures or molecules, such as bismuth, which forms colored structures under certain conditions, since those emergent patterns store information; or perhaps more stable elements or compounds. Since physical elements are the computerization of mathematics, and mathematics a formalization of physics, perhaps intelligence is a physical formalization of the element used.
    Looking at the computing capacity of our brain, the theoretical maximum I've read is estimated at a quintillion computations per second, but theorists say a kilogram of matter could reach a one followed by fifty zeros of computation, about 100 trillion times the theoretical maximum capacity of Tuszynski's simulated mind. And although those are based on the lowest estimates, if we remember polysemanticity in ML networks, at a synapse the trust functions are neural networks in themselves, based on neurotransmitters, ionic flows, and other molecules and processes. That is to say, these relationships could be complete neural networks, probability distributions of networks that learn trust in each connection; for example, what we saw from Levinthal, where proteins fold in milliseconds without going through all the possibilities, or photonic entanglement in myelination that accelerates speed and superposition computations, acting like trust-training accelerators. These trust functions of relationships, these things or cellular bodies, can also be considered environment controllers, because the main idea is to generate complexity efficiently and accurately, to minimize energy, and each component reduces its own complexity and quirks to minimize costs and maximize profits, like the stock of a company. So, even if the goal is the most efficient software on the hardware used, my suggestion is to consider any other substrate as a possible extensible intelligence inherent in its capabilities, and if this project works, it could be attempted with minerals and molecules in some more distant future where electronic digital silicon is not the only paradigm and other elements are used for thinking. It may not succeed if those elements don't turn out to be scalable for certain things, while others like carbon and silicon were appreciated; but if they can be scaled in their inherent capabilities to find more efficient routes, up to unexpected orders of complexity of intelligence, we could perhaps have intelligences based on totally different elements, substrates unknown for such uses. You get some sensors and some physical modifiers, you take a material, and you train the material with the computer so that it becomes a neural network containing processes that are efficient given its physics, processes that perhaps inherently cost a digital system more because of its circuitry. They did an experiment with hydrogel where it learned to play Pong in 20 minutes (artificial human neurons took 5 minutes) and it played the game quite smoothly; the article is 33 pages long and was published in a physics journal, from which I downloaded and read part of the scientific paper. Epoch AI released an article saying that, because of accelerating demand, computation by 2030 was going to be 10,000 times greater but only 24 times more efficient; that is, the quintillion molecular operations our brain performs in one second consume about 5 milliwatts, while doing this digitally for several months would consume 6 gigawatts, with a computation that may be 10 times smaller or 10 to 100 times larger.
    The number of computations per synapse could then be used instead, and it would be much lower, but considering the previous higher estimate by Tuszynski, I prefer to stick with the upper bound as the reference computation to be achieved in one second: close to a quintillion calculations on 20 watts, the small fan in my house. Using materials with less linear, and perhaps even digitally incomparable, properties would allow calculations of other kinds that Booleans only approximate, such as analog circuits or perhaps extensions of stochastic processes to physical stochastic substrates; these could be combined so that hardware keeps increasing efficiency and efficient computing capacity continuously.

    • @FutureAISociety
      @FutureAISociety  3 months ago

      Interesting. A key distinction between our brains and our computers is that our brains work with ionic neurotransmitters, which physically move and/or change orientation, while our computers work with electrons, which can transmit their information at nearly the speed of light, roughly a billion times faster.

  • @Patapom3
    @Patapom3 3 months ago +1

    I'm working on such a system myself but I don't need weights.
    Why do you need weights?
    Also, I'm not using "is", "has" or "can" relationships as they're not biologically plausible.

    • @FutureAISociety
      @FutureAISociety  3 months ago

      The weights are needed to represent the confidence or importance of some bit of knowledge. If Fido has 4 legs and Fido is-a dog, this implies that dogs have 4 legs, but with a confidence that changes based on the number of dogs you know about and the number of exceptions you encounter.
      Also, we do not predefine the Relationship types; they are just Things, so you can create new ones as needed. That makes is, has, and can as plausible as any other Relationship type. The only hard-coded one is "is-a". That way you can say: can is-a relationshipType.
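
A minimal sketch of the representation this reply describes, with invented class and field names (the project's real data structures will differ): relationship types are ordinary Things, only is-a is hard-coded, and a weight carries the confidence derived from counts and exceptions.

```python
class Thing:
    """Everything, including relationship types such as 'has' and 'can', is a Thing.
    Only the is-a link is hard-coded, here as the parents list."""
    def __init__(self, label):
        self.label = label
        self.parents = []        # is-a links
        self.relationships = []  # (relationship_type_thing, target_thing, weight)

# Relationship types are ordinary Things, so new ones can be created as needed.
has = Thing("has")
can = Thing("can")

dog, four_legs = Thing("dog"), Thing("4 legs")

# Hypothetical counts: 10 dogs known, 1 exception (a three-legged dog).
dogs_known, exceptions = 10, 1
confidence = (dogs_known - exceptions) / dogs_known
dog.relationships.append((has, four_legs, confidence))  # "dogs have 4 legs", weight 0.9

fido = Thing("Fido")
fido.parents.append(dog)  # Fido is-a dog, so the weighted generalization applies to Fido

print(fido.parents[0].label, dog.relationships[0][2])   # dog 0.9
```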

  • @Classicalpianosongs
    @Classicalpianosongs 2 months ago

    Just a tidbit of info that may or may not lead to new ideas down the track, but spaces only exist because humans need to be able to take a breath; it's biologically based grammar. In the same way that words are sometimes made up of roots and morphemes, sentences are just big words made of many morphemes, with spaces between them so we can breathe.

    • @FutureAISociety
      @FutureAISociety  2 months ago

      If you exclude writing for a moment, speech is a continuous stream of sound. Sometimes there are breaks between words and often not. If you don't have a written language, perhaps you don't think in terms of separate words at all.

    • @Classicalpianosongs
      @Classicalpianosongs 2 months ago

      @@FutureAISociety The spaces are also a kind of grammar used to indicate the placeholders of meanings, and full stops tell us when there may be a change in context. But take the language Ithkuil, for example: every word is three letters, and you can write a sentence as one word with no spaces and still know where the placeholders are because they are all three letters. We can't do this in other languages simply because the varying number of letters in each word makes it too complex to work out. Fundamentally, though, a word means something and a sentence means something; length determines how complex that meaning can be, but if sentences aren't words, then some words aren't even words, since they're made up of multiple suffixes and morphemes just like sentences are. With this understanding, I'm not sure anyone could come up with a reason why sentences aren't just more complex single words, especially considering Ithkuil and what it implies about the need for spaces, since its format exempts us from needing them; they seem more a function of convenience to me rather than a necessary fundamental aspect of language.
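
Taking the comment's premise at face value (every word exactly three letters, which simplifies how Ithkuil actually works), recovering word boundaries without spaces reduces to slicing; a toy sketch:

```python
def split_fixed_words(text, word_len=3):
    """With fixed-length words, spaces carry no information:
    boundaries are recovered purely from position."""
    return [text[i:i + word_len] for i in range(0, len(text), word_len)]

print(split_fixed_words("thecatatethebigrat"))  # ['the', 'cat', 'ate', 'the', 'big', 'rat']
```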

  • @wallneradam
    @wallneradam 2 months ago +1

    It is interesting, but I still don't understand how it will become an intelligence. This is just a database, and it can only learn supervised. Of course current AI solutions need lots of data and power, but only because we use tools that were not invented for this. And I think the problem is to find a much better training method, not just a different underlying technology. AI neurons can learn these kinds of relations and much, much more than you can imagine. If you want to add all the attributes and all the connections a current LLM can store, you will need all the storage in the world.

    • @FutureAISociety
      @FutureAISociety  2 months ago

      Thanks for your comment. One might argue that your mind is mostly a database as well. But don't overlook that our system does inference, automatic categorization, and a host of other actions on its own. In future videos I'll show how the graph can implement more generic learning and an internal mental model. A shortcoming of the LLM approach is that it DOES NOT UNDERSTAND anything and makes up for this lack with billions of learned examples. We humans, who UNDERSTAND, can get by with much less stored knowledge (we remember the inferred rules rather than the numerous examples).
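
One way to picture the "automatic categorization" and "remember the inferred rules rather than the numerous examples" points in the reply above: group Things that share attributes under a new category and store the shared attributes once. The names below are illustrative only, not the project's API.

```python
# Hypothetical attribute sets observed for individual Things.
things = {
    "robin":   {"flies", "lays-eggs", "feathers"},
    "sparrow": {"flies", "lays-eggs", "feathers"},
    "bat":     {"flies", "fur"},
}

def auto_categorize(things, min_shared=2):
    """Create a category for any pair of Things that share enough attributes.
    The shared attributes can then be stored once, on the category, instead of
    being repeated on every member; the rule is kept, the examples need not be."""
    categories = []
    names = list(things)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = things[a] & things[b]
            if len(shared) >= min_shared:
                categories.append({"members": {a, b}, "attributes": shared})
    return categories

print(auto_categorize(things))
# One category grouping robin and sparrow with their three shared attributes;
# bat shares only "flies" with the others, so it stays on its own.
```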