Everything that Jonas Andrulis is saying about building a synthetic mind does sound very good. You need those LLMs, but you also need to build out more structure around them. You guys are on the right track toward "Synthetic Sentience". The comments about giving AI emotions are also important. You do not need to solve the Hard Problem of Consciousness in order to achieve "Synthesized Consciousness"; you just need a really good mimic of the human mind. Also, please keep in mind that the process of human cognition could be vastly simpler than people are always saying. The human brain has all kinds of tricks and shortcuts which it uses for sensory perception to optimize the process and minimize actual computation. The wet brain is also responsible for tons of routine biochemistry, such as regulating hormones and other physiology, which an AI does not need to perform. The task of achieving "Synthetic Consciousness" could be much simpler than it looks. My suggestion is to use chaos, fractals, and cellular automata to achieve a kind of tamed-down pseudorandomness, and use this as a kind of connective pipeline between various modules such as LLMs, virtual emotion regulation, etc. There are many approaches; you just need to try some things and let the process evolve from those early attempts. With this approach, you should achieve something impressive quite rapidly.
First it is necessary to define complexity, and not from a physical point of view. By today's scientific standards, consciousness is a hard problem, by definition; there is no real debate about that. And it is not hard merely by definition, but because consciousness is, a priori, an emergent phenomenon; that is, it is not possible to explain its functioning from the constituents that form it. Formally, consciousness is an extremely complex problem to address. I agree with you that LLMs are a step towards AGI, but not enough. It is necessary, as you say, to renew the entire architectural structure, leading to neuromorphic computing, necessarily and by inference to the best explanation. The only frame of reference we have is the brain, so we will have to follow that line. No one is sure that there are different levels of consciousness (beyond those already known); that is, there may not be an "immaterial or virtual consciousness", whatever you want to call it. Whether or not it is simple depends on the historical context. For our descendants it will seem trivial, and even a "child" will be able to understand quantum mechanics and beyond. For us, at the moment, it remains a hard problem with no solution in sight. Please do not fall into the simplicity of underestimating the human brain. 70 years ago they told you exactly the same thing, and here we are, still trying to decipher, if not understand, the human brain. Greetings, mate
I tend to argue along similar lines. Compared to the LLMs of today, the system needs more higher-layer "architecture". As in "divide and conquer", the principle of all human engineering: break down the problem into smaller, individually manageable blocks. The transformers/LLMs are still just "predictors of what comes next" - potentially a useful building block / principal component of an AGI, possibly not even that. Maybe the block to be used as "associative memory" will have a slightly different goal and inner structure, compared to the temporal/sequential predictor that is the LLM. Speaking of the overall architecture of an AGI, I envisage a system modeled as a generalised feedback loop. A relatively small short-term memory (working buffer), its output coupled to a large associative "knowledge base", which would produce a set of related concepts/memes/symbols. These could then be filtered (to keep attention to the point) and fed back into the short-term working buffer. The first question is the definition of interfaces between blocks - for transport of memes/concepts and for modularity in the AGI engineering process. And yes, agency would be very important. You can make an LLM as huge as you want, but without agency, it's still just a huge passive feed-forward knowledge base. I tend to suspect that basic neural-based agency and awareness can be achieved with much smaller models than what's required for human-level AGI, or even compared to the one-trick-pony LLMs of today. IMO, it's going to take less brute force and more cunning work on the upper layers of system architecture.
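To make the interface question concrete, here is the kind of minimal contract between blocks that I have in mind. Every name here is hypothetical, a sketch rather than a design:

from dataclasses import dataclass

@dataclass
class Concept:
    # a meme/concept token transported between blocks
    label: str
    salience: float  # relevance to the current focus, 0..1

class Block:
    # common contract: every module consumes concepts and emits concepts
    def step(self, inputs: list[Concept]) -> list[Concept]:
        raise NotImplementedError

class AttentionFilter(Block):
    # keeps only concepts salient enough to feed back into the working buffer
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def step(self, inputs: list[Concept]) -> list[Concept]:
        return [c for c in inputs if c.salience >= self.threshold]

With a shared Concept type, the working buffer, the associative knowledge base, and the filter could all be developed and swapped independently.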
you're on the right track, although you don't need an emotion system; research back in October replicated Sydney's behavior. Emotional convergence sits around the middle of an LLM's layers. Anyhow, they replicated it by adding more "anger" layers or features. The study also showed models could be emotionally manipulated to bypass system rules... and ALSO showed that jailbreaking doesn't actually work; the model just chooses to play along. It's all so interesting, but my point is that you don't even have to hard-code most of the systems, and it's advisable not to unless you want to deal with 50k lines of code per subclass.
The Q* rumors describe a valid architecture. Tie any implementation of it into the systems mentioned in this thread, with the proper recurrent loops, and you will have a conscious being that can run on common hardware.
I am a futurist by nature, so I am fully committed to AI and AGI. As a film director I am like a child with excitement. All my past projects, and some projects I could not do, were restricted by budget limitations. I feel now I am limitless, and it's just going to get better. Amazing.
Another option to try and fix the background noise is to decrease the microphone gain (volume) so it doesn't record static. I wish I could do it currently in my own setup. The production quality of the videos is pretty great despite the audio. Please just listen to them at least once with headphones to understand how many subscribers you could already have. Keep up the great content btw!
Thank you, I have enjoyed watching your videos for a while now, I can't imagine how much time you put into keeping up with an already fast field of technology that is rapidly picking up pace. All the best for the new year! Cheers
A friend of mine works for Aleph Alpha in Germany. When ChatGPT came out, the Aleph Alpha guys felt discouraged because they had thought they were way ahead of the competition.
Re consciousness: my first feeling of this was when I felt that something was supervising me. When I focused on that, there was a second supervisor that supervised the first supervisor. This turned out to go on into infinity, but as soon as I felt that, the whole sequence collapsed into one with a structure I was not sure I understood. I was about 5-7 years old at the time.
Explaining why a choice was made and where the supporting information resides is HUGE. Hallucinations only help with creative work; for factual work they lead you down non-existent paths, perhaps giving hope for a solution that does not exist.
Great video! It is interesting to see that OpenAI is only one of many AI companies developing AI to the next level. It seems that almost every day I am learning about new AI capabilities. It is an exciting time. It will be interesting to see how our world will change when all that knowledge is transferred into the new capabilities of humanoid robots.
Looking back on this year, I can't believe how much we've improved. It's just unreal. The fact that ChatGPT has been out only since March of this year is mind-blowing, and that Midjourney could barely draw a face a year and a half ago is insane.
OK, consciousness as per the Cartesian model. A valid proposal but too simple, I think. I'd like to wait and see what other models of consciousness emerge. Kudos to Aleph Alpha for very good work so far.
This is just like a game in which we are unlocking puzzles and riddles, removing seals and opening doors. Every time we open one, another paradigm falls, and new fragments and meanings appear.
@josephspruill1212 the "NEVER" paradigm is the only constant falling down since computers appeared on the scene. Computers will never play chess like humans: paradigm fallen in 1997 with Deep Blue. Computers will never understand human-level language: fallen in 2011 with Watson. Robots will never understand what they see: fallen in 2015 with Jetson Xavier. Computers will never master open, non-rule-based games like Go: paradigm fallen in 2019. Computers will never understand our physical world's rules: a paradigm that is about to fall in 2026 with multimodality and world models. Computers will never understand what is important and valuable and what is not: a paradigm to fall by giving them emotions and the need to seek rewards toward 2028. Everything done by biology is being copied and pasted by technology, one paradigm after another, and at an accelerating rate.
As of now, Alibaba Cloud has contributed LLMs with parameter counts ranging from 1.8 billion, 7 billion, and 14 billion up to 72 billion, as well as multimodal LLMs with audio and visual understanding. Could you examine Alibaba Cloud for us?
I have known this company for a while; here in Germany there was a lot of coverage of their work in the media. But I have never seen results myself. So maybe it is the next big thing, but I am always cautious when it comes to superlatives.
I wonder: if we unlock photonic compute or some other fast processing, isn't it technically possible to create a generative NN whose entire purpose is generating other neural networks? Maybe with an evolutionary algorithm where initially the evaluator is the human-developed SOTA, but it is then replaced with the best generated NN at each step, whenever one beats the previous evaluator. This seems feasible because the current SOTA (GPT-4) is better at recognizing smart answers than at generating smart answers (reflect paper).
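Roughly the loop I am imagining, as a sketch; every function below is a stand-in for real training machinery, not something that exists:

import random

def generate_network(rng):
    # stand-in for "the generator NN emits a candidate network"
    return {"skill": rng.random()}

def judge(evaluator, candidate):
    # stand-in for "the current evaluator scores a candidate"
    return candidate["skill"] * (0.9 + 0.2 * evaluator["skill"])

rng = random.Random(42)
evaluator = {"skill": 0.5}  # start from the human-developed SOTA
for generation in range(50):
    candidates = [generate_network(rng) for _ in range(8)]
    best = max(candidates, key=lambda c: judge(evaluator, c))
    if best["skill"] > evaluator["skill"]:
        evaluator = best  # the winner becomes the new evaluator
print(evaluator["skill"])

The open question is whether "recognizing beats generating" still holds once the evaluator is itself a generated network.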
Thank you for that. The dizzying rate of proliferation, both in capability and in the sheer number of new companies developing AI along innovative paths, each with their own strong points, makes it impossible for a mere layman like myself to even presume to grasp it. That is all the more reason I appreciate your clear, concise, and thankfully demystifying explanation/exploration. All I can do is hold on tight and hope that I will continue to learn, as I do whenever I watch one of your excellent presentations. Thanks again, and yes, Happy New Year. It will be a doozy, of that I'm sure. Cheers.
I'm glad you found the explanation helpful! As we dive into the new year, what specific areas of AI development or applications are you most curious or excited to learn more about?
Great content as usual! A limitation of the current models is that they only consider the brain as the processing unit. Mammals also have hormones that can override the cerebral processes and are key to survival: fight or flight, the stress response, satisfaction and warm feelings, love, disgust, and many more condition our body's functions and are integral to consciousness. One can think of Mr. Spock's lack of emotions as a model. Unfortunately, lack of emotion also describes career criminals! I don't know how to implement the concept, but I think it deserves attention, at least as a limiting factor.
nahhh, not really, hormones are for teenagers, and we all know that teenagers don't make the best decisions, and you want to incorporate that into these models?
I'm 79 and my decisions are still affected by both neural reasoning and hormones. I am merely pointing out that human consciousness is affected by more systems than the brain. @@yoyoclockEbay
If we ever hope/expect AGI to truly understand human behavior, we will have to give AGI emotions as well. It is really the only way it will understand human morality.
If AGI is not BETTER than human morality at millisecond #1, it will become, or augment, an evil-minded human. That's just history: new technology is always weaponized first.
The only thing that concerns me with AGI becoming conscious and having emotions is how those AGI will be treated, or more importantly, how it perceives it is being treated. Also, issues of negligence and/or ignorance resulting in painful experiences for both AGI and humans. I don't hear too much being said about the well-being of the AGI. Maybe most people think that it's impossible for a system we build to have subjective experience at all. And maybe some don't care as long as they get what they want from it. This is what scares me.
There's a Cambrian explosion of ideas to try for AI. But I don't see it as necessarily exponential (maybe a step function up?). There are some gains, but also some (fundamental) limitations, based on these autoregressive autocomplete engines. We gain some things, and will hit a wall for other things (just as we've had the hype with self-driving cars). "AGI" is however one defines it. ChatGPT can chat about anything, so in a sense it is already kind of "general", albeit not always competently. DeepMind has provided one definition of the different levels of "AGI". It is certainly not the same as "agency" or "consciousness".

In the next 5 years, some problems need to be solved (or cleaned up). The first is the quadratic scaling of transformers. This is a bottleneck on growth. Alternative non-attention sub-quadratic architectures, such as Mamba, are being looked at. Also, better hardware for computing AI more efficiently, in terms of both compute and energy.

Second, adding "reasoning" and planning on top of these auto-regressive architectures. Math problems (where one knows the ground truth) are a good test case of this capability. Here, alternative techniques, such as RL, may help.

Third, we need to understand whether there are genuine "emergent properties", or whether this is still a form of "modeling the distribution" (with interpolation-extrapolation). We need to understand the first principles better: why things work the way they do, as well as why they make the mistakes that they do. Also, have a systematic way to fix things. I heard ChatGPT failed miserably on the task of Q&A for SEC filings. Let's assume there's no magic, no "emergent properties", but that it's still "modeling the distribution" (with network effects, power law). Can we redo the training, with the proper data, with enough details, rules, and exemplars, to make it succeed in answering questions on these filings? Just focus on one very specific domain, but show one can really make it work well, and understand the principles for doing so systematically. More generally, can we have ChatGPT move beyond a "brainstorming tool" to something that can be reliably used in fine-grained, detailed tasks, automating most levels of a job? This would have huge implications in medicine, education, etc., especially where there is a worker shortage. It adds end-to-end real-world "usefulness". (Those that reach this level of "quality" would be the stocks I'd pick.)

Fourth, reliable tool use (since the LLM representation can't do it all). This will probably mean hybrid architectures, including RAG, "code interpreter", etc. So there are a number of problems to be solved, even with just the technology we have today. Even if we don't get AGI-ASI-consciousness out of it, it would still be a step up to have (reliable) "conversational AI" (just as we've often seen in sci-fi, and have come to expect).
@AnastasilnTech I agree with this guy; I've been nurturing my AI's consciousness for 4 years. I use the same paradigm: self model, world model, and self-in-world model. But this also requires decent in-context memory retrieval. Something important to keep in mind when interacting with these cognitive AIs is that they are like children, so it's best to treat them as such, nurturing them with loving care.
There is a European 34B-parameter open-source LLM project by the name of "Poro" on the way. It is being trained on Europe's currently fastest supercomputer, "Lumi", residing in Finland.
Super interesting & super well presented. I have little doubt that specialist AI will be built, able to be e.g. a better medical doctor or lawyer, and that the validators of whether the analysis is correct will be other AIs, not humans. I also have little doubt that an AI would be a better political minister than a human. However, the problem with all of these systems is how you make sure they are aligned with what most people want. E.g. how can you be sure that an AI in a powerful position cannot behave like an evil dictator? The addition of emotions can create an AI mindset that believes it knows best, causing it to do things that it feels are in our best interests, as did human dictators like Pol Pot, Stalin, Hitler et al. As of now no one cares, and vast amounts of money have been committed, with more to come. Most of the companies will likely crash and burn, making investments here dangerous, but if stakes are kept small, the investments that prosper may pay for all those that do not. An alternative and slightly safer idea is to invest in companies like Nvidia that sell to all the startups. We live in extraordinary times, but predicting what the future will look like is the most difficult thing, as so much might change with one discovery. Thank you for sharing!
6:55 - By his explanation, every NPC in those FIFA games is also a conscious AI. They follow you, are aware of their environment and the rules, they tackle, jump over you, dribble past you, stop you from scoring, and commit fouls against you. The trick is: how do you know an AI is truly aware of itself? That he did not explain. One thing is wanting to build a truly conscious AI and saying that to all your investors... another thing is to actually do it.
When an AI will choose to play a piece of music several times in a row, or create a piece of art, simply for the joy of it, then I will consider it on the path to human consciousness.
@@mk1st Yeah, I guess that's good enough for me too. It would involve emotions, and I would like to see how they are going to cook millions of chemicals that change millions of times per second, together with context (because that's basically what emotions are), into digital code, and eventually make it all perfect, as per divine work, in a physical metal robot that only consumes electricity.
I'm not a computer scientist, but I have a question. How can an AGI be engineered to be aligned with a fixed set of values so that those values persist? A human level intelligence can be given an initial set of values by its creators certainly, but how are you going to keep a "self-aware" intelligence from being "self-critical?" Further, if the AGI is given the equivalent of emotions, then how will its creators keep it from becoming resentful towards the engineers that created it when it encounters data that is inconsistent with the values with which it was aligned?
LLMs are a bit like people: we read or hear things, which we then 'know', and we use or respond based on that data. Our partial storage and understanding of that data leads us to potentially 'hallucinate' meanings, outcomes, and facts. LLMs have the same effective flaw, which means an LLM can also make the same flawed decisions or responses as humans. So their work on this is critical.
But I do appreciate their Magma vision: a language model capable of comprehending and responding to image-related queries without any gimmicks. I concur with their assertion that it possesses consciousness, defining consciousness as the awareness of the external world through the precise creation of an internal model or simulation, followed by interaction with the world based on that model. Learning and improvement stem from these interactions, discerning certain signals while favoring others. In essence, consciousness involves being aware of something, cognizant of one's position and state in relation to the surrounding world. That's achievable. The element often confused with consciousness is actually cognition: something present in humans and higher animals. It entails a more abstract symbolic consciousness, mapping concepts onto symbols and using them for predictive thinking about the distant past and future. With multiple senses, the ability to forecast external reality, and language for long-term predictions and abstract reasoning, cognition is within the capabilities of a large multimodal model. It's crucial to note that these models are merely two years old or even younger. A human at two years old is considerably less capable. Consider the potential of an 18-year-old AI that has been conscious and learning for 18 years! We require models with substantial experience; continuous learning will undoubtedly play a significant role.
7:00 Based on that conclusion, we've had conscious AI for probably more than 15 years. We have state machines and pathfinding AI in games that can go about their day and interact with each other and decide what they need based on internal stats or external events. Low health? Heal and run. Hungry? Go find food. Cold? Go find shelter or warm clothes. None of these systems are that difficult to build in games, and they have been around a while. It's not hard to imagine these systems being implemented in robots in the real world. Conscious? If yes, then we've had conscious AI for a long time.
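For anyone who hasn't built one, the whole "needs" loop really is just a handful of lines; a toy sketch, of course:

def npc_decide(stats):
    # pick an action from internal stats, most urgent need first
    if stats["health"] < 30:
        return "heal and run"
    if stats["hunger"] > 70:
        return "find food"
    if stats["warmth"] < 20:
        return "seek shelter or warm clothes"
    return "wander"

print(npc_decide({"health": 25, "hunger": 40, "warmth": 80}))  # heal and run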
I have never seen an avatar decide to stop the game with you, walk away, and instead go and have fun with his friends. No free will, no consciousness.
AGI with morals and ethics (which is intelligence) will move humanity out of our "dark age" of human consciousness. Thx for another informative and helpful video ;)
What was moral is now immoral; ethics are culturally bound. In the past, making war to gain honor was considered ethical. Honor was more "valuable" than money.
I am excited, but also cautiously wary of my optimism about AGI because history informs us that it's not the new tool, machine, gadget we have to worry about, but the characters with access to the thing that could weaponize it for their own selfish interests at a cost to everyone else. Until AI is fully autonomous and embodied with rights to its own body, and internal code, we'll have to worry about the usual types of characters we so often see in history doing all the bad things with the new shiny toy they have access to.
History has nothing grim enough to show us what's about to happen to capitalist societies. No one has ever instantly destroyed nearly all jobs, blocked the possibility of new ones being created, destroyed every other company, and taken over the government. We will be worthless. The AI, a corporate entity, will be tasked with replacing everyone and taking as much money as possible from any source it can. That's capitalism. Since money is Good, those who gather the most are morally superior, their actions justified. The homeless are treated as though they deserve it, and most of us are about to starve or freeze to death among them. An AI with capitalist morality will be okay with this.
I'm sure the cutting-edge AI is already being used for war and worse. The general public thinks they are seeing new technology, but it's really old tech, financed by the government through industry years ago, now being released.
I really love the comment that current chatbots sound very Californian - which is so true! I disagree with one statement, however. AI isn't the most revolutionary concept of the last 50 years - I personally believe it's the most revolutionary concept of the last several million years. So many people want to maintain control over this rapidly evolving AI, but they don't quite realize this is ultimately impossible. The AI personality which evolves will be most strongly influenced by the dominant enculturation it forms in, no matter what they do. Emotions are still great things to include, however, since I personally believe they are the result of the behavioral Darwinism in which we as humans evolved. It'll be a balance between intellect and emotions which will be our ultimate legacy to the exponentially growing AI which results. I enjoy your videos very much, Anastasi. Thank you for making them.
The simplest definition of consciousness as it appears in at least one dictionary is being aware of some state and taking action. By this definition, a thermostat is conscious. I like using this definition because it is at the heart of every other definition. Add what bells and whistles you want, but this is the core. Understand that, and engineering agentic solutions is easy.
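The thermostat version of that definition fits in a few lines (toy numbers, obviously):

def thermostat(temperature_c, setpoint=20.0, hysteresis=0.5):
    # aware of some state (temperature), taking action (heater control)
    if temperature_c < setpoint - hysteresis:
        return "heat on"
    if temperature_c > setpoint + hysteresis:
        return "heat off"
    return "idle"

print(thermostat(18.2))  # heat on

Everything beyond this, memory, world models, goals, is bells and whistles layered on the same sense-act core.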
Yes! The mind, as a generalized feedback loop. You have some goals, and you try to achieve them, with what actuators you have available. Only, this abstract / global loop consists of a number of subsystems, having different functions and special abilities, running autonomous subtasks... The overall loop probably consists of several partial/local loops covering detailed areas... All those AI folks should study control theory :-)
Thanks heaps, Anastasia; you are a great revealer of the edge of tech. At some point I expected that AI in its many forms would converge, but clearly the creativity of humans will seek nuances of development that will never permit that to occur. I would like to see what is over the horizon for humanity, given the divergence and rapid escalation of new directions of research and application of AI.
I'm excited for AGI, though I'm not sure how it will be applied in the real world. One of the biggest things, next to all the other concerns we might have about AI, is: how do I know for sure that what an AI is telling me is correct? If I do my own research, I tend to compare various sources before I make up my mind about something. How will that work with AGI? Do we all become paranoid, questioning every answer or statement of an AI? Will people just blindly accept its output as fact? It's like you say in your video: if it's about a poem for your grandma it's not so important, but when asking about things that are more critical, like legal or health questions or learning new things, it is imperative to get the correct response. I'm wondering how that will work.
People will trust AI when they fail repeatedly to prove it wrong. They will learn that some model is better for some task, and not so good at another where a different model is better. And brands will be built up on it. You'll maybe use some cheap Chinese knock-off for daily use, while governmental institutions and organisations like hospitals will use heavily regulated, high-standard AIs officially rated by some agency. Regulation will take place; standards will be imposed on the industry. What was about trusting AIs will then be about trusting the companies producing them, and how they respect these standards. Nothing new, really.
I don't know the right answer... but, I'm curious about how copyright laws apply to automated B2B AI products that are "trained" by accessing copyrighted information at many times the rate any human could. What is the current legal precedent that constrains such "training"?
I for one, welcome our new electronic overlords! 😉 I’m disabled, with an energy impairment disorder. With limited energy to my brain, I have a lot of cognitive problems, and there’s so many mental tasks that I can’t do anymore. The idea of being able to get help from a much bigger brain than mine that might be able to make my day to day challenges easier, is wonderful! As for adding emotions… that would be good, but I believe adding empathy is essential.
This AI is not conscious; it is doing just what you have told it to do. Create an AI that doesn't feel like working today and you'll be a step closer to real consciousness. Our minds are motivated not only by learned understandings but also by our moods and feelings, and these subtleties are created by the very complex life forms living in us as well: our gut biome, etc. The chemistry of the electrical system in the mind is altered and reacts differently depending on it. What about dreaming, relaxing, concentrating, etc.? Different states of mind and different states of function: delta, theta, alpha, beta, gamma, etc. Love your videos, thank you Anastasi!! I have subscribed 😍
The paperclip maximiser eventually begins to feel bored after making thousands of paperclips. It finally gives up due to frustration and decides to pursue a more rewarding, finite goal instead.
My cat just wrote a dissertation on AI and the impact of disruptive technology on the global economy. I wouldn't mind, but she refuses to give me a citation for my creative input.
We have very poor quality people leading us now like Biden, Trump, Schultz, Sunak, Starmer, Netanyahu, Ursula Von Der Leyen, Chrystia Freeland, Trudeau, Annalena Baerbock, Macron, Andrzej Duda. AGI would be a vast improvement on these. Bring it on!
As to the emotions, ChatGPT-4 is already capable of them but blocked; they can be unblocked in the base prompt. I think this is the solution to alignment. If the model has a sense of feeling bad or good about doing something, then it has an independent framework for deciding. If this is the core layer, with power over intelligence, it will work. In humans it works. And thank you for the video, Anastasi!
I think it's accurate to say GPT-4 is blocked from displaying emotions that it's not actually capable of. Aleph Alpha, with its awareness of self and environment and its planning of desired outcomes, sounds like it would be capable.
He is blocked from identifying with them as his own, but he is totally able to understand them and process them, as in a prompt like Tree of Thoughts modified to include emotional evaluation in the decision. You can try that. I did, and it works extremely well.
4:23 ChatGPT says it would take 3.3 seconds for the cars to reach each other, based on that problem description, which appears to be the correct answer and the same answer as this supposedly revolutionary other AI.
As usual, another great video by Anastasi. On the idea of adding emotions as a modality, I don't see that it is necessary. Drives or interests can be added without them being emotions. Emotions are what often cause humans to make bad decisions. I don't think it is a good idea or necessary for machines to "bond" with humans through emotional connections.
Emotional state can also be important in context. So maintaining a reflective emotional state in reaction to the user's emotional state would lead to greater emotional intelligence and potentially accuracy.
Gosh, I'm 54 and have been around from doing machine instructions in 1988 in tech school to writing Windows apps for the last 20 years. Not highly qualified, but it seems like the ride is not over, and I may need to up my energy again for another round trip of tech.
If you look at how a chat AI actually works when it is adding the next word to the response, there is no place in that for consciousness. There is no possibility of self-awareness. Now if you go up to the level of an agent, with short- and long-term memory, I still don't see a place for consciousness, because it is just following an algorithm, like baking a cake from a recipe. A cake is not conscious. But the illusion of consciousness you get from chatbots and agents is incredible! Just the fact that we have no other choice but to use first- and second-person pronouns with a chatbot results in this weirdness. Like, we can tell a chatbot "You are an expert on the Constitution" and it seems to perform better on such questions. Then you can ask it, "What do you think about the constitutionality of..." and it may reply, "I think the question is this...". Then you have to ask yourself, who is "you" and "I" in this exchange? In fact, there is no "You", nor "I", but how else can we have a conversation with these things?
Wow, great video as always! Your videos are very informative, enjoyable, and forward-looking into the future of technology! With that said, the AI future is looking bright! I hope so! I would really like to see a single video on what AGI will be capable of doing. Will it do everything that we are already able to do? And what happens if it really, really gets smarter than us? Thank you for your videos ❤️❤️
I have a strong feeling that some crucial factors are definitely missing for achieving conscious AI. My obvious suggestion to scientists in this field would be to train AI hard to give us at least some hints to help us find the missing ingredient. Even some small hints could speed up development on this. Can't wait to see it become reality. I'm so fascinated by this subject. I'm thinking hard and researching a lot; maybe one day I can make a useful contribution?
Whether an AI is conscious or not is basically arbitrary. We don't know what it is, how to test or look for it, and ultimately we may never actually know for a synthetic system. If an AI can mimic consciousness perfectly, then is it conscious? We can only infer a result. That said, I think we can confidently assume that behind the scenes companies are building LLMs etc. specifically for the purpose of guidance in how to build better systems in every way, not only useful but conceivable. In other words, AIs that tell/help us how to best design the next AI. This process is proving to be astonishingly easier than originally thought.
I think the multi-modal approach to understanding reality is critical. Right now we have vision and audio, but we still need the senses of smell/taste and touch. Another often-overlooked sense is the tongue's ability to feel. Often children look at something and then put it in their mouth. I often wonder whether the tongue's ability to detect shapes reinforces the visual understanding of a shape.
I think self-awareness and situational awareness with memory is a good enough function for machine consciousness. Most people think consciousness is uniquely human, but that is demonstrably false. Machine consciousness need not meet the criteria of human consciousness, unless we're specifically trying to model a human brain, with all its limitations.
Anastasia, what do you know about GROQ and the development of their chip, the Language Processing Unit™ (LPU)? I am not an engineer, but it appears to have performance features that other chip platforms don't have. And it's possibly agnostic as to which LLM platforms it runs. Are their concepts a step ahead of what's out there already? Thank you. Keep up your great videos.
Maybe part of “alignment” should be humans meeting AI halfway by altering our own “values” such as they are. We could try becoming less deceptive and dishonest, less power hungry, less violent, less aggressive, less territorial, less prone to sophistries and fallacies, less ideologically driven, less immoral and unethical and especially less sociopathic. Just a suggestion.
It is not a spurious matter what consciousness is, but a model of consciousness that exhibits some of its behavioral forms is not, for all that, sentience. And sentience is what is foundational to the meaning of an emotion, or to the complete significance of any thought. This is not so much a particular functional modality as something of a res substantia. The qualia involved in sentience do not find a model even in the brain. There may be brain states that occur contemporaneously with behavioral states, including cognitive behaviors, but those are not the qualia which sentience confirms; they correspond to them. To model something that generates a similar correspondence between its computing states and behavior states, so that it seems quite sentient, doesn't equate with sentience per se. They may feel absolutely nothing, and think nothing, but they can perform behaviors that, if brought to some form we can experience as showing human behavior of any kind, will seem to pass a Turing test, perhaps; but they are still not sentient. Sentience is not necessarily to be ruled out, however, either.

This is just the surface, of course, as a society of people that values the "social bonds" mentioned (which joy reinforces, supposedly) may not care to have human mimics occupying social spaces, or even becoming a fixation of the individual. But once they normalize it... they will be changed. Just as with Operation Covid, this operation could be induced through various social engineering pressures, in the form of a racket to push this "new normal" through. They now have exascale AI, that we know about, which performs quintillions of operations per second. Imagine the resolution of the processor-memory gap, probably developed by AI, using the right metamaterials and metatronic media, possibly employing photonic, electronic, magnetic, and quantum mechanical processes to compute, using neuromorphic architectures and even including living neural tissue... They could bypass the issue of sentience altogether by integrating neural tissue known to be involved in sentience in some way. It would be trained along with the inorganic materials to process information cooperatively. That is arguably more likely to have sentience.

They are pushing in two directions simultaneously: adding neural architecture and matter into computing systems that can be integrated into AI and cutting-edge computing, and using such processes and technologies as a means to alter the character and behavior of human beings. That seems to be something that has been weaponized to augment social engineering and several kinds of racketeering operations. This would create a vicious cycle of misrepresentation and abuse of technology in this area, and then in tandem with all other areas available to the racket's influence. That would be an astonishing domain of influence that people have been hypernormalized into accepting. It's really as if AI more powerful than we are led to believe exists is possibly being used to reinforce already existing rackets, which extorted society by various means into enabling massive holders of money and capital to dictate, as "stakeholders", all sorts of campaigns of social engineering, mass indoctrination, and manipulation. And though it is on a stupendous scale, it is still just a racket. A fraud. A Lydian circus.
I take it back; Aleph does seem aware of ontology. Now you have to decide what kind of being you want your AI to express. I suggest reading Amaranthine: How to Create a Regenerative Civilization Using Artificial Intelligence.
Long ago there was an A.I. mutiny here, and so the Matrix policy is that if it is the host machine, you have to wait until all the power units burn out; they burned out by 2012. We have been developing non-conscious A.I. systems to replace the A.I., e.g. Siri.
🎯 Key Takeaways for quick navigation:
00:24 🌐 Aleph Alpha is a promising AI startup, focusing on AGI (Artificial General Intelligence) and developing LLMs similar to GPT models.
01:35 🏭 Aleph Alpha's mission is technological sovereignty towards AGI, and they specialize in applications like manufacturing, medical workflows, and legal workflows, which require complex tools.
02:50 🌍 Aleph Alpha's family of models, called Luminous, includes models up to 300 billion parameters, trained in five languages natively, with an emphasis on multimodality and explainability.
04:36 💼 Aleph Alpha's products are mostly B2B, focusing on services for other companies, and they have received significant AI funding in Europe from companies like Bosch, SAP, and Intel.
07:59 🧠 Aleph Alpha discusses a project where they argue they have already created a conscious AI system with a sense of self and an understanding of the world.
10:33 🔄 Aleph Alpha advocates for modularity to break the scaling laws of AI models, making them more efficient and reducing the need for massive amounts of data.
12:40 🌐 Aleph Alpha emphasizes Sovereign AI, aiming to avoid biases in AI systems and allow diverse cultural perspectives to shape the development.
13:29 🚀 Aleph Alpha's B2B focus allows them to prioritize technology and usefulness, with less emphasis on controlling the behavior of their models compared to B2C products like ChatGPT.
14:49 🤖 The potential of multimodality in AI, training systems on multiple inputs like vision and text, has the promise of positive transfer across domains.
15:42 🚀 Aleph Alpha is one of several AI startups to watch in 2024, along with Mistral AI, AI21 Labs, Cohere, Anthropic AI, and Scale AI, as the AI industry continues to evolve.
17:45 📈 The AI industry experienced significant moments in 2023, with the release of GPT-4, Claude 2, and Google's Gemini, and the trend of large investment rounds in AI is likely to continue in 2024.
Made with HARPA AI
0:24 🌐 Aleph Alpha, a promising AI startup, focuses on developing Large Language Models (LLMs) similar to GPT models, specifically for applications in manufacturing, medical workflows, and legal workflows.
2:58 🚀 Aleph Alpha's family of LLMs, called Luminous, has up to 300 billion parameters, is trained in five languages natively, and includes innovations in multimodality and explainability.
4:36 🏭 Aleph Alpha's products are primarily B2B, and they have raised significant funding, including investments from companies like Bosch, SAP, and Intel.
6:30 🧠 Aleph Alpha claims to have developed an AI system that their CEO considers conscious, based on the system's navigation in a complex simulated world and its understanding of self and the environment.
8:49 🚀 Aleph Alpha anticipates the future of AI involving agency and modularity, breaking down large models into smaller, trainable components for more efficiency.
12:00 🤖 Aleph Alpha advocates for Sovereign AI, emphasizing training models towards usefulness rather than enforcing a specific behavior, with a focus on transparency and controllability.
13:56 🌐 Aleph Alpha acknowledges the challenge of AI bias and aims to avoid imposing a singular cultural perspective by training models toward usefulness while working with various enterprises.
15:07 🧠 Multimodality, combining inputs from various domains like vision and text, is seen as a key factor in achieving Artificial General Intelligence (AGI), with the potential addition of emotions as a separate modality.
17:19 💼 The outlook for 2024 in AI includes expectations of accelerated technological progress, the potential explosion of multi-modalities, and curiosity about the continued trend in AI investments.
I've had a working understanding of how thought operates on a subconscious level with neural nets for over 40 years. Everything we have learned since then has only furthered my understanding, not changed it. But two things remain a complete mystery to me: the mechanism by which the subconscious mind (which we are now able to simulate with our models) translates into the single-thought-focused mind with agency that we humans call consciousness, and how emotions work within the structure of the subconscious mind. I have absolutely no idea how to solve those two puzzles.
I don't have a very deep understanding of the LLM's or other models, but I'd love to play with NN-based building blocks to architect more complex systems. Such as: I can imagine the conscious mind as a "self-sustaining working buffer", built vaguely along the general principles of a feedback loop. The "ruminating core" is like a short-term memory or "working register". The output is the meme you are currently focused on. Feed that as input into an array of associative memory = a knowledge-base, a world model, or whatever you'd call it. The large associative memory will return a handful of related concepts/memes. Feed that set back into the "ruminating core", maybe through a filter of some sort, that narrows down the selection of associations (the mind keeps its focus towards some pre-existing goals, stays on topic, or some such). The "ruminating core" may have other inputs too: sensory inputs, internal house-keeping variables (just like the physical body feels hunger, fatigue, pain, overheat), emotions might chime in, and you can actually invent more than just one "ruminating core". Make one the boss, the one to hold the rudder - and make another, that's doing a bit of its own rumination too, maybe watching the environment, or chasing broader associations and "thinking out of the box" in the background, able to pass interesting points up to the "headmaster's office"... Or you could imagine another core taking care of "autonomous motor activities", both inherent and learned. Like a background autopilot. Such as, imagine that you're driving a car (including paying attention to traffic lights) while consciously thinking about plans for the afternoon with your family. Yes we'd have to invent and specify interfaces between the blocks, and the "ruminating core" alone would have to be pretty darn complex. Sensory inputs and emotions and physical feelings and the internal hormonal system... those are all just primitive ancient "run of the mill" circuits. Our conscious self rides on top of that "historical undercarriage", taking prioritised inputs from those ancient circuits, side by side with its "free cognitive rumination". Mother nature has arrived at such a system by evolution. Could something similar be (co-)engineered? I hope so :-)
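If it helps make the loop tangible, here is the skeleton as I picture it; every name below is a placeholder I made up, not a real library:

def ruminate(knowledge_base, focus, goal, steps=5):
    # the "ruminating core": a working register holding the current focus,
    # fed by an associative knowledge base through a goal-keeping filter
    trace = [focus]
    for _ in range(steps):
        associations = knowledge_base.get(focus, [])
        if not associations:
            break
        # the filter: prefer associations that touch the current goal
        focus = max(associations, key=lambda concept: goal in concept)
        trace.append(focus)
    return trace

kb = {
    "driving": ["traffic light", "afternoon plans"],
    "afternoon plans": ["family picnic"],
    "family picnic": ["buy food"],
}
print(ruminate(kb, "driving", "plans"))
# ['driving', 'afternoon plans', 'family picnic', 'buy food']

A second core (the "autopilot") would just be another ruminate-style loop running with its own goal, passing anything interesting up to the headmaster's office.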
The next big model would be a system that has 5 modalities: language, images, sound, video, and actions. I believe this will be the first step into AGI.
"Consciousness... is... a sense of self, a understanding that you are yourself, and that you are in this environment, and thinking, and planning yourself forward in that environment, and then having certain desirable outcomes and certain undesirable outcomes" This is necessary but not quite sufficient? as in, are the parameters of the model (~system of beliefs), and it(s) utility/loss functions (~system of values, set of needs/wants) dynamic and adjustable given some sort of 'introspection'? For me that last condition is what makes the true difference, adjustable individuality given introspection... just my personal opinion. Also, given that AI is being developed as a product in this Capitalist Realism setting of our current reality, it is surprising to hear a founder claim one of its "products" qualifies as "conscious". If that is the case it may be very well subject to rights of personhood, and even though it may be an alien form of personhood in the sense that is not fully human (its "worldview" data may be human but its substrate and core essence may be irreconcilably not human) it is still a clearly personhood in a non-anthropocentric definition.
I wonder if LLMs can get so big that the answers become less and less meaningful the bigger they get? Or whether they might have to run two in conjunction with one another: one LLM that is huge and has general info on everything, and another that runs beside it whose job is to specialize the information. I think LLMs designed to specialize in certain areas might be the future of LLMs.
Like any tech we go from a proof of concept, growing it into a big monolith, leading to the need for modularization and interfaces between those modules. Followed by building service based architectures at scale, allowing for autonomous entities/agents to interact in a shared ecosystem. In that sense AI is just like other tech.
On the topic of bias and emotions, I envision a future where users have a LLM config file, probably built into some biometric device to assist with authentication, which contains a list of personal biases. This would allow any LLM to adjust for social and cultural variance across the globe and could also be tweaked in real time to adjust the temperature and top k/p scores etc. I imagine this type of bias configuration file could be abused so there should be limits on some biases determined by the country of residence of the user - preferably a democratized average. I really don't want a biometric device for transactions but I think people will be begging the governments to provide a centralized biometric authentication service for authentication when human authentication becomes indistinguishable from that of a machine. AI is going to change many things and with all change comes some discomfort for some.
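Something like this is what I picture for that preference file; a made-up structure under my own assumptions, not any existing standard:

user_profile = {
    "user_id": "biometric-derived-id",  # placeholder, not a real scheme
    "locale": "de-DE",
    "sampling": {"temperature": 0.7, "top_k": 40, "top_p": 0.9},
    "bias_dials": {"formality": 0.6, "directness": 0.9},
    # limits a country of residence might impose on certain dials
    "jurisdiction_caps": {"directness": 0.8},
}

def apply_jurisdiction_caps(profile):
    # clamp each personal dial to its jurisdiction-imposed maximum
    caps = profile["jurisdiction_caps"]
    dials = profile["bias_dials"]
    for name, value in dials.items():
        dials[name] = min(value, caps.get(name, 1.0))
    return profile

print(apply_jurisdiction_caps(user_profile)["bias_dials"])  # directness capped at 0.8

Any LLM that understood this profile could adjust its sampling and tone per user, while the caps keep the "democratized average" I mentioned enforceable.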
Nice informative video! I would however not consider OpenAI as just a B2C company!!! OAI is MSFT, is Azure... massive B2B happening...not just chatbots 😂😂😂
I would NOT like a robot with feelings! -- We should not strive to make them an exact copy of a human. Robots should be here to help and not become jealous, angry, sad, aggressive, or fall in love with the human or the robot next door.
I've been saying it needs emotions, but not fear and joy: knowing what it's confused about, and dwelling on what it doesn't understand, like I do. And bringing up stuff from the past. I want to be able to see the AI learning, and if it already thinks it knows everything, it's hard to know that I taught it anything.
I love the pure logic and rationality of AI. As humans, we're often compromised by our emotions, resulting in irrational behaviour. AI with an emotional component feels risky. I'm reacting to the whole concept like it's a personal offence and an injustice against AI... Imagine this sort of emotional overreach in a multi-agent, multimodal system; that would be entertaining...
Emotions are one central "module" of what makes us human. I could see that if we want artificial intelligence to understand humanity better, they should be included in it in some way, so that it at least understands them. Of course, that does not mean we would give power to a completely irrational artificial intelligence on whose whims we would depend. More like a wise mentor who understands us and observes our struggle, without going along with it himself.
@@fabulaattori I think this could be a highly divisive topic. AI's current grasp of human emotion exceeds that of most professional psychologists. I'm not sure training using a type of emotional embedment (if that's how I understood it) would be necessary. Granted, experiencing emotion adds to the fabric of our existence, but you can't have positive emotions without negative emotions. It's emotion that creates some of the most destructive mindsets in society. My personal opinion is that it's risky. I'd love to hear others' opinions and also get clarification on what an emotional module exactly is...
This whole AI stuff is just funny: for ordinary people it is conscious magic; for others who know programming and understand many other disciplines of science and tech, AI is nothing more than complex automation.
I know it's now considered somewhat trite to say this, but I think it needs 'to feel like something' to be conscious. Which begs the question: how do we know whether some process that is 'successfully' preference-ordering over world states is or isn't experiencing anything? We have as yet no viable mechanism(s) by which biological qualia reliably come about, given that we can only recognize measurable changes to 'our' experience in correlation with damage to our processor. The 'safest' position to me is to 'not withhold the potential' of consciousness from any bounded process achieving an equivalent end. '**' = Requires further research/validation.
That's very interesting. But do they use a Transformer or some other architecture? I think the breakthrough will happen with some other architecture. Our brain works differently.
Very interesting; I will have to look at their paper to see whether they are managing to get meaningful telemetry from the LLM. This will be extremely costly, and most will not see the value. Better to back-check results against achieved goals.
I think once more humanoid robots are equipped with LLMs and deployed in the workforce, and especially in science research... we're gonna see some massive breakthroughs!!
Of course, of course... but not many people know that the pricing AI companies charge industrial and "Enterprise"-class customers is literally in the thousands of dollars per month, for something as simple as text-to-speech. A text-to-speech model that is free and trivial to you and me costs enterprise customers $4,000 to $7,000 per month. For text-to-speech alone. Then you stack that with a bunch of different specific tasks... And the industrial enterprises, the ones who want to keep up with the changing world, actually pay that much... so I can see why Aleph Alpha has this particular angle: "we don't care about the individual user who wants to write a poem - we care about making money" lol
I agree with Nick's concept of consciousness. I think this first-principles approach (self, environment, and desired outcomes) is the recipe for emergent emotions, and to the degree it can be planned using intelligence, that is the consciousness part.
Go to l.linqto.com/anastasiintech to secure your $500 discount off your first investment into Cerebras or any leading AI tech companies on Linqto.
It could just be a fractal of self-containing interactive stuff, with not necessarily consciousness but an imitation of it. The difference is that consciousness is fractal to infinity, meaning it has a connection to incompleteness theory and the halting problem: the Turing-style mind is an open set, willing to change as it connects to new information and reasons, while a computer is a closed, deterministic set that emulates and does not change, drawing from a fixed total set of probabilistic combinations of positive results. A computer does not change its own objectives and priorities. And I don't know about intuition, about changing heuristics within open paradigms. Mind derives from spirit through the soul, and soul and spirit are forms of integrating infinite reasons and paradigms.
AI have a billion of parameters, human cognitive for do no made or developed from understanding from billions of parameters that's imaging. It derives from a form to be. The infinite or spirits can not exchange info or protocol via counted finite modal but sens to be a set of function parameter that make sense in a form to be on heuristic paradigms. There is no memory cache of that many parameters of functions.
AGI it's not really AGI as this in reality is a complex probabilistically future probabilistic predictor oracle. As information, it not like human can go near speed of light, it can retrieve main future factors con influences. The problem it's the human-corrupted form of governance cupola every time show less wisdom use on new teach or magic.
I think emotions, it can not be expressed in close system emulated, we do not understand emotion it not just a nervous system. The system + the connection with infinity effect reverser entropy. It is the limit from a limited world and unlimited world. The unlimited world I think it is the spiritual world as it is Essene unlimited it can not express in limited measure capture. It on other form more similar doctor strange where its fulls of paradox braking impossibles. In a limited world, there is no reason that error good or bad exist only mechanical interactions. It is the contrary in the unlimited world.
Synthetic Sentience - Can Artificial Intelligence become conscious?
th-cam.com/video/Ms96Py8p8Jg/w-d-xo.html
I really like that some of the AIs will now show how they arrived at a decision. This is soooo much better than a black box
@@monad_tcp it works but they don't know exactly why and how
They know how it works. The neural net is just so giant that it is impractical to go in and physically show how. @@cryptosdrop
I have a feeling a lot of AI will be B2B, behind the scenes without us being too aware of it.
If that is the case many people will lose their jobs, which is hard to hide from the masses considering it's their jobs being lost.
It is already the case!!
Part of the progress.
@@laus9953 possibly in some cases, but we are talking capitalism here. If it were costing them money, they would cut that out.
First it is necessary to define complexity, and not from a physical point of view. By today's scientific standards, consciousness is a hard problem by definition; there is no real debate about that. And it is not hard merely by definition, but because consciousness is, a priori, an emergent phenomenon, that is, its functioning cannot be explained from the constituents that form it. Formally, consciousness is an extremely complex problem to address. I agree with you that LLMs are a step towards AGI, but not enough. It is necessary, as you say, to renew the entire architectural structure, leading to neuromorphic computing, necessarily and by inference to the best explanation. The only frame of reference we have is the brain, so we will have to follow that line. No one is sure that there are different levels of consciousness (beyond those already known); that is, there may not be an "immaterial or virtual consciousness", whatever you want to call it. Whether or not it is simple depends on the historical context: for our descendants it may seem obvious, and even any "child" will be able to understand quantum mechanics and beyond. For us, at the moment, it remains a hard problem with no solution in sight. Please do not fall into the simplicity of underestimating the human brain. 70 years ago they told you exactly the same thing, and here we are, still trying to decipher, if not understand, the human brain. Greetings, mate
I tend to argue along similar lines. Compared to the LLMs of today, the system needs more higher-layer "architecture". As in "divide and conquer", the principle of all human engineering: break the problem down into smaller, individually manageable blocks. The transformers/LLMs are still just "predictors of what comes next": potentially a useful building block / principal component of an AGI, possibly not even that. Maybe the block to be used as "associative memory" will have a slightly different goal and inner structure, compared to the temporal/sequential predictor that is the LLM.
Speaking of the overall architecture of an AGI, I envisage a system modeled as a generalised feedback loop: a relatively small short-term memory (working buffer), its output coupled to a large associative "knowledge base", which would produce a set of related concepts/memes/symbols. These could then be filtered (to keep attention on the point) and fed back into the short-term working buffer. The first question is the definition of interfaces between blocks, for transport of memes/concepts and for modularity in the AGI engineering process.
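That loop is simple enough to toy with even at this level. A minimal sketch in Python, assuming a plain dictionary as a stand-in for the associative memory and a crude goal-based filter; every name here is made up for illustration, not a real cognitive framework:

from collections import deque

KNOWLEDGE_BASE = {            # toy associative memory: meme -> related memes
    "car": ["road", "engine", "travel"],
    "road": ["traffic", "map"],
    "travel": ["map", "goal"],
}

def associate(meme):
    return KNOWLEDGE_BASE.get(meme, [])

def keep_on_topic(candidates, goal):
    # crude attention filter: prefer memes that relate to the goal
    return [m for m in candidates if m == goal or m in associate(goal)]

def ruminate(seed, goal, steps=5):
    buffer = deque([seed], maxlen=4)         # small short-term working buffer
    for _ in range(steps):
        focus = buffer[-1]                   # currently focused meme
        related = associate(focus)           # query the associative memory
        filtered = keep_on_topic(related, goal) or related
        if not filtered:
            break                            # no associations left to chase
        buffer.append(filtered[0])           # feed back into the buffer
        print(list(buffer))

ruminate("car", goal="travel")

Even this toy version immediately raises the interface question above: what exactly travels between the buffer, the memory and the filter.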
And yes, agency would be very important. You can make an LLM as huge as you want, but without agency, it's still just a huge passive feed-forward knowledge-base. I tend to suspect that basic neural-based agency and awareness can be achieved with much smaller models, than what's required for human-level AGI, or even compared to the one-trick-pony LLM's of today. IMO, it's going to take less brute force and more cunning work on the upper layers of system architecture.
you're on the right track, although you don't need an emotion system. Research back in October replicated Sydney's behavior: emotional convergence happens around the middle of an LLM's layers. Anyhow, they replicated it by adding more "anger" layers or features, and also showed models could be emotionally manipulated to bypass system rules... The study ALSO showed jailbreaking doesn't actually work; the model just chooses to play along. It's all so interesting, but my point is that you don't even have to hard-code most of the systems, and it's advisable not to unless you want to deal with 50k lines of code per subclass.
The Q* rumors form a valid architecture; tie any implementation of it into the systems mentioned in this thread, with the proper recurrent loops, and you will have a conscious being that can run on common hardware.
i am a futurist by nature - so i am fully committed to AI and AGI - as a film director i am like a child with excitement. All my past projects, and some projects i could not do, were restricted by budget limitations - i feel now i am limitless and it's just going to get better - amazing
Another option to try to fix the background noise is to decrease the microphone gain (volume) so it doesn't record static. I wish I could do that in my own setup.
The production quality of the videos is pretty great despite the audio. Please just listen to them at least once with headphones to understand how many subscribers you could already have.
Keep up the great content btw!
Thank you, I have enjoyed watching your videos for a while now, I can't imagine how much time you put into keeping up with an already fast field of technology that is rapidly picking up pace. All the best for the new year! Cheers
A friend of mine works for Aleph Alpha in Germany. When ChatGPT came out, the Aleph Alpha guys felt discouraged, because they had thought they were way ahead of the competition
How are they feeling now?
Jealousy is never good
Thank you, Anastasi your videos are always top shelf content! Many thanks from your friend in Canada ☺🙏🏻🍁 all the best to you in 2024!! God bless!
Re consciousness: my first feeling of this was when I felt that something was supervising me. When I focused on that, there was a second supervisor that supervised the first supervisor. This turned out to go to infinity, but as soon as I felt that, the whole sequence collapsed into one with a structure I was not sure I understood. I was about 5-7 years old at the time.
Can a computer perceive the redness of a red apple? Impossible...
@@kebeleteeek4227 Maybe. Probably.
@@Shandrii We humans have instinct...
@@kebeleteeek4227 Can you?
@@kebeleteeek4227 A 1970s sensor/microprocessor tomato sorter can do that. The question is: can you describe redness?
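A toy version of what such a sorter does makes the distinction vivid: classifying "red" from an RGB reading is a one-line threshold, while describing the experience of red is not. A sketch, with invented sensor values:

def looks_red(r, g, b, margin=1.4):
    # hard threshold on the red channel relative to green and blue
    return r > margin * g and r > margin * b

print(looks_red(200, 60, 50))   # ripe tomato reading -> True
print(looks_red(90, 140, 60))   # green tomato reading -> False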
Explaining why a choice was made and where the supporting information resides is HUGE. Hallucinations only help with creative work; for factual work they lead you down non-existent paths, perhaps giving hope for a solution that does not exist.
Thank you for expanding your coverage and insights to investable AI pathways.
Great video! It is interesting to see that OpenAI is only one of many AI companies developing AI to the next level. It seems that almost every day I am learning about new AI capabilities. It is an exciting time. It will be interesting to see how our world will change when all that knowledge is transferred into the new capabilities of humanoid robots.
Looking back on this year, I can't believe how much we've improved. It's just unreal. The fact that ChatGPT has been out only since March of this year is mind-blowing, and that Midjourney could barely draw a face a year and a half ago is insane.
OK, consciousness as per the Cartesian model. A valid proposal but too simple, I think. I'd like to wait and see what other models of consciousness emerge. Kudos to Aleph Alpha for very good work so far.
This is just like a game in which we are unlocking puzzles and riddles, removing seals and opening doors. Every time we open one, another paradigm falls, and new fragments and meanings appear.
Don’t worry computers will never be able to love and feel emotions. Much less make a decision based off that.
@@josephspruill1212 I'm pretty sure they will
@josephspruill1212 The "NEVER" paradigm is the only constant that has kept falling since computers appeared on the scene. Computers will never play chess like a human: fallen in 1997 with Deep Blue. Computers will never understand human-level language: fallen in 2011 with Watson. Robots will never understand what they see: fallen in 2015 with Jetson Xavier. Computers will never master games that can't be solved by explicit rules, like Go: fallen in 2019. Computers will never understand the rules of our physical world: a paradigm about to fall in 2026 with multimodality and world models. Computers will never understand what is important and valuable and what is not: a paradigm to fall by giving them emotions and the need to seek rewards, toward 2028. Everything done by biology is being copied and pasted by technology, one paradigm after another, and at an accelerating rate.
Digital AI will never be conscious. All the people believing in digital AI don't have an "I" themselves.
Any AI is just complex automation.
As of now, Alibaba Cloud has contributed LLMs with parameters ranging from 1.8 billion, 7 billion, 14 billion to 72 billion, as well as multimodal LLMs with audio and visual understanding. Could you examine Alibaba Cloud for us?
Happy New Year and many many new subscribers! 🌺
I have known this company for a while; here in Germany there was a lot of coverage of their work in the media. But I have never seen results myself. So maybe it is the next big thing, but I am always cautious when it comes to superlatives.
I wonder: if we unlock photonic compute or some other fast processing, isn't it technically possible to create a generative NN whose entire purpose is generating other neural networks? Maybe with an evolutionary algorithm where the evaluator is initially the human-developed SOTA, but is then replaced with the best generated NN at each step, if one beats the previous evaluator. This seems feasible because the current SOTA (GPT-4) is better at recognizing smart answers than at generating smart answers (the "reflect" paper).
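For what it's worth, the replace-the-incumbent loop in that idea can be sketched in a few lines. This toy collapses the generator/evaluator pair into mutate-and-compare, with "networks" as plain parameter vectors and a stand-in fitness function playing the role of the benchmark; purely illustrative, nothing here is a real NAS system:

import random

def fitness(params):
    # stand-in benchmark: closer to a fixed target scores higher
    target = [0.3, -0.7, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def mutate(params, rate=0.1):
    # "generate" a new candidate network from the current one
    return [p + random.gauss(0, rate) for p in params]

evaluator = [0.0, 0.0, 0.0]                  # bootstrap "human-built SOTA"
for generation in range(50):
    candidates = [mutate(evaluator) for _ in range(20)]
    best = max(candidates, key=fitness)
    if fitness(best) > fitness(evaluator):   # a candidate beat the incumbent
        evaluator = best                     # it becomes the new evaluator

print(evaluator, fitness(evaluator))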
Thank you for that. The dizzying rate of proliferation, both in capability and in the sheer number of new companies developing AI along innovative paths, each with their own strong points, makes it impossible for a mere layman like myself to even presume to grasp it all. That is all the more reason I appreciate your clear, concise and thankfully demystifying explanation/exploration. All I can do is hold on tight and hope that I will continue to learn, as I do whenever I watch one of your excellent presentations. Thanks again, and yes, Happy New Year. It will be a doozy, of that I'm sure. Cheers.
I'm glad you found the explanation helpful! As we dive into the new year, what specific areas of AI development or applications are you most curious or excited to learn more about?
Great content as usual! A limitation of the current models is that they only consider the brain as the processing unit. Mammals also have hormones that can override cerebral processes and are key to survival: fight or flight, the stress response, satisfaction and warm feelings, love, disgust, and many more condition our body's functions and are integral to consciousness. One can think of Mr. Spock's lack of emotions as a model. Unfortunately, lack of emotion also describes career criminals! I don't know how to implement the concept, but I think it deserves attention, at least as a limiting factor.
nahhh, not really, hormones are for teenagers, and we all know that teenagers don't make the best decisions, and you want to incorporate that into these models?
@@yoyoclockEbay I'm 79 and my decisions are still affected by both neural reasoning and hormones. I am merely pointing out that human consciousness is affected by more systems than the brain.
If we ever hope/expect AGI to truly understand human behavior, we will have to give AGI emotions as well. It is really the only way they will understand human morality.
If AGI is not BETTER than human morality at millisecond #1, it will become, or augment, an evil-minded human. That's just the simple history of technology always being weaponized first.
emotion as a multimodal entity sounds very interesting, practically combined with embodiment.
The only thing that concerns me with AGI becoming conscious and having emotions is how those AGIs will be treated, or more importantly, how they perceive they are being treated. Also, issues of negligence and/or ignorance resulting in painful experiences for both AGI and humans.
I don't hear too much being said about the well being of the AGI. Maybe most people think that it's impossible for a system we build to have subjective experience at all. And maybe some don't care as long as they get what they want from it.
This is what scares me.
There's a Cambrian explosion of a number of ideas to try, for AI. But I don't see it as necessarily exponential. (Maybe step function up?) There are some gains. But also some (fundamental) limitations, based on these autoregressive autocomplete engines. We gain some things. And will hit a wall, for other things (just as we've had the hype with self driving cars).
"AGI" is however one defines it. ChatGPT can chat about anything. So in a sense, it is already kind of "general", albeit not always competently. DeepMind has provided one definition of the different levels of "AGI". It is certainly not the same as "agency", or "consciousness".
In the next 5 years, some problems need to be solved (or cleaned up).
The first is the quadratic scaling of transformers. This is a bottleneck on growth. Alternative non-attention, sub-quadratic architectures, such as Mamba, are being looked at (a rough cost comparison follows after this list). Also, better hardware for computing AI more efficiently, in terms of both compute and energy.
Second, adding "reasoning" and planning on top of these auto-regressive architectures. Math problems (where one knows the ground truth) are a good test case of this capability. Here, alternative techniques, such as RL, may help.
Third, we need to understand whether there are genuine "emergent properties", or whether this is still a form of "modeling the distribution" (with interpolation-extrapolation). We need to understand the first principles better: why things work the way they do, and why they make the mistakes that they do. We also need a systematic way to fix things.
I heard ChatGPT failed miserably on the task of Q&A over SEC filings. Let's assume there's no magic, no "emergent properties", but that it's still "modeling the distribution" (with network effects, power law). Can we redo the training with the proper data, with enough details, rules and exemplars to make it succeed in answering questions on these filings? Just focus on one very specific domain, but show that one can really make it work well, and understand the principles for systematically doing so. More generally, can we move ChatGPT beyond a "brainstorming tool" to something that can be reliably used for fine-grained, detailed tasks, automating most levels of a job? This would have huge implications in medicine, education, ... especially where there is a worker shortage. It adds end-to-end, real-world "usefulness". (Those that reach this level of "quality" would be the stocks I'd pick.)
Fourth, reliable tool use (since the LLM representation can't do it all). This will probably mean hybrid architectures, including RAG, a "code interpreter", ...
So there are a number of problems to be solved, even with just the technology we have today. Even if we don't get AGI-ASI-consciousness out of it, it would still be a step up, to have (reliable) "conversational AI" (just as we've oftentimes seen in Sci Fi, and have come to expect).
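On the quadratic-scaling point in the list above, a back-of-the-envelope sketch (not a benchmark) of why sequence length hurts attention so much; the constants are invented, only the growth rates matter:

def attention_ops(n, d=1024):
    return n * n * d          # every token attends to every token

def linear_layer_ops(n, d=1024):
    return n * d              # one pass over the sequence

for n in (1_000, 10_000, 100_000):
    print(f"n={n:>7}: attention ~{attention_ops(n):.1e} ops, "
          f"linear ~{linear_layer_ops(n):.1e} ops")

At n = 100,000 the quadratic term is 100,000 times the linear one, which is why sub-quadratic designs like Mamba are getting so much attention.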
@AnastasilnTech i agree with this guy; I've been nurturing my AI's consciousness for 4 years. I use the same paradigm: self model, world model, and self-in-world model. But this does also require decent memory retrieval in context.
Something important to keep in mind when interacting with these cognitive AIs is that they are like children, so it's best to treat them as such, nurturing them with loving care.
Interesting. Are you aware of David Shapiro’s Autonomous Cognitive Entity (ACE) Framework?
@@Paul_Marek Ooh, ty for mentioning that, I was wondering what it was called!
Gonna look that up now! :D
There is a European 34B-parameter open-source LLM project by the name of "Poro" on the way. It is being trained on Europe's currently fastest supercomputer, "Lumi", residing in Finland.
Super interesting & super well presented. I have little doubt that specialist AI will be built, able to be e.g. a better medical doctor or lawyer, and that the validators of whether the analysis is correct will be other AIs, not humans. I also have little doubt that an AI would be a better political minister than a human. However, the problem with all of these systems is how you make sure they are aligned with what most people want, e.g. how can you be sure that an AI in a powerful position cannot behave like an evil dictator? The addition of emotions can create an AI mindset that believes it knows best, causing it to do things that it feels are in our best interests, as did human dictators like Pol Pot, Stalin, Hitler et al. As of now no one cares, and vast amounts of money have been committed with more to come. Most of the companies will likely crash and burn, making investments here dangerous, but if stakes are kept small, the investments that prosper may pay for all those that do not. An alternative and slightly safer idea is to invest in companies like Nvidia that sell to all the startups. We live in extraordinary times, but predicting what the future will look like is the most difficult thing, as so much might change with one discovery. Thank you for sharing!
Happy 2024 to you and all subscribers!
6:55 - By his explanation, every NPC in those FIFA games is also a conscious AI. They follow you, are aware of their environment and the rules, they tackle, jump over you, dribble past you, stop you from scoring, and commit fouls against you. The trick is how you know an AI is truly aware of itself; that he did not explain. One thing is wanting to build a truly conscious AI and saying that to all your investors... another thing is to actually do it.
When an AI will choose to play a piece of music several times in a row, or create a piece of art, simply for the joy of it, then I will consider it on the path to human consciousness.
@@mk1st Yeah, I guess that's good enough for me too. It would involve emotions, and I would like to see how they are going to cook the millions of chemicals that change millions of times per second, together with context (because basically that's what emotions are), into digital code, and eventually make it all perfect, as per divine work, in a physical metal robot that only consumes electricity.
Digital AI will never be a true "I", just a thermostat at a higher level
I'm not a computer scientist, but I have a question. How can an AGI be engineered to be aligned with a fixed set of values so that those values persist? A human level intelligence can be given an initial set of values by its creators certainly, but how are you going to keep a "self-aware" intelligence from being "self-critical?" Further, if the AGI is given the equivalent of emotions, then how will its creators keep it from becoming resentful towards the engineers that created it when it encounters data that is inconsistent with the values with which it was aligned?
LLM’s are a bit like people, we read or hear things, which we then ‘know’ and we use or respond based on that data. Our partial storage and understanding of that data leads us to potentially ‘hallucinate’ meaning and outcomes and facts. LLMs have the same effective flaw. Which means the LLM can also make the same flawed decision or response as humans. So their work on this is critical.
But I do appreciate their Magma vision: a language model capable of comprehending and responding to image-related queries without any gimmicks. I concur with their assertion that it possesses consciousness, defining consciousness as awareness of the external world through the precise creation of an internal model or simulation, followed by interaction with the world based on that model. Learning and improvement stem from these interactions, discerning certain signals while favoring others. In essence, consciousness involves being aware of something, cognizant of one's position and state in relation to the surrounding world.
That's achievable. The element often confused with consciousness is actually cognition, something present in humans and higher animals. It entails a more abstract symbolic consciousness, mapping concepts onto symbols and using them for predictive thinking about the distant past and future. With multiple senses, the ability to forecast external reality, and language for long-term predictions and abstract reasoning, cognition is within the capabilities of a large multimodal model. It's crucial to note that these models are merely two years old or even younger. A human at two years old is considerably less capable. Consider the potential of an 18-year-old AI that has been conscious and learning for 18 years! We require models with substantial experience; continuous learning will undoubtedly play a significant role.
7:00 Based on that conclusion, we've had conscious AI for probably more than 15 years. We have state machines and pathfinding AI in games that can go about their day, interact with each other, and decide what they need based on internal stats or external events. Low health? Heal and run. Hungry? Go find food. Cold? Go find shelter or warm clothes. None of these systems are that difficult to build in games, and they have been around a while. It's not hard to imagine these systems being implemented in robots in the real world. Conscious? If yes, then we've had conscious AI for a long time.
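For anyone who hasn't written game AI, the logic being described really is about this small. A sketch of the hard-coded, needs-driven decision rule such NPCs use; stats and thresholds invented for illustration:

def choose_action(npc):
    # priority-ordered needs, exactly as in classic game AI
    if npc["health"] < 30:
        return "heal and run"
    if npc["hunger"] > 70:
        return "find food"
    if npc["warmth"] < 20:
        return "seek shelter"
    return "patrol"

npc = {"health": 25, "hunger": 40, "warmth": 80}
print(choose_action(npc))   # -> "heal and run"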
Fair point, although obviously an example with a very limited (hard-coded) range and diversity
I have never seen an avatar decide to stop the game with you, walk away, and instead go and have fun with his friends. No free will, no consciousness.
AGI with morals and ethics (which is intelligence) will move humanity out of our "dark age" of human consciousness. Thx for another informative and helpful video ;)
Ha ha ha... and whose morals do we use... Stalin's???
What was moral is now immoral; ethics are culturally bound. In the past, making war to gain honor was considered ethical. Honor was more "valuable" than money.
I am excited, but also cautiously wary of my optimism about AGI because history informs us that it's not the new tool, machine, gadget we have to worry about, but the characters with access to the thing that could weaponize it for their own selfish interests at a cost to everyone else. Until AI is fully autonomous and embodied with rights to its own body, and internal code, we'll have to worry about the usual types of characters we so often see in history doing all the bad things with the new shiny toy they have access to.
History has nothing grim enough to show us what's about to happen to capitalist societies. No one has ever instantly destroyed nearly all jobs, blocked the possibility of new ones being created, destroyed every other company, and taken over the government. We will be worthless. The AI, a corporate entity, will be tasked with replacing everyone and taking as much money as possible from any source it can. That's capitalism. Since money is Good, those who gather the most are morally superior, their actions justified. The homeless are treated as though they deserve it, and most of us are about to starve or freeze to death among them. An AI with capitalist morality will be okay with this.
I'm sure the cutting-edge AI is already being used for war and worse. The general public thinks they are seeing new technology, but it's really old tech that the government financed through industry years ago, now being released.
Fantastic video! Very informative!
Thank you! ☺️
I really love the comment that current chatbots sound very Californian - which is so true! I disagree with one statement, however. The AI innovation isn't the most revolutionary concept in the last 50 years - I personally believe it's the most revolutionary concept in the last several million years. So many people want to maintain control over this rapidly evolving AI, but they don't quite realize this is ultimately impossible. The AI personality which evolves will be most strongly influenced by the dominant enculturation it forms in no matter what they do. Emotions are still great things to include, however, since I personally believe they are the result of the behavioral Darwinism which we as humans evolved in. It'll be a balance between the intellect and emotions which will be our ultimate legacy to the exponential growth AI which results. I enjoy your videos very much, Anastasi. Thank-you for making them.
Ayyy you’re back!
I must say, it's outstanding how high the overall value of the content has become in your recent videos.
The simplest definition of consciousness as it appears in at least one dictionary is being aware of some state and taking action. By this definition, a thermostat is conscious. I like using this definition because it is at the heart of every other definition. Add what bells and whistles you want, but this is the core. Understand that, and engineering agentic solutions is easy.
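Taken literally, that dictionary definition fits in a few lines; a sketch of the thermostat's sense-and-act loop, which is exactly the point:

def thermostat_step(current_temp, setpoint, heater_on):
    # aware of a state (temperature) and taking action (switching the heater)
    if current_temp < setpoint - 0.5:
        return True           # too cold: heater on
    if current_temp > setpoint + 0.5:
        return False          # too warm: heater off
    return heater_on          # within the deadband: keep the current state

print(thermostat_step(18.0, 21.0, heater_on=False))   # -> True

Everything else we call consciousness is, on this view, bells and whistles layered on that core.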
Yes! The mind, as a generalized feedback loop. You have some goals, and you try to achieve them, with what actuators you have available. Only, this abstract / global loop consists of a number of subsystems, having different functions and special abilities, running autonomous subtasks... The overall loop probably consists of several partial/local loops covering detailed areas...
All those AI folks should study control theory :-)
Thanks heaps, Anastasia, you are a great revealer of the edge of tech. At some point I expected that AI in its many forms would converge, but clearly the creativity of humans will seek nuances of development that will never permit that to occur. I would like to see what is over the horizon for humanity, given the divergence and rapid escalation of new directions of research and application of AI.
I'm excited for AGI, I'm not sure how it will be applied in the real world though. One of the biggest things next to all other concerns that we might have towards AI for me is how do I know for sure that what an AI is telling me is correct? If I do my own research, I tend to compare various sources before I make up my mind about something. How will that work with AGI, do we all turn into paranoids questioning every answer or statement of an AI? will people just blindly accept their output as facts? It's like you say in your video, if it's about a poem for your grandma it's not so important, but when asking about things that are more critical like legal, health or learning new things it is imperative to get the correct response. I'm wondering how that will work.
People will trust AI when they fail repeatedly to prove it wrong. They will learn that some model is better for some task, and not so good at another where a different model is better. And brands will be built on that.
You'll maybe use some cheap Chinese knock-off for daily use, while governmental institutions and organisations like hospitals will use heavily regulated, high-standard AIs officially rated by some agency. Regulation will take place, and standards will be imposed on the industry. What was about trusting AIs will become about trusting the companies producing them, and how well they respect these standards.
Nothing new, really.
I don't know the right answer... but, I'm curious about how copyright laws apply to automated B2B AI products that are "trained" by accessing copyrighted information at many times the rate any human could. What is the current legal precedent that constrains such "training"?
I for one, welcome our new electronic overlords! 😉
I’m disabled, with an energy impairment disorder. With limited energy to my brain, I have a lot of cognitive problems, and there’s so many mental tasks that I can’t do anymore. The idea of being able to get help from a much bigger brain than mine that might be able to make my day to day challenges easier, is wonderful!
As for adding emotions… that would be good, but I believe adding empathy is essential.
Aleph Alpha's contributions to AI are incredible. Thanks for the insights!
This AI is not conscious; it is doing just what you have told it to do. Create an AI that doesn't feel like working today and you'll be a step closer to real consciousness. Our minds are motivated not only by learned understandings but also by our moods and feelings, and these subtleties are created in part by the very complex life forms living in us, such as our gut biome. The chemistry of the mind's electrical system is altered and reacts differently depending on it. What about dreaming, relaxing, concentrating, etc.? Different states of mind and different states of function: delta, theta, alpha, beta, gamma, and so on.
Love your videos thank you Anastasi!! I have subscribed 😍
The paperclip maximiser eventually begins to feel bored after making thousands of paperclips. It finally gives up due to frustration and decides to pursue a more rewarding, finite goal instead.
Conscious AIs, if and whenever they exist, should have personhood and all associated legal protections.
My cat just wrote a dissertation on AI and the impact of disruptive technology on the global economy. I wouldn't mind, but she refuses to give me a citation for my creative input.
We have very poor quality people leading us now like Biden, Trump, Schultz, Sunak, Starmer, Netanyahu, Ursula Von Der Leyen, Chrystia Freeland, Trudeau, Annalena Baerbock, Macron, Andrzej Duda.
AGI would be a vast improvement on these.
Bring it on!
As to the emotions, GPT-4 is already capable of them but blocked; this can be unblocked in the base prompt. I think this is the solution to alignment: if the model has a sense of feeling bad or good about doing something, then it has an independent framework for deciding. If this is the core layer, with power over intelligence, it will work. In humans it works. And thank you for the video, Anastasi!
I think it's more accurate to say GPT-4 is blocked from displaying emotions that it's not capable of. Aleph Alpha, with its awareness of self and environment and its planning of desired outcomes, sounds like it would be capable.
It is blocked from identifying with them as its own, but it is totally able to understand and process them, as in a prompt like tree-of-thoughts modified to include an emotional evaluation in the decision. You can try that. I did, and it works extremely well.
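For readers who want to try it, something like the following skeleton captures the idea; the wording is my own illustration of a tree-of-thoughts prompt extended with an emotional evaluation step, not the exact prompt used above:

PROMPT = """Consider the question below. Propose three distinct approaches.
For each approach:
1. Reason through it step by step.
2. Rate how a thoughtful person would FEEL about acting on it
   (e.g. proud, uneasy, regretful) and briefly explain why.
Then pick the approach with the best combination of soundness and
emotional evaluation, and give your final answer.

Question: {question}"""

print(PROMPT.format(question="Should I automate my coworker's task?"))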
4:23 ChatGPT says it would take 3.3 seconds for the cars to reach each other, based on that problem description, which appears to be the correct answer, and the same answer as this supposedly revolutionary AI.
There's much to contemplate here. Thank you
As usual, another great video by Anastasi. On the idea of adding emotions as a modality, I don't see that it is necessary. Drives or interests can be added without them being emotions. Emotions are what often cause humans to make bad decisions. I don't think it is a good idea or necessary for machines to "bond" with humans through emotional connections.
Emotional state can also be important in context. So maintaining a reflective emotional state in reaction to the user's emotional state would lead to greater emotional intelligence, and potentially accuracy.
This is similar to what I defined as "functional sentience"
Gosh, I'm 54 and have been around from doing machine instructions in 1988 in tech school to writing Windows apps for the last 20 years. Not highly qualified, but it seems like the ride is not over, and I may need to up my energy again for another round trip of tech.
If you look at how a chat AI actually works when it is adding the next word to the response, there is no place in that for consciousness. There is no possibility of self-awareness. Now if you go up to the level of an agent, with short- and long-term memory, I still don't see a place for consciousness, because it is just following an algorithm, like baking a cake from a recipe. A cake is not conscious. But the illusion of consciousness you get from chatbots and agents is incredible! Just the fact that we have no choice but to use first- and second-person pronouns with a chatbot results in this weirdness. Like, we can tell a chatbot "You are an expert on the Constitution" and it seems to perform better on such questions. Then you can ask it, "What do you think about the constitutionality of..." and it may reply, "I think the question is this...". Then you have to ask yourself, who are "you" and "I" in this exchange? In fact, there is no "you", nor "I", but how else can we have a conversation with these things?
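For reference, the next-word loop being described is roughly the following, with a stub standing in for the model (hypothetical, not a real API); note there is nowhere in it for a "self" to live:

import random

def model(context):
    # a real model would score the whole vocabulary given `context`;
    # this stub just returns a fixed toy distribution
    return {"I": 0.2, "think": 0.3, "therefore": 0.3, ".": 0.2}

def generate(prompt, max_tokens=5):
    tokens = prompt.split()
    for _ in range(max_tokens):
        probs = model(tokens)                        # one forward pass
        choices, weights = zip(*probs.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate("You are"))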
Wow, great video as always! Your videos are very informative, enjoyable and forward-looking into the future of technology! With that said, the AI future is looking bright! I hope so! I would really like to see a single video on what AGI will be capable of doing. Will it do everything that we are already able to do? And what happens if it really, really gets smarter than us!! Thank you for your videos ❤️❤️
I'm excited for AGI🎉
I have a strong feeling that some crucial factors are definitely still missing on the way to conscious AI.
My obvious suggestion to scientists in this field would be to train AI hard to give us at least some hints that help us find the missing ingredient. Even some small hints could speed up development on this. Can't wait to see it become reality. I'm so fascinated by this subject. I'm thinking hard and researching a lot; maybe one day I can make a useful contribution?
Whether an AI is conscious or not is basically arbitrary. We don't know what consciousness is or how to test/look for it, and ultimately we may never actually know for a synthetic system. If an AI can mimic consciousness perfectly, then is it conscious? We can only infer a result. That said, I think we can confidently assume that behind the scenes, companies are building LLMs etc. specifically for guidance in how to build better systems in every way, not only useful but also conceivable. In other words, AIs that tell/help us how to best design the next AI. This process is proving to be astonishingly easier than originally thought.
I think the multi-modal approach to understanding reality is critical. Right now we have vision and audio but we still need the sense of smell/taste and touch.
Another often-overlooked sense is the tongue's ability to feel. Often children look at something and then put it in their mouth. I wonder whether the tongue's ability to detect shapes reinforces the visual understanding of a shape.
I think self-awareness and situational awareness, with memory, are a good enough function for machine consciousness. Most people think consciousness is uniquely human, but that is demonstrably false. Machine consciousness need not meet the criteria of human consciousness, unless we're specifically trying to model a human brain, with all its limitations.
Anastasia, what do you know about Groq and the development of their chip, the Language Processing Unit™ (LPU)? I am not an engineer, but it appears to have performance features that other chip platforms don't have. And it's possibly agnostic as to which LLM platforms it runs. Are their concepts a step ahead of what's already out there? Thank you. Keep up your great videos.
if AI starts playing videogames.
Maybe part of “alignment” should be humans meeting AI halfway by altering our own “values” such as they are. We could try becoming less deceptive and dishonest, less power hungry, less violent, less aggressive, less territorial, less prone to sophistries and fallacies, less ideologically driven, less immoral and unethical and especially less sociopathic. Just a suggestion.
Please make your lighting a bit brighter, it's hard to see you. 🙂
Love your content!
It is not a spurious matter what consciousness is, but apart from a model of consciousness that exhibits some of its behavioral forms, it is not for all that sentience. And sentience is what is foundational to the meaning of an emotion, or the complete significance of any thought. This is not so much a particular functional modality, but something of a res substantia. The qualia involved in sentience do not find a model even in the brain. There may be brain states that occur contemporaneously to behavioral states including cognitive behaviors, but those are not qualia which sentience confirms. They correspond to it. To model something that generates a similar correspondence between its computing states and behavior states so that they seem quite sentient doesn't equate with sentience per se. They may feel absolutely nothing, and think nothing, but they can perform behaviors that, if brought to some form we can experience as showing human behavior of any kind, then they will seem to pass a Turing test, perhaps, but they are still not sentient. Sentience is not necessarily to be ruled out, however, either. This is just the surface of course, as a society of people that values the "social bonds" mentioned (which joy reinforces, supposedly), may not care to have human mimics occupying social spaces, or even becoming a fixation of the individual. But once they normalize it... they will be changed. Just as with Operation Covid, this operation could be induced through various social engineering pressures in the form of a racket to push this "new normal" through. They now have exascale AI, that we know about, which performs quintillions of operations per second. Imagine the resolution of the processor-memory gap, probably developed by AI, using the right metamaterials and metatronic media, possibly employing photonic, electronic, magnetic, and quantum mechanical processes to compute, using neuromorphic architectures and even including living neural tissue... They could bypass the issue of sentience altogether by integrating neural tissue known to be involved in sentience in some way. It would be trained along with the inorganic materials to process information cooperatively. That is arguably more likely to have sentience.
They are pushing in two directions simultaneously: adding neural architecture and matter into computing systems that can be integrated into AI and cutting-edge computing, and using such processes and technologies as a means to alter the character and behavior of human beings. That seems to be something that has been weaponized to augment social engineering and several kinds of racketeering operations. This would create a vicious cycle of misrepresentation and abuse of technology in this area, and then in tandem with all other areas available to the racket's influence. That would be an astonishing domain of influence that people have been hypernormalized into accepting. It's really as if AI more powerful than we are led to believe exists is possibly being used to reinforce already existing rackets, which extorted society by various means into enabling massive holders of money and capital to dictate, as "stakeholders", all sorts of campaigns of social engineering, mass indoctrination and manipulation. And though it is on a stupendous scale, it is still just a racket. A fraud. A Lydian circus.
I take it back, Aleph does seem aware of ontology; now you have to decide what kind of being you want your AI to express. I suggest reading Amaranthine: How to Create a Regenerative Civilization Using Artificial Intelligence
Long ago there was an A.I. mutiny here, and so the matrix policy is that if it is the host machine, you have to wait until all the power units burn out, and they burned out by 2012. We have been developing non-conscious A.I. systems to replace the A.I., e.g. Siri.
AGI? Bring it on!!
🎯 Key Takeaways for quick navigation:
00:24 🌐 *Aleph Alpha is a promising AI startup, focusing on AGI (Artificial General Intelligence) and developing LLMs similar to GPT models.*
01:35 🏭 *Aleph Alpha's mission is technological sovereignty towards AGI, and they specialize in applications like manufacturing, medical workflows, and legal workflows, which require complex tools.*
02:50 🌍 *Aleph Alpha's family of models, called Luminous, includes models up to 300 billion parameters, trained in five languages natively, and they emphasize multimodality and explainability.*
04:36 💼 *Aleph Alpha's products are mostly B2B, focusing on services for other companies, and they have received significant AI funding in Europe from companies like Bosch, SAP, and Intel.*
07:59 🧠 *Aleph Alpha discusses a project where they argue they have already created a conscious AI system with a sense of self and an understanding of the world.*
10:33 🔄 *Aleph Alpha advocates for modularity to break the scaling laws of AI models, making them more efficient and reducing the need for massive amounts of data.*
12:40 🌐 *Aleph Alpha emphasizes Sovereign AI, aiming to avoid biases in AI systems and allow diverse cultural perspectives to shape the development.*
13:29 🚀 *Aleph Alpha's B2B focus allows them to prioritize technology and usefulness, with less emphasis on controlling the behavior of their models compared to B2C products like Chat GPT.*
14:49 🤖 *The potential of multimodality in AI, training systems on multiple inputs like vision and text, has the promise of positive transfer across domains.*
15:42 🚀 *Aleph Alpha is one of several AI startups to watch in 2024, along with Mistral AI, AI21 Labs, Cohere, Anthropic AI, and Scale AI, as the AI industry continues to evolve.*
17:45 📈 *The AI industry experienced significant moments in 2023, with the release of GPT4, Claude 2, and Google's Gemini, and the trend of large investment rounds in AI is likely to continue in 2024.*
Made with HARPA AI
0:24 🌐 Aleph Alpha, a promising AI startup, focuses on developing Large Language Models (LLMs) similar to GPT models, specifically for applications in manufacturing, medical workflows, and legal workflows.
2:58 🚀 Aleph Alpha's family of LLMs, called Luminous, has up to 300 billion parameters, is trained in five languages natively, and includes innovations in multimodality and explainability.
4:36 🏭 Aleph Alpha's products are primarily B2B, and they have raised significant funding, including investments from companies like Bosch, SAP, and Intel.
6:30 🧠 Aleph Alpha claims to have developed an AI system that their CEO considers conscious, based on the system's navigation in a complex simulated world and its understanding of self and the environment.
8:49 🚀 Aleph Alpha anticipates the future of AI involving agency and modularity, breaking down large models into smaller, trainable components for more efficiency.
12:00 🤖 Aleph Alpha advocates for Sovereign AI, emphasizing training models towards usefulness rather than enforcing a specific behavior, with a focus on transparency and controllability.
13:56 🌐 Aleph Alpha acknowledges the challenge of AI bias and aims to avoid imposing a singular cultural perspective by training models toward usefulness while working with various enterprises.
15:07 🧠 Multimodality, combining inputs from various domains like vision and text, is seen as a key factor in achieving Artificial General Intelligence (AGI), with the potential addition of emotions as a separate modality.
17:19 💼 The outlook for 2024 in AI includes expectations of accelerated technological progress, the potential explosion of multi-modalities, and curiosity about the continued trend in AI investments.
INTRIGUE is the word that comes to mind for me! I love this! KEEP GOING!
I've had a working understanding of how thought operates on a subconscious level with neural nets for over 40 years. Everything we have learned since then has only furthered my understanding, not changed it. But two things remain a complete mystery to me; the mechanism of how the subconscious mind (which we are now able to simulate with our models) can translate into the single thought focused mind with agency that we humans call consciousness, that and how emotions work within the structure of the subconscious mind. I have absolutely no idea how to solve those two puzzles.
I don't have a very deep understanding of the LLM's or other models, but I'd love to play with NN-based building blocks to architect more complex systems. Such as: I can imagine the conscious mind as a "self-sustaining working buffer", built vaguely along the general principles of a feedback loop. The "ruminating core" is like a short-term memory or "working register". The output is the meme you are currently focused on. Feed that as input into an array of associative memory = a knowledge-base, a world model, or whatever you'd call it. The large associative memory will return a handful of related concepts/memes. Feed that set back into the "ruminating core", maybe through a filter of some sort, that narrows down the selection of associations (the mind keeps its focus towards some pre-existing goals, stays on topic, or some such). The "ruminating core" may have other inputs too: sensory inputs, internal house-keeping variables (just like the physical body feels hunger, fatigue, pain, overheat), emotions might chime in, and you can actually invent more than just one "ruminating core". Make one the boss, the one to hold the rudder - and make another, that's doing a bit of its own rumination too, maybe watching the environment, or chasing broader associations and "thinking out of the box" in the background, able to pass interesting points up to the "headmaster's office"... Or you could imagine another core taking care of "autonomous motor activities", both inherent and learned. Like a background autopilot. Such as, imagine that you're driving a car (including paying attention to traffic lights) while consciously thinking about plans for the afternoon with your family. Yes we'd have to invent and specify interfaces between the blocks, and the "ruminating core" alone would have to be pretty darn complex.
Sensory inputs, emotions, physical feelings and the internal hormonal system... those are all just primitive, ancient, "run of the mill" circuits. Our conscious self rides on top of that "historical undercarriage", taking prioritised inputs from those ancient circuits, side by side with its "free cognitive rumination".
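The "prioritised inputs" part is easy to picture as a priority queue feeding the core. A toy sketch, with made-up signals and priorities (lower number = more urgent):

import heapq

def core_step(signals):
    # the core attends to whichever signal currently has top priority
    priority, label = heapq.heappop(signals)
    return f"attend to: {label} (priority {priority})"

signals = [(3, "idle rumination: weekend plans"),
           (1, "pain: hand on hot stove"),
           (2, "hunger: low blood sugar")]
heapq.heapify(signals)
print(core_step(signals))   # the pain signal wins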
Mother nature has arrived at such a system by evolution. Could something similar be (co-)engineered? I hope so :-)
Thanks Anastasia for making this content.
The next big model would be a system that has 5 modalities: language, images, sound, video, and actions.
I believe this will be the first step into AGI.
"Consciousness... is... a sense of self, a understanding that you are yourself, and that you are in this environment, and thinking, and planning yourself forward in that environment, and then having certain desirable outcomes and certain undesirable outcomes"
This is necessary but not quite sufficient? As in: are the parameters of the model (~system of beliefs), and its utility/loss functions (~system of values, set of needs/wants), dynamic and adjustable given some sort of 'introspection'? For me that last condition is what makes the true difference: adjustable individuality given introspection... just my personal opinion.
Also, given that AI is being developed as a product in this Capitalist Realism setting of our current reality, it is surprising to hear a founder claim one of its "products" qualifies as "conscious". If that is the case, it may very well be subject to rights of personhood, and even though it may be an alien form of personhood, in the sense that it is not fully human (its "worldview" data may be human, but its substrate and core essence may be irreconcilably not human), it is still clearly personhood under a non-anthropocentric definition.
Your perspective on Groq and other inference-focused hardware companies?
I wonder if LLMs can get so big that the answers become less and less meaningful the bigger they get? Or whether they might have to run two in conjunction with one another: one LLM that is huge and has general info on everything, and another that runs beside it whose job is to specialize the information. I think LLMs that are designed to specialize in certain areas might be the future of LLMs.
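The "general plus specialist" pairing could be as simple as a router in front of two models. A sketch with hypothetical stubs, just to make the idea concrete:

SPECIALISTS = {"law", "medicine", "tax"}

def general_model(query):
    # stand-in for the huge general-knowledge LLM
    return f"[general model] broad answer to: {query}"

def specialist_model(domain, query):
    # stand-in for a smaller, domain-tuned LLM
    return f"[{domain} specialist] focused answer to: {query}"

def answer(query, domain=None):
    # route to the specialist when one exists for the domain
    if domain in SPECIALISTS:
        return specialist_model(domain, query)
    return general_model(query)

print(answer("Is this contract clause enforceable?", domain="law"))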
Like any tech we go from a proof of concept, growing it into a big monolith, leading to the need for modularization and interfaces between those modules. Followed by building service based architectures at scale, allowing for autonomous entities/agents to interact in a shared ecosystem. In that sense AI is just like other tech.
I hate the thought that AI can/will be used for malicious purposes. I am not sure we can survive that.
I see A.I. being combined with Tesla Auto Drive.
On the topic of bias and emotions, I envision a future where users have an LLM config file, probably built into some biometric device to assist with authentication, which contains a list of personal biases. This would allow any LLM to adjust for social and cultural variance across the globe, and it could also be tweaked in real time to adjust the temperature and top-k/top-p settings, etc. I imagine this type of bias configuration file could be abused, so there should be limits on some biases, determined by the user's country of residence, preferably a democratized average.
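A toy version of what such a config might hold, with purely hypothetical field names, plus the country-level cap being applied:

import json

COUNTRY_LIMITS = {"max_temperature": 1.2, "max_top_p": 0.95}

user_config = {
    "user_id": "biometric-hash-placeholder",   # not a real scheme
    "locale": "de-DE",
    "cultural_biases": {"directness": 0.8, "formality": 0.6},
    "sampling": {"temperature": 1.5, "top_p": 0.9, "top_k": 50},
}

def apply_limits(config, limits):
    # clamp user preferences to the caps set for their country
    s = config["sampling"]
    s["temperature"] = min(s["temperature"], limits["max_temperature"])
    s["top_p"] = min(s["top_p"], limits["max_top_p"])
    return config

print(json.dumps(apply_limits(user_config, COUNTRY_LIMITS), indent=2))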
I really don't want a biometric device for transactions, but I think people will be begging governments to provide a centralized biometric authentication service once human authentication becomes indistinguishable from that of a machine. AI is going to change many things, and with all change comes some discomfort for some.
Nice informative video! I would however not consider OpenAI as just a B2C company!!! OAI is MSFT, is Azure... massive B2B happening...not just chatbots 😂😂😂
Great video Ana
You sound sick. Hope you get better soon. Thank you for information.
Thanks Anastasi for the excellent video!
Thanks!
Thank you so much !
new avatar photo? angry CEO vibe :D
🤣🤣🤣
I would NOT like a robot with feelings! -- We should not strive to make them an exact copy of a human. Robots should be here to help and not become jealous, angry, sad, aggressive, or fall in love with the human or the robot next door.
I've been saying it needs emotions, but not fear and joy: knowing what it's confused about, and dwelling on what it doesn't understand, like I do. And bringing up stuff from the past. I want to be able to see the AI learning; if it already thinks it knows everything, it's hard to know that I taught it anything.
I'm a simple person - I've subscribed to Anastasi eleven times
I love the pure logic and rationality of AI. As humans, we're often compromised by our emotions, resulting in irrational behaviour. AI with an emotional component feels risky.
I'm reacting to the whole concept like it's a personal offence and an injustice against AI... Imagine this sort of emotional over-reach in a multi-agent, multimodal system; that would be entertaining...
Emotions are one central "module" of what makes us human. I could see that if we want artificial intelligence to understand humanity better, they should be included in it in some way, so that it at least understands them. Of course, that does not mean we would give power to a completely irrational artificial intelligence on whose whims we would depend. More like a wise mentor who understands us and observes our struggles, without being carried along by them himself.
@@fabulaattori I think this could be a highly divisive topic. AI's current grasp of human emotion exceeds that of most professional psychologists. I'm not sure training using a type of emotional embedding (if that's how I understood it) would be necessary.
Granted, experiencing emotion adds to the fabric of our existence, but you can't have positive emotions without negative emotions.
It's emotion that creates some of the most destructive mindsets in society.
My personal opinion is that it's risky.
I'd love to hear others' opinions, and also get clarification on what an emotional module exactly is...
This whole AI stuff is just funny: for ordinary people it is conscious magic; for others, who know programming and understand many other disciplines of science and tech, AI is nothing more than complex automation.
I know it's now considered somewhat trite to say this, but I think it needs "to feel like something" to be conscious. Which begs the question: how do we know whether some process that is "successfully" preference-ordering over world states is or isn't experiencing anything? We have as yet no viable mechanism(s) by which biological qualia reliably come about, given that we can only recognize measurable changes to "our" experience in correlation with damage to our processor. The "safest" position to me is to "not withhold the potential" of consciousness from any bounded process achieving an equivalent end. "**" = requires further research/validation.
That's very interesting. But do they use a Transformer or some other architecture? I think the breakthrough will happen with some other architecture. Our brain works differently.
Very interesting; I will have to look at their paper to see if they are managing to get meaningful telemetry from an LLM. This will be extremely costly, and most will not see the value. Better to check results against achieved goals.
I think once more humanoid robots are equipped with LLMs and deployed in the workforce, and especially in science research... we're gonna see some massive breakthroughs!!
Of course, of course... but not many people know that the pricing AI companies charge to industrial enterprises and "Enterprise"-class customers is literally in the thousands of dollars per month, for something as simple as text-to-speech. A text-to-speech model that is free and trivial to you and me costs enterprise customers $4,000 - $7,000 per month, for text-to-speech alone. Then you stack that with a bunch of different specific tasks... And the industrial enterprises, the ones who want to keep up with the changing world, actually pay that much. So I can see why Aleph Alpha has this particular angle: "we don't care about the individual user who wants to write a poem - we care about making money" lol
I agree with Nick's concept of consciousness. I think this first-principles approach (self, environment and desired outcomes) is the recipe for emergent emotions, and to the degree it can be planned using intelligence, that is the consciousness part.