That's a TON of "ifs" and "perhapses" behind your overall point. And even if current models are reaching their ultimate capacity, AI has advanced so quickly that newer breakthroughs within a year, two, or three leading to a hard takeoff is certainly at least a small possibility.
Moreover, didn't a number of people leave OpenAI? If safety were no longer the same imminent and major concern as before because models had reached their upper limits, wouldn't Jan and Ilya (whom you credited as being the Godfather of current systems) and the others who left be among the first people to realize this? Yet not only does that not seem to be the case, but Jan made it a point to say OpenAI was putting safety behind "shiny" products. Ilya was far more diplomatic, but that could simply be his personality, not wanting to leave on a bad note and/or not wishing to make their feud a public affair.
Moreover, we have yet to see any significant number of experts who have warned of AI's existential threat change their tune.
Personally, I find Yudkowsky, Connor Leahy, and some others make much more convincing arguments than Hanson, Andreessen, etc. Of course, those who are pessimistic could certainly prove to be wrong. I certainly hope and pray they are wrong, considering it is the survival of our species we are talking about. And everything you said in this video could end up being spot on.
Nevertheless, if the existential threat remains a possibility, even a minute one... the departure of Ilya, Jan, and the others does not sound good.
Yes indeed - I can't argue that there are a lot of ifs and perhapses... honestly, that's because there's just so much we don't know - especially inside OpenAI. This is part of the reason I feel open sourcing is probably the safest path forward, as it'll allow the whole AI and tech community to fully analyse the state of the tech and mitigate any issues early.
Thanks for the thoughtful comment! Appreciate it!
Hello. I'm so glad I got shown your video via the Next Up algo, and it's nice to meet you. I actually did a video about this same topic last week, and I agree. Almost nobody watched mine. Your release is timed better, so I hope it's a 1/10. This is an important topic :) Cheers and good luck to you.
Thank you so much for the kind words! Really appreciate it. I'd love to watch your vid - post a link?
Get GPT-5 to sort out superintelligence alignment.
Thanks for sharing, and good points. Since this technology is completely uncharted territory, with neural networks not fully explainable even by experts like Hinton, there should be a plan B: an AI programming team working specifically on safety-for-humanity algorithms. Meaning, if an existential threat comes up, this AI would try to defend us against it, and hopefully it's better. At that point we may or may not be in control, but at least we'll have a chance.
Thank you for the kind words! And yep, generally agree with what you've said here - AI safety is critical. What do you think about Jan leaving OpenAI? Worried?
Great video! May the YouTube algorithm be on your side.
Thank you so much for the kind words! Really appreciate it :)
Take the brakes off already, who cares.
In reality, AI is far behind where we should be. Technology advances at an incredible rate. The first legit AI was created by Arthur Samuel in 1952. It's 2024, and all we use it for right now is text generation, which isn't even that great. Stop fear mongering. We need AI to advance, because right now it isn't.