Good talk. Apparently OpenAI, for example, has decided not only to make deals with Microsoft and Apple, but also to close the curtain further on public transparency by appointing Paul Nakasone to its board. I'm not sure I want to live in a world where a gray-haired former NSA director decides what goes in and what comes out, with all the consequences that has for the public at large.
When the US developed its first atomic bomb and exploded it in the deserts of the Southwest, then selected Hiroshima and Nagasaki, people reacted to its destructive power with fear and trepidation. The military came up with the feel-good slogan "duck and cover" in schools as an assurance for the masses. Most of us who are old enough to remember know that governments are inclined to appease us because we "can't handle the truth." I am afraid that, with AI in the hands of governments, we will in all likelihood soon hear another version of "duck and cover" whenever public reassurance is needed. The likes of Trump are commonplace in this world. And even you can't deny it.
You know, Temur, your videos are really good, but they get very few views because of the lack of proper SEO, tags, and ranking keywords. If you want, we can discuss it in detail. Thanks
She raised good points about energy costs, exploitation of hidden labor, etc... But she lost me when she mentioned Scarlett Johansson, as the ChatGPT-4o Sky voice didn't even sound like Johansson's voice, and OpenAI says it's the voice of a different actor (much less well known, obviously), who last I heard was still anonymous. Elon Musk can be an anti-woke jerk at times, but if you get facts wrong on something like Johansson's spurious complaint, that makes you sound like you've been brainwashed by woke creatives, really no better than him. Balance requires respect for facts, regardless of what narrative you align with.

As for solving the issues around energy, water, etc., AI systems will empower our scientists and engineers to do just that, in one way or another. This has been the story of invention and innovation since we began our technological kick a few hundred years ago. Technology solves old problems and creates new ones. When the new problems get bad enough, the next new technology comes along and solves them, and then creates new problems, and so on... Nick Bostrom talks about "technological maturity," as if human civilization could reach some steady state where it has maxed out on all the technologies that the laws of physics allow. I doubt such a thing will ever happen, but if it does someday, it's still far off in the future. For now we have to expand human intelligence by externalizing mental functions to machines that can perform much faster than human minds can. That will help us perfect cooling systems, make GPUs smaller and more energy efficient, and develop commercially viable fusion and space propulsion systems that will allow us to mine asteroids for lithium (or whatever the new lithium is in the next technological cycle).

Why is it up to the creators of AI technology to decide what's going to be built? Because they're the ones building it. Duh... I mean, you're not Ilya Sutskever... Stay in your lane.
But if you do want to build something else, then build it! Go to school, get the necessary degrees, get hired at a tech company or start your own, and do the work. Or get involved in the regulatory process, or in giving constructive feedback to the tech companies. Everyone, or at least everyone's descendants, will live in the world shaped by the technologies emerging now. We all have a role in determining how this goes, but the most direct role is in actually building and implementing the technology. Don't tell the tech companies "You can't build *that*!" Don't try to kill the goose that laid the golden egg.

You can help guide the process even if you're not an engineer at a tech company. That's what OpenAI's iterative deployment strategy is about. And I'm not saying iterative deployment is the best solution, but it's one strategy among many that might help us collectively guide things to a place that works for everyone.

Just beware of Luddism, of techno-skepticism. The Luddites have always been wrong about the limits of what's possible, or at least have been selfishly unconcerned with creating a better world for future generations. To achieve our true potential we need to make what seems impossible into a reality. Today we can communicate instantaneously between continents, travel around the world in a matter of hours, perform complex calculations on a small device that fits in the palm of your hand, etc... This is because we've never said "That's impossible, therefore we won't try." We can't go back to thinking like medieval peasants, focused only on the tasks and limitations they knew. The times in which we are living are an *intermediate* stage in the development of technology, knowledge, civilization, and culture. We need to realize that much more is possible, and we need to reach for all that might be within our grasp. That is our moral imperative if we care about solving the problems of poverty, war, environmental degradation, etc...
Thank goodness for capitalist race conditions, as they are probably the only driving force sufficiently powerful to get humankind over the hump of its self-doubt and level up civilization.
Anyone else running a model locally?
Yes.
@@terjeoseberg990 May I ask which model?
@@marshallmcluhan33, I’m trying to train my own diffusion model for image generation.
@@terjeoseberg990 Cool, are you using Stable Diffusion 1.5 as the base?
@@marshallmcluhan33, This…
Coding Stable Diffusion from scratch in PyTorch.
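The "from scratch in PyTorch" approach being discussed boils down to one core idea in diffusion training: corrupt an image with noise at a random timestep, then train a network to predict that noise. Here is a minimal DDPM-style sketch under stated assumptions: `TinyDenoiser` is a hypothetical stand-in for a real U-Net, and the beta schedule values are common defaults, not anything specified in the thread:

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Hypothetical toy model standing in for a real U-Net denoiser."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x, t):
        # A real diffusion model conditions on the timestep t (e.g. via
        # sinusoidal embeddings); this toy version ignores it for brevity.
        return self.net(x)

# Linear beta schedule and cumulative alphas (typical DDPM defaults).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def ddpm_loss(model, x0):
    """One training step's loss: predict the noise added at a random timestep."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(x0)
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    # Forward diffusion q(x_t | x_0): mix clean image with Gaussian noise.
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise
    return nn.functional.mse_loss(model(x_t, t), noise)

model = TinyDenoiser()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = ddpm_loss(model, torch.randn(4, 3, 32, 32))
loss.backward()
opt.step()
```

Sampling then runs this in reverse, iteratively denoising pure noise over the T steps; a latent-space model like Stable Diffusion adds a VAE and text conditioning on top of this same loop.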
Interesting.