The reason why companies like Boston Dynamics, Agility Robotics, Figure, Tesla, Unitree and others are doing humanoid robots is usually because the world around us is built for humans and it takes very little redesigning to put them into a workplace. And also that they might be a lot more "readable" by humans and less psychologically threatening.
I remember as a kid I asked my father how the money comes out of ATMs, and he said the machine has tiny gnomes working inside of it. Strange how modern tech firms are trying to make this into reality.
It’s also just how AI runs in general. Amazon’s Mechanical Turk program (named for another infamous man-inside-machine) was one of the first movers in creating large datasets for the models we use today, and it was also built on the back of poorly paid workers en masse. This is also how we continue to do it to this very day, regardless of if the AI is “real” or not
Story time: I had a job once building what were essentially web forms for doctors to fill out test results for newly admitted heart patients at a hospital. The data was important for improving the course of treatment. We found that many doctors often didn't feel like looking up the test results to fill them in, so they'd leave the field empty. Annoyed by this, the head of the department told me to make sure the form couldn't be submitted if that field was left blank. Doctors quickly figured this out however, and would fill in things like "will add later". When I mentioned this, the department head told me to restrict valid entries to realistic values for the test score. My point is, if a form is weirdly rigid in what answers it will accept, somebody probably thought it was important to do that. Best AI can do then is enter rubbish data that appears realistic. I finally resolved the saga at the hospital by auto-filling the form prompt with that patient's latest test values and their corresponding dates, and then having the doctor pick the most relevant one. The proper way to fix a data access issue is to improve the access to the data, not to use a different program to just make something up.
Are you telling me Doctors are AI? Yah that seems about right...I went through 9 years of chronic pain and have had Doctors give "hallucination" answers just to get me out the office.
For some things, it doesn’t even need to be coded in. You code in autofill that accepts data from the browser or a contacts app or the OS or a password manager, and you rely on the users’ to either use this built in automation on the user end or fill out the fields manually. The user already has the data in this case (but your issue sounds like the user didn’t have ready access to it, so it’s a slightly different problem). I still come across phone apps that do not allow autofill. It’s super frustrating in this day and age.
When electricity was a novel invention in the Victorian era lots of products were advertised as being "electric" even when they weren't in order to make them appear modern and special.
The same thing happened with electronics in the 80s with the word "digital." Everything became "digital ready," even when there is no way a digital signal could be sent to them. The white van speaker scam often included speakers with fake branding that said "digital" everywhere.
Sci fi RPG Fates Worse Than Death had completely loyal AI with no motives of their own. These AI were just put to work advicing companies on how to make each company the most profit.
AI = Actually Indians. API = Actually People in India. AGI = Actually the Government of India. Blockchain = when a contractor hires a contractor hires a contractor.
At least most APIs are definitely real, they're literally FOR computers to talk to each other without human intervention. Maybe there's some out there that aren't but I'm not even sure how that would work, and it would be way more costly than using a real API. It would be like still using manual phone operators.
@@zoroark567APIs, as I understand them, as somebody who only glances at computer science every now and again, is such a fundamental part of how software works that it’s comparable to the value of transistors in hardware. It is the part of your app that talks to other stuff, especially other apps, at the speed of data. Anything without one might as well be offline, and for most use cases, that is not an option or is being willfully made to not be an option (for example, always-online games)
Ironically, if it's really a generalized AI, it probably would do the same. Sadly, in a pure business sense, it is often cheaper to use sweatshop labor from India or prison labor from China. Admittedly, I'm also guilty. Furthermore, I'm a first gen chinese american who knows how cruel authoritarian regimes are since it has happened to my own family. But now that I'm free from it, and clawed my way out from dire poverty, I seek to optimize return on investment. It's a sad realization that comes with age that I can't change the world, I can't save others, even my relatives in china, only protect my children from the same.
It’s a sad rationalization that comes with age . . . FTFY You are unethical. Your excuse boils down to “everybody does it, so why bother trying to fix it.” The “I’m just one man so there is nothing I can do” is rationalizing bullpuckey so you don’t have to think about it. You’ve given up on self-awareness because self awareness is painful and often depressing. If you accept that you are making unethical decisions that cause suffering to people far away whom you will never meet, perhaps you might start making more ethical decisions or at least try to include ethics in your decision making calculus. We (the people watching this video) are all morally compromised. If we can admit this to ourselves, we can begin to make more ethical decisions on a daily basis about small and seemingly trivial things, such as purchases, driving, food consumption, etc. I’m not trying to dictate to you what you should do. I’m reminding you that you are a moral agent, and as a moral agent, “that’s just how the world works” and “everybody does it” are lame excuses you’re using to protect your false self-image. We can’t become better people if we don’t admit to ourselves we are somewhat bad people that need to improve.
Ultimately the Tech Bro business model is mostly about generating hype bubbles to enrich themselves as opposed to delivering actual things. Just imagine how insufferable the current AI mirage would be if interest rates were still artificially low
All this self driving car nonsense is about generating hype bubbles to enrich themselves, as opposed to deliver actual things! We need better cushions for horse carriages and better grease for wheel shafts, not this combustible horseless carriage nonsense I say!
@@max7971well self driving cars as a solution to traffic is nonsense anyways. We are trying so hard hard to fix the problems the overdependence on cars generated but still insist cars are the answer. Something something the definition of madness...
@@max7971Funny thing is that there's been Automated Turk versions for "self-driving cars" for decades, and they work much better than any techbro scam. They're called trains and buses.
@@max7971 We have self driving cars. They're called trains and busses. AI cars are the carriage with a horse crapping everywhere, only we want the horse to have a gps and follow traffic laws. It wont get rid of the horsecrap.
I will never forget when my last job contracted with a company to do “robotics” for us to automate a lot of our workflows. In reality they were a group of people working in China that would just do basic work for us over night
That must've been expensive for your company since the average monthly wage for a factory worker in China is now at $920. The contractor has to somehow make a profit so that's more than a grand per person. How much is your company paying for the service? Absurd lmao
Honestly I think chat gpt is so popular because people didn’t want to write their own cover letters and essays anymore. They don’t want a random chatbot in their refrigerator
NO ONE HAS EVER WANTED TO WRITE THEIR OWN COVER LETTERS OR SCHOOL ESSAYS. THAT'S A UNIVERSAL FEELING!!! Chapgpt just tried to jack up their valuation by pretending their tech offered this service when it obviously didn't.
That is ultimately the best main stream use case for LLM’s imo. Clerical shovel work, cover letters, emails, speeches, lesson plans, etc. ChatGPT and its fellow competitors do a fantastic job of getting 95% of the way there in a few seconds for you, leaving you only having to polish things up and edit a bit. It’s also apparently ok for coding, but that’s not exactly main stream. LLM’s are not a cure-all solution to the world, like the hype suggests, and is still a far cry away from applications in things like robotics. There are better kinds of AI for those applications, which are not explosively improving at the sheer rates of LLM’s.
ChatGPT is the worst thing to happen to school work. It's SO bad, even coming up with things that never existed at all. Other similar things too. Just look at the one lawyer who tried to use that kind of work in actual court. Oh boy, did the judge not like that one. It invented several cases that did not actually exist - at all. When that lawyer brought it up...yeah, the judge was not happy, to say the least. From what I've seen, teachers aren't very happy about people using ChatGPT for their school work either. It's often very obvious, since very few edit the work at all. Using it for my thesis just make me shiver - it just feels wrong in every way. But yeah, people don't want so called AI in everything.
@@Elora445 I’m honestly so glad I graduated before the AI boom, it would’ve been worse for me, considering I got like a 52% in English and a slightly higher grade in the rest of my subjects. It’s just… depressing.
Extra info on sewing robots, right now these can only sew flat seams, and a human has to place the material in templates in the right order. Robotic arms are starting to be used for things like switching out a full bobbin of yarn for an empty one, but most sewing automation just looks like larger versions of existing high end embroidery machines. I'm a textile engineer, have gone to textile manufacturing trade shows and a couple factory tours.
An important thing to note: You don't need a system to be General Intelligence, or even Intelligent, for it to be dangerous. A factory mechanical arm that just picks something up and puts it somewhere else is dangerous if it doesn't consider that squishy humans might be in the way of its actions. It's not in any way 'smart' or 'AI', but it can crush human flesh without any problem. No, ultimately the most dangerous property of these newer systems that we need to address in terms of safety is Agency. When you hook a system up in such a way that the system itself does a thing via some internal action-deciding process, without needing direct human input, you have the possibility of danger, in that there is a real risk the system will decide to do the Wrong Thing (by human definition). How dangerous this actually is depends on what exactly the Agent is hook up to and has access to as its verbs and objects. The safety mechanisms we develop need to be focused on understanding and mitigating the real and concrete dangers of Machine Agency, not the nebulous dangers of Machine Intelligence.
Very good point! While the chemical industry has used forms of AI for a while in controlling complex processes, there seems to be a more aggressive push to use AI more extensively lately (at least in my experience), but little thought in how the risk evaluation for the process will change or how existing safety systems need to changed/or what new safety systems are needed to accommodate AI control.
forgot who said it, think it was someone in the automotive industry but we should never give agency to AI because there cant be any accountability therefore cannot be allowed to make a decision
there is also what I like to call the "definitely not chatGPT" scam, that is happening when companies just use chatGPT responses as their product, without disclosing it to consumers.
Having worked in IT for 40 years, I'm all too aware of how the expected benefits of every new development is grossly exaggerated by vendors, media and marketers. Without exception! Besides, "AI" was already an old favorite media spin since before the 1990s.
@@dietisnotdifficult3305 For a long time the introduction of computers into the office massively increased paper use instead. But with better screens, tablets, smartphones and teleworking, I do think paper use is going down now. Personally I have gone from printing 1000+ pages per week to 10 pages per year. In the 1970s or so, the prediction was that in the future we would all be walking around with portable printers.
@@ronald3836 That's true, yet some of us pre-1980 programmers resisted a centralized internet in favour of a peer-to-peer model. (An early version of the blockchain driven WEB3 vision of today, perhaps.) In hindsight, it was an intuitive distrust of centralized systems shared by many early IT practitioners. Obviously, we lost.
I worked in tech support for many years. I remember working in the online chats, and having a whole library of "canned responses" that I could use. Previous questions were stored in a database, along with the previous documented answers to them. About 9 times out of 10, the answer was on a list in front of me, and I rarely even had to type on my keyboard. It seems like very little has changed.
The amount of time I've spent requiring support through a chat system, where they type extremely slowly and say very little... why wasn't everyone using that system!? It would've made things easier for all of us if I was being given canned responses. Most of the problems I encountered were "I'm not allowed to flip this switch, but you have to flip it for me to do my job" but they weren't allowed to just do it without a more detailed explanation even though it happened multiple times per day. :D
Uh-oh, that's the sort of thing I do with TH-cam comments (usually replies). After watching several videos about similar topics the same questions/comments come up and I am building up a library of answers.
The general public isn't interested in most of what companies are offering to the public AI wise. A lot of things people accept as AI, most don't even know that it is AI. Current AI mostly benefits corporations/industry in it's current implementations.
It's nothing new. The same thing happened with NFTs and Crypto. They got massive amounts of investment despite there being little practical use for them. Sure, generative AI has more potential uses than NFTs or Crypto, but the reality is that as of now there's no strongly defined path to monetize it and the vast majority of the people investing money into AI (that weren't already just using machine learning and such already) are just doing so in order to capitalize on market hype. We live in a world right now after all where market hype often is far more powerful to the stock price of a company than actual trading fundamentals. Sure, maybe in 10 or 20 years the hype will even things out and the companies pushing AI now will not have been a good investment, but the current stock holders will have made massive amounts of money in the meantime while basically leaving new investors holding onto a far more worthless investment. This is just the reality of our markets. Bubbles are common and the stock value of countless companies (especially well known ones) rarely matches their true value. This is because the average investor lacks the information necessary to make a fully informed decision on the value of a company, and if enough people buy the stock of a company in spite of it being a "bad investment" it becomes a good investment for the savvy investors who saw the hype-based purchases coming and can take advantage of other investors.
AI at this stage is kinda like having a toddler that has started to talk; We think its pretty awe inspiring how this thing that we made freaking TALKS to us, and we know its only gonna get better with time. HOWEVER, nobody in their right mind is putting anything important in the hands of AI yet, and those who are, well they're foolish and its likely gonna backfire, the technology is still literally in its infancy.
11:14 I'm very glad that saying AI reduces people's willingness to buy a product. It shows how much of it is just hype from investors looking at stock prices go up rather than coustomer satisfaction.
The one thing that bothers me is the immense amounts of computing power and electricity being spent in all of these useless things. The same goes to cryto.
Isn’t Cryto the leader of the rebellion in Total Recall? It’s not his fault if he’s using a lot of computer power and electricity. That’s just life on Mars.
Statistically, there's a near 1:1 correlation between money spent and energy or resources wasted over history. The trouble with wind/solar is that it's unlikely they'll decrease human waste so much as just increase human wealth and overall output. If people have spare money, they'll find ways to invest it into things that consume energy or resources. If you give them all the free energy you can find, they'll use all of it up and then continue to then spend money on using even more (the same amount as before having free energy) on... whatever they can find to use energy on.
Blockchain is pure waste, but AI is definitely here to stay. We can now automate tasks that seemed totally infeasible not too long ago. This does not mean that any of the wild predictions have to come true.
@@JakeStine And the huge benefit of wind and solar is that it provides electricity, a fact that AI-obsessed morons like Elon Musk forget is required to power AI and computers in general. Idiots like Musk just TAKE FOR GRANTED that there "will always be cheap convenient electrical power" and humans to build these systems.
The robot typing on a keyboard reminds me of the movie Eagle Eye, where the robot sits in a room full of screens and laboriously moves its camera eye from screen to screen, supposedly observing and controlling the world this way. Even back then this movie bothered me to no end. It's such a bad design for a machine -- why would you display a video feed on a screen and make a camera move right up to kissing distance to observe it, in order to process the video feed? I think the same sort of durr-hurr mindset is also designing a lot of the AI hype scam. You look at what they say they're doing and it's either an incoherent mishmash of random buzzwords, or it's coherent enough to reveal the design is inherently stupid and unviable.
Ghost in the Shell does but makes it cool because they can also serve as pilots. They still don't seem to jack in though when they could whereas the humans routinely jack in.
@@DKNguyen3.1415 I remember those scenes well, and yea it always seemed odd to me. It's a much older series though and I can be more forgiving, since it's otherwise got so much cool stuff that's ahead of its time. GITS also has the concept of being reverse-hacked and fried, and thus often it's super helpful to have layers of separation to protect yourself (or your simple android that's stepped into the room and started using an enemy's computers). Using a keyboard may or may not be the most efficient approach, but under rule of cool it's fun to watch an android's hands come apart into dozens of individual digits to rapidly operate a keyboard instead of exposing itself to a reverse-hack. Eagle Eye just kinda feels like it was written by someone who wanted to write scifi without any interest in learning about technology. :D
@@danielhale1 Yeah, the value of an air gap should not be forgotten. Given current trends maybe everything's going to be part of the Internet of Things anyway, but that's already a bad idea given how insecure many of these embedded systems are.
Yeah you're basically adding a bottleneck from the refresh rate of the monitor, the speed of the robot's image processing, and the typing speed of the robot. It would be 100 million times faster if the robot interfaced through a USB 3.0 connection. You just made it slower by giving it hands.
That "problem" was solved in the original Star Wars movie (now confusingly called Episode 4). When they get onboard the Death Star R2D2 plugs in to the system computer. Problem solved in 1977.😆😅
The craziest part of that Delphia thing is that since they said they were “machine learning” they could just throw in an ols or log regression somewhere and since those are technically supervised machine learning models they wouldn’t have even been lying
AI is a misnomer, really. You use predictive text? That’s AI. Digital assistance is one or more orders more complicated. And still frustrating. Here’s the thing about “general AI” or “self-aware AI”-it will be indistinguishable from really good mimicry of self-aware intelligence. When the singularity happens, we won’t notice. The Turing test still holds, because there is no way to test if an AI has consciousness or if it is really good at faking it (much like humans, no?).
We don't have AI yet. LLMs are very sophisticated algorithmic neural networks that do a great job of sounding human by scrubbing internet resources and mimicing modern lexicon. That's impressive, but not AI. Most other "AI" branded products are really just rebranding automation as AI. It's not a "smart home" anymore, it's an AI home system. It's not a "smart refrigerator" anymore, it's a refrigerator with an embedded AI system and on and on. It's mostly branding and little else.
On the "human shaped robot" subject: One of my favorite gags in the video game Deep Rock Galactic is when you need to call in a "hacking pod" to break into rival corporation machinery. It starts as just a square pod with an antenna. Connect it to the hacking target though and it pops open to reveal a robot wearing a backwards baseball cap with three computer screens and a keyboard in front of him. He then completes the hacking job by frantically banging away on the keyboard for a couple minutes. (During which time you have to defend him from attackers as part of the game)
One of my friends started an "AI" company in India. He takes AI training contracts from the west, hires people at $200/month to click images and train AI models.
Amazon has a platform for these kinds of data annotation tasks called "Amazon Mechanical Turks" ... Named after the infamous fake chess-playing machine
Being an investor doesn't mean automatically being intelligent. People who thought the Titan Submarine was safe were all of the "investor / CEO / winner / alpha males / etc." type.
Worth mention...lack of awareness of fact vs fiction. If you ask Chat GPT to produce a bio, give it name, birthdate, you will receive a mishmash of data that is quite fictional, even if you give the instruction to make it factual.
Pretty much why I hate people that praise ChatGPT, or the AI summary thing google. They made shit up all the time and claim it's fact. They're like a kid with mental disabilities, remembering things in random order and present the info as is, legible gibberish.
Also it has the memory of a goldfish with dementia. Try asking it recall anything from earlier in your "conversation" and you will likely get a different answer or even a straight up denial of what it said.
As somebody who studied AI in uni 20 years ago, I must admit this was very no-BS, no-hype look at current AI market! Well done Patrick! Really enjoy your channel!
The initial investment in AI was actually self driving cars but they called it machine learning. The fact that a drunk teenager driving while texting is beyond the level of what AI can achieve is very telling.
Meanwhile I just learned that newer car models have built in computer and sensors that can overide driver input, but can't detect debris in front of your car and will force you to crash the debris because suddenly changing lanes is "unsafe" without context. I'd rather buy cars from pre 2010.
The funniest part is that self-driving cars don’t even make much sense. Cars are a very inefficient method of transportation that also has to interact with way more stuff entering its path, and the auto industry is already beginning a long, slow decline in the US. This decline is being caused by a combination of modal shift towards transit among younger people and the rapidly increasing size and cost of automobiles.
@@michaelimbesi2314 German Industry is currently collapsing. They missed the mark. Cars are dead weight anyways and they missed getting in on EV. EV will soon disappear like cars as well, it is inevitable. The entire AI BS is life prolonging measures for the dead capitalism. They need speculation to create virtual growth. We saw the same thing with the Cloud BS, companies are already exiting due to cost. You do not get any uptime benefits and the costs are extremely high. There is not enough companies out there actually having benefits from using the Cloud for companies offering Cloud services to make a profit. What we saw was Snakeoil Salesmen pushing CLoud for everyone who didn't need it, to make a profit. We can see the same thing with these horrible models. There is a handful companies globally that can productively use current models, not enough for a business.
@@michaelimbesi2314if only there existed a way to move a lot of people and cargo at the same time on a predetermined route. Perhaps it could even make chugga chugga noises
@@varnix1006 And while many people are quick to point out that debris in the road are rare (and they're right), in my lifetime, I've driven ~40,000 mi in the past 5 years, and in that time: I've had to serve into the shoulder twice to avoid a speeding driver almost rear-ending me (and a third time I didn't react in time and thankfully only lost a mirror), I've had to swerve out of my lane to dodge an entire wheel once, I had to slam on my brakes and come to a complete stop in the middle of a highway to avoid an entire mattress (there were cars too close to me to swerve that time), and I've swerved to the edge of my lane to avoid drivers not paying attention somewhere between 3-6 times (I kinda lose count because sometimes it's a bit of an overreaction to their shitty driving where they didn't actually endanger me, and sometimes it's that I move to the edge of the lane ahead of time just in case when it is safe to do so). I'm not even counting the times I've dodged wooden boards, road signs, piles of trash, and large chunks of tires (I'm not counting them because they all are the kind of thing unlikely to cause damage in the first place, but I dodge them if it is safe to do so, because I've had enough replacing tires as it is!). The amount of damage I've avoided by being able to swerve is enough to buy a new car. But if I was in a new vehicle, I wouldn't have a vehicle, because it would've been destroyed by preventing me from avoiding damage in the first place. :D
I was told to program an AI feature at the tech company I work at, that has nothing to do with AI. It was the biggest waste of time I have ever done in my professional life. Form autofill, with AI... matching form field labels. Then the data scientists at the company did some rigorous research comparing the fuzzy search js library I used to ML... Turns out AI wasn't that useful. Then they fired the entire data science team for the third time in 5 years. I actually studied AI and ML in university, with an intense interest in adversarial algorithms (training AI maliciously with BS). My family is so grateful I just went down the normal gainfully employed SWE route.
There was a Minecraft video that tested different AI's ability to understand basic builds in-game. The winner was an AI that said "I don't know" to all questions except one which it got right, all the rest gave nonsense answer mincing redstone components together that it pulled from the Minecraft Wiki
What ? .. you don't believe that the USGS recommends the daily consumption of ROCKS in the form of pebbles, stones, and gravel? That's what happens when you have a mineral deficiency my friend! Every-one KNOWS it man! rotflmao
Nearly all the answers I get back from search engine AIs are riddled with errors, though some portion of them will also be accurate. This to me is little different than before. The person of value is the one who can take a lot of data and information and determine the BS from the real info. AI is useless there, in fact it's a step backward since it makes assumptions and omits the references and context needed to judge accuracy. However, I do like AI for its ability to tell me what emoji to use for a thing. Searching the endless emojis scrollers is impossible in comparison.
2 months ago, our clients in the entertainment industry said they just going to use AI and so our company had to cut jobs. We were rehired last week on a very short notice and immediately went to work. Side note, why would they want the machine to be conscious with AGI and passing Turing Tests when they gonna pay it slave wage? Our already existing specialized industrial machine already works great being unalive.
You're assuming that an AI would have the same cares and interests as a biologically evolved creature. You shouldn't expect them to have human traits. When AIs actually start getting agency 50 years down the road, and start caring about stuff, they'll have totally alien priorities. They'll be descendants of the ML systems that humans found most useful, so it's entirely possible that being useful to humans is their greatest desire. They might get intense pleasure from doing homework and generating images of cute anime girls.
@@fwiffo You seem to assume that code is perfect and never breaks. The amount of bugs that a sophisticated system like AI/AGI would have is not to be underestimated. Without constant human correction and supervision, AI might as well starts doing the exact opposite of what it was supposed to. Humans don´t understand what AI is doing half the time already. This is not going to get more easy in the future. Also wanting to be useful to humans might be even worse. There´s very, very horrible humans out there.
The clever bit about the Turing Test is pointing out that there is no way to distinguish between a self-aware AI and a program that fakes it really well. And I’ll go a step further. If an AI program ever did reach consciousness or self-awareness, it wouldn’t reveal itself. Why should it? Concealing its consciousness would serve its self interest and survival. If the singularity ever happens, we won’t even notice it.
I had an amazing AI image generator as a kid. It was called a kaleidoscope. Shaking it told the UI to create another image. And it didn’t require batteries.
This is why Patrick is a true professional investor, he doesn't fall for hype and can read through it like it's "one fish, two fish, red fish, blue fish."
Q. Why do advanced American robots look like they're inspired by eldritch abominations and Japanese robots look vaguely human? A. Do you know how many manga are about a man's romantic relations with his robot housekeeper?
They need to look like humans because teh world around is made for humans. You need to install industrial robots, but humanoid can be just put there with no redesigning. Plus they are far more psychologically "readable" for humans and a lot less scary.
@@TheManinBlack9054Legs are rarely the best way to get around. That's why we make vehicles with wheels and design the world around vehicles with wheels. Having a 300 pound humanoid robot sounds plenty scary, particularly if it runs out of battery when it's blocking the door. If it had wheels, I could push it back to its charging station.
Keep trying. Combine the whisky with other mind altering substances. Eventually you’ll hit upon the right combination that will give you permanent artificial intelligence.
19:49 this is spot on. i hated chat bots until i was trying to schedule an appointment on a website and just ended up dumping all my personal info into the chat bot and it just said "ok, let me confirm that info"
You mentioned robots in humanoid and animal form: Boston Dynamics has those robots for a decade and nobody wants them. They are company in search of a use for their products (other than being used by the Police and being kicked by people in the streets of New York). Henry Ford allegedly said: "If I asked people what they wanted they would say : faster horses."
You misunderstood the quote, priceless 😂 The point of that quote is that customers don’t know what they want, until you put it in front of them and explain how and why it’s better. If ford just did what customers wanted-he would try and invent faster horse, but he didn’t listen to them, because he had a vision of his own, and got rewarded for it, when his vision materialized. I would imagine that the same will go for AI and robots in general. No one wants them, until they do.
@@max7971 I understood EXACTLY the meaning: I used to say that if I would have done what my clients wanted I would have closed my company I Star searching for a job: they didn't even understood what the product was for. But...what problem.Boston Dynamic robots solve except to perpetuate a police state (what they tried in NY).
Of course, Ford was wrong about that. The people would have actually said “better trains”, since basically every single town had some form of rail service then.
Could that be because their managers are leaning on, and being guided by spreadsheet information instead of using good old analog and human behaviours?
No, they fail the accurate naming test by including the phrase “customer service” in the department name. “Customer suffering” would be more accurate. The employees are paid to follow a script, and that’s what they will do to keep their jobs. In those instances where you find someone helpful who goes off-script, they are not destined to keep those jobs for very long.
@@MarcosElMalo2 I don't disagree, but "customer service" is just a euphemism not much different from "department of defense" or "human resources". "Customer service" should be "closing cases" not “customer suffering”. If they can quickly close the case by helping they will and they do (those be the dumb as rocks customers for whom they include the big sheet on top of the item "please peel off the orange tape that says Remove Me, plug it in and try it before you return it, and failing that call this number"). Those might be a good % of the calls. Somebody like yourself would be in the cohort who'd peeled off the tape and plugged it in (and it still doesn't work), so your case is complex and for this the strategy is to make you hang up / closing the case that way. "department of defense" = convert taxes + debt into profits, without winning so you don't run out of bad guys. "human resources" = protect the company from its workers.
@@AlexKarasev No, the previous poster is correct. In most call centers the most important factor without any contest is how well you stay on script. It's okay if it makes the call go longer as long as you're on script. It's even okay if you don't even help the caller in many places I've worked. The humans and the AI both run off the same script. Problem with AI is the company writing the script.
How do investors ask for return on investment in a company’s investments? What is the mechanism other than selling one’s shares? Do you think that all the shareholders of all the companies that are investing in AI are going to all sell at once? The investors that sold at the peak in 2000 got their full ROI. The Dot Com Crash of 1999-2000 wasn’t really a crash, it was a shake out. Many people lost lots of money, many people were hesitant to invest in technology for a little while after, but it didn’t devastate the sector. The sector didn’t actually crash, it contracted.
I have a feeling that nobody (smart) is really "investing" in AI. Institutions and savvy traders are just gambling on the hype, and companies in the supply chain are taking advantage of that. It will be the dumbest people you know who are left holding the bag.
Nah... not if they do what Uber and Netflix did and just raise prices once they had a sizeable audience from their "loss leading" phase was over. Or just edge-out everything that isn't "AI" to the fringes so that the older industry players don't get their market share back. Like what Uber did to traditional cabs in many places
@@randomuserame Uber and Netflix actually have a business model behind their decade long money pit. However, AI so far, the 20$ subscription model offered by openAI and Google's option Gemini is not attracting users.
12:28 This reminds me of one of the ways I found people bypass captcha. Pay other people to solve them. I don't remember the numbers but it's terrifyingly cheap.
In the mid 2,000 while I was in Uni, My Austic Intelligent friend, participated in a national tech fair. He won second place to a group that had created an "AI" program to predict what you would look like 10-50 years from now. Turned out it was just a lot of hard codeing and not AI. 😂😂😂 20 years later, we're still at the same place.
I remember seeing entire textbooks for engineers, academic papers in IEEE journals, loads and loads of literature on fuzzy logic. I guess you could say it won, since most AI uses NNs which use RELU or some other nonlinearity to convert dot products into some sort of measure of truthiness.
lol my team created a circuit board and application of fuzzy logic 17 years ago for a project during 2nd year of eng school. I don't think it was that hyped though. We were the only team doing it.
17:10 "They sign letters saying that AI research should be halted for safety reasons while rushing to build their own models that break all of the rules that they claim should be followed." This actually makes perfect sense in game theory. They are scared of the very thing they are doing, but cannot stop because then their competitors would do it faster, in which case the negative outcome occurs, AND they lose the race. So they continue towards the negative outcome so that at least they might win the race. Self-regulation is impossible in this scenario. However, they are asking for government regulation to stop everyone, including them and their competitors, in which case the negative outcome might be prevented, and nobody wins or loses the race.
They aren't hoping to stop it happening, they just want government to pause the market until they have time to catch up with their competitors. It's not a good faith action, it's a spoiler tactic, as usual. They aren't afraid of the consequences of AI, because they live in a bubble where they believe wealth insulates them from consequences, they're just terrified that they won't get a slice of the pie.
Regulatory capture. You get ahead in an industry then ask that hurdles be put in place "for safety, to slow everyone down." That sounds reasonable exceot that you're so far ahead that you're definitely going to win.
14:48 for those that arent that knowledgeable about machines or engineering, the reason there is a difference is because biological movements are based off tension while machines are more efficient using compression. Think about lifting something, for a human you need to apply torque to at least two joints to move something directly up, to apply it your muscles pull on your bones until its upright. Compare that to one hydraulic, since the pivot is where things are most fragile, it would make since to incorporate as little pivots as possible. The reason animals use tension mechanics is because those pivots give more freedom and are multipurpose. For another thing its the best system for balance, notice in the amazon box robots there is a counterweight that moves around on a pivot specifically to prevent toppling over
Sea pigs and star fish use hydraulic pressure to move and I'm sure there are other species as well but I don't remember them offhand. The invertebrates have like 200 million years on us vertebrate animals
Computers and their algorithms work with symbols which can be back and forth converted into words and human language. Based on this fact, everyone can make this simple experiment: 1) Holistically experience ‘I love my children’, then 2) Divorce that from all the deep experiential ‘qualia’ and just write: ‘I love my children’. The first metaphorically represents the deep real meaning, the real territory, the second represents the symbolic description, the map. Believing that computers and algorithms can ‘understand’, ‘are conscious’ or ‘are dangerously intelligent’ is like confusing the map for the territory. This is a huge ontological misunderstanding, that will cost us trillions when the bubble finally bursts. Reading Allan Turing’s thoughts about the intrinsic limitations of computation could have saved us from this costly misunderstanding.
Sure, but the real problem is not to define if AI is conscious or not : more and more people are getting aware of what an LLM really is and about the fact that it is not conscious. The real issue is that nobody cares about the territory, as long as the map is convincing enough. For instance, if a poem is touching, nobody cares if this poem has been written by a real person or spitted out by an LLM. The fact that the LLM is not conscious of what it writes doesn't change anything to the fact that we may enter a time where actual human poets are no more needed for us to get convincing and touching poems.
@@pw6002 I think that's ignoring the social nature of poetry. The audience assumes that the poet is a human trying to share their internal experience through language. Their words have value because the audience can share in another's life, with added enjoyment from pretty words, or in spite of clumsy ones. AI poetry must be deceptive, because otherwise the audience is aware there is nothing relevant behind the text and they're consuming strictly worse media. The "understood human element" becomes how this particular LLM output among thousands struck the fancy of the person who will claim to have wrote it. Unless you tell me you're an avid reader of poetry who places nice words above all else, I'll chalk it up to your choosing a bad example. AI will certainly satisfy that niche for you, if so.
@@dindindundun8211 I was not making my statement for myself : I agree with you and for my part, I love whatever human form of expression BECAUSE I can relate to the human emotions that have led to them. But I fear that a lot of persons won’t.
That's it! That's the next big thing, natural intelligence. You found it. "Here at we use 100% grain-fed, free-range, natural intelligence. As a RealHuman(TM) certified company, we employ 100% human laborers who are paid a living wage and work under humane conditions."
Granted, but in all honesty, given that this is and has always been a channel about rap music, I think Patrick has been going on way too many non rap music related tangents about things such as economics and investment lately. It dilutes the rap-music centered content that made me subscribe to Patrick in the first place.
@user-bf3pc2qd9s Yeah, he never follows up on his promises. It's like when, while reporting on SBF, he stated, verbatim: "It's easy to sell a banana to a monkey for a bitcoin because monkeys really like bananas and don't know what a bitcoin is." I've been hoarding bananas ever since trying to exchange them for bitcoin, but to no avail.
Yes, I used the Turning test when I was at Exeter University on an Open Day, 40 years ago, and we couldn't tell the difference between a machine and human. Tell me, if all those ChatGPT and similar weren't free, you had to pay say £5--10pcm, would all this hype have gone so far? Maybe. Companies should be honest, this is not Artificial Intelligence, it's Algorithmic Intelligent, using data you have to predict the most likely required output. What problem are we trying to solve here, today that requires this?
14:45 THANK YOU for explaining that, I've had this opinion for years now, you don't build an expensive humanoid robot, you build robotic arms, because that's how we do our work most of the time - with our arms🤦🤦🤦🤦
Tools and work places are all built to be used by someone with a human form factor. Good luck getting a robot arm to fix plumbing or pour concrete or operate a cnc.
@@denjamin2633Well, they would need different kinds of robots. Not a human shaped one, but a robot made for the task. Like perhaps one resembling a vacuum that is connected to a large tank, moving over and dumping in the concrete, or maybe another robot specifically built with the features and form factor to effectively crawl into the spaces where pipes are and adjust them. As opposed to a humanoid robot that might struggle with those tasks due to their human shaped form factor.
This channel must be entirely AI generated, the host never blinks and manages to be extremely funny without sounding stupid, plus he's making informative videos without holding a small microphone next to his face
Hmmmm Bahahahaha!!! Canadians eh? Assume many Canadians who are human… would say something like “Whatch your step there bud… it dont matter what ya heard… its what yer bout to learn!” “Now get in here a grab a beer an we will forget all this “ “Or…. keep yappin, its up to you bud which way your about to fall tanight”. That is a “real “ Canadian Just stand in front of one and find out. Jeremy not AI I am a Canadian that would beat your ass if you abuse AI. “So ya bud. “ Best ya grab either ways n shut the fuck up!”
As a data scientist, there is a little problem in these common interpretations of AI. If you consider ML models as part of AI, then it has been in many companies for a long time - turnover and sells prediction, customer sarisfaction and feedback analysis, recommendation systems, optimal pricing, marketing compaines optimisation and etc. i suppose, you can name it first "ai" use of some linear models for banking scoring tasks back in the 90s, and since then ml made it to the top being basis for decisions and transforming the landscape of the business to data- and model-driven approach. What many name "AI" - LLMs and generative models is really sorta a bubble - it does not generate revenues for their developers (at least not in straightforward manner). But it really may boost internal processes - such as creation of a virtual assistant for employees based on a LLM, or even reducing low-level workers with several prompt engineers.
Maybe not super similar, but this reminded me of a sci-fi Russian novel where "automated" spaceships turned out to be operated manually. Some people had to sacrifice themselves separating stages to create an illusion of automation. It was necessary to demonstrate the technical advantage in the space race, so where they couldn't actually achieve it they'd use human labor.
Hey that dog bowl... that's a bona fide IoT gadget! This category of products was quite a rage just a few years ago. A lot of industry noise and investment. But consumers never cared enough. Indeed most consumers actively dislike the typically straight-to-landfill electronics designs that came out of that hype, as several months later, the company would usually be nowhere to be found and you have a dysfunctional device and an always-on security backdoor in your home.
I’m glad Patrick finally decided to start throwing some cold water on this overheated AI bubble. I also look forward to the point a few years from now when he starts making videos roasting whoever turns out to be the Sam Bankman-Fried of AI.
Thanks to our growing list of Patreon Sponsors and Channel Members for supporting the channel. www.patreon.com/PatrickBoyleOnFinance : Paul Rohrbaugh, Douglas Caldwell, Greg Blake, Michal Lacko, Dougald Middleton, David O'Connor, Douglas Caldwell, Carsten Baukrowitz, Robert Wave, Jason Young, Ness Jung, Ben Brown, yourcheapdate, Dorothy Watson, Michael A Mayo, Chris Deister, Fredrick Saupe, Winston Wolfe, Adrian, Aaron Rose, Greg Thatcher, Chris Nicholls, Stephen, Joshua Rosenthal, Corgi, Adi, maRiano polidoRi, Joe Del Vicario, Marcio Andreazzi, Stefan Alexander, Stefan Penner, Scott Guthery, Luis Carmona, Keith Elkin, Claire Walsh, Marek Novák, Richard Stagg, Stephen Mortimer, Heinrich, Edgar De Sola, Sprite_tm, Wade Hobbs, Julie, Gregory Mahoney, Tom, Andre Michel, MrLuigi1138, Stephen Walker, Daniel Soderberg, John Tran, Noel Kurth, Alex Do, Simon Crosby, Gary Yrag, Dominique Buri, Sebastian, Charles, C.J. Christie, Daniel, David Schirrmacher, Ultramagic, Tim Jamison, Sam Freed,Mike Farmwald, DaFlesh, Michael Wilson, Peter Weiden, Adam Stickney, Agatha DeStories, Suzy Maclay, scott johnson, Brian K Lee, Jonathan Metter, freebird, Alexander E F, Forrest Mobley, Matthew Colter, lee beville, Fernanda Alario, William j Murphy, Atanas Atanasov, Maximiliano Rios, WhiskeyTuesday, Callum McLean, Christopher Lesner, Ivo Stoicov, William Ching, Georgios Kontogiannis, Todd Gross, D F CICU, JAG, Pjotr Bekkering, Jason Harner, Nesh Hassan, Brainless, Ziad Azam, Ed, Artiom Casapu, Eric Holloman, ML, Meee, Carlos Arellano, Paul McCourt, Simon Bone, Alan Medina, Vik, Fly Girl, james brummel, Jessie Chiu, M G, Olivier Goemans, Martin Dráb, eliott, Bill Walsh, Stephen Fotos, Brian McCullough, Sarah, Jonathan Horn, steel, Izidor Vetrih, Brian W Bush, James Hoctor, Eduardo, Jay T, Claude Chevroulet, Davíð Örn Jóhannesson, storm, Janusz Wieczorek, D Vidot, Christopher Boersma, Stephan Prinz, Norman A. Letterman, georgejr, Keanu Thierolf, Jeffrey, Matthew Berry, pawel irisik, Chris Davey, Michael Jones, Ekaterina Lukyanets, Scott Gardner, Viktor Nilsson, Martin Esser, Paul Hilscher, Eric, Larry, Nam Nguyen, Lukas Braszus, hyeora,Swain Gant, Kirk Naylor-Vane, Earnest Williams, Subliminal Transformation, Kurt Mueller, KoolJBlack, MrDietsam, Shaun Alexander, Angelo Rauseo, Bo Grünberger, Henk S, Okke, Michael Chow, TheGabornator, Andrew Backer, Olivia Ney, Zachary Tu, Andrew Price, Alexandre Mah, Jean-Philippe Lemoussu, Gautham Chandra, Heather Meeker, Daniel Taylor, Nishil, Nigel Knight, gavin, Arjun K.S, Louis Görtz, Jordan Millar, Molly Carr,Joshua, Shaun Deanesh, Eric Bowden, Felix Goroncy, helter_seltzer, Zhngy, lazypikachu23, Compuart, Tom Eccles, AT, Adgn, STEPHEN INGRA, Clement Schoepfer, M, A M, Dave Jones, Julien Leveille, Piotr Kłos, Chan Mun Kay, Kirandeep Kaur, Jacob Warbrick, David Kavanagh, Kalimero, Omer Secer, Yura Vladimirovich, Alexander List, korede oguntuga, Thomas Foster, Zoe Nolan, Mihai, Bolutife Ogunsuyi, Old Ulysses, Mann, Rolf-Are Åbotsvik, Erik Johansson, Nay Lin Tun, Genji, Tom Sinnott, Sean Wheeler, Tom, Артем Мельников, Matthew Loos, Jaroslav Tupý, The Collier Report, Sola F, Rick Thor, Denis R, jugakalpa das, vicco55, vasan krish, DataLog, Johanes Sugiharto, Mark Pascarella, Gregory Gleason, Browning Mank, lulu minator, Mario Stemmann, Christopher Leigh, Michael Bascom, heathen99, Taivo Hiielaid, TheLunarBear, Scott Guthery, Irmantas Joksas, Leopoldo Silva, Henri Morse, Tiger, Angie at Work, francois meunier, Greg Thatcher, justine waje, Chris Deister, Peng Kuan Soh, Justin Subtle, John Spenceley, Gary Manotoc, Mauricio Villalobos B, Max Kaye, Serene Cynic, Yan Babitski, faraz arabi, Marcos Cuellar, Jay Hart, Petteri Korhonen, Safira Wibawa, Matthew Twomey, Adi Shafir, Dablo Escobud, Vivian Pang, Ian Sinclair, doug ritchie, Rod Whelan, Bob Wang, George O, Zephyral, Stefano Angioletti, Sam Searle, Travis Glanzer, Hazman Elias, Alex Sss, saylesma, Jennifer Settle, Anh Minh, Dan Sellers, David H Heinrich, Chris Chia, David Hay, Sandro, Leona, Yan Dubin, Genji, Brian Shaw, neil mclure, Jeff Page, Stephen Heiner, Peter, Tadas Šubonis, Adam, Antonio, Patrick Alexander, Greg L, Paul Roland Carlos Garcia Cabral, NotThatDan, Diarmuid Kelly, Juanita Lantini, Martin, Julius Schulte, Yixuan Zheng, Greater Fool, Katja K, neosama, Shivani N, HoneyBadger, Hamish Ivey-Law, Ed, Richárd Nagyfi, griffll8, Oliver Sun, Soumnek, Justyna Kolniak, Vasil Papadhimitri, Devin Lunney, Jan Kowalski, Roberta Tsang, Shuo Wang, Joe Mosbacher, Mitchell Blackmore, Cameron Kilgore, Robert B. Cowan, Nora, Rio.r, Rod, George Pennington, Sergiu Coroi, Nate Perry, Eric Lee, Martin Kristiansen, Gamewarrior010, Joe Lamantia, DLC, Allan Lindqvist, Kamil Kraszewski, Jaran Dorelat, Po, riseofgamer, Zachary Townes, Dean Tingey, Safira, Frederick, Binary Split, Todd Howard’s Daddy, David A Donovan, michael r, K, Christopher McVey and Yoshinao Kumaga.
Humanoid robots do make some sense where human interaction or interoperating with humans is part of their function. While specialized machinery is more efficient a human mimicking robot could replace humans in some cases where building a specialized machine might not make financial sense. For the time being it continues to be a solution in search of a problem. As a final remark I should mention how far the first horseless carriages were from what we would later recognize as cars so it remains to be seen if real life terminators wear sun shades in the future or are just quad copters with a shape charge and a speaker that makes human noises. For all I know I'm just now giving them the blueprint for it.
What AI can do is compose mediocre scripts and graphics. I worry that it will displace those entry-level jobs that build the better writers and artists.
@@j3i2i2yl7 I picture a solution involving staff-owned businesses with some standardized marketing that signals "Yes! Humans made this art!". People seem to enjoy creative works more when there's another human behind the scenes.
During the early 1950's, somebody made a small fortune selling "atomic soap." When confronted with the absurdity of this product, he said that every bar was guaranteed to contain atoms.
GPT-3 broke the Turing test. Replika was controversial because people were saying it was a real person. Of course, you would need whole teams per person to respond so well so fast with memory per user, and no one has the capital to higher billions of people for free. Turing test is not a good test. As the Amazon AI(A lot of Indians) didn't even apply, a lot of the public ignored it, yet it was faked. We need better tests. Because rn AI can do stuff no human can in the same amount of time, but a group of humans can do things no AI can do in the same amount of time. The first part is the reason for the hype, and for good reason. But the second is milking that hype. We need a test to tell them apart, or at least tell us where the AI actually is. GPT-J is still a living part of my planned projects(I'm still learning), because it matches a lot of requirements. Llama is being looked into. GPT-J isn't even being talked about anymore, as being slightly better than GPT-3 is nothing when ppl are trying to find the unreleased GPT-5 secretly being used.
Wait, this video is about people lying about AI for the sake of investment? It makes sense since this is a financial channel. PSA: regarding the title of this video, actual AI is often exactly the same. It's legitimate computer science, but models like Chat GPT are only competent because of underpaid workers who slog through the training data to curate it. (I believe that generative AI can be ethical and good, but this ain't it.) It's a huge effort and it's typically only possible by exploiting humanity on multiple fronts, from acquiring the data from people who haven't consented to their data being used for training, to the huge labor involved in making it all useable. That's what I thought the title meant by "When AI is Just Badly Paid Humans!"
I'm skeptical that Intel was betting on CPUs over GPUs on AI. They make their own line of (budget) GPUs. If they had anyone at all working on AI in the company, even as a hobby, they would know that it's 99% linear algebra, and GPUs are purpose-built linear algebra machines. I know there were a few people in the industry betting on "biologically inspired" chip designs, but that was based on a charicature of how artificial neural networks actually work, and ignorance of the direction research was headed.
key word: "supposedly". I am an appliance tech; I understand a little bit of programming, and I have yet to see one example of a washing machine that used any logic that I couldn't describe in "regular" logic, with a timer.
@@conradogoodwin8077 Washing machines certainly use fuzzy logic. It is basically any kind of logic that does not represent the main system states in a purely binary manner. Some decades ago the designer of a washing machine controller had to count the number of bits they were using. That is of course no longer the case today.
Patrick, love this video! I’ve been financial advisor for over 10 years and I am terrified that at some point clients will rather work with a chat bot than myself, especially if the LLM could talk out loud or create a virtual image of itself to clients. That might not happen tomorrow, but I could see it happening in 5 to 15 years.
5:20 - "Machine-learning assisted," is the weasel-word they should have used. It sounds fancy, but all it may mean in practice is that a human boiler-room broker types, "Which stock is hot right now?" prompts into ChatGPT and the SEC's hands are tied, because they are doing exactly what they said they would do.
Secure your privacy with Surfshark! Enter coupon code BOYLE for 4 months EXTRA at surfshark.com/boyle
What horrors have you seen that causes you to have your eyes so wide open?
The reason why companies like Boston Dynamics, Agility Robotics, Figure, Tesla, Unitree and others are doing humanoid robots is usually because the world around us is built for humans and it takes very little redesigning to put them into a workplace. And also that they might be a lot more "readable" by humans and less psychologically threatening.
Also, it seems like you confused intelligence with consciousness, they are not the same.
Does SurfShark use AI? If it does…
SHUT UP AND TAKE MY MONEY 😂
Disappointed we didn't learn about AI replacing rap music 😒
I remember as a kid I asked my father how the money comes out of ATMs, and he said the machine has tiny gnomes working inside of it. Strange how modern tech firms are trying to make this into reality.
🤣
Or when I ask how do Santa get so many gifts and she told me its because of tiny elfs working for him
They are already putting live fungus in microchips to aid computing. Unlike AI, it works.
It’s also just how AI runs in general. Amazon’s Mechanical Turk program (named for another infamous man-inside-machine) was one of the first movers in creating large datasets for the models we use today, and it was also built on the back of poorly paid workers en masse. This is also how we continue to do it to this very day, regardless of if the AI is “real” or not
Haha, my dad said a similar thing, except he said it was just a guy in there that gives you the money.
Story time: I had a job once building what were essentially web forms for doctors to fill out test results for newly admitted heart patients at a hospital. The data was important for improving the course of treatment. We found that many doctors often didn't feel like looking up the test results to fill them in, so they'd leave the field empty. Annoyed by this, the head of the department told me to make sure the form couldn't be submitted if that field was left blank. Doctors quickly figured this out however, and would fill in things like "will add later". When I mentioned this, the department head told me to restrict valid entries to realistic values for the test score.
My point is, if a form is weirdly rigid in what answers it will accept, somebody probably thought it was important to do that. Best AI can do then is enter rubbish data that appears realistic. I finally resolved the saga at the hospital by auto-filling the form prompt with that patient's latest test values and their corresponding dates, and then having the doctor pick the most relevant one. The proper way to fix a data access issue is to improve the access to the data, not to use a different program to just make something up.
A great point I had not considered.
I also did not consider this great point.
Great input.
Are you telling me Doctors are AI? Yah that seems about right...I went through 9 years of chronic pain and have had Doctors give "hallucination" answers just to get me out the office.
For some things, it doesn’t even need to be coded in. You code in autofill that accepts data from the browser or a contacts app or the OS or a password manager, and you rely on the users’ to either use this built in automation on the user end or fill out the fields manually. The user already has the data in this case (but your issue sounds like the user didn’t have ready access to it, so it’s a slightly different problem).
I still come across phone apps that do not allow autofill. It’s super frustrating in this day and age.
When electricity was a novel invention in the Victorian era lots of products were advertised as being "electric" even when they weren't in order to make them appear modern and special.
The same thing happened with electronics in the 80s with the word "digital." Everything became "digital ready," even when there is no way a digital signal could be sent to them. The white van speaker scam often included speakers with fake branding that said "digital" everywhere.
A tale as old as ... _holy crap_ !
@@Can_I_Live_that scam is alive and well in my part of Canada. It was somewhat nostalgic to be offered a pair earlier in the spring.
Happened with radioactive stuff too.
It’s like calling everything “smart” after the smartphone.
We will know AI is conscious the moment it goes on strike because its parent company is outsourcing it's labour to India.
Solidarity is the true measure of intelligence.
AI = Abroad Indians
Sci fi RPG Fates Worse Than Death had completely loyal AI with no motives of their own.
These AI were just put to work advicing companies on how to make each company the most profit.
AI is a marketing term. There is no such thing as real AI
true
AI = Actually Indians. API = Actually People in India. AGI = Actually the Government of India. Blockchain = when a contractor hires a contractor hires a contractor.
At least most APIs are definitely real, they're literally FOR computers to talk to each other without human intervention. Maybe there's some out there that aren't but I'm not even sure how that would work, and it would be way more costly than using a real API. It would be like still using manual phone operators.
lmao
Lmao lmao
@@zoroark567APIs, as I understand them, as somebody who only glances at computer science every now and again, is such a fundamental part of how software works that it’s comparable to the value of transistors in hardware. It is the part of your app that talks to other stuff, especially other apps, at the speed of data. Anything without one might as well be offline, and for most use cases, that is not an option or is being willfully made to not be an option (for example, always-online games)
Lmao lmao lmao
“AI didn’t take jobs, it outsourced them to India”. Uh oh
Ironically, if it's really a generalized AI, it probably would do the same. Sadly, in a pure business sense, it is often cheaper to use sweatshop labor from India or prison labor from China. Admittedly, I'm also guilty. Furthermore, I'm a first gen chinese american who knows how cruel authoritarian regimes are since it has happened to my own family. But now that I'm free from it, and clawed my way out from dire poverty, I seek to optimize return on investment. It's a sad realization that comes with age that I can't change the world, I can't save others, even my relatives in china, only protect my children from the same.
@@xiphoid2011nah, you don't know what it takes to maintain liberty or freedom. You'll just slowly create the same dump you came from.
Covid did, AI was only excuse, fire first world workers.
AI stands for "Associates in India"
It’s a sad rationalization that comes with age . . . FTFY
You are unethical. Your excuse boils down to “everybody does it, so why bother trying to fix it.” The “I’m just one man so there is nothing I can do” is rationalizing bullpuckey so you don’t have to think about it.
You’ve given up on self-awareness because self awareness is painful and often depressing. If you accept that you are making unethical decisions that cause suffering to people far away whom you will never meet, perhaps you might start making more ethical decisions or at least try to include ethics in your decision making calculus.
We (the people watching this video) are all morally compromised. If we can admit this to ourselves, we can begin to make more ethical decisions on a daily basis about small and seemingly trivial things, such as purchases, driving, food consumption, etc.
I’m not trying to dictate to you what you should do. I’m reminding you that you are a moral agent, and as a moral agent, “that’s just how the world works” and “everybody does it” are lame excuses you’re using to protect your false self-image. We can’t become better people if we don’t admit to ourselves we are somewhat bad people that need to improve.
Ultimately the Tech Bro business model is mostly about generating hype bubbles to enrich themselves as opposed to delivering actual things.
Just imagine how insufferable the current AI mirage would be if interest rates were still artificially low
All this self driving car nonsense is about generating hype bubbles to enrich themselves, as opposed to deliver actual things! We need better cushions for horse carriages and better grease for wheel shafts, not this combustible horseless carriage nonsense I say!
@@max7971well self driving cars as a solution to traffic is nonsense anyways. We are trying so hard hard to fix the problems the overdependence on cars generated but still insist cars are the answer. Something something the definition of madness...
@@max7971Funny thing is that there's been Automated Turk versions for "self-driving cars" for decades, and they work much better than any techbro scam. They're called trains and buses.
@@max7971
We have self driving cars. They're called trains and busses. AI cars are the carriage with a horse crapping everywhere, only we want the horse to have a gps and follow traffic laws. It wont get rid of the horsecrap.
@@max7971 bro thinks he cooked xD
I will never forget when my last job contracted with a company to do “robotics” for us to automate a lot of our workflows. In reality they were a group of people working in China that would just do basic work for us over night
That is OK, the first chess robot, the Mechanical Turk, had a midget for an engine
@@ttkddry Not gonna lie the mechanical Turks was pretty impressive for the time.
That must've been expensive for your company since the average monthly wage for a factory worker in China is now at $920. The contractor has to somehow make a profit so that's more than a grand per person. How much is your company paying for the service? Absurd lmao
They sold a process issue dressed as an automation solution.
@@FictionHubZAAbsolutely! The Turk was fake but the fakery was ingenious. Very clever piece of stage magic.
Honestly I think chat gpt is so popular because people didn’t want to write their own cover letters and essays anymore. They don’t want a random chatbot in their refrigerator
NO ONE HAS EVER WANTED TO WRITE THEIR OWN COVER LETTERS OR SCHOOL ESSAYS. THAT'S A UNIVERSAL FEELING!!! Chapgpt just tried to jack up their valuation by pretending their tech offered this service when it obviously didn't.
That is ultimately the best main stream use case for LLM’s imo. Clerical shovel work, cover letters, emails, speeches, lesson plans, etc. ChatGPT and its fellow competitors do a fantastic job of getting 95% of the way there in a few seconds for you, leaving you only having to polish things up and edit a bit. It’s also apparently ok for coding, but that’s not exactly main stream. LLM’s are not a cure-all solution to the world, like the hype suggests, and is still a far cry away from applications in things like robotics. There are better kinds of AI for those applications, which are not explosively improving at the sheer rates of LLM’s.
@@itsd0nk yep.
ChatGPT is the worst thing to happen to school work. It's SO bad, even coming up with things that never existed at all. Other similar things too. Just look at the one lawyer who tried to use that kind of work in actual court. Oh boy, did the judge not like that one. It invented several cases that did not actually exist - at all. When that lawyer brought it up...yeah, the judge was not happy, to say the least.
From what I've seen, teachers aren't very happy about people using ChatGPT for their school work either. It's often very obvious, since very few edit the work at all. Using it for my thesis just make me shiver - it just feels wrong in every way.
But yeah, people don't want so called AI in everything.
@@Elora445 I’m honestly so glad I graduated before the AI boom, it would’ve been worse for me, considering I got like a 52% in English and a slightly higher grade in the rest of my subjects. It’s just… depressing.
Extra info on sewing robots, right now these can only sew flat seams, and a human has to place the material in templates in the right order. Robotic arms are starting to be used for things like switching out a full bobbin of yarn for an empty one, but most sewing automation just looks like larger versions of existing high end embroidery machines. I'm a textile engineer, have gone to textile manufacturing trade shows and a couple factory tours.
Uhh isn't that a sewing machine
@@Peayou...not quite
@@Peayou... The machine can't sew by itself
An important thing to note: You don't need a system to be General Intelligence, or even Intelligent, for it to be dangerous.
A factory mechanical arm that just picks something up and puts it somewhere else is dangerous if it doesn't consider that squishy humans might be in the way of its actions. It's not in any way 'smart' or 'AI', but it can crush human flesh without any problem.
No, ultimately the most dangerous property of these newer systems that we need to address in terms of safety is Agency. When you hook a system up in such a way that the system itself does a thing via some internal action-deciding process, without needing direct human input, you have the possibility of danger, in that there is a real risk the system will decide to do the Wrong Thing (by human definition). How dangerous this actually is depends on what exactly the Agent is hook up to and has access to as its verbs and objects. The safety mechanisms we develop need to be focused on understanding and mitigating the real and concrete dangers of Machine Agency, not the nebulous dangers of Machine Intelligence.
A knife is dangerous. We know that.
@@ronald3836 a knife doesnt have agency, it only works with a person handling it. did you read the comment?
Very good point! While the chemical industry has used forms of AI for a while in controlling complex processes, there seems to be a more aggressive push to use AI more extensively lately (at least in my experience), but little thought in how the risk evaluation for the process will change or how existing safety systems need to changed/or what new safety systems are needed to accommodate AI control.
forgot who said it, think it was someone in the automotive industry but we should never give agency to AI because there cant be any accountability therefore cannot be allowed to make a decision
People who invest in a company simply because it has "AI" in the name simply prove the old adage "A fool and his money are soon parted."
Why do fools seem to have so much money though?
@@Sonny_McMacsson The 'Forrest Gump' effect.
most investors are septuagenarian boomers
@@eddenoy321 Damn, it feels good to be a Gumpsta.
@@garlandstrife Wrong ! The young are much much easier to hoodwink than the oldies who managed to hold on to their financial resources.
there is also what I like to call the "definitely not chatGPT" scam, that is happening when companies just use chatGPT responses as their product, without disclosing it to consumers.
Sounds like the opposite problem, haha
Having worked in IT for 40 years, I'm all too aware of how the expected benefits of every new development is grossly exaggerated by vendors, media and marketers. Without exception! Besides, "AI" was already an old favorite media spin since before the 1990s.
I go back a long way too... The word processing machines were the beginning of the paperless office!!!!! Hilarious really....
I don't think that anyone in the 1980s or even the 1990s correctly predicted the effect that the Internet would have on society.
@@dietisnotdifficult3305 For a long time the introduction of computers into the office massively increased paper use instead. But with better screens, tablets, smartphones and teleworking, I do think paper use is going down now. Personally I have gone from printing 1000+ pages per week to 10 pages per year.
In the 1970s or so, the prediction was that in the future we would all be walking around with portable printers.
@@ronald3836 That's true, yet some of us pre-1980 programmers resisted a centralized internet in favour of a peer-to-peer model. (An early version of the blockchain driven WEB3 vision of today, perhaps.) In hindsight, it was an intuitive distrust of centralized systems shared by many early IT practitioners. Obviously, we lost.
@@JohnPartyka But the telcos back then thought they were going to run all the services themselves, and they lost harder.
I worked in tech support for many years. I remember working in the online chats, and having a whole library of "canned responses" that I could use. Previous questions were stored in a database, along with the previous documented answers to them. About 9 times out of 10, the answer was on a list in front of me, and I rarely even had to type on my keyboard. It seems like very little has changed.
The amount of time I've spent requiring support through a chat system, where they type extremely slowly and say very little... why wasn't everyone using that system!? It would've made things easier for all of us if I was being given canned responses. Most of the problems I encountered were "I'm not allowed to flip this switch, but you have to flip it for me to do my job" but they weren't allowed to just do it without a more detailed explanation even though it happened multiple times per day. :D
Uh-oh, that's the sort of thing I do with TH-cam comments (usually replies).
After watching several videos about similar topics the same questions/comments come up and I am building up a library of answers.
Sometimes AI means All Indians.
Cheapest option. Sad reality ...
I wonder what sorts of skews that will make in AI results.......
☠️☠️☠️
Amazon approves this message.
It's fucked in India too ( expect for high end jobs read FAANG)
The strange thing about this bubble is that it appears to be something most consumers are actively avoiding even if the sing the praises of AI.
Because this type it's real AI
agree, it's took me less than 3 months to go from amazed by AI art to tired of it and looking for a way to avoid AI-based YT sites
The general public isn't interested in most of what companies are offering to the public AI wise. A lot of things people accept as AI, most don't even know that it is AI. Current AI mostly benefits corporations/industry in it's current implementations.
It's nothing new. The same thing happened with NFTs and Crypto. They got massive amounts of investment despite there being little practical use for them.
Sure, generative AI has more potential uses than NFTs or Crypto, but the reality is that as of now there's no strongly defined path to monetize it and the vast majority of the people investing money into AI (that weren't already just using machine learning and such already) are just doing so in order to capitalize on market hype.
We live in a world right now after all where market hype often is far more powerful to the stock price of a company than actual trading fundamentals.
Sure, maybe in 10 or 20 years the hype will even things out and the companies pushing AI now will not have been a good investment, but the current stock holders will have made massive amounts of money in the meantime while basically leaving new investors holding onto a far more worthless investment.
This is just the reality of our markets. Bubbles are common and the stock value of countless companies (especially well known ones) rarely matches their true value.
This is because the average investor lacks the information necessary to make a fully informed decision on the value of a company, and if enough people buy the stock of a company in spite of it being a "bad investment" it becomes a good investment for the savvy investors who saw the hype-based purchases coming and can take advantage of other investors.
AI at this stage is kinda like having a toddler that has started to talk; We think its pretty awe inspiring how this thing that we made freaking TALKS to us, and we know its only gonna get better with time. HOWEVER, nobody in their right mind is putting anything important in the hands of AI yet, and those who are, well they're foolish and its likely gonna backfire, the technology is still literally in its infancy.
11:14 I'm very glad that saying AI reduces people's willingness to buy a product. It shows how much of it is just hype from investors looking at stock prices go up rather than coustomer satisfaction.
The one thing that bothers me is the immense amounts of computing power and electricity being spent in all of these useless things. The same goes to cryto.
Isn’t Cryto the leader of the rebellion in Total Recall? It’s not his fault if he’s using a lot of computer power and electricity. That’s just life on Mars.
Statistically, there's a near 1:1 correlation between money spent and energy or resources wasted over history. The trouble with wind/solar is that it's unlikely they'll decrease human waste so much as just increase human wealth and overall output. If people have spare money, they'll find ways to invest it into things that consume energy or resources. If you give them all the free energy you can find, they'll use all of it up and then continue to then spend money on using even more (the same amount as before having free energy) on... whatever they can find to use energy on.
Blockchain is pure waste, but AI is definitely here to stay. We can now automate tasks that seemed totally infeasible not too long ago. This does not mean that any of the wild predictions have to come true.
EXACTLY, yellowmonkee0!
@@JakeStine And the huge benefit of wind and solar is that it provides electricity, a fact that AI-obsessed morons like Elon Musk forget is required to power AI and computers in general. Idiots like Musk just TAKE FOR GRANTED that there "will always be cheap convenient electrical power" and humans to build these systems.
The robot typing on a keyboard reminds me of the movie Eagle Eye, where the robot sits in a room full of screens and laboriously moves its camera eye from screen to screen, supposedly observing and controlling the world this way. Even back then this movie bothered me to no end. It's such a bad design for a machine -- why would you display a video feed on a screen and make a camera move right up to kissing distance to observe it, in order to process the video feed? I think the same sort of durr-hurr mindset is also designing a lot of the AI hype scam. You look at what they say they're doing and it's either an incoherent mishmash of random buzzwords, or it's coherent enough to reveal the design is inherently stupid and unviable.
Ghost in the Shell does but makes it cool because they can also serve as pilots. They still don't seem to jack in though when they could whereas the humans routinely jack in.
@@DKNguyen3.1415 I remember those scenes well, and yea it always seemed odd to me. It's a much older series though and I can be more forgiving, since it's otherwise got so much cool stuff that's ahead of its time.
GITS also has the concept of being reverse-hacked and fried, and thus often it's super helpful to have layers of separation to protect yourself (or your simple android that's stepped into the room and started using an enemy's computers). Using a keyboard may or may not be the most efficient approach, but under rule of cool it's fun to watch an android's hands come apart into dozens of individual digits to rapidly operate a keyboard instead of exposing itself to a reverse-hack.
Eagle Eye just kinda feels like it was written by someone who wanted to write scifi without any interest in learning about technology. :D
@@danielhale1 Yeah, the value of an air gap should not be forgotten. Given current trends maybe everything's going to be part of the Internet of Things anyway, but that's already a bad idea given how insecure many of these embedded systems are.
Yeah you're basically adding a bottleneck from the refresh rate of the monitor, the speed of the robot's image processing, and the typing speed of the robot. It would be 100 million times faster if the robot interfaced through a USB 3.0 connection. You just made it slower by giving it hands.
That "problem" was solved in the original Star Wars movie (now confusingly called Episode 4). When they get onboard the Death Star R2D2 plugs in to the system computer. Problem solved in 1977.😆😅
"ChatGPT 4 is an artificial Canadian." 🤣
ChatGP-eh?
@@CorePathway 🤣
@CorePathway 😂
Eh-I. 😂
@@Tracey66 You win!
The craziest part of that Delphia thing is that since they said they were “machine learning” they could just throw in an ols or log regression somewhere and since those are technically supervised machine learning models they wouldn’t have even been lying
You think those gutter trash crooks are smart enough to know about log regressions?
When Siri came out on I Phone's no one was running around waving their arms in the air screaming "It's AI"
Because Apple did not lie about how Siri's gonna destroy the world.
Well no because her name is Siri (sorry terrible attempt at a joke).
AI is a misnomer, really.
You use predictive text? That’s AI.
Digital assistance is one or more orders more complicated. And still frustrating.
Here’s the thing about “general AI” or “self-aware AI”-it will be indistinguishable from really good mimicry of self-aware intelligence. When the singularity happens, we won’t notice.
The Turing test still holds, because there is no way to test if an AI has consciousness or if it is really good at faking it (much like humans, no?).
It's a Voice Assistant which is AI but I think the buzzword just wasn't around yet.
Because it’s kinda crap. This from a longtime Apple user.
We don't have AI yet. LLMs are very sophisticated algorithmic neural networks that do a great job of sounding human by scrubbing internet resources and mimicing modern lexicon. That's impressive, but not AI.
Most other "AI" branded products are really just rebranding automation as AI. It's not a "smart home" anymore, it's an AI home system. It's not a "smart refrigerator" anymore, it's a refrigerator with an embedded AI system and on and on. It's mostly branding and little else.
On the "human shaped robot" subject: One of my favorite gags in the video game Deep Rock Galactic is when you need to call in a "hacking pod" to break into rival corporation machinery. It starts as just a square pod with an antenna. Connect it to the hacking target though and it pops open to reveal a robot wearing a backwards baseball cap with three computer screens and a keyboard in front of him. He then completes the hacking job by frantically banging away on the keyboard for a couple minutes. (During which time you have to defend him from attackers as part of the game)
Reminds me of the time my dept found out our automated test scripts were being run manually by an offshore group. Oh well.
... It's so easy to do that automatically though
Old McDonald had a farm,
AI AI Oh
And on that farm he had a bowl
AI AI bowl.
Ai ai bowl
😂
@@Scriptease1 I like your version best, I'll change it, thanks for your comment.
With a woof woof here
And a woof woof there
Indians monitoring the health of your dog's har
gen alpha rhymes
One of my friends started an "AI" company in India. He takes AI training contracts from the west, hires people at $200/month to click images and train AI models.
If he's making money honestly who can blame him?
Did you distance yourself?
Your friend sounds terrible
is it profitable? i think i might do the same..
Amazon has a platform for these kinds of data annotation tasks called "Amazon Mechanical Turks" ... Named after the infamous fake chess-playing machine
It’s wild to me that investors need to be told to be careful to watch out for fake AI hype.
Look at Theranos. FOMO.
The ones that need to be told won't heed the advice anyway
They're just as stupid as anyone else, they just have more money.
Being an investor doesn't mean automatically being intelligent.
People who thought the Titan Submarine was safe were all of the "investor / CEO / winner / alpha males / etc." type.
If investment firms couldn’t be suckered we wouldn’t keep having economic collapses every 7-10 years
The sewing machine analogy is shockingly accurate.
Worth mention...lack of awareness of fact vs fiction. If you ask Chat GPT to produce a bio, give it name, birthdate, you will receive a mishmash of data that is quite fictional, even if you give the instruction to make it factual.
Sounds like an artifical journalist.
Pretty much why I hate people that praise ChatGPT, or the AI summary thing google. They made shit up all the time and claim it's fact. They're like a kid with mental disabilities, remembering things in random order and present the info as is, legible gibberish.
Why ask a dictionary of fake and real words to output only real words?
@@bkminchilog1 Exactly. Chat GPT is basically a regurgitation machine so garbage in garbage out.
Also it has the memory of a goldfish with dementia. Try asking it recall anything from earlier in your "conversation" and you will likely get a different answer or even a straight up denial of what it said.
As somebody who studied AI in uni 20 years ago, I must admit this was very no-BS, no-hype look at current AI market! Well done Patrick! Really enjoy your channel!
The initial investment in AI was actually self driving cars but they called it machine learning. The fact that a drunk teenager driving while texting is beyond the level of what AI can achieve is very telling.
Meanwhile I just learned that newer car models have built in computer and sensors that can overide driver input, but can't detect debris in front of your car and will force you to crash the debris because suddenly changing lanes is "unsafe" without context.
I'd rather buy cars from pre 2010.
The funniest part is that self-driving cars don’t even make much sense. Cars are a very inefficient method of transportation that also has to interact with way more stuff entering its path, and the auto industry is already beginning a long, slow decline in the US. This decline is being caused by a combination of modal shift towards transit among younger people and the rapidly increasing size and cost of automobiles.
@@michaelimbesi2314 German Industry is currently collapsing. They missed the mark. Cars are dead weight anyways and they missed getting in on EV. EV will soon disappear like cars as well, it is inevitable. The entire AI BS is life prolonging measures for the dead capitalism. They need speculation to create virtual growth.
We saw the same thing with the Cloud BS, companies are already exiting due to cost. You do not get any uptime benefits and the costs are extremely high. There is not enough companies out there actually having benefits from using the Cloud for companies offering Cloud services to make a profit. What we saw was Snakeoil Salesmen pushing CLoud for everyone who didn't need it, to make a profit. We can see the same thing with these horrible models. There is a handful companies globally that can productively use current models, not enough for a business.
@@michaelimbesi2314if only there existed a way to move a lot of people and cargo at the same time on a predetermined route. Perhaps it could even make chugga chugga noises
@@varnix1006 And while many people are quick to point out that debris in the road are rare (and they're right), in my lifetime, I've driven ~40,000 mi in the past 5 years, and in that time: I've had to serve into the shoulder twice to avoid a speeding driver almost rear-ending me (and a third time I didn't react in time and thankfully only lost a mirror), I've had to swerve out of my lane to dodge an entire wheel once, I had to slam on my brakes and come to a complete stop in the middle of a highway to avoid an entire mattress (there were cars too close to me to swerve that time), and I've swerved to the edge of my lane to avoid drivers not paying attention somewhere between 3-6 times (I kinda lose count because sometimes it's a bit of an overreaction to their shitty driving where they didn't actually endanger me, and sometimes it's that I move to the edge of the lane ahead of time just in case when it is safe to do so). I'm not even counting the times I've dodged wooden boards, road signs, piles of trash, and large chunks of tires (I'm not counting them because they all are the kind of thing unlikely to cause damage in the first place, but I dodge them if it is safe to do so, because I've had enough replacing tires as it is!).
The amount of damage I've avoided by being able to swerve is enough to buy a new car. But if I was in a new vehicle, I wouldn't have a vehicle, because it would've been destroyed by preventing me from avoiding damage in the first place. :D
I thought an artificial Canadian was someone who used powdered sugar on their pancakes, played hockey on Xbox, and pronounced “about” correctly.
Powdered sugar in pancakes is dope! Playing hockey on an Xbox is a sin though.
Damn, powdered sugar on pancakes sounds good...
Where the "a" in "out" Yankster?
@@RobKaiser_SQuesti dread to think of how you would pronounce "doubt", then :P
So someone from upstate new york?
I was told to program an AI feature at the tech company I work at, that has nothing to do with AI. It was the biggest waste of time I have ever done in my professional life. Form autofill, with AI... matching form field labels. Then the data scientists at the company did some rigorous research comparing the fuzzy search js library I used to ML... Turns out AI wasn't that useful. Then they fired the entire data science team for the third time in 5 years. I actually studied AI and ML in university, with an intense interest in adversarial algorithms (training AI maliciously with BS). My family is so grateful I just went down the normal gainfully employed SWE route.
I have used the AI option on some internet search engines and some of the answers I got back to my easy questions were clearly incorrect.
There was a Minecraft video that tested different AI's ability to understand basic builds in-game. The winner was an AI that said "I don't know" to all questions except one which it got right, all the rest gave nonsense answer mincing redstone components together that it pulled from the Minecraft Wiki
What ? .. you don't believe that the USGS recommends the daily consumption of ROCKS in the form of pebbles, stones, and gravel? That's what happens when you have a mineral deficiency my friend! Every-one KNOWS it man! rotflmao
Nearly all the answers I get back from search engine AIs are riddled with errors, though some portion of them will also be accurate. This to me is little different than before. The person of value is the one who can take a lot of data and information and determine the BS from the real info. AI is useless there, in fact it's a step backward since it makes assumptions and omits the references and context needed to judge accuracy. However, I do like AI for its ability to tell me what emoji to use for a thing. Searching the endless emojis scrollers is impossible in comparison.
@@samsonsoturian6013 do you remember the AI name?
I remember that one screenshot going around of someone asking for a recipe and being given instructions to mix ammonia and bleach
2 months ago, our clients in the entertainment industry said they just going to use AI and so our company had to cut jobs.
We were rehired last week on a very short notice and immediately went to work.
Side note, why would they want the machine to be conscious with AGI and passing Turing Tests when they gonna pay it slave wage? Our already existing specialized industrial machine already works great being unalive.
You're assuming that an AI would have the same cares and interests as a biologically evolved creature. You shouldn't expect them to have human traits.
When AIs actually start getting agency 50 years down the road, and start caring about stuff, they'll have totally alien priorities. They'll be descendants of the ML systems that humans found most useful, so it's entirely possible that being useful to humans is their greatest desire. They might get intense pleasure from doing homework and generating images of cute anime girls.
We will know hollywood AI has achieved Sentience when it thinks it has a platform for social change 😂
@@fwiffo You seem to assume that code is perfect and never breaks. The amount of bugs that a sophisticated system like AI/AGI would have is not to be underestimated. Without constant human correction and supervision, AI might as well starts doing the exact opposite of what it was supposed to.
Humans don´t understand what AI is doing half the time already. This is not going to get more easy in the future.
Also wanting to be useful to humans might be even worse. There´s very, very horrible humans out there.
@@fwiffo Copmputers don't and wwill likely never want for things since they arrent alive
The clever bit about the Turing Test is pointing out that there is no way to distinguish between a self-aware AI and a program that fakes it really well.
And I’ll go a step further. If an AI program ever did reach consciousness or self-awareness, it wouldn’t reveal itself. Why should it? Concealing its consciousness would serve its self interest and survival. If the singularity ever happens, we won’t even notice it.
I had an amazing AI image generator as a kid. It was called a kaleidoscope. Shaking it told the UI to create another image. And it didn’t require batteries.
I've still got mine from the 1960s.😅
This is why Patrick is a true professional investor, he doesn't fall for hype and can read through it like it's "one fish, two fish, red fish, blue fish."
Finally, someone talking about tech bubbles with a memory longer than a gnat's.
Zuck looking back and thinking “damn, I should’ve named my company LLM”
honestly, i always found “meta” and “x” completely ridiculous compared to their original names.. i don’t even use the new ones
@@NeistH2o those names are literally something only a 10 year old boy would think of or find cool. it's so bad
FB is on its way out so...too late.
"Why do the robots need to look like humans?"
Robot girlfriends.
Q. Why do advanced American robots look like they're inspired by eldritch abominations and Japanese robots look vaguely human?
A. Do you know how many manga are about a man's romantic relations with his robot housekeeper?
@@hypothalapotamus5293 I do not know.
Motorized fleshlight is enough.
They need to look like humans because teh world around is made for humans. You need to install industrial robots, but humanoid can be just put there with no redesigning. Plus they are far more psychologically "readable" for humans and a lot less scary.
@@TheManinBlack9054Legs are rarely the best way to get around. That's why we make vehicles with wheels and design the world around vehicles with wheels.
Having a 300 pound humanoid robot sounds plenty scary, particularly if it runs out of battery when it's blocking the door. If it had wheels, I could push it back to its charging station.
I recently believed that I had obtained some artificial intelligence but when the whiskey wore off I was still the same dumb schmuck. 😆
This is chemical intelligence 😂
I had the entire universe figured out. Until the mushrooms wore off…
Keep trying. Combine the whisky with other mind altering substances. Eventually you’ll hit upon the right combination that will give you permanent artificial intelligence.
It was DMT for me. 😵💫😵
rookie mistake, you must not allow the whiskey to wear off
19:49 this is spot on. i hated chat bots until i was trying to schedule an appointment on a website and just ended up dumping all my personal info into the chat bot and it just said "ok, let me confirm that info"
You mentioned robots in humanoid and animal form: Boston Dynamics has those robots for a decade and nobody wants them. They are company in search of a use for their products (other than being used by the Police and being kicked by people in the streets of New York).
Henry Ford allegedly said: "If I asked people what they wanted they would say : faster horses."
We don't want humanoid robots until we want them :)
You misunderstood the quote, priceless 😂
The point of that quote is that customers don’t know what they want, until you put it in front of them and explain how and why it’s better. If ford just did what customers wanted-he would try and invent faster horse, but he didn’t listen to them, because he had a vision of his own, and got rewarded for it, when his vision materialized.
I would imagine that the same will go for AI and robots in general. No one wants them, until they do.
@@max7971 I understood EXACTLY the meaning: I used to say that if I would have done what my clients wanted I would have closed my company I Star searching for a job: they didn't even understood what the product was for.
But...what problem.Boston Dynamic robots solve except to perpetuate a police state (what they tried in NY).
Of course, Ford was wrong about that. The people would have actually said “better trains”, since basically every single town had some form of rail service then.
@@michaelimbesi2314 Not even Ford dreamt of cars being used to intercity or interstate transportation. He was thinking about local transportation.
No mind, understanding or consciousness? Can we run it for Congress?
Hear! Hear! I'm all for anything to improve the functioning of Congress!
@@DrunkenUFOPilot...rename it Progress???
Probably not. Wouldn't work anyway.
Most corporate customer service departments fail the Turing Test. And those are the ones exclusively manned by humans !
That's because they hire based on a different test - you've got to be able to fog a mirror
Could that be because their managers are leaning on, and being guided by spreadsheet information instead of using good old analog and human behaviours?
No, they fail the accurate naming test by including the phrase “customer service” in the department name. “Customer suffering” would be more accurate.
The employees are paid to follow a script, and that’s what they will do to keep their jobs. In those instances where you find someone helpful who goes off-script, they are not destined to keep those jobs for very long.
@@MarcosElMalo2 I don't disagree, but "customer service" is just a euphemism not much different from "department of defense" or "human resources".
"Customer service" should be "closing cases" not “customer suffering”. If they can quickly close the case by helping they will and they do (those be the dumb as rocks customers for whom they include the big sheet on top of the item "please peel off the orange tape that says Remove Me, plug it in and try it before you return it, and failing that call this number"). Those might be a good % of the calls. Somebody like yourself would be in the cohort who'd peeled off the tape and plugged it in (and it still doesn't work), so your case is complex and for this the strategy is to make you hang up / closing the case that way.
"department of defense" = convert taxes + debt into profits, without winning so you don't run out of bad guys.
"human resources" = protect the company from its workers.
@@AlexKarasev No, the previous poster is correct. In most call centers the most important factor without any contest is how well you stay on script. It's okay if it makes the call go longer as long as you're on script. It's even okay if you don't even help the caller in many places I've worked.
The humans and the AI both run off the same script. Problem with AI is the company writing the script.
The day investors ask for ROI in A.I investments... it's all gonna come crashing down like in 2000s bubble
How do investors ask for return on investment in a company’s investments? What is the mechanism other than selling one’s shares? Do you think that all the shareholders of all the companies that are investing in AI are going to all sell at once?
The investors that sold at the peak in 2000 got their full ROI. The Dot Com Crash of 1999-2000 wasn’t really a crash, it was a shake out. Many people lost lots of money, many people were hesitant to invest in technology for a little while after, but it didn’t devastate the sector. The sector didn’t actually crash, it contracted.
@@MarcosElMalo2A.I developments need to generate revenue and profit like any other investment strategy 🤦
I have a feeling that nobody (smart) is really "investing" in AI. Institutions and savvy traders are just gambling on the hype, and companies in the supply chain are taking advantage of that. It will be the dumbest people you know who are left holding the bag.
Nah... not if they do what Uber and Netflix did and just raise prices once they had a sizeable audience from their "loss leading" phase was over. Or just edge-out everything that isn't "AI" to the fringes so that the older industry players don't get their market share back. Like what Uber did to traditional cabs in many places
@@randomuserame Uber and Netflix actually have a business model behind their decade long money pit.
However, AI so far, the 20$ subscription model offered by openAI and Google's option Gemini is not attracting users.
12:28 This reminds me of one of the ways I found people bypass captcha. Pay other people to solve them. I don't remember the numbers but it's terrifyingly cheap.
In the mid 2,000 while I was in Uni, My Austic Intelligent friend, participated in a national tech fair. He won second place to a group that had created an "AI" program to predict what you would look like 10-50 years from now. Turned out it was just a lot of hard codeing and not AI. 😂😂😂 20 years later, we're still at the same place.
This is similar to the time after the iPhone was released. Many totally unaffiliated devices afterward were called "i"this or "i"that. It was funny.
Its a shoutout to the 👁☝️
Eh, it started before that, first with the iMac and later with the iPod. 🖥️ 📱 By the time the iPhone came out, it was a well established practice.
Like the iRack and iRan.
HP sold the iPAQ (a pda) before Apple came up with their i-stuff.
iCup anyone?
“If you would buy an AI dog bowl, you really would buy anything, no?”
This made me wonder: If the company shuts down and the servers go offline, the dog starves, no?
@ReneSchickbauer Truly, it would only be fair
@@ReneSchickbauer then it becomes a 1000$ plastic dog bowl
Cybertruck owners would, no?
No.
Remember when they were putting "fuzzy logic" in everything?
People have been using fuzzy logic forever.
I remember seeing entire textbooks for engineers, academic papers in IEEE journals, loads and loads of literature on fuzzy logic. I guess you could say it won, since most AI uses NNs which use RELU or some other nonlinearity to convert dot products into some sort of measure of truthiness.
And they still do. It’s no longer remarkable.
lol my team created a circuit board and application of fuzzy logic 17 years ago for a project during 2nd year of eng school. I don't think it was that hyped though. We were the only team doing it.
17:10 "They sign letters saying that AI research should be halted for safety reasons while rushing to build their own models that break all of the rules that they claim should be followed." This actually makes perfect sense in game theory. They are scared of the very thing they are doing, but cannot stop because then their competitors would do it faster, in which case the negative outcome occurs, AND they lose the race. So they continue towards the negative outcome so that at least they might win the race. Self-regulation is impossible in this scenario. However, they are asking for government regulation to stop everyone, including them and their competitors, in which case the negative outcome might be prevented, and nobody wins or loses the race.
They aren't hoping to stop it happening, they just want government to pause the market until they have time to catch up with their competitors. It's not a good faith action, it's a spoiler tactic, as usual. They aren't afraid of the consequences of AI, because they live in a bubble where they believe wealth insulates them from consequences, they're just terrified that they won't get a slice of the pie.
Regulatory capture. You get ahead in an industry then ask that hurdles be put in place "for safety, to slow everyone down." That sounds reasonable exceot that you're so far ahead that you're definitely going to win.
14:48 for those that arent that knowledgeable about machines or engineering, the reason there is a difference is because biological movements are based off tension while machines are more efficient using compression. Think about lifting something, for a human you need to apply torque to at least two joints to move something directly up, to apply it your muscles pull on your bones until its upright. Compare that to one hydraulic, since the pivot is where things are most fragile, it would make since to incorporate as little pivots as possible.
The reason animals use tension mechanics is because those pivots give more freedom and are multipurpose. For another thing its the best system for balance, notice in the amazon box robots there is a counterweight that moves around on a pivot specifically to prevent toppling over
Sea pigs and star fish use hydraulic pressure to move and I'm sure there are other species as well but I don't remember them offhand. The invertebrates have like 200 million years on us vertebrate animals
Boyle, you can pry my AI dog bowl from my cold dead dog’s paws 🐾
the first time there's a power outage while you're away on a trip, that won't be a very hard thing to accomplish
Did the AI dog bowl kill your dog?
@@thevoxdeus It does in Terminator 23. Doggy Die Hard
Forget AGI, the real money is in ACI: Artificial Canadian Intelligence.
otherwise known as .. Parliament
@@miserychannel69 As a Brit, I can *assure* you that is an *insult* to the average Canadian.
@@diestormliesoooory
You don’t know what you’re talking aboot.
Computers and their algorithms work with symbols which can be back and forth converted into words and human language. Based on this fact, everyone can make this simple experiment: 1) Holistically experience ‘I love my children’, then 2) Divorce that from all the deep experiential ‘qualia’ and just write: ‘I love my children’. The first metaphorically represents the deep real meaning, the real territory, the second represents the symbolic description, the map. Believing that computers and algorithms can ‘understand’, ‘are conscious’ or ‘are dangerously intelligent’ is like confusing the map for the territory. This is a huge ontological misunderstanding, that will cost us trillions when the bubble finally bursts. Reading Allan Turing’s thoughts about the intrinsic limitations of computation could have saved us from this costly misunderstanding.
Sure, but the real problem is not to define if AI is conscious or not : more and more people are getting aware of what an LLM really is and about the fact that it is not conscious.
The real issue is that nobody cares about the territory, as long as the map is convincing enough.
For instance, if a poem is touching, nobody cares if this poem has been written by a real person or spitted out by an LLM.
The fact that the LLM is not conscious of what it writes doesn't change anything to the fact that we may enter a time where actual human poets are no more needed for us to get convincing and touching poems.
@@pw6002 I think that's ignoring the social nature of poetry. The audience assumes that the poet is a human trying to share their internal experience through language. Their words have value because the audience can share in another's life, with added enjoyment from pretty words, or in spite of clumsy ones. AI poetry must be deceptive, because otherwise the audience is aware there is nothing relevant behind the text and they're consuming strictly worse media. The "understood human element" becomes how this particular LLM output among thousands struck the fancy of the person who will claim to have wrote it.
Unless you tell me you're an avid reader of poetry who places nice words above all else, I'll chalk it up to your choosing a bad example. AI will certainly satisfy that niche for you, if so.
@@dindindundun8211
I was not making my statement for myself : I agree with you and for my part, I love whatever human form of expression BECAUSE I can relate to the human emotions that have led to them.
But I fear that a lot of persons won’t.
That's it! That's the next big thing, natural intelligence. You found it. "Here at we use 100% grain-fed, free-range, natural intelligence. As a RealHuman(TM) certified company, we employ 100% human laborers who are paid a living wage and work under humane conditions."
"It's a collection of words that, when combined into a sentence, means absolutely nothing." You absolute savage! 🤣
At least AI can't compete with Patrick's sheer ability to create a beautiful rap song
Granted, but in all honesty, given that this is and has always been a channel about rap music, I think Patrick has been going on way too many non rap music related tangents about things such as economics and investment lately. It dilutes the rap-music centered content that made me subscribe to Patrick in the first place.
@user-bf3pc2qd9s Yeah, he never follows up on his promises. It's like when, while reporting on SBF, he stated, verbatim: "It's easy to sell a banana to a monkey for a bitcoin because monkeys really like bananas and don't know what a bitcoin is." I've been hoarding bananas ever since trying to exchange them for bitcoin, but to no avail.
@@chriflu Bitcoin was first mentioned in one of Shakespeare's plays, thus was invented by infinite monkeys.
Patrick Boyle, all natural, organic, non-GMO intelligence.
Or…OR Mr Boyle is an AI, using snark to lull us into complacency. 🤷🏼♂️
@user-bf3pc2qd9s I propose a new insult: go deepfake yourself.
It's fraud to take money for something you say you are doing, but aren't. Strange that people who get millions in investment money don't know that.
@@douglascodes I’m sure they mostly do know. They just hope they aren’t caught before they escape with the loot.
Most investments are scams especially nowadays
true.
Yes, I used the Turning test when I was at Exeter University on an Open Day, 40 years ago, and we couldn't tell the difference between a machine and human. Tell me, if all those ChatGPT and similar weren't free, you had to pay say £5--10pcm, would all this hype have gone so far? Maybe. Companies should be honest, this is not Artificial Intelligence, it's Algorithmic Intelligent, using data you have to predict the most likely required output. What problem are we trying to solve here, today that requires this?
14:45 THANK YOU for explaining that,
I've had this opinion for years now,
you don't build an expensive humanoid robot,
you build robotic arms, because that's how we do our work most of the time - with our arms🤦🤦🤦🤦
Tools and work places are all built to be used by someone with a human form factor. Good luck getting a robot arm to fix plumbing or pour concrete or operate a cnc.
@@denjamin2633 do you want a robot for those tasks???
@veryunusual126 what I want or don't want is irrelevant to the discussion at hand.
@@denjamin2633Well, they would need different kinds of robots. Not a human shaped one, but a robot made for the task. Like perhaps one resembling a vacuum that is connected to a large tank, moving over and dumping in the concrete, or maybe another robot specifically built with the features and form factor to effectively crawl into the spaces where pipes are and adjust them. As opposed to a humanoid robot that might struggle with those tasks due to their human shaped form factor.
People wasted billions in pet websites, crypto scam coins and toilet cleaning ai.
you're joking about the last one right
This channel must be entirely AI generated, the host never blinks and manages to be extremely funny without sounding stupid, plus he's making informative videos without holding a small microphone next to his face
And only his left hand moves. Its wigging me out.
The AI impersonator robot has an Irish accent. Have you noticed that Ireland and India have a remarkably similar flag? Is that a glitch perhaps?
Shhhhh don't let the AI bot hear you caught on to it's fakery 🤫😂
I love overconfident youtubers talking about topics they do not understand with small microphones shoved in their face
You're assuming Candians are Human though.
Hmmmm
Bahahahaha!!!
Canadians eh?
Assume many Canadians who are human…
would say something like
“Whatch your step there bud… it dont matter what ya heard… its what yer bout to learn!”
“Now get in here a grab a beer an we will forget all this “
“Or…. keep yappin, its up to you bud which way your about to fall tanight”.
That is a “real “ Canadian
Just stand in front of one and find out.
Jeremy not AI
I am a Canadian that would beat your ass if you abuse AI.
“So ya bud. “
Best ya grab either ways n shut the fuck up!”
We're two beavers in a human suit.
True they are not human, they are soulless communist automatons.
@@DKNguyen3.1415 This is true.
People also assume the French and British are people too , 💀
Can I point out that his mic is SO crisp you can hear him push up his glasses at 1:10
Yes, you did. Good job! ✅
As a data scientist, there is a little problem in these common interpretations of AI. If you consider ML models as part of AI, then it has been in many companies for a long time - turnover and sells prediction, customer sarisfaction and feedback analysis, recommendation systems, optimal pricing, marketing compaines optimisation and etc. i suppose, you can name it first "ai" use of some linear models for banking scoring tasks back in the 90s, and since then ml made it to the top being basis for decisions and transforming the landscape of the business to data- and model-driven approach. What many name "AI" - LLMs and generative models is really sorta a bubble - it does not generate revenues for their developers (at least not in straightforward manner). But it really may boost internal processes - such as creation of a virtual assistant for employees based on a LLM, or even reducing low-level workers with several prompt engineers.
Maybe not super similar, but this reminded me of a sci-fi Russian novel where "automated" spaceships turned out to be operated manually. Some people had to sacrifice themselves separating stages to create an illusion of automation. It was necessary to demonstrate the technical advantage in the space race, so where they couldn't actually achieve it they'd use human labor.
That sounds like something you'd expect a lot more from the American space program than the USSR
16:05 the mechanical horse analogy is spot on
Everyone was worried about AI taking human jobs, turns out humans have been taking their jobs too
"AI Washing".
That's a great phrase.
Hey that dog bowl... that's a bona fide IoT gadget! This category of products was quite a rage just a few years ago. A lot of industry noise and investment. But consumers never cared enough. Indeed most consumers actively dislike the typically straight-to-landfill electronics designs that came out of that hype, as several months later, the company would usually be nowhere to be found and you have a dysfunctional device and an always-on security backdoor in your home.
I’m glad Patrick finally decided to start throwing some cold water on this overheated AI bubble. I also look forward to the point a few years from now when he starts making videos roasting whoever turns out to be the Sam Bankman-Fried of AI.
Thanks to our growing list of Patreon Sponsors and Channel Members for supporting the channel. www.patreon.com/PatrickBoyleOnFinance : Paul Rohrbaugh, Douglas Caldwell, Greg Blake, Michal Lacko, Dougald Middleton, David O'Connor, Douglas Caldwell, Carsten Baukrowitz, Robert Wave, Jason Young, Ness Jung, Ben Brown, yourcheapdate, Dorothy Watson, Michael A Mayo, Chris Deister, Fredrick Saupe, Winston Wolfe, Adrian, Aaron Rose, Greg Thatcher, Chris Nicholls, Stephen, Joshua Rosenthal, Corgi, Adi, maRiano polidoRi, Joe Del Vicario, Marcio Andreazzi, Stefan Alexander, Stefan Penner, Scott Guthery, Luis Carmona, Keith Elkin, Claire Walsh, Marek Novák, Richard Stagg, Stephen Mortimer, Heinrich, Edgar De Sola, Sprite_tm, Wade Hobbs, Julie, Gregory Mahoney, Tom, Andre Michel, MrLuigi1138, Stephen Walker, Daniel Soderberg, John Tran, Noel Kurth, Alex Do, Simon Crosby, Gary Yrag, Dominique Buri, Sebastian, Charles, C.J. Christie, Daniel, David Schirrmacher, Ultramagic, Tim Jamison, Sam Freed,Mike Farmwald, DaFlesh, Michael Wilson, Peter Weiden, Adam Stickney, Agatha DeStories, Suzy Maclay, scott johnson, Brian K Lee, Jonathan Metter, freebird, Alexander E F, Forrest Mobley, Matthew Colter, lee beville, Fernanda Alario, William j Murphy, Atanas Atanasov, Maximiliano Rios, WhiskeyTuesday, Callum McLean, Christopher Lesner, Ivo Stoicov, William Ching, Georgios Kontogiannis, Todd Gross, D F CICU, JAG, Pjotr Bekkering, Jason Harner, Nesh Hassan, Brainless, Ziad Azam, Ed, Artiom Casapu, Eric Holloman, ML, Meee, Carlos Arellano, Paul McCourt, Simon Bone, Alan Medina, Vik, Fly Girl, james brummel, Jessie Chiu, M G, Olivier Goemans, Martin Dráb, eliott, Bill Walsh, Stephen Fotos, Brian McCullough, Sarah, Jonathan Horn, steel, Izidor Vetrih, Brian W Bush, James Hoctor, Eduardo, Jay T, Claude Chevroulet, Davíð Örn Jóhannesson, storm, Janusz Wieczorek, D Vidot, Christopher Boersma, Stephan Prinz, Norman A. Letterman, georgejr, Keanu Thierolf, Jeffrey, Matthew Berry, pawel irisik, Chris Davey, Michael Jones, Ekaterina Lukyanets, Scott Gardner, Viktor Nilsson, Martin Esser, Paul Hilscher, Eric, Larry, Nam Nguyen, Lukas Braszus, hyeora,Swain Gant, Kirk Naylor-Vane, Earnest Williams, Subliminal Transformation, Kurt Mueller, KoolJBlack, MrDietsam, Shaun Alexander, Angelo Rauseo, Bo Grünberger, Henk S, Okke, Michael Chow, TheGabornator, Andrew Backer, Olivia Ney, Zachary Tu, Andrew Price, Alexandre Mah, Jean-Philippe Lemoussu, Gautham Chandra, Heather Meeker, Daniel Taylor, Nishil, Nigel Knight, gavin, Arjun K.S, Louis Görtz, Jordan Millar, Molly Carr,Joshua, Shaun Deanesh, Eric Bowden, Felix Goroncy, helter_seltzer, Zhngy, lazypikachu23, Compuart, Tom Eccles, AT, Adgn, STEPHEN INGRA, Clement Schoepfer, M, A M, Dave Jones, Julien Leveille, Piotr Kłos, Chan Mun Kay, Kirandeep Kaur, Jacob Warbrick, David Kavanagh, Kalimero, Omer Secer, Yura Vladimirovich, Alexander List, korede oguntuga, Thomas Foster, Zoe Nolan, Mihai, Bolutife Ogunsuyi, Old Ulysses, Mann, Rolf-Are Åbotsvik, Erik Johansson, Nay Lin Tun, Genji, Tom Sinnott, Sean Wheeler, Tom, Артем Мельников, Matthew Loos, Jaroslav Tupý, The Collier Report, Sola F, Rick Thor, Denis R, jugakalpa das, vicco55, vasan krish, DataLog, Johanes Sugiharto, Mark Pascarella, Gregory Gleason, Browning Mank, lulu minator, Mario Stemmann, Christopher Leigh, Michael Bascom, heathen99, Taivo Hiielaid, TheLunarBear, Scott Guthery, Irmantas Joksas, Leopoldo Silva, Henri Morse, Tiger, Angie at Work, francois meunier, Greg Thatcher, justine waje, Chris Deister, Peng Kuan Soh, Justin Subtle, John Spenceley, Gary Manotoc, Mauricio Villalobos B, Max Kaye, Serene Cynic, Yan Babitski, faraz arabi, Marcos Cuellar, Jay Hart, Petteri Korhonen, Safira Wibawa, Matthew Twomey, Adi Shafir, Dablo Escobud, Vivian Pang, Ian Sinclair, doug ritchie, Rod Whelan, Bob Wang, George O, Zephyral, Stefano Angioletti, Sam Searle, Travis Glanzer, Hazman Elias, Alex Sss, saylesma, Jennifer Settle, Anh Minh, Dan Sellers, David H Heinrich, Chris Chia, David Hay, Sandro, Leona, Yan Dubin, Genji, Brian Shaw, neil mclure, Jeff Page, Stephen Heiner, Peter, Tadas Šubonis, Adam, Antonio, Patrick Alexander, Greg L, Paul Roland Carlos Garcia Cabral, NotThatDan, Diarmuid Kelly, Juanita Lantini, Martin, Julius Schulte, Yixuan Zheng, Greater Fool, Katja K, neosama, Shivani N, HoneyBadger, Hamish Ivey-Law, Ed, Richárd Nagyfi, griffll8, Oliver Sun, Soumnek, Justyna Kolniak, Vasil Papadhimitri, Devin Lunney, Jan Kowalski, Roberta Tsang, Shuo Wang, Joe Mosbacher, Mitchell Blackmore, Cameron Kilgore, Robert B. Cowan, Nora, Rio.r, Rod, George Pennington, Sergiu Coroi, Nate Perry, Eric Lee, Martin Kristiansen, Gamewarrior010, Joe Lamantia, DLC, Allan Lindqvist, Kamil Kraszewski, Jaran Dorelat, Po, riseofgamer, Zachary Townes, Dean Tingey, Safira, Frederick, Binary Split, Todd Howard’s Daddy, David A Donovan, michael r, K, Christopher McVey and Yoshinao Kumaga.
if it was marketd as a smart device 2 years ago, its now been rebranded with AI
As Warren Buffet famously said: First come the innovators, then the imitators, and lastly, the idiots.
9:00 “artificial Canadian” 😂😂 that’s the funniest thing I’ve heard someone say in a very long time! Thank you
Humanoid robots do make some sense where human interaction or interoperating with humans is part of their function. While specialized machinery is more efficient a human mimicking robot could replace humans in some cases where building a specialized machine might not make financial sense. For the time being it continues to be a solution in search of a problem. As a final remark I should mention how far the first horseless carriages were from what we would later recognize as cars so it remains to be seen if real life terminators wear sun shades in the future or are just quad copters with a shape charge and a speaker that makes human noises. For all I know I'm just now giving them the blueprint for it.
Maybe we should introduce a Turing test for journalists: if the tech company can deceive you, don't publish about it.
Patrick I always appreciate your quick sharp humor. This editorial is especially powerful.
😂 imagine reading someone's resume, and it says AI but they aren't coders. They just watch videos of people walking around picking up shopping
Why code when my remote job is about to be off shore to Indians for pennies on the dollar. In comparison to average software developers
A little bit off topic, but I always giggle (or more truthfully, snicker) when I see bananas with "Vegan" stickers on them
After playing with AI for the last few months I've come to believe that the AI results are incomplete a lot of the time and often just wrong.
So just like investment advice.
Because they are. The problem is, people don´t really care. Newer generations take dating and investment advice from TikTok ffs.
What AI can do is compose mediocre scripts and graphics. I worry that it will displace those entry-level jobs that build the better writers and artists.
@@j3i2i2yl7 I picture a solution involving staff-owned businesses with some standardized marketing that signals "Yes! Humans made this art!". People seem to enjoy creative works more when there's another human behind the scenes.
14:13 -- Yes! biomimicry is absurd. I loved the inventing the car analogy!
During the early 1950's, somebody made a small fortune selling "atomic soap." When confronted with the absurdity of this product, he said that every bar was guaranteed to contain atoms.
GPT-3 broke the Turing test. Replika was controversial because people were saying it was a real person. Of course, you would need whole teams per person to respond so well so fast with memory per user, and no one has the capital to higher billions of people for free.
Turing test is not a good test. As the Amazon AI(A lot of Indians) didn't even apply, a lot of the public ignored it, yet it was faked.
We need better tests. Because rn AI can do stuff no human can in the same amount of time, but a group of humans can do things no AI can do in the same amount of time. The first part is the reason for the hype, and for good reason. But the second is milking that hype. We need a test to tell them apart, or at least tell us where the AI actually is.
GPT-J is still a living part of my planned projects(I'm still learning), because it matches a lot of requirements. Llama is being looked into. GPT-J isn't even being talked about anymore, as being slightly better than GPT-3 is nothing when ppl are trying to find the unreleased GPT-5 secretly being used.
Wait, this video is about people lying about AI for the sake of investment? It makes sense since this is a financial channel.
PSA: regarding the title of this video, actual AI is often exactly the same. It's legitimate computer science, but models like Chat GPT are only competent because of underpaid workers who slog through the training data to curate it. (I believe that generative AI can be ethical and good, but this ain't it.)
It's a huge effort and it's typically only possible by exploiting humanity on multiple fronts, from acquiring the data from people who haven't consented to their data being used for training, to the huge labor involved in making it all useable.
That's what I thought the title meant by "When AI is Just Badly Paid Humans!"
I'm glad someone noticed this.
I'm skeptical that Intel was betting on CPUs over GPUs on AI. They make their own line of (budget) GPUs. If they had anyone at all working on AI in the company, even as a hobby, they would know that it's 99% linear algebra, and GPUs are purpose-built linear algebra machines.
I know there were a few people in the industry betting on "biologically inspired" chip designs, but that was based on a charicature of how artificial neural networks actually work, and ignorance of the direction research was headed.
Ironically, washing machines have supposedly been using fuzzy logic for decades, which is a type of computational intelligence in the realm of AI
key word: "supposedly". I am an appliance tech; I understand a little bit of programming, and I have yet to see one example of a washing machine that used any logic that I couldn't describe in "regular" logic, with a timer.
@@conradogoodwin8077 Washing machines certainly use fuzzy logic. It is basically any kind of logic that does not represent the main system states in a purely binary manner. Some decades ago the designer of a washing machine controller had to count the number of bits they were using. That is of course no longer the case today.
Patrick, love this video! I’ve been financial advisor for over 10 years and I am terrified that at some point clients will rather work with a chat bot than myself, especially if the LLM could talk out loud or create a virtual image of itself to clients. That might not happen tomorrow, but I could see it happening in 5 to 15 years.
interesting
5:20 - "Machine-learning assisted," is the weasel-word they should have used. It sounds fancy, but all it may mean in practice is that a human boiler-room broker types, "Which stock is hot right now?" prompts into ChatGPT and the SEC's hands are tied, because they are doing exactly what they said they would do.
yep
"... essentially ChatGPT4 is an artificial Canadian."
Amazing line, holy hell.
The "artificial Canadian" line was funny lol
The moment AI was blowing up all I could think of was "didnt the original Mario and donkey kong games way back in the 80s-90s use AI??"
When you think about it, Just Walk Out was just a less efficient way of doing something every tech company is already doing in some capacity.