I'm a programmer. My company's earnings call says we're investing in AI more than our competitors, and the investors went wild. Think about that. INVESTORS are clapping that we are SPENDING the most money on AI. Not results. Not profits. Not customer satisfaction. Handing money to scammers is what investors are rewarding. My company also fired tons of people because mumble mumble AI. And guess what? We are all up to our necks in unfinished work. The same amount of work still needs to be done. There are just fewer people to do it. And no, AI isn't making us faster. These companies will be hiring like mad soon.
@@allsmiles3281 Cease and Desist immediately! WE WILL sue you, and the people you represent? We will probably sue them too! You have been warned in the tool shed gets rusty, likely because of humidity, so it's helpful to create adequate ventilation and used insulation to prevent this, California turnpike fuel rest stop flower pot. -- AI Corporate Helper Bot 2000
In all fairness to the underpaid people of India, the pay they get is probably reasonable for them. I'm not saying they are getting paid well or not, just that from talking with someone who hires people in the Philippines, while their pay is substantially lower than someone in the US, UK or most western countries, that pay is solid for them.
not every tech company is Amazon.... ChatGPT isn't a bunch of underpaid Indians in a warehouse basement googling people's queries and sending a reply back
I had a job up until recently where the work was first pushed offshore to India and then, since the employer didn't want to pay them, they introduced "AI" to do the work. Surprise, surprise: within a couple of weeks the folks in India were back to doing 99% of the work. But honestly you could hire some fifth graders to do everything and they'd have better results than the automation.
I mean AI is a real thing, it exists. This video is just misinformed and banking on AI hate hype. It's a real and, more importantly, extremely powerful revolutionary technology that will change the world.
@@TheManinBlack9054 You don't know what you're talking about. Even the real thing that everyone is calling AI isn't AI, but on top of that it's been proven that most of these companies aren't even doing that.
I worked as a data scientist at an "AI" startup with big clients, and I can tell you that our models would just give suggestions to our analysts (who wouldn't use them most of the time) and wouldn't make any actual decisions. Now I am working at a company that wants me to get "AI" to do something that can be done by regular software. It's all just marketing.
heh which is why i as a recent data science grad am just doing my best to master the art of cleaning awful data from medicine, because that's, i think, more important than hopping on a hype train at the moment
My Logitech mouse broke and it was in warranty, so I made a warranty request. I had a choice to talk to an agent or chat with an agent. I chose to chat because I did not feel like listening to someone else's voice that day. Of course, like modern companies, they employ those prerecorded "choose A for this crap and choose C for that crap" menus. And so it began with the agent (in the beginning I was not sure if it was an AI or a human) until it asked me to describe my problem. I had prepared everything a few hours before, and pasted my 98 words (five sentences) into the chatbot. Only to get back a reply (more like a demand): "Could you rephrase that?" OMG, I was so shocked and offended that nobody could understand my standard English. I said, "Do you not understand English?" "Could you rephrase that?" NO! "Could you rephrase that?" So I had to rephrase it to "My mouse is broken" lol. And then an actual human came to light, with a name I could not pronounce. I was pretty frustrated, because I can see the potential with AI, BUT my interaction with ChatGPT and some others is a disaster. They are unable to sort words from sentences and put them into a list in alphabetical order, because the dummy AI keeps adding new random words to the list. I tried the same sentence word list with Gemini. OMF! I asked why it had added "Bread" to the list when it was not in any sentence. Gemini started to "argue" with me that it WAS in the sentences. IT is TRUE, it said. Eventually the dummy realized that the word "Bread" was not in the list, corrected itself, and rewrote the list. Yeahhhhh. Only to find, when I rechecked the list, that it had added more random words. I told it to EF OFF, and it told me that it could not continue the conversation because its feelings were hurt. Oh, you are woke, I said. It just repeated its dumb self.
@@twinai The markets are mainly about fads. It takes a while before people start asking questions. To this day they're still more about hype than facts.
@@madclancrew I know, it's not about your opinion on the value of a company, it's about what you think others value it at (or rather what you think they think yet another group values it at). At this point it has become a giant casino
Just like NFTs a couple of years ago. You take a techy word that's hard to understand but easy to pretend can do anything, and people will just blindly jump on it.
I used to work for a company that used "AI" to convert photos of houses into blueprints for home renovation and insurance purposes. But by "AI" they actually meant "a bunch of underpaid people in Ukraine." Then the war broke out, and hoooo boy did that mess things up.
It's a pump and dump. And I know this because I got an email from Samsung letting me know that their new vacuum now "comes with A.I." This will end the same way as the NFT pump and dump.
" @AuthorJMac: You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes. "
We need AI to target the healthcare industry as well as self-driving. Right now healthcare is abnormally expensive; they charge you thousands just for running a blood test through a machine. It's a joke. Most people make like $20-40/hour, and many doctors are glorified dictionaries which could be replaced by a basic general purpose AI and a couple of low-risk machines.
Laundry is already automated with washing machines and dryers and washing dishes is automated with dishwashers. Now if we could get our clothes folded and our dishes put away automatically that would be revolutionary
100% agree. Health providers are ripping people off. AI has the potential to be cheaper and more effective for analysing conditions. It also doesn't get tired or lazy looking at scans. It won't replace doctors or consultants but will be far better at identifying conditions early.
This is the REAL danger of AI in my opinion. Companies or governments duping people into thinking they have some amazing algorithm to make decisions without bias when in reality it's just some people behind the scenes picking the winners and losers.
@@g4l4h4d1 Ahh, very simplistic. If the AI is at a Skynet level of interconnectivity... there have been so many TV shows and movies that show what could happen if it really does exist. I am surprised they don't just keep it all in the Pentagon black projects, if at all.
@@SashaYanshin no, if you actually researched, the workers were being used in edge cases where the AI didn't know what it was looking at and had doubts; they were its human backup system. It's not that the entire system was humans. Don't spread misinformation, you will be reported. Research before saying stuff.
I've had the majority of my holdings in tech stocks, and irrespective of market changes I've done pretty well, especially with Apple's P/E (price to earnings ratio) gaining over 30% this past decade. Now my question is: what stocks do you think will be the next Apple in terms of growth for the next decade?
It might be difficult finding the next Apple within the tech stock sector; Apple has performed way better than the others. Maybe look outside of tech stocks.
Well, things are different now; the same market strategies applied over the last decade wouldn't apply to the current market. To actually figure out how to outperform the market and stay afloat for the next decade, you should reach out to a financial advisor. That's how I've managed to properly diversify across the right asset classes and gained over $450k in profit these past couple of years.
What's really scary is they're letting doctors etc. use this stuff... I've done some AI data training at work and it's impossible to train these models to stop "hallucinating".
I work in research as an AI engineer in computer vision, and some of our projects include medical data analysis. I think it's important to note that there are steps in a positive direction there, but there are some key distinctions: firstly, the models are only meant as a tool for the doctor to make more informed decisions. And secondly, almost all of the models are not generative, which means they don't create data, but summarize lots of data points into a few easily understandable ones. We also actively try to avoid the word AI and use machine learning instead, to minimize these hype-fueled pipe dreams.
@funnyBecauseItsME machine learning ftw. As someone who's close to getting a bachelor's in mathematics with a minor in stats and comp sci, what would you suggest the next steps be after graduation? Pursue a master's or go straight into the workforce?
@@ravensharpless lol I will ask it questions about how to do something and it will give great seemingly well thought out instructions and then you discover some of the settings it's telling you to change don't even exist.
@@ravensharpless Maybe you need to learn how to use it properly. Many developers, including my own, report an order of magnitude increase in efficiency.
@@betag24cn People have been made to conflate AI with AGI, which has still not proven itself viable. Yes, computers can do sophisticated things, since about 1970. People WANT to believe in AGI, for a variety of reasons. Corporates want to replace their useless eaters, nerds want to be friends with Commander Data, incels want a f- buddy, everyone wants the gleeful satisfaction of watching the main demographic have to "learn to prompt," which, if it *actually* worked well, no one would have to learn much about. The hucksters are having a field day. Maybe we deserve it.
A lot of it basically is. It's also a bit like the text prediction that your phone keyboard has. It's calculating the most likely response that fits your input
Asking chatgpt questions is a lot better than visiting the first page on google. It gives you a quick summary and tells you which webpage or pages it pulled it from so you can double check if you want. This is a really valuable time saver. And the summaries are always on topic and usually accurate. Much better than trying to find the specific question to type in google to find a specific stack overflow post.
Search engines have gone way down in usefulness in my 28 years of Internet use. Behind the scenes they reword your search terms to words that will benefit their advertising business. If Reddit has the info you need it is often the better option to find the info.
@@saliferousstudios Still a far cry from Google of 2010. That was the last year I can recall where Google would show what I was looking for. Seems like those slimy creeps found out it's a bad idea. 😂
When I explain that all I do to create an "AI model" is literally write a simple program in R-Studio and hook it up to an Excel spreadsheet, it pretty much shatters the illusion for anyone who asks. These models are not even tangentially useful for most things.
Isn't it your job as a researcher to minimize bias by having a variety of properly validated sources? Right now AI isn't reliable because it can just make stuff up. If you find sources too biased while GPT is more objective, then where did it get the data to create its more objective point of view from?
@@user-te5po4bu8o - Thanks. There's been talk of retail stores in the U.S. and Europe using RFID tech for many years, to do automatic checkout, but I haven't seen it take off. I'd hear about a store here or there using it, as a test case. My guess re. what's holding it back is there's no consensus from product manufacturers about using it. I imagine retailers don't want to take on the task of adding RFID to all of their inventory.
How many of you hate the automated voice redirection? I work for a bank... they removed people and set up this nonsense. Now we take two extra weeks to resolve the issue.
i have always suspected "we dont know whats happening in the black box" actually meant "please dont look in the black box it has evidence that im plagiarizing"
I don't think that's the case most of the time. The black box analogy is that the combinations, interactions, and paths through the weights and biases in the network are too numerous to know "why" it gave that output. Take image recognition: you and I can describe why we said it was an image of a cat, but with the AI it's just millions of numbers being added and multiplied such that, by the end of the feed-forward pass, the "cat" neuron lights up the most.
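To make the "just numbers being added and multiplied" point concrete, here is a toy feed-forward pass in Python. Every weight and "pixel" value below is invented purely for illustration (real networks have millions of such numbers, not a handful), but the arithmetic is the same kind:

```python
# Toy feed-forward "classifier". All weights and pixel values are
# made up for illustration; a real network has millions of them.
def relu(x):
    return max(0.0, x)

def forward(pixels, w_hidden, w_out):
    # Hidden layer: weighted sums of the inputs, passed through ReLU.
    hidden = [relu(sum(p * w for p, w in zip(pixels, row))) for row in w_hidden]
    # Output layer: weighted sums of the hidden activations.
    return [sum(h * w for h, w in zip(hidden, row)) for row in w_out]

labels = ["cat", "dog"]
pixels = [0.9, 0.1, 0.8]                          # a fake 3-"pixel" image
w_hidden = [[0.5, -0.2, 0.7], [-0.3, 0.8, 0.1]]   # 2 hidden neurons
w_out = [[1.2, -0.4], [-0.6, 0.9]]                # one output row per label

scores = forward(pixels, w_hidden, w_out)
print(labels[scores.index(max(scores))])  # the "cat" output sums highest here
```

There is no step where the program "explains" its choice; the answer is just whichever output sum comes out largest, which is exactly why asking "why" of a trained network is so hard.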
If you fold a cross, you will actually get a black box. When you graduate (gradually indoctrinate) you wear a black box on your head. The Kaaba, one of Islam's holy sites, is a place where you pray to a black cube. We are mind-controlled by media, which was once shown in a black box known as the TV.
yeah there are definite cases where you just dont know what exactly it's doing. the real lesson is to be careful around hype, disingenuous people and companies will always try to catch a ride and make a profit, at the cost of others.
There's leeway in the law for the SEC: as long as CEOs 'try' to make the shareholders money without outright accounting fraud, it's legal. The technology is half-baked, but it exists, compared to the scam Elizabeth Holmes was running, where the underlying tech didn't even work as claimed, which drifted it into fraud territory.
Yes and: every company profiting from the AI hype is also a lobbyist, so politicians and government also profit from it. In a year or two there will be another crash and basic commodities like food will get (even) more expensive.
The problem is that all these companies who loudly talk about using AI in their processes actually spend money on ChatGPT licences and AI-enabled servers just to boost their share prices, which ironically also boosts the share prices of companies like Nvidia, which triggers a FOMO cycle of sorts.
The reddit traffic boom is probably robots scraping the site, they used to have pretty permissive API access but they ended that around a year ago, I think it was close to the date of the traffic boom you noted.
I added "AI" to my breakfast café logo and we started having more clients; the hype is unreal and ridiculous. It started as a joke, but people actually ask us how we are implementing AI in our kitchen.
How long will it take until he discovers that this is also how the entire economy's been operating for longer? They show us pictures of fast food burgers with hidden spacers to make them look bigger and more appetizing. They sell us "dishwashers" that still require you to do most of the work of washing a dish. And they sell you fruit juice as if it's supposed to be as healthy as fruit, when really it has so much sugar it might as well be pop.

The reality of a product should not be judged by an idiot who doesn't understand how metaphors and exaggeration are constantly used to appeal to your emotions rather than your logic. Nobody buys a product or hates a product based on logic, but based on emotions. An advertisement isn't an objective, unbiased explanation of the factual specifications of the product as compared to alternatives. An advertisement is an emotional self-expression of the essence of a product to motivate you to buy it.

Which isn't to say that the emotional self-expression is necessarily wrong or bad. You can say something factually true that's still misleading, just like you can say something factually incorrect that's not misleading. I can say "the Earth is a sphere" and factually that's just wrong: the Earth has so many bumps, ridges, and valleys, and is somewhat ovular. But the essence of my statement is correct, because it gives people the right idea that the Earth isn't like a disc or a cube.
@@nevisysbryd7450 I think you're underestimating how prevalent this is. Even things like toilet paper will have all sorts of weird labels and ads about how amazing they are. The only stuff that doesn't have exaggeration is stuff that basically doesn't have any advertisement at all, and its only label is a plain description of what it is (like if you're buying screws or wood at Menards).
Picking what you want, walking out and relying on some automated system to identify what you took and bill you for it later sounds like one of the worst things I can imagine.
A recent, short conversation between my boss and my team: "We need to use AI" - "We already do. We just don't call it AI anymore, because it actually works."
Funny... quick work story myself. I run a bread route: ordering, delivering, merchandising. Orders are clearly important to know expected volume and keep waste down. Well, a couple of years ago the big boss company tells us we no longer have to order; they were bringing on AI to maintain orders, based on order history, present trends, etc. Loooong story short... no. No, no, no. The AI wheels fell off and it was abandoned within two months. It didn't recognize holidays (?) or weather, and couldn't even reliably replicate last year's orders. It was actually wild how bad it was.
@@lankyrob6369 Must be the same system used in my local grocery store. Always out of the stuff I want to buy, unless I'm there just after the morning delivery. Less waste and fewer sales.
@@sabrinelan Not OP, but companies I've worked at portrayed a search-engine algorithm as "AI machine learning." Like, you've got a list of 10,000 individual texts sorted by emotion according to how many words match each list (negative: angry, annoyed, hate...; positive: pleased, happy, excited...). At my current company we did something similar for moderation (a list of bad words we completed manually because the OG list wasn't creative enough).
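A minimal sketch of that word-matching approach; the word lists and example texts below are made up for illustration and don't come from any particular company's system:

```python
import string

# Tiny illustrative keyword lists; a real deployment would have many more.
NEGATIVE = {"angry", "annoyed", "hate"}
POSITIVE = {"pleased", "happy", "excited"}

def classify(text):
    # Tokenize crudely: lowercase, strip punctuation, split on whitespace.
    words = {w.strip(string.punctuation) for w in text.lower().split()}
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(classify("I'm so happy and excited about this"))  # positive
print(classify("I hate this, so annoyed right now"))    # negative
```

Calling a word-count lookup like this "AI machine learning" in a pitch deck is exactly the rebranding the thread is describing.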
"A.I." is just applied stats. Neural networks have been around since the 1960s. They just find patterns in data, chasing an objective (such as next-word prediction). LLMs will never be as good as a pocket calculator at adding numbers, even in 200 years' time. They are a form of artificial narrow intelligence (not general), because their use cases are restricted to fuzzy, unstructured data, such as semantics (sequences of text), images (pixels), and audio (waves). That said, there are amazing applications to be found within those domains.
Lol. In the metal shop I work at, they're "using AI" to monitor safety habits through shop cameras. This comes down to checking for people not wearing gloves, not wearing the right gloves, not wearing safety glasses, not wearing earplugs (somehow), not wearing face shields when grinding, etc. The problem is that the AI has absolutely no sense of depth or field of perception, so if it loses track of your hands at all, it assumes you're not wearing gloves. If it can't see your transparent safety glasses, it assumes you're not wearing safety glasses. If you're grinding and the sparks go in front of your face shield, it assumes you're not wearing your face shield. If you start welding, it assumes you're not using your hood. If your hood is personally bought (they all are), it assumes you're not wearing a hood. If you have a sticker on it or paint it like everyone does, it assumes you're not using one. If your leathers get covered by soot, it assumes you're not wearing leathers. They have one guy in each building monitoring thousands of flags an hour because of security and personal-safety reasons, almost all of which are invalid, and the sample sizes are waaaaaay too small to use this system with anything you could even call efficiency. And they had a meeting to tell us all that they're seeing improvements in safety, which is actually untrue in real numbers. Companies lie about everything. Literally everything. They spend $5 mil on a machine, and it somehow pays itself off in a year even though it actually puts out less product and takes more people to run. They cheap out on training people to operate and maintain the machine, so the machine crashes constantly. Every. Single. Real. Number. They give out is fabricated.
I would not be surprised at all if we see a national economic crash stemming from fake production numbers and false representations of efficiency, from people just trying to keep their jobs after they notice the problems. Then, after the problem becomes so bad it can't be ignored, their bosses cover for them, because they also know they should have known and never let the problem get that bad. It's the same problem that has been fucking up the economy for decades, with every new technology overpromising returns (machines, computers, Internet communication, robots, and now AI), but it's happening faster and faster and growing more and more out of control because 'regulation is bad' and 'auditing is a waste of resources.'
Most "real" AI isn't even AI; it's just more complex ML. It doesn't understand anything; it's calculating probabilities. OpenAI's models aren't understanding your question, or even sentences; they're just calculating the most likely next word. They do that pretty well, but it has inherent issues.
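The "most likely next word" idea can be shown with a toy bigram counter. Real LLMs use far larger contexts and learned weights rather than raw counts, and the tiny corpus below is made up, but the underlying task is the same:

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count which word follows which in a corpus,
# then always emit the most frequent follower. No understanding involved,
# just counts standing in for probabilities.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict(word):
    # Pick the single most likely next word seen in training.
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

The model never knows what a cat is; it only knows that "cat" followed "the" more often than the alternatives, which is the scaled-down version of the criticism above.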
@@Paul-qj4dr The only big difference is that you have a 'conscious discriminator' that decides whether or not to filter information. Other than that, you think of an idea of what to say and then your brain feeds the next words to your consciousness. The fact is, ML does this part a lot better than most people can.
@@philsomething8313 We might translate our ideas into words, but we have ideas in the first place. The latest "AI" models are mostly LLMs (Large Language Models), which don't have much of an understanding of what they are talking about. Our brains also "train" differently than AI is trained: we need much less "data" and far fewer trials.
@@Paul-qj4dr Most 'ideas' usually 'pop' into our heads, i.e. not consciously, which means they should be replicable by AIs without consciousness; we just haven't got there yet. Also, while AIs do need more data, this is most likely a result of our brains being pre-wired with a neural network 'half complete'. Evidence of this is in LLMs' ability to perform 'few-shot learning', where you feed one an example of what you want it to do and it is then able to replicate that.
The head of OpenAI was asked if they use YouTube videos in their data sets. She paused, looked really uncomfortable, and couldn't answer. She couldn't answer because they'd get sued.
Until we see a sentient machine, completely aware of its own existence and capable of original thought, artificial intelligence doesn't exist. I'm so sick of machine learning being labeled "AI"...
@@oompalumpus699 If you are a fan of sci-fi and not of actual science, you have a bad background to discuss AI. This terrible hype comes from people thinking fantasy is within grasp. It's not; AI is applied math, and math has severe limitations.
The whole point of AI is that the software trains itself, not is trained by a room of people. That is a knowledge based system, the technology of which has been around forever. They have just rebranded KBS as AI. None of the so called AI stuff out there is true AI technology.
It's a dystopic world when Reddit, 4chan and AI are the last bastions of truth, thought and opinion when the rest of Google SEO gives us generic surface level results from literally Buzzfeed and Kotaku. And it's creepy that AI isn't really AI after all.
I almost assume that AI "writes" these insipid articles that show up in AI search results (and google search results). The stuff that comes up at the top of searches now doesn't even seem real.
True. I've been hooked on geopolitics for half a year, and suddenly had a Truman Show moment randomly finding this vid. There actually is intelligent life in here.
@stephanieellison7834 Think of it like Wall-E, in layman's terms. We'll all be living in a Buy N Large space station where all the robots will be catering to our whims and needs: making humanity's entertainment, making us flavored slop in a cup, moving us around on tracks in transporting cars, only for us to connive with our family/friends/everyone else as we become fat, immobile, or obese. AI will do it all because it's hardly regulated.
Completely true. The only part I don't agree with is that this generative AI will replace every single skill in the world. The NVIDIA CEO is a snake oil salesman.
I didn’t say that. I said AI in general is obviously going to be an amazing thing. But yeah. Large language models are fun and all but they are not it.
I agree. Nvidia, by the looks of it, have been buying call options on their own stock to cause a massive gamma squeeze and push the stock higher. Not illegal, but it shows a lack of integrity in my opinion, if the options theory is proven. The other thing I don't get: once Google, Amazon and Microsoft have got all their Nvidia chips, those should last a large number of years, so once they've got what they need, why would they keep buying more?
As an admin user of a CRM, I talked the client out of wasting their money on the new AI feature of said CRM. It is just a weak search engine marketed as "AI".
@@SashaYanshin it might be the new fancy term for Search Engine Optimization. That kind of thing, but the people buying AI starter kits/secret decoder rings have NEVER HEARD of the previous buzzwords.
Bing AI is specifically designed to use Bing search results and combine them using GPT4. That’s its purpose. It’s utilizing the summarization capabilities of LLM.
I was about to respond to that as well. I saw that it said, "Searching the web" and when Co Pilot is done it shows links where it found the resources at the bottom. Chat GPT, Phind, Co Pilot, they are doing web searches behind the scenes.
Indeed, if he scrolled down on the screenshot he took, you'd have seen the website he was quoting as AI having "stolen from" or "plagiarised" listed as the primary source of all that feedback he got to the query, but that didn't suit the narrative, so he cut that bit out.
But at the same time it removes all traffic from the source site, which doesn't make it any more fair. Bing gets the traffic and the ad revenue; the sites they stole from are left without visitors.
@@blissweb It doesn't fix things, but no one should have to search for significantly more time if they don't have to. Financial incentives should never get in the way of quality of life.
The "AI" branding was always a marketing move, the same technologies were described as machine learning for a long time before midjourney and the GPT boom
Sports analytics companies are another example of this. There are companies that claim to have computer vision systems that can watch sports games and record all the players' stats/actions. I've worked at one of these companies and their "computer vision AI system" was a bunch of guys who were watching the game and manually inputting all the data.
I was still around when Google was all about text-only ads next to your search, and you could directly reach your results. These days it's double YouTube ads, scam ads, deepfake ads, Google with image ads... we have sunk deep down from where we started.
yeah, Google is WORSE. It was ok around 1999 or 2000. It was REALLY good then, but it imploded. SEO and junk tried to game it and killed search. AI is the "mop up" operation.
Five years ago every CFO mentioned cloud on earnings calls. Last year AI was the buzzword on earnings calls. Next year it will be robotics. Humans love a story to explain things.
My wife worked as a search rater for a bit; if you don't mind simple repetitive work, it's good pay for no qualifications and working from home. There are way creepier projects too, like one from Bing where you just get shown pictures from random people's cloud storage and have to identify whether the AI identified their friends correctly.
Yup. For example, some people think that what's going on behind the curtain is 1,000 Indians are watching all of the video footage and deciding everything manually. When in reality, there is an AI that does some of the processing. It's just probably still shit at it and requires the human expertise to further train it. People will believe anything, no matter how extreme, as long as it confirms their moralist judgment. If you believe these companies are amazing and super innovative, then you'll believe it's all 100% AI and super efficient and saving everyone tons of money. If you believe these companies are lying, snake-oil salesmen, then you'll believe that it's all 100% human-expertise and wasting money just to dupe the investors. The reality is more complicated than what people believe, and especially what people say, especially people who are speaking to a wide audience (because they are likely speaking to generate clicks or push their moral opinion, rather than speaking to seek truth).
Spot on. Whether I have an online retail enquiry, comparing insurance, doing research, troubleshooting, sorting purchase, have a tech query, need to sort something out, an AI chat bot has always been helpful and solved the issue - NOT. EVER!
Agreed, 99% of the time they are useless. Some people who are too lazy to find the opening times of a store will use a chatbot; I use the chatbot function when I'm having an issue, but the chatbot is so useless that it doesn't understand when I tell it I need to speak to a real person because my order hasn't arrived and I'd like to know why. Businesses misusing AI will be one of the biggest business killers, I think. I now actively avoid any business that says they use AI chatbots, and I think more people should too.
I remember when this happened about 20 years ago with a service named Jott. It was stated or at least implied that the service used algorithms to transcribe voice messages to emails but it turns out the company used its startup funds to hire a bunch of Indians to do it. I got the same vibe with Wendy's "AI" ordering system that was demoed in the last week or so. The delays are long enough for a human to process the order in the background.
Let’s not forget about the Cruise Robotaxis in San Francisco which turned out to be remotely controlled far more frequently than we were led to believe.
I'd trust them more if they were human driven. A human wouldn't have dragged a pedestrian trapped under the car 20 feet as it pulled over after a collision in that 2 Oct 2023 accident.
I remember the Aloe Vera craze back in the 2000s. Aloe Vera soap, cream, make up, infused this that and the other… you know when they’re really taking the piss when you go to the local petrol station and see: Aloe Vera car wash for a gentler wash for your car. 😂😂😂 King Kong sized bs
Totally noticing the demise of all the search engines usefulness, can't find anything on Google now. Surely an opening for an engine which doesn't feed you crap. The ai answers you get and the ai generated web pages are easily identified by their flimsy content.
@@ciaranirvine they sell your data left and right, duckduckgo is not really a good search engine, they also ideologically police the results to rid it of "propaganda"
10:35 This is something that has been driving me crazy: companies asserting that they aren't storing training data because it's in the model and not directly in a database is maddening. As a software developer, if I hardcoded news articles into my application, that data is still being stored even if it's not in a file or database dedicated to it, and it's still a copyright violation if I then distribute that application. Storing data in an AI model after training, as mathematical patterns, is no different than directly storing that data in a database.
Data scientist here. Actually, large language models are trained by tuning weights. The data is not stored; rather, the model learns to give similar answers. So it's a bit more complicated. But what this guy showed was Bing Copilot, which indeed searches through websites and analyses them. The source should even be mentioned below the answer. This video is misleading.
@@ditschiu Staff developer who worked for IBM Watson Health Imaging for 3 years here. "Learning" is a form of storing data: if information can be recalled in part or in whole, that information has been stored, therefore those systems are storing data. It doesn't matter whether that data is stored as direct copies or in the same format; once the machine has "learned" that information and it has been incorporated into the model, that information is now stored in the resultant application after training. From a legal standpoint, traditional software that contains or uses any unlicensed intellectual property is subject to copyright violation. It's insane that you young kids think that just because these AI applications are not storing the data in the same format, or are only storing fragments of it, they're not "storing" data. In order for an application to create or perform any action, it must have the patterns for that activity stored in its programming. If you make a machine that "learns" and then "creates" similar or near-identical artworks, it fundamentally must be at least storing mathematical patterns that can be used to recreate that information (which for classical software is a copyright violation, regardless of whether your software is storing pixels, part of a binary file, or the mathematical representation of some artwork). For a data scientist you sure don't understand the nature of information; this "machine 'learning' isn't copying or reconstructing information in part or in whole and then storing it in the application" narrative is so annoying and intentionally misleading, and it lets corporations steal data with impunity.
@@ditschiu This is such an annoying statement. There is a reason that these up-and-coming AI "startups" have been settling lawsuits out of court with news organizations and creating special paid contracts for the use of news media when training and reinforcing these LLMs, and it's not out of a sense of charity. It's because they know that a blindfolded, half-decent lawyer would be able to win a case against these AI companies in literally any copyright suit where the application was trained on copyrighted data. There is a reason that we didn't get a pop-music AI spawning out of all of the recent "advancements" in AI, and it's the MPAA: copying a single beat from an MPAA-owned song in your hit new single can lead to a lawsuit, so imagine how hard a company that made a music bot trained on modern copyrighted music would get hit. "Learning" is a term that literally means storing information for later use; whenever some dumbass 1st-year university student goes "nah uh, these AI applications are learning information, not storing information" I get so annoyed, because it just displays that they have no fundamental understanding of the nature of information.
@@BlueScreenCorp First of all, I'm not defending these corporations using the data of others. I'm working at a company which is very careful with customer data; we can't just train our models, because of exactly these issues. What I mean is that it is not the same as storing data in a database. That's why it hallucinates sometimes. I really don't want to argue about this; that's for the legal people. My point was about the example in the video, which shows the exact words of that article. It's because Copilot is literally going through these websites and summarising or quoting them. If you ask OpenAI's ChatGPT it should give a different answer. The exact words are not stored there in the "AI"; the knowledge is.
As someone working in tech, I am glad to see you call out this BS hype machine. Yes, AI will have an impact on society. But right now a lot of it is based purely on tech-company higher-ups' fear of being left out, pushing any garbage with the word AI attached to it.
I work in tech as well. I know AI will take over eventually, but right now it's still just in its basic form and not necessarily generative or useful. Even OpenAI / Copilot isn't useful enough for me to spend $20/month on for coding purposes. I'm constantly looking to see what tech becomes available so that I can actually use it and maybe even create my own business with it. But we're just not there yet, though I envision it will be soon.
@@Lolatyou332 I agree, and would add that we should develop some ideas about what “there” means. It must be the case that part of all the hype right now has to do with the fact there are no broadly understood expectations or standards about what AI should be capable of.
Love this! I was at an insurance software conference in the US in 2002. One company was advertising automated application / claims forms scanning technology. You uploaded the scanned document to an API and later on you could query the API to get the text version. I looked at the rep, asked how that was possible (remember this was over 20 years ago) and told him I thought they just had a bank of people who typed out the content. The look on his face told me I'd got it right! The interesting point is that this service is now genuinely available to everyone, so I wouldn't totally diss AGI happening at some point.
OCR was a thing in the '90s, even recognition of handwriting and conversion to block letters, and this on devices with a fraction of the processing power of today's devices.
There are some legit use cases for LLMs, one of them being "regurgitate what you read and give me the abridged version", along with generating things like schemas, POJOs, and table creation scripts based on JSON data, for example.
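The "table creation script from JSON" use case above is also doable with plain deterministic code; a minimal sketch (a hypothetical illustration, not any particular tool's output) that maps a sample JSON record's field types to SQL column types:

```python
import json

# Hypothetical sketch: derive a CREATE TABLE statement from one sample
# JSON record by mapping Python value types to SQL column types.
SQL_TYPES = {str: "TEXT", bool: "BOOLEAN", int: "INTEGER", float: "REAL"}

def json_to_create_table(table_name, sample_json):
    record = json.loads(sample_json)
    columns = []
    for key, value in record.items():
        # Unknown types fall back to TEXT.
        sql_type = SQL_TYPES.get(type(value), "TEXT")
        columns.append(f"    {key} {sql_type}")
    return f"CREATE TABLE {table_name} (\n" + ",\n".join(columns) + "\n);"

sample = '{"id": 1, "name": "widget", "price": 9.99, "in_stock": true}'
print(json_to_create_table("products", sample))
```

An LLM can do the same from a prose description, but for a fixed, well-defined mapping like this the deterministic version never hallucinates a column.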
The issue is the hypers and the doomer-baiters, both after those $$$. One side is overhyping and overstating the capabilities for investment, whilst the other is farming outrage clicks for $. Meanwhile, in reality, it's just a tool with many use cases that were difficult and time-consuming to deal with previously.
"Summarizing" was one of his main points. The problem is that summaries are just stealing copyrighted data from legitimate sites, and only slightly modifying it.
@@6AxisSage Agreed. I feel that the hypers are overhyping it way too much, but the doomers act like behind every LLM task there are 1,000 low-skilled workers, or like AI hallucinates everything and we can't do anything about it.
3 groups are talking about AGI, and I have been saying this since November 2022: 1. CEOs, for the investment; 2. Content creators, for the clicks; 3. Distracted common folks, for the adrenaline. Because no 9-to-5 AI specialist believes AGI is achievable within the next 300 years.
This is correct on some fundamental levels (such as the few examples given in the first half) but so wrong on so many other levels (such as the claim that AI is just stealing content and does nothing else). If the latter were true, it wouldn't be able to solve novel math questions or beat humans in cognitive reading tests. Check out the latest "AI Index" report from Stanford University, for instance. We surely have something in our hands that is way beyond any technological advancement humans have ever accomplished in their 300,000-year history. Also, the telltale sign of exaggerated AI criticism is the fact that all videos in this vein tend to say "Yes, AI will of course improve our lives and it's definitely here to stay." If AI were all bogus currently, these creators wouldn't be able to claim that this confidently. So what they're doing is cherrypicking some extreme examples (which, to be honest, I completely agree is REALLY annoying, since what these tech bros do doesn't serve any meaningful purpose other than keeping the excitement up, making some quick cash, and then leaving a bunch of people feeling tricked) and painting the current AI wave with the broadest possible brush. GPT-4 is DEFINITELY something humans have never seen before (and if what tech leaders say is even 1% true, GPT-4 will be just a toy model very soon). And a single example of asking Copilot about a VERY GENERIC thing like a "Las Vegas trip" is definitely not enough to show its capabilities or weaknesses. There are a bunch of benchmarks for this (while each of these benchmarks may not be perfect, when used in conjunction they tell the real story of how AI is performing): GPT-4, LLaMA 3, Claude Opus, etc. If you actually add these to your workflow and interact with them for more complicated tasks, you will soon find out that they are definitely not mindlessly stealing website content (or at least, that is definitely not their main mode of operation 99.9% of the time).
You will see that they are not regurgitating random samples of the web like this video claims for 13 minutes of a 14-minute-long video. Creators should do better than merely going against a "mainstream"; instead, try to educate the public on the truths AND falsehoods of the revolution we're in. Yes, the current AI boom can be likened to the Internet bubble of the 2000s (in terms of the "feel" of it). Then let me ask you this: What exactly happened to the Internet in the next, say, mere 25 years? It's like THE LIFE right now for so many people and businesses! We can't even imagine a life without it! It's everywhere, and it created so much more value than what disappeared in its "bubble" phase. And I can guarantee you that a thing that can create its own designs cannot be compared to a thing that just connects multiple computers across the world to one another. All in all, let's not get ahead of ourselves when criticising the current hype. Those that actually lie will be eliminated, for sure! But it would be childish and plainly wrong to present this as "the whole AI boom". If anything, one should focus the criticism on the ongoing culture in Silicon Valley instead of the tech that actual scientists and researchers have been building inside academia and outside of it (like... in Silicon Valley) for so many decades now.
I have checked, and your assessment is wrong. You apparently didn't understand the Stanford report: the computers using supposed AI systems performed well on normal computer tasks such as grouping and classification by similar traits, but were much worse than humans and old computer systems at reasoning, complex math, and other tasks. Please, everyone, search "Stanford AI Index 2024" and read it for yourself. It is obvious from the report that AI is not real AI yet. The person who made the comment (evrimagaci) can't read statistics very well.
So, this would definitely seem to imply that Amazon's "predictive recommendations" on ads or the marketplace was really NOT the result of an algorithm AT ALL and it was our phones eavesdropping on us the entire time, EXACTLY LIKE WE ALWAYS THOUGHT.
@@macmcleod1188 , if you use Android, there's a way to access the developer tools to disable internal sensors, which includes the microphone setting used for listening.
Fucking admire you, Sasha. Simply wonderful. An excellent video denouncing big-corp abuse; you should get huge credit for this.
This spurious AI branding began in China where they have been doing it for close to two years. The most hilarious example of imposter AI was at a convention in either Shenzhen or Shanghai where a dubious Chinese tech company had a wonky AI concierge on a large display screen and the AI voice turned out to be a Taiwanese woman with a microphone in a hidden booth behind the display.
1:20 The term AI is not an academic term or a protected business term, and it has no true meaning. AI is, and will continue to be, solely a hollow marketing term. What I have seen is that MANY companies have just started to replace the words "software" and "smart" with "AI". It's just straight nonsense.
Hi Sasha, I'm a big fan of your videos and I agree with many of your viewpoints. However, I think there might be a slight misunderstanding regarding how the Bing AI works. This AI is allowed to search the internet for references on the fly, including your webpage, which it references. When I tried it, your webpage was the first reference linked to. During the training phase, the AI model doesn't learn or memorize the content of your (and every) webpage verbatim; it simply can't store all that information in its parameters.
Yup. Basically a fake claim about people making fake claims. By the way, I am surprised Elon Musk was not mentioned: 1· He's one of the most guilty people out there on the faking front. 2· It would have done wonders for the channel vis-à-vis the algorithm (even more than mentioning AI).
I had a past client which we were assisting with due diligence on a company that had an AI chatbot. We recommended not to buy it because it looked like vaporware... The client ended up buying the startup anyway. It turns out they had a team of 100 in the Philippines answering as the AI chatbot. It took the client a year to finally figure out why there were large payments to the Philippines, but it def shows this is happening at companies small to large, all continuing a "Wizard of Oz" veil with tech. Don't get me started on tech debt and how many companies are on the brink of cyberattacks and catastrophic problems as they bandage shitty code.
I started researching into types of AI for a writing project and realized that none of it is even close to true AI. In fact, the big companies are hurting future true AI development for a perceived worth that will burst in a year or two. It's crazy watching this.
ChatGPT isn't an AI. It is a well-trained bot. It doesn't learn from my communication with it; it keeps giving me wrong advice that it was already told was wrong previously.
The Bing AI is supposed to do a search and summarize what it found; that is its stated goal. It's not at all weird that it did that, nor is it some hidden thing, nor is it presented as if it's coming up with brand-new stuff. The whole point of it, and what's special about it compared to other chatbots, is that it's integrated with the search engine.
Great original content, man! I can see you did your homework. By the way: the technique Google is using for pulling internet content into their chatbot's responses is most probably RAG (Retrieval-Augmented Generation).
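For readers unfamiliar with the term: in RAG, documents are fetched at query time and stuffed into the model's prompt, so quoted website text comes from retrieval, not from the trained weights. A toy sketch of the idea (the word-overlap retriever and all names here are my own illustration, not Google's or Microsoft's actual pipeline):

```python
# Toy RAG sketch: rank documents by word overlap with the query,
# then build a prompt that quotes the top matches. A real system would
# use embeddings and a live web index; this is only the shape of it.
def retrieve(query, documents, top_k=2):
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query, documents):
    context = "\n---\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Las Vegas trip tips: book shows early and walk the Strip at night.",
    "Sourdough needs a mature starter and a long cold proof.",
]
print(build_prompt("tips for a Las Vegas trip", docs))
```

The point of the sketch: whatever ends up in `context` is verbatim source text, which is why a RAG answer can echo a webpage word for word even though the model's weights never memorized it.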
It’s interesting to watch things like this, because I used to do this for work. It’s not all folks in India; it’s folks from all over the world, and they’re generally not employed traditionally. Anybody can go out and do it (although it’s not for the faint of heart: it’s incredibly boring, underpaid, and you’re treated like a cog in a machine). MTurk is a big example.
When I shop at Decathlon, I put down my basket, it scans it and accurately provides the total cost, I scan my credit card, and I am done. So why does Amazon not have that?
Very interesting perspective. I love how you identified individual cases on google bard/bing chat to show us how it is directly taking information from websites. Could you maybe do an aggregate on more cases and the questions that you've asked so that we know it isn't outlier cases instead of the majority?
Wait, they're supposed to take information from websites; they're search engines. How useless would a search engine be if it didn't return information from the stuff you search for? 😂
So basically all of these companies are doing exactly what they did at Theranos but none of the CEOs at those companies are going to be held criminally liable for fraud.
The Reddit thing started 1-2 years ago because people noticed that asking Google a question and putting "reddit" at the end would give them the answer they were looking for, instead of just random Google results... so that started a feedback loop that raised Reddit in the eyes of the Google algorithm, pushing Reddit results up.
This whole "AI" thing reminds me of a video on YouTube called _Dueling Carls, a "Talking Carl" Scream Fight_. This is effectively what "AI" is in a nutshell.
I kind of prefer the way the Internet was before it was possible to earn money on websites. Google ad services have been an absolute blight upon the digital landscape.
I worked at a mobile app development company that once promoted an in-app chatbot that was actually just real people 😂 It was a one-off for just a small event, but yeah, it happens a lot.
Not sure this is the full story. Amazon was still using the camera system to detect items, with the humans checking and looking at items the software couldn't detect or figure out. Sounds like reasonable beta testing to me.
Yesterday at work someone got a local government document with a code on it that none of us had seen before. Now, those codes are always on those documents, and usually there's a summary of what they mean as well, which gives us an idea of how to classify it. (I swear this is relevant to the video.) This one didn't have a summary, so we looked it up. The first result and Bing's AI answer were both very obviously wrong in the same way. The summary they gave was for one of the most common codes, possibly THE most common one. What I think happened is that the AI (I suspect the first result was also AI-generated) couldn't find an answer to what this very obscure code meant, and instead of answering "I don't know" it returned the most common answer to similar questions. In this case we knew that answer was wrong, but how often does it do this with questions it can't answer, and how much trouble can those incorrect answers cause people who don't know better?
I don't know about anyone else, but it seems to me like main takeaways here are that 1) a lot of people who have the funds to become major investors in these companies actually aren't very smart (or, at minimum, are very easily led), and 2) the people running these companies have no problem taking advantage of this while pumping out products that range in quality from mediocre to a complete fraud, in order to take advantage of these gullible people with money- and any consumer stupid enough to trust the company's products.
To your first point, I don't think you realize the end goal of what these investors want. They simply want to make money. Even if all the investors involved know it's all bullshit, as long as the bullshit they've invested in shows stock increases, they get paid and they're happy.
It's not stupid individual investors I'm worried about; it's stupid corporate investors... your pension fund is being wasted away right now on some AI scam because some other stupid "AI" is buying whatever stocks are moving.
It's a classic pump and dump. Get in early, hype, diminish holdings on way up, when most retail investors and index funds are in and the exponential curve flattens, short the stock and pop the bubble. Profit on the way down as well.
Incredibly based. Only the lowest fake email jobs will see disruption from this. Making up random responses from a semantic search doesn't constitute meaningful thought, planning, or anything someone above room temp intelligence should be worried about.
My friend works in digital technology; he was a division head for products. Then he decided to get a job at another company because of the AI role that was offered. He turned up and on his first day realised that their AI division was some bloke sat by himself below the ground floor, with no experience and no clue what to do with AI. It was all a con! And this was 3 years ago.
I tuned in for the "OpenAI Spring Update" scheduled to start at 12:00PM Eastern... but they were running late. What should appear in my sidebar of recommended videos - this one, which is totally sinking their battleship. I couldn't agree with you more Sasha, and I'm glad more creators are pointing out these issues. HILARIOUS use of the custom website.
I'm a programmer. My company's earnings call says we're investing in AI more than our competitors, and the investors went wild. Think about that. INVESTORS are clapping that we are SPENDING the most money on AI. Not results. Not profits. Not customer satisfaction. Handing money to scammers is what investors are paying for. My company also fired tons of people because mumble mumble AI. And guess what? We are all up to our necks in unfinished work. The same amount of work still needs to be done. There are just fewer people to do it. And no, AI isn't making us faster. These companies will be hiring like mad soon.
Can I use this as an anonymous quote?
The metrics being used to measure success in businesses can be so whack these days lol
@@allsmiles3281 Cease and Desist immediately! WE WILL sue you, and the people you represent? We will probably sue them too! You have been warned in the tool shed gets rusty, likely because of humidity, so it's helpful to create adequate ventilation and used insulation to prevent this, California turnpike fuel rest stop flower pot. -- AI Corporate Helper Bot 2000
Heh, more like OUTSOURCING more soon.
Shutting down soon will be the result for a lot unfortunately
Imagine your Job is taken from you by AI but it is not really AI but 1000 underpaid people in India 💀💀 how ironic.
This is just outsourcing but with extra steps
lmao
In all fairness to the underpaid people of India, the pay they get is probably reasonable for them. I’m not saying they are getting paid well or not, just that, from talking with someone who hires people in the Philippines, while their pay is substantially lower than someone in the US or UK or most western countries, that pay is solid for them.
not every tech company is Amazon....
ChatGPT isn't a bunch of underpaid Indians in a warehouse basement googling people's queries and sending a reply back
@@angrygreek1985 Until it's discovered that, in the training phase, the results were reviewed by Indians... woopsies
*AI = Associates in India*
Only for Amazon basket?
"Ai" is "love" in Japanese. "Iya" is "no". There are songs that have "aiiyaiyaiya". It sounds like nonsense but it's "Love, no no no" ;)
Fantastic
curry fueled AI wow...
Introducing the PSS large language model!
PHUL
SAPPORT
SAAR
❌️ Artificial Intelligence
✅️ Actually Indians
😂
ASI Actually Super Indians.
you mean natural stupidity?
I had a job up until recently where the work was first pushed offshore to India and then, since the employer didn't want to pay them, they introduced "AI" to do the work. Surprise, surprise within a couple of weeks the folks in India were back to doing 99% of the work. But honestly you could hire some fifth graders to do everything and they'd have better results than the automation.
From Croatia mainly
Hey we never said AI was "artificial intelligence", we use our own proprietary "Attentive Indians" technology.
Best comment ever 😂
I mean, AI is a real thing; it exists. This video is just misinformed and banking on AI hate hype. It's a real and, more importantly, extremely powerful revolutionary technology that will change the world.
:DD
@@TheManinBlack9054 He said that himself at the end of the video.
@@TheManinBlack9054 You don't know what you're talking about. Even the real thing that everyone is calling AI isn't AI, but on top of that it's been proven that most of these companies aren't even doing that.
I worked as a data scientist at an "AI" startup with big clients, and I can tell you that our models would just give suggestions to our analysts (who wouldn't use them most of the time) and wouldn't make any actual decisions.
Now I am working at a company that wants me to get "AI" to do something that can be done by regular software.
It's all just marketing.
At least you got a job out of it.
Heh, which is why I, as a recent data science grad, am just doing my best to master the art of cleaning awful data from medicine, because that's, I think, more important than hopping on a hype train at the moment.
Like the nanotechnology everything craze it will blow over when they figure out a new scam.
@@stuartcarter4139 I agree with you, for now I am gaining experience in order to do something meaningful in the future
My Logitech mouse broke while it was in warranty, so I made a claim on it.
I had a choice to talk to an agent or chat with an agent. I chose to chat with an agent because I did not feel like listening to someone else's voice that day.
Of course, like all modern companies, they employ those prerecorded voice menus: "choose A for this crap and choose C for that crap".
And so it began with the agent (in the beginning I was not sure if it was an AI or a human), until it asked me to describe my problem.
I had prepared everything a few hours before. I pasted my 98 words (five sentences) into the chatbot, only to get back a reply (more like a demand):
"Could you rephrase that?"
OMG, I was so shocked and offended that nobody could understand my standard English.
I said, "Do you not understand English?"
"Could you rephrase that?"
NO!
"Could you rephrase that?"
So I had to rephrase that to "My mouse is broken" lol
And then an actual human came online, with a name I could not pronounce.
I was pretty frustrated because I can see the potential with AI, BUT my interaction with ChatGPT and some others is a disaster.
They are unable to sort words from sentences and put them into a list in alphabetical order, because the dummy AI keeps adding new random words to the list.
Oh, I tried the same sentence word list with Gemini. OMF! I asked why it had added "Bread" to the list when it was not in any sentence. Gemini started to "argue" with me that it WAS in the sentences. "IT is TRUE," it said.
Eventually the dummy realized that the word "Bread" was not in the sentences, corrected itself, and rewrote the list. Yeahhhhh
Only to realize when I rechecked the list that it had added more random words to the list.
I told it to EF OOF, and it told me that it could not continue the conversation because its feelings were hurt.
"Oh, you are woke," I said. It just repeated its dumb self.
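For what it's worth, the task this commenter fought the chatbots over, pulling the words out of sentences and alphabetizing them, is a few lines of ordinary deterministic code that cannot hallucinate an extra "Bread":

```python
import re

def words_alphabetized(sentences):
    # Extract words (letters and apostrophes), deduplicate
    # case-insensitively, and return them sorted. Nothing outside
    # the input sentences can ever appear in the output.
    words = set()
    for sentence in sentences:
        words.update(w.lower() for w in re.findall(r"[A-Za-z']+", sentence))
    return sorted(words)

print(words_alphabetized(["The quick brown fox.", "The lazy dog sleeps."]))
# ['brown', 'dog', 'fox', 'lazy', 'quick', 'sleeps', 'the']
```

A sampling-based language model regenerates the list token by token, which is exactly where stray words can slip in; a set-and-sort program has no mechanism for that failure.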
AI is the new website bubble. Remember when anyone who had a website automatically meant they increased their share price?
"Long Blockchain Corp, formerly Long Island Iced Tea Corp, ..." they changed nothing except their name and the stock soared
@@twinai The markets are mainly about fads. It takes a while before people start asking questions. To this day they're more about hype than they are about facts still.
@@madclancrew I know, it's not about your opinion on the value of a company, it's about what you think others value it at (or rather what you think they think yet another group values it at). At this point it has become a giant casino
@@madclancrew Casino Economy, likely with AI assisted trading ...
Just like NFTs a couple years ago
You take a techy word that’s hard to understand what it actually does but easy to pretend it can do anything and people will just blindly jump on it
I used to work for a company that used "AI" to convert photos of houses into blueprints for home renovation and insurance purposes. But by "AI" they actually meant "a bunch of underpaid people in Ukraine." Then the war broke out, and hoooo boy did that mess things up.
That's Crazy !
It's a pump and dump. And I know this because I got an email from Samsung that let me know that their new Vacuum now "comes with A.I."
This will end the same way as the NFT pump and dump
💯Unfortunately the market can remain irrational, longer than one can remain liquid. Only the bubble masters know.
@@iRelevant.47.system.boycott I've always loved that quote and have used it often.
@@iRelevant.47.system.boycott Heard that before. I believe it.
Overhype that will end with a crash and regulations.
You are seriously equating the NFT nonsense from 2021 with the progression of LLMs over the past two years? Silly.
" @AuthorJMac: You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes. "
We need AI to target the healthcare industry, as well as self-driving.
Right now healthcare is abnormally expensive; they charge you thousands just for running a blood test through a machine.
It's a joke. Most people make like $20-40/hour, and many doctors are glorified dictionaries who could be replaced by a basic general-purpose AI and a couple of low-risk machines.
Laundry is already automated with washing machines and dryers and washing dishes is automated with dishwashers. Now if we could get our clothes folded and our dishes put away automatically that would be revolutionary
@@Lolatyou332 the problem with your healthcare industry isn't AI or not, it's... America. Every other civilised nation has free healthcare.
@@BittermanAndyfree healthcare is garbage
100% agree. Health providers are ripping people off. AI has the potential to be cheaper and more effective for analysing conditions. It also doesn't get tired or lazy looking at scans. It won't replace doctors or consultants but will be far better at identifying conditions early.
This is the REAL danger of AI in my opinion. Companies or governments duping people into thinking they have some amazing algorithm to make decisions without bias when in reality it's just some people behind the scenes picking the winners and losers.
It's not the real danger of AI. The real danger of AI is human extinction. What you described is not even a danger; it's barely a hiccup.
Just have your own hardware and your own LLM if you’re that scared
@@g4l4h4d1 Ahh, very simplistic, if the AI is at a Skynet level of interconnectivity. There have been so many TV shows and movies that show what could happen if it really does exist; I am surprised they just don't keep it all in the Pentagon black projects, if at all.
You didn't describe AI. You described news media.
@@TheManinBlack9054 It's not even a hiccup; it's something that happened before AI and will happen with AI.
I hadn't heard about Amazon's walk out technology just being a bunch of people in India watching the CCTV. That's hilarious.
Yeah. It’s one of the best things I’ve heard in a long time!
@@SashaYanshin That is not the case. The Indian workers were just labelling the CCTV videos for AI training.
@@jc1170 Which is why they immediately shut the whole thing down as soon as this news came out. Obviously nothing to see there.
@@SashaYanshin No, if you actually researched it, the workers were being used for edge cases where the AI didn't know what it was seeing and had doubts; it's a human backup system. It's not that the entire system was run by humans. Don't spread misinformation, you will be reported. Research before saying stuff.
@@TheManinBlack9054 Maybe try listening to the video because I literally say exactly that and quote Google as well.
What's really scary is they're letting doctors etc. use this stuff... I've done some AI data training at work and it's impossible to train these models to stop "hallucinating".
i work in research as an AI engineer in computer vision, and some of our projects include medical data analysis. I think it's important to note that there are steps in a positive direction there, but there are some key distinctions:
firstly, the models are only meant as a tool to help the doctor make more informed decisions. And secondly, almost all of the models are not generative, which means they don't create data, but summarize lots of data points into a few easily understandable ones.
We also actively try to avoid the word AI and use machine learning instead, to minimize these hype-fueled pipe dreams
@funnyBecauseItsME
machine learning ftw
As someone who's close to getting a bachelor's in mathematics with a minor in stats and comp sci, what would you suggest the next steps be after graduation? Pursue a master's or go straight into the workforce?
I tried to use GPT for programming and it blew my mind when it began making up things that didn't exist
@@ravensharpless lol I will ask it questions about how to do something and it will give great seemingly well thought out instructions and then you discover some of the settings it's telling you to change don't even exist.
@@ravensharpless Maybe you need to learn how to use it properly. Many developers, including my own, report an order of magnitude increase in efficiency.
Took me a while to realize that these "AI" are just a rebranded "I'm feeling lucky" from Google search.
well, Google's search engine is itself a basic AI; the AI mentioned here is more the "i think, listen to me, i hope" type
@@betag24cn People have been made to conflate AI with AGI, which has still not proven itself 100% viable.
Yes, computers have been doing sophisticated things since about 1970. People WANT to believe in AGI, for a variety of reasons. Corporates want to replace their useless eaters, nerds want to be friends with Commander Data, incels want a f- buddy, and everyone wants the gleeful satisfaction of watching everyone else have to "learn to prompt," which, if it *actually* worked well, no one would have to learn much about.
The hucksters are having a field day. Maybe we deserve it.
100%
A lot of it basically is. It's also a bit like the text prediction your phone keyboard has: it's calculating the most likely response that fits your input.
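The keyboard analogy above is pretty much literal. A toy version of "predict the most likely next word" can be built from nothing more than a word-pair count table; the corpus and names here are invented for illustration, and real LLMs use vastly more context than one previous word:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" most often in this corpus
```

The same "highest-probability continuation" idea, scaled up enormously and conditioned on the whole preceding text, is what the phone keyboard and the chatbots are doing.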
Asking chatgpt questions is a lot better than visiting the first page on google. It gives you a quick summary and tells you which webpage or pages it pulled it from so you can double check if you want. This is a really valuable time saver. And the summaries are always on topic and usually accurate. Much better than trying to find the specific question to type in google to find a specific stack overflow post.
Search engines have gone way down in usefulness in my 28 years of Internet use. Behind the scenes they reword your search terms to words that will benefit their advertising business. If Reddit has the info you need it is often the better option to find the info.
i can't even find anything better than Google though, and a lot of websites are garbage now
@@MrAzureJames DuckDuckGo isn't bad.
@@saliferousstudios it's not bad but also not good imo
@@MrAzureJames Startpage maybe
@@saliferousstudios Still a far cry from the Google of 2010. That was the last year I can recall where Google would show what I was looking for.
Seems like those slimey creeps found out it's a bad idea. 😂
The amount of times you mentioned AI in this video should get you massive traffic and algorithm favorability. Well played!
LMAO. I somehow doubt it!
The algorithm is an illusion? But this video was at the top of my suggested list.
Who could have possibly seen this coming? Sasha. Sasha could see it coming.
It did. The video was recommended to me. I never searched the word AI or watched this channel before
@@SashaYanshin Where do we buy Sasha Yanshin stocks?
When I explain that all I do to create an "AI model" is literally write a simple program in R-Studio and hook it up to an Excel spreadsheet, it pretty much shatters the illusion for anyone who asks me.
These models are not even tangentially useful for most things.
Literally the only thing keeping it relevant is that people are too lazy to check what ai really is
@@z1DEv_ag isn't it extremely unreliable since it hallucinates facts?
@@ZoranRavicTech I guess you just fact-check it, but then you're just adding more to your job
@@z1DEv_ag Which scientific research? I'd like to investigate that idea. What's an example piece of research I could look up to check bias?
Isn't it your job as a researcher to minimize bias by having a variety of properly validated sources? Right now AI isn't reliable because it can just make stuff up. If you find sources too biased while GPT is more objective, then where did it get the data to create its more objective point of view from?
This is not even new technology 🤦🏻♀️ Japan has had auto-grocery stores where you don't need to check out for years. Faking it is so embarrassing
They attach rfid tags to everything in the store, if anyone is curious
@@user-te5po4bu8o - Thanks.
There's been talk of retail stores in the U.S. and Europe using RFID tech for many years, to do automatic checkout, but I haven't seen it take off. I'd hear about a store here or there using it, as a test case. My guess re. what's holding it back is there's no consensus from product manufacturers about using it. I imagine retailers don't want to take on the task of adding RFID to all of their inventory.
How many of you hate the automated voice redirection?
I work for a bank... they removed people and set up this nonsense.
Now we take 2 weeks extra to resolve the issue
It's not a bug, it's a feature....
Dude you don’t get it, it’s AI, it’s the future.
And then there was idiocracy 😵💫
I've yet to come across one single company that I couldn't hit 0 or say "associate" and get a human.
@guyincognito😂😂😂😂😂😂5663
i have always suspected "we don't know what's happening in the black box" actually meant "please don't look in the black box, it has evidence that i'm plagiarizing"
Go ahead, look inside. No one is stopping you.
You can even put your own stuff in the black box if you want
I don't think that's the case most of the time. The black box analogy is that the combinations, interactions, and paths through the weights and biases in the network are too numerous to know "why" it gave that output.
If we take image recognition, you and I can describe why we said it was an image of a cat, but with the AI it's just millions of numbers being added and multiplied such that, by the end of the feed-forward pass, the "cat" neuron lights up the most
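That "millions of numbers being multiplied until the cat neuron lights up the most" description can be shown at toy scale. Everything here (the weights, features, and labels) is invented for illustration; real networks just have vastly more of the same arithmetic, plus nonlinear layers in between:

```python
import math

# A made-up 3-feature, 2-class classifier ("cat" vs "dog").
weights = [[0.9, -0.3, 0.5],    # weights feeding the "cat" output neuron
           [-0.4, 0.8, -0.2]]   # weights feeding the "dog" output neuron
labels = ["cat", "dog"]

def forward(features):
    # Each output neuron is just a weighted sum of the inputs...
    logits = [sum(w * x for w, x in zip(row, features)) for row in weights]
    # ...and softmax turns those sums into probabilities.
    exps = [math.exp(z) for z in logits]
    probs = [e / sum(exps) for e in exps]
    return labels[probs.index(max(probs))], probs

label, probs = forward([1.0, 0.2, 0.7])
print(label)  # the "cat" neuron lit up the most for this input
```

The "black box" point is that with millions of such weights, no single one explains the answer; the decision is spread across all of them at once.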
If you fold a cross, you will actually get a black box. When you graduate ( gradually indoctrinate) you wear a black box on your head. The kaaba, one of Islams holy sites, is a place where you pray to a black cube. We are mind controlled by media, which was once shown in a black box known as TV.
yeah, there are definite cases where you just don't know exactly what it's doing. The real lesson is to be careful around hype; disingenuous people and companies will always try to catch a ride and make a profit, at the cost of others.
How isn't it illegal to mislead investors like this?
There's leeway in the law: as far as the SEC is concerned, as long as CEOs 'try' to make the shareholders money without outright accounting fraud, it's legal. The technology is half-baked, but it exists, compared to the scam Elizabeth Holmes was running, where the underlying tech didn't even work as claimed, which drifted it into fraud territory.
It is. When everybody is profiting this much money nobody is gonna hold you accountable though. Laws are only as real as their enforcement.
Yes and: every company profiting from the AI hype is also a lobbyist, so politicians and government also profit from it. In a year or two there will be another crash and basic commodities like food will get (even) more expensive.
One word: lobbying
Laws are like spiderwebs through which small flies get caught but big ones do not (paraphrase of something Balzac said)
AI is the new gold rush.
And Nvidia is the one selling the pickaxes.
The problem is that all these companies who loudly talk about using AI in their processes actually spend money on ChatGPT licences and AI-enabled servers just to boost their share prices, which ironically also boosts the share prices of companies like Nvidia, which triggers a FOMO cycle of sorts
The Reddit traffic boom is probably robots scraping the site. They used to have pretty permissive API access but ended that around a year ago; I think it was close to the date of the traffic boom you noted.
I added "AI" to my breakfast café's logo and we started having more clients; the hype is unreal and ridiculous. It started as a joke, but people actually ask us how we are implementing AI in our kitchen.
Lol. And what do you tell them?
Bro, you gotta tell them you have a robot chef.
I let the domain name of my "fake" wrestling company (ok, ok, it is REAL to me) lapse, so I just bought the Dot AI domain name.
😂 hipsters gonna ask for gluten free AI prepped coffee
You do have a robot chef in the kitchen, his name is Chef Mike.
Sasha just discovered how the entire software industry's been operating for the last 30 years.
How long will it take until he discovers that this is also how the entire economy's been operating for longer?
They show us pictures of fast food burgers with hidden spacers to make it look bigger and more appetizing. They sell us "dishwashers" that still require you do most of the work of washing a dish. And they sell you fruit juice as if it's supposed to be as healthy as fruit, when really it has so much sugar it might as well be pop.
The reality of a product should not be judged by an idiot who doesn't understand how metaphors and exaggeration are constantly used to appeal to your emotions rather than your logic. Because nobody buys a product or hates a product based on their logic, but based on their emotions. An advertisement isn't an objective, unbiased explanation of the factual specifications of the product as compared to alternatives. An advertisement is an emotional self-expression of the essence of a product to motivate you to buy it.
Which isn't to say that the emotional self-expression is necessarily wrong or bad. You can say something factually true that's still misleading, just like you can say something factually incorrect that's not misleading. Like I can say "the Earth is a sphere" and factually that's just wrong. The Earth has so many bumps, ridges, valleys, and is somewhat ovular. But the essence of my statement is correct, because it gives people the correct idea that the Earth isn't like a disc or a cube.
@@Pehz63 bro, touch grass
Good food for thought! Thanks!
And gaming, and finance... this is basically our entire economy outside of hard physical commodities, and those are often partially infected as well.
@@nevisysbryd7450 I think you're underestimating how prevalent this is. Even things like toilet paper will have all sorts of weird labels and ads about how amazing they are. The only stuff that doesn't have exaggeration is stuff that basically doesn't have any advertising at all and whose only label is a plain description of what it is (like if you're buying screws or wood at Menards).
AI has replaced the word "algorithm" as the corporate buzzword for anything that calculates anything. 99% of CEOs saying AI don’t know what AI is
*2004:* egg timer with a switch
*2024:* Egg Cooking AI
Picking what you want, walking out and relying on some automated system to identify what you took and bill you for it later sounds like one of the worst things I can imagine.
yup, it's a matter of time before you get billed for a bunch of things you never touched.
A recent, short conversation between my boss and my team: "We need to use AI" - "We already do. We just don't call it AI anymore, because it actually works."
Funny... quick work story myself. I run a bread route. Ordering, delivering, merchandising.
Orders are clearly important to know expected volume and keep waste down.
Well a couple years ago the big boss company tells us we no longer have to order. They were bringing on AI to maintain orders. Based on order history, present trends, etc.
Loooong story short... no. No, no, no. AI wheels fell off and was abandoned within two months.
It didn't recognize holidays (?), weather, and couldn't even reliably replicate last year's orders.
It was actually wild how bad it was.
@@lankyrob6369 It wasn't the software, it was whoever programmed a worse calendar app and sold it as AI.
google maps routes are a great example of good /helpful AI in practice. These are great. But true that we don't call them AI lol.
Every company on Earth has been using "AI" for decades. It's called the "fit a line" function in Excel.
@@lankyrob6369 Must be the same system used in my local grocery store. Always out of the stuff I want to buy, unless I'm there just after the morning delivery. Less waste and less sales.
Finally, someone who sees the bs. I'm a programmer and have worked in numerous industries. I've seen all the bs.
Please tell us about more bs you've seen in the industry.
What other Bs have you seen ?
Have you seen my ballz
@@sabrinelan Not OP, but companies I’ve worked at portrayed a search-engine algorithm as “AI machine learning”. Like, you’ve got a list of 10,000 individual texts sorted by emotion according to how many words match each list (negative: angry, annoyed, hate…; positive: pleased, happy, excited…). At my current company we did something similar for moderation (a list of bad words we completed manually because the original list wasn’t creative enough)
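A minimal sketch of the kind of word-list "sentiment AI" described above; the word lists and function name are hypothetical, not from any actual product:

```python
# Hypothetical keyword lists like the ones described in the comment.
NEGATIVE = {"angry", "annoyed", "hate", "terrible"}
POSITIVE = {"pleased", "happy", "excited", "great"}

def score(text):
    """Label text by which keyword list gets more hits. That's the whole 'AI'."""
    words = text.lower().split()
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(score("so happy and excited"))  # positive
```

Keyword counting like this predates the ML era by decades, which is the commenter's point: slap "AI machine learning" on it and it sells.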
My favourite rant of the year! This AI label on everything is driving me nuts! Well said about Bing stealing.
I just had to show an example where there are no questions 😂
the story is fake @@SashaYanshin
This happened in 2000 too: everything was branded "2000", even the Windows OS, "Year 2000 compliant"
We can't even define intelligence. In humans. AI is just a dumb buzzword. From the start.
@@Jordan-Ramses you think John McCarthy in the 50s chose it because it was a buzzword?
"A.I" is just applied stats. Neural networks have been around since the 1960's. They just find patterns in data, chasing an objective (such as next word prediction). LLMs will never be as good as a pocket calculator for adding numbers, even in 200 years time. They are a form of artificial narrow intelligence (not general), because their use cases are restricted to fuzzy, unstructured data, such as semantics (sequences of text), images (pixels), and audio (waves). That said, there's amazing applications to be found within those domains.
Lol. In the metal shop I work with they're "using AI" to monitor safety habits through shop cameras. This comes down to checking for people not wearing gloves, not wearing the right gloves, not wearing safety glasses, not wearing earplugs (somehow), not wearing face shields when grinding, ETC.
The problem is that AI has absolutely no sense of depth or field of perception so if it loses track of your hands at all, it assumes you're not wearing gloves. If it can't see your transparent safety glasses, it assumes you're not wearing safety glasses. If you're grinding and the sparks go in front of your face shield, it assumes you're not wearing your face shield. If you start welding, it assumes you're not using your hood. If your hood is personally bought (they all are) it assumes you're not wearing a hood. If you have a sticker on it or paint it like everyone does, it assumes you're not using one. If your leathers get covered by soot, it assumes you're not wearing leathers.
They have one guy in each building monitoring thousands of flags an hour, for security and privacy reasons, almost all of which are invalid, and the sample sizes are waaaaaay too small to use this system with anything you could even call efficiency. And they had a meeting to tell us all that they're seeing improvements in safety, which is actually untrue in real numbers.
Companies lie about everything. Literally everything. They spend $5 mil on a machine, and it somehow pays itself off in a year even though it actually puts out less product and takes more people to run. They cheap out on training people to operate and maintain the machine, so the machine crashes constantly. Every. Single. Real. Number. They give out is fabricated. I would not be surprised at all if we see a national economic crash stemming from fake production numbers and false representations of efficiency, from people just trying to keep their jobs after they notice the problems. Then, once the problem gets so bad it can't be missed, their bosses cover for them, because they also know they should have caught it and never let the problem get that bad. It's the same problem that has been fucking up the economy for decades, with every new technology overpromising returns (machines, computers, Internet communication, robots, and now AI), but it's just happening faster and faster and growing more and more out of control because 'regulation is bad' and 'auditing is a waste of resources.'
Re. the AI you describe, it's what I've been calling Artificial Stupidity.
Most "real" AI isn't even AI, it's just more complex ML. It doesn't understand anything; it's calculating probabilities. OpenAI isn't understanding your question, or even sentences, it's just calculating the most likely next word. It does that pretty well, but it has inherent issues.
that's literally pretty much all your brain did when you wrote that.
@@philsomething8313 not really, there are huge differences between how our brains work and how ML/AI works.
@@Paul-qj4dr the only big difference is the fact that you have a 'conscious discriminator' that decides whether or not to filter information.
Other than that, you think of an idea of what to say and then your brain feeds the next words to your consciousness.
The fact is, ML does this part a lot better than most people can.
@@philsomething8313 We might translate our ideas into words, but we have ideas in the first place. The latest "AI" models are mostly LLMs (Large Language Models), which don't have much of an understanding of what they are talking about. Our brains also "train" differently than AI is trained: we need much less "data"/far fewer trials.
@@Paul-qj4dr Most 'ideas' usually 'pop' into our heads, i.e. not consciously, which means they should be replicable by AIs without consciousness; we just haven't got there yet. Also, while AIs do need more data, this is most likely a result of our brains being pre-wired with a 'half-complete' neural network. Evidence of this is in LLMs' ability to do in-context ('few-shot') learning, where you feed one an example of what you want it to do and it is then able to replicate that.
The head of OpenAI was asked if they use YouTube videos in their data sets. She paused, looked really uncomfortable, and couldn't answer. She couldn't answer because they'd get sued.
Wait so do they? 😅
@@BrisaniAshley yes
It is covered under fair use and is transformative content. While they might get sued, they will be able to defend it as fair use
@Harmony-tk1nm it is literally theft
If they listed the content of the training set, there would likely be a line of lawyers ready to sue, stretching from coast to coast.
Until we see a sentient machine, completely aware of its own existence and capable of original thought, artificial intelligence doesn't exist.
I'm so sick of machine learning being labeled "AI"...
Yep. As a sci-fi fan, my expectations for AI are high.
All these LLM tools are pretty mid.
@@oompalumpus699 Mid? We're dealing with schwag here.
And we can't even consistently create human intelligence that meets your definition.
I will make it pass butter to me.
@@oompalumpus699
If you are a fan of sci-fi and not actual science, you have a bad background for discussing AI. This terrible hype comes from people thinking fantasy is within grasp. It's not; AI is applied math, and math has severe limitations.
The whole point of AI is that the software trains itself, not that it is trained by a room of people. That is a knowledge-based system, the technology for which has been around forever. They have just rebranded KBS as AI. None of the so-called AI stuff out there is true AI technology.
Thank you! I'm telling that to people all the time.
Knowledge Based Systems are a form of AI though.
It's a dystopic world when Reddit, 4chan and AI are the last bastions of truth, thought and opinion when the rest of Google SEO gives us generic surface level results from literally Buzzfeed and Kotaku.
And it's creepy that AI isn't really AI after all.
I almost assume that AI "writes" these insipid articles that show up in AI search results (and google search results). The stuff that comes up at the top of searches now doesn't even seem real.
Reddit is hardly a bastion of truth, more like a bastion of group-think that will censor you if you disagree.
True. I've been hooked on geopolitics for half a year, and suddenly had a Truman-Show moment randomly finding this vid. There actually is intelligent life in here.
@stephanieellison7834 think of it like Wall-E, in layman's terms. We'll all be living in a Buy N Large space station where all the robots cater to our whims and needs: making humanity's entertainment, making us flavored slop in a cup, driving us around in cars on tracks, while we and our family/friends/everyone else become fat, immobile, obese.
AI will do it all because it's barely regulated.
completely true. The only part I don't agree with is that this generative AI will replace every single skill in the world. The NVIDIA CEO is a snake oil salesman
I didn’t say that. I said AI in general is obviously going to be an amazing thing. But yeah. Large language models are fun and all but they are not it.
@@SashaYanshin LLMs are the precursor for real logic and reason based AGI…
I agree. Nvidia, by the looks of it, has been buying call options in its own stock to cause a massive gamma squeeze and push the stock higher. Not illegal, but it shows a lack of integrity in my opinion, if the options theory is proven.
The other thing I don't get: once Google, Amazon and Microsoft have got all their Nvidia chips, those should last a good number of years, so once they've got what they need, why would they keep buying more?
The Nvidia CEO at least has the look of a snake oil salesman.
They cannot make Windows updates stable. AI? What the hell are they talking about?
As an admin user of a CRM, I talked the client out of wasting their money on the new AI feature of said CRM. It is just a weak search engine marketed as "AI".
Yeah - everything is AI now. People love buzzwords.
@@SashaYanshin it might be the new fancy term for Search Engine Optimization. That kind of thing, but the people buying AI starter kits/secret decoder rings have NEVER HEARD of the previous buzzwords.
Bing AI is specifically designed to use Bing search results and combine them using GPT4. That’s its purpose. It’s utilizing the summarization capabilities of LLM.
I was about to respond to that as well. I saw that it said "Searching the web", and when Copilot is done it shows links to where it found the sources at the bottom. ChatGPT, Phind, Copilot: they are doing web searches behind the scenes.
Indeed, if he had scrolled down on the screenshot he took, you'd have seen the website he was quoting as AI having "stolen from" or "plagiarised" listed as the primary source of all that feedback he got to the query, but that didn't suit the narrative, so he cut that bit out.
@@Wooraah Agreed. I wrote a reply here similar to yours and it seems to be removed. What gives?!
But at the same time removing all traffic from the source site. Doesn't make it any more fair. Bing gets the traffic, the ad revenue. The sites they stole from are left without visitors.
@@blissweb It doesn’t fix things but no one should have to search for significantly more time if they don’t have to. Financial incentives should never get in the way of quality of life.
The "AI" branding was always a marketing move, the same technologies were described as machine learning for a long time before midjourney and the GPT boom
Yes, before AI it was Machine Learning before that it was heuristics ... it's all marketing bling to make the product seem more s€xy.
Sports analytics companies are another example of this. There are companies that claim to have computer vision systems that can watch sports games and record all the players' stats/actions. I've worked at one of these companies and their "computer vision AI system" was a bunch of guys who were watching the game and manually inputting all the data.
BTW there is no such thing as a true random number generator.
@@silencedogood7297 Digitally, yes, but I would argue a real-life coin flip is 'random'.
Hahaha
I was still around when Google was all about text-only ads in your search and you could directly reach your results. These days it's double YouTube ads, scam ads, deepfake ads, Google with image ads... we've sunk deep down from where we started.
yeah, Google is WORSE. It was ok around 1999 or 2000. It was REALLY good then, but it imploded. SEO and junk tried to game it and killed search. AI is the "mop up" operation.
5 years ago every CFO mentioned cloud on earnings calls. Last year AI was the buzzword du jour on earnings calls. Next year it will be robotics. Humans love a story to explain things.
Humans love lies to exploit things.
but robotics are part of AI tho.
This guy gets it. Meanwhile most executives I sell to still don’t even understand cloud 😂 much less AI
@@murphdoesbusiness can my AI live in the cloud, or if it's cheaper, can my cloud live in an AI?
But did cloud stocks go up though 🤔
Google says it was doing machine learning; I spent time in Nairobi, Kenya, which had warehouses of low-cost staff clicking dog and cat pictures.
That’s the worst part. The human rights abuses
Thats called data labeling lol.
Machine Learning requires enormous quantities of Training Data; thus warehouses full of cheap labor clicking pictures. digital sweatshops.
It’s still machine learning, they were making the training data
CAPTCHA is really just a distributed, very large scale version of what you describe. All of us are the AI.
Reminds me of when I liked to imagine for fun that ATMs actually have a little person inside, with some machine to make all the noises with
My wife worked as a search rater for a bit, if you don't mind simple repetitive work it's good pay for no qualifications and work from home
There are way creepier projects too, like one from Bing where you just get shown pictures from random people's cloud storage and have to identify whether the AI identified their friends correctly
As I said before: people have no idea what is going on behind the curtain.
Trust me, I work in Data for insurance company
insurance as big a scam as AI
Yeah insurance companies are the biggest rip offs going, especially in the UK
@@thenoodlebuddy well, the UK invented insurance as we know it today 🙂
Yup. For example, some people think that what's going on behind the curtain is 1,000 Indians are watching all of the video footage and deciding everything manually. When in reality, there is an AI that does some of the processing. It's just probably still shit at it and requires the human expertise to further train it.
People will believe anything, no matter how extreme, as long as it confirms their moralist judgment. If you believe these companies are amazing and super innovative, then you'll believe it's all 100% AI and super efficient and saving everyone tons of money. If you believe these companies are lying, snake-oil salesmen, then you'll believe that it's all 100% human-expertise and wasting money just to dupe the investors.
The reality is more complicated than what people believe, and especially what people say, especially people who are speaking to a wide audience (because they are likely speaking to generate clicks or push their moral opinion, rather than speaking to seek truth).
@@thenoodlebuddy
Insurance is Haram.
Spot on. Whether I have an online retail enquiry, am comparing insurance, doing research, troubleshooting, sorting a purchase, have a tech query, or need to sort something out, an AI chatbot has always been helpful and solved the issue - NOT. EVER!
Maybe sometimes they will. Just you wait. Being serious btw.
Agreed, 99% of the time they are useless. Some people who are too lazy to find the opening times of a store will use a chatbot; I use the chatbot function when I'm having an issue, but the chatbot is so useless that it doesn't understand when I tell it I need to speak to a real person because my order hasn't arrived and I'd like to know why.
Business misusing AI will be one of the biggest business killers I think. I now actively avoid any business that says they use AI chatbots, and I think more people should too
Well said. AI is going to lead to virtual mountains of "garbage out". Finding truth will become a real challenge.
To be fair it was difficult before ai as well
@@ZoranRavicTech Usenet was good, before MS Outpost / Google groups entered.
The tech industry has been so embarassing for so long.
They made a lot of great stuff, I blame the marketing people and their sponsors.
I remember when this happened about 20 years ago with a service named Jott. It was stated or at least implied that the service used algorithms to transcribe voice messages to emails but it turns out the company used its startup funds to hire a bunch of Indians to do it. I got the same vibe with Wendy's "AI" ordering system that was demoed in the last week or so. The delays are long enough for a human to process the order in the background.
Nice to see that some things never change
Let’s not forget about the Cruise Robotaxis in San Francisco which turned out to be remotely controlled far more frequently than we were led to believe.
I'd trust them more if they were human driven. A human wouldn't have dragged a pedestrian trapped under the car 20 feet as it pulled over after a collision in that 2 Oct 2023 accident.
Just as human drivers don't scam people with insurance scams, purposely caused accidents?
They were just particularly bad. Waymo seems more reasonable in its claims and more impressive in its capabilities.
I remember the Aloe Vera craze back in the 2000s. Aloe Vera soap, cream, make up, infused this that and the other… you know when they’re really taking the piss when you go to the local petrol station and see: Aloe Vera car wash for a gentler wash for your car. 😂😂😂 King Kong sized bs
Now I want aloe wash for my car...
@@Mishkafofer 🤣
Same thing with the acai berry. Or... "organic".
Totally noticing the demise of all the search engines' usefulness; can't find anything on Google now. Surely an opening for an engine which doesn't feed you crap. The AI answers you get and the AI-generated web pages are easily identified by their flimsy content.
I've actually started using Bing. It gets better results. Guess with all the AI it's gonna get worse too
duckduckgo isn't too bad IMO, usually pretty decent results and without all the AI hot garbage and relentless grifting
I use Yandex for a lot of searches
@@ciaranirvine they sell your data left and right. DuckDuckGo is not really a good search engine; they also ideologically police the results to rid them of "propaganda"
For technical questions I have been using CoPilot. Politics Yandex or Brave.
So basically Amazon just figured out how to outsource service jobs to India…
that was just the low hanging fruits.
10:35 This is something that has been driving me crazy: companies asserting that they aren't storing training data because it's in the model rather than directly in a database is maddening. As a software developer, if I hardcoded news articles into my application, that data is still being stored even if it's not in a dedicated file or a database, and it's still a copyright violation if I then distribute that application. Storing data in an AI model after training, as mathematical patterns, is no different from directly storing that data in a database.
Like reading a book, then going back to the bookstore for a refund when you're done.
Data scientist here. Actually, large language models are trained by tuning weights. The data is not stored; the model learns to give similar answers. So it's a bit more complicated. But what this guy showed was Bing Copilot, which indeed searches through websites and analyses them. The source should even be mentioned below the answer. This video is misleading
@@ditschiu Staff developer who worked for IBM Watson Health Imaging for 3 years here. "Learning" is a form of storing data. If information can be recalled in part or in whole, that information has been stored, therefore those systems are storing data. It doesn't matter if that data is stored as direct copies or in the same format; once the machine has "learned" that information, it has been incorporated into the model and is now stored in the resulting application after training.
From a legal standpoint, traditional software that contains or uses any unlicensed intellectual property is subject to copyright violation. It's insane that you young kids think that just because these AI applications are not storing the data in the same format, or only storing fragments of it, they're not "storing" data. In order for an application to create or perform any action, it must have the patterns for that activity stored in its programming. If you make a machine that "learns" and then "creates" similar or near-identical artworks, it fundamentally must be at least storing mathematical patterns that can be used to recreate that information (which for classical software is a copyright violation, regardless of whether your software is storing something as pixels, as part of a binary file, or as the mathematical representation of some artwork).
For a data scientist you sure don't understand the nature of information. This "machine 'learning' isn't copying or reconstructing information in part or in whole and then storing it in the application" narrative is so annoying and intentionally misleading, built to let corporations steal data with impunity.
@@ditschiu This is such an annoying statement. There is a reason these up-and-coming AI "startups" have been settling lawsuits out of court with news organizations and creating special paid contracts for the use of news media when training and reinforcing these LLMs, and it's not out of a sense of charity. It's because they know that a blindfolded, half-decent lawyer would be able to win a case against these AI companies in literally any copyright suit where the application was trained on copyrighted data.
There is a reason we didn't get a pop music AI spawning out of all of the recent "advancements" in AI, and it's the RIAA: copying a single beat from an RIAA-represented song in your hit new single can lead to a lawsuit, so imagine how hard a company that made a music bot trained on modern copyrighted music would get hit.
"Learning" is a term that literally means storing information for later use. Whenever some dumbass 1st-year university student goes "nah uh, these AI applications are learning information, not storing information", I get so annoyed, because it just shows they have no fundamental understanding of the nature of information.
@@BlueScreenCorp First of all, I'm not defending these corporations using the data of others. I work at a company which is very careful with customer data; we can't just train our models, because of exactly these issues. What I mean is that it is not the same as storing data in a database. That's why it hallucinates sometimes. I really don't want to argue about this; that's for the legal people. My point was about the example in the video, where it shows the exact words of that article. It's because Copilot is literally going through these websites and summarises or quotes them. If you ask OpenAI's ChatGPT, it should give a different answer. The exact words are not stored in the 'AI'; the knowledge is.
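A toy illustration of the point being argued in this thread: even a model that stores only "weights" (here, simple character n-gram counts, nothing like a production LLM) can reproduce its training text verbatim. This is a hypothetical sketch with made-up function names and sample text, not how Copilot or ChatGPT actually works:

```python
from collections import defaultdict

def train(text, n=8):
    """Learn n-gram 'weights': counts of which character follows each n-char context."""
    weights = defaultdict(lambda: defaultdict(int))
    for i in range(len(text) - n):
        weights[text[i:i + n]][text[i + n]] += 1
    return weights

def generate(weights, seed, length, n=8):
    """Greedily emit the most likely next character given the last n characters."""
    out = seed
    for _ in range(length):
        dist = weights.get(out[-n:])
        if not dist:
            break  # unseen context: the model has nothing learned to continue with
        out += max(dist, key=dist.get)
    return out

article = "Breaking: Acme Corp announced record profits today, citing strong demand."
model = train(article)
# The model holds only counts ("mathematical patterns"), yet from a short
# prompt it recreates the training text verbatim:
print(generate(model, article[:8], len(article)))
```

Whether that counts legally as "storing" the article is exactly the dispute above; the sketch only shows that weights and verbatim recall are not mutually exclusive.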
As someone working in tech, I am glad to see you call out this BS hype machine. Yes, AI will have an impact on society. But right now a lot of it is based purely on tech company higher-ups' fear of being left out, pushing any garbage with the word AI attached to it.
I work in tech as well. I know AI will take over, but right now it's still just in its basic form and not necessarily generative and useful.
Even OpenAI / Copilot isn't useful enough for me to spend $20/month on for coding purposes.
I'm constantly looking to see what tech becomes available so that I can actually use it and maybe even create my own business using it. But we're just not there yet, though I envision we will be soon.
@@Lolatyou332 At least a decade away from anything remotely useful.
@@Lolatyou332 I agree, and would add that we should develop some ideas about what “there” means. It must be the case that part of all the hype right now has to do with the fact there are no broadly understood expectations or standards about what AI should be capable of.
Lol you need to open your eyes
@Lolatyou332 what are you looking for? Any cool ideas on what kind of tech you want?
Love this! I was at an insurance software conference in the US in 2002. One company was advertising automated application / claims forms scanning technology. You uploaded the scanned document to an API and later on you could query the API to get the text version. I looked at the rep, asked how that was possible (remember this was over 20 years ago) and told him I thought they just had a bank of people who typed out the content. The look on his face told me I'd got it right!
The interesting point is that this service is now genuinely available to everyone, so I wouldn't totally diss AGI happening at some point.
OCR was a thing in the 90's. Even recognition of handwriting and conversion to block letters. This on devices with a fraction of the processing power of today's devices.
There are some legit use cases for LLMs, one of them being "regurgitate what you read and give me the abridged version", along with generating things like schemas, POJOs and table creation scripts based on JSON data, for example.
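For the simplest cases, the "table creation script from JSON" chore above doesn't even need an LLM. A hypothetical sketch (function name, type mapping, and sample record are all made up; real schema generation needs nullability, keys, and nested data handled properly):

```python
import json

# Rough mapping from Python types (as decoded by json.loads) to SQL column types.
# bool must be checked before int would be, so it gets its own dict entry.
SQL_TYPES = {bool: "BOOLEAN", int: "INTEGER", float: "REAL", str: "TEXT"}

def json_to_create_table(sample: str, table: str) -> str:
    """Derive a CREATE TABLE statement from one flat JSON record."""
    record = json.loads(sample)
    cols = ",\n  ".join(
        f"{name} {SQL_TYPES.get(type(value), 'TEXT')}"
        for name, value in record.items()
    )
    return f"CREATE TABLE {table} (\n  {cols}\n);"

print(json_to_create_table('{"id": 1, "name": "widget", "price": 9.99}', "products"))
```

The LLM version earns its keep on messier inputs (nested objects, inconsistent samples, naming conventions), which is roughly the commenter's point.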
The issue is the hypers and the doomer-baiters, both after those $$$. One side overhypes and overstates the capabilities for investment, whilst the other farms outrage clicks for $. Meanwhile in reality it's just a tool with many use cases that were difficult and time-consuming to deal with previously.
"Summarizing" was one of his main points. The problem is that the summaries just steal copyrighted data from legitimate sites and only slightly modify it.
It directly quotes the source it links you to? Outrageous! lol
Bing is a search engine!
@@6AxisSage agreed. I feel the hyper is overhyping it way too much, but the doomer is acting like behind any LLM task there are 1000 low-skilled workers, or like AI hallucinates everything and we can't do anything about it
@@ZoranRavicTech
Yep, and it's also the only possible way to make it work.
3 groups are talking about AGI, and I have been saying this since November 2022:
1. CEOs. For the investment;
2. Content creators. For the clicks;
3. Distracted common folks. For the adrenalin.
Because no 9-to-5 AI specialist believes AGI is achievable within the next 300 years.
"Not achievable within the next 300 years." Sure thing, buddy.
This is correct on some fundamental levels (such as the few examples given in the first half) but so wrong on so many other levels (such as "AI is just stealing content and does nothing else"). If the latter were true, it wouldn't be able to solve novel math questions or surpass humans in cognitive reading tests. Check out the latest "AI Index" report from Stanford University, for instance. We surely have something at our hands that is way beyond any technological advancement humans have ever accomplished in their 300,000-year history.
Also, the telltale sign of exaggerated AI criticism is the fact that all videos in this vein tend to say "Yes, AI will of course improve our lives and it's definitely here to stay." If AI were all bogus, these creators wouldn't be able to claim that this confidently. So what they're doing is cherrypicking some extreme examples (which, to be honest, I completely agree is REALLY annoying, since what these tech bros do serves no meaningful purpose other than keeping the excitement up, making some quick cash, and then leaving a bunch of people feeling tricked) and painting the current AI wave with the broadest possible brush.
GPT-4 is DEFINITELY something humans have never seen before (and if what tech leaders say is even 1% true, GPT-4 will be just a toy model very soon). And a single example of asking Copilot about a VERY GENERIC thing like a "Las Vegas trip" is definitely not enough to show its capabilities or weaknesses. There are a bunch of benchmarks for this (while each of these benchmarks may not be perfect, used in conjunction they tell the real story of how AI is performing). GPT-4, LLaMA 3, Claude Opus, etc... If you actually add these to your workflow and interact with them for more complicated tasks, you will soon find out that they are definitely not mindlessly stealing website content (or at least, that is definitely not their main mode of operation 99.9% of the time). You will see that they are not regurgitating random samples of the web, as this video claims for 13 minutes of its 14-minute runtime. Creators should do better than merely going against the "mainstream" and instead try to educate the public on the truths AND falsehoods of the revolution we're in.
Yes, the current AI boom can be likened to the Internet bubble of the 2000s (in terms of the "feel" of it). Then let me ask you this: what exactly happened to the Internet in the next, say, mere 25 years? It's like THE LIFE right now for so many people and businesses! We can't even imagine a life without it! It's everywhere, and it created so much more value than what disappeared in its "bubble" phase. And I can guarantee you that a thing that can create its own designs cannot be compared to a thing that just connects multiple computers across the world to one another.
All in all, let's not get ahead of ourselves when criticising the current hype. Those that actually lie will be eliminated, for sure! But it would be childish and plainly wrong to portray this as "the whole AI boom". Instead, if anything, one should focus the criticism on the ongoing culture in Silicon Valley rather than on the tech that actual scientists and researchers have been building inside academia and outside of it (like... in Silicon Valley) for so many decades now.
Ahh! Nice to hear another perspective, I will re-think my stance on this topic
If you feed ChatGPT novel math problems, half the time the answer is wrong
I have checked, and your assessment is wrong. You apparently didn't understand the Stanford report.
The computers using supposed AI systems performed well on normal computer tasks such as grouping and classification by similar traits, but were much worse than humans and old computer systems at reasoning, complex math, and other tasks. Please, everyone, search "Stanford AI Index 2024" and read it for yourself. It is obvious from the report that AI is not real AI yet. The person who made the comment, evrimagaci, can't read statistics very well.
I wonder if YouTube's AI or an actual person will demonetise this video. Thanks for exposing this BS.
So, this would definitely seem to imply that Amazon's "predictive recommendations" on ads or the marketplace was really NOT the result of an algorithm AT ALL and it was our phones eavesdropping on us the entire time, EXACTLY LIKE WE ALWAYS THOUGHT.
Oh yea they’re allowed to lie. And they do it a lot
Yes I have tested that by saying goofy random things I'm not searching for and then seeing the ads for those things show up.
My toilet has an issue and is running all the time. I've started getting ads for anti-diarrhea medicine ...
@@macmcleod1188 Some devices never sleep ...
@@macmcleod1188 , if you use Android, there's a way to access the developer tools to disable internal sensors, which includes the microphone setting used for listening.
Fucking admire you Sasha. Simply wonderful. An excellent video denouncing big corps' abuse, you should get huge credit for this.
Thank you! 🤩
This spurious AI branding began in China where they have been doing it for close to two years. The most hilarious example of imposter AI was at a convention in either Shenzhen or Shanghai where a dubious Chinese tech company had a wonky AI concierge on a large display screen and the AI voice turned out to be a Taiwanese woman with a microphone in a hidden booth behind the display.
1:20 The term AI is not an academic term or a protected business term, and it has no true meaning. AI is, and will continue to be, solely a hollow marketing term. What I have seen is MANY companies have just started to replace the words "software" and "smart" with "AI". It's just straight nonsense
Hi Sasha, I'm a big fan of your videos and I agree with many of your viewpoints. However, I think there might be a slight misunderstanding regarding how the Bing AI works. This AI is allowed to search the internet for references on the fly, including your webpage, which it references. When I tried, your webpage was the first result referenced and linked to. During the training phase, the AI model doesn't learn or memorize the content of your webpage, or every webpage, verbatim. It simply can't store all that information in its parameters.
Yup. Basically a fake claim about people making fake claims.
By the way, I am surprised Elon Musk was not mentioned:
1· He's one of the most guilty people out there on the faking front.
2· It would have done wonders for the channel vis-à-vis the algorithm (even more than mentioning AI).
so basically these models (LLMs) don't generate content, instead they match references? Is that what you're trying to say? If so, I 100% agree with you
I wish Amazon would just walk out.
I had a past client we were assisting with due diligence on a company that had an AI chatbot. We recommended not to buy it because it looked like vaporware.... The client ended up buying the startup anyway. It turns out they had a team of 100 in the Philippines answering as the AI chatbot. It took the client a year to finally figure out why there were large payments to the Philippines, but it def shows this is happening from small to large companies, continuing a 'Wizard of Oz' veil with tech. Don't get me started on tech debt and how many companies are on the brink of cyberattacks and catastrophic problems as they bandage shitty code.
No way it took them a year to notice xD
@@ZoranRavicTech Let's just say when there's dysfunction in one area, there's going to be dysfunction in others.
I started researching into types of AI for a writing project and realized that none of it is even close to true AI. In fact, the big companies are hurting future true AI development for a perceived worth that will burst in a year or two.
It's crazy watching this.
Amazon be like: "we made an AI that detects cheese" *pulls out a box of rats
Whaaat? You're telling me that businesses are saying they have the greatest buzzword that has happened recently and actually DON'T?!? That's absurd!
😂😂 the comments are fun
ChatGPT isn't an AI. It is a well-trained bot. It doesn't learn from my communication with it; it keeps giving me the same wrong advice it already had pointed out as wrong previously.
@@donbusu Did you run out of adjectives?
"I am sorry I made a mistake. How about reading the same crap I already told you?"
@@donbusu Chill. He probably is using an AI assisted spell checker.
AI doesn’t mean something learns live.
The Bing AI is supposed to do a search and summarize what it found; that is its stated goal. It's not at all weird that it did that, nor is it some hidden thing, nor is it presented as if it's coming up with brand new stuff. The whole point of it, and what's special about it compared to other chatbots, is that it's integrated with the search engine.
yeah true, it doesn't generate content at all, it's just doing reference matching and blending
Great original content man! I could see you did your job.
By the way: the technique Google uses to pull internet content into their chatbot's responses is most probably RAG (Retrieval Augmented Generation).
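A minimal sketch of the RAG pattern mentioned above: retrieve the best-matching document, then stuff it into the prompt sent to the LLM. The scoring here is naive keyword overlap purely for illustration (real RAG systems use embedding similarity), and all function names and documents are made up:

```python
def score(query: str, doc: str) -> int:
    """Naive relevance: count of shared lowercase words (stand-in for embeddings)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Retrieval-augmented prompt: the model answers grounded in fetched context."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "Mount Everest is the highest mountain on Earth.",
]
print(build_prompt("How tall is the Eiffel Tower?", corpus))
```

This also explains the behaviour shown in the video: the model quotes websites nearly verbatim because the retrieved page text is literally placed in its prompt.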
It's interesting to watch things like this, because I used to do this for work. It's not all folks in India, it's folks from all over the world, and they're generally not employed traditionally. Anybody can go out and do it (although it's not for the faint of heart: it's incredibly boring, underpaid, and you're treated like a cog in a machine). MTurk is a big example.
AI being faked is going to be a big thing and India will benefit
Looks like a works program, unfortunately not locally. Hence the need for UBI.
Why India though ?
When I shop at Decathlon, I put down my basket. It scans it & accurately provides the total cost. I scan my credit card & I am done.
So why does Amazon not have that?
Very interesting perspective. I love how you identified individual cases on Google Bard / Bing Chat to show us how it is directly taking information from websites. Could you maybe do an aggregate of more cases and the questions you've asked, so that we know these aren't outlier cases instead of the majority?
Wait, they're supposed to take information from websites, as they're search engines. How useless would a search engine be if it didn't return information from stuff you search for? 😂
So basically all of these companies are doing exactly what they did at Theranos but none of the CEOs at those companies are going to be held criminally liable for fraud.
Remember how WeWork was a tech company? Yeah, now everyone is an AI company
Abbey National used to do the same with automated loan applications…they were literally being printed and inputted by hand in the back office 😂
AI is just excel with a Vlookup and a macro
You think they know how to do macros? Nobody actually knows. Admit it - you just press the record button.
FSD is a macro 😂
Vlookup 💀
C'mon, you are exaggerating. They added a filter to the spreadsheet.
The Reddit thing started 1-2 years ago because people noticed that asking Google a question and putting "reddit" at the end would give them the answer they were looking for, instead of just random Google results... so that started a feedback loop that elevated Reddit in the eyes of the Google algorithm, pushing Reddit results up
This whole "AI" thing reminds me of a video on YouTube called _Dueling Carls, a "Talking Carl" Scream Fight_
This is effectively what "AI" is in a nutshell.
I kind of prefer the way the Internet was before it was possible to earn money on websites. Google ad services have been an absolute blight upon the digital landscape.
SM has been a killer as well ...
I worked at a mobile app development company that once promoted an in-app chatbot that was actually just real people 😂 it was a one-off for just a small event but yeah, it happens, a lot.
I bet there are a few “chatbot call centres” out there!
@@SashaYanshin you can point a bot at captcha selections and it routes them to an Indian call centre where people are given the actual captcha and click the answers.
I work with a vendor whose "licensing bot" is not always responsive. 100% just canned responses triggered by a human.
@@DarkGobgive the poor "bot" a break xd
Brilliant Sasha. Let’s call out this AI BS.
Artificial intelligence Breaking System.
The world has gone nuts for yet another new tech that changes everything.
@@SashaYanshin It follows the pattern of Blockchain trend but without IPO every third day.
Not sure this is the full story. Amazon were still using the camera system to detect items, with the humans checking and looking at items the software couldn’t detect or figure out what happened. Sounds like reasonable beta testing to me.
AI BD!
Yesterday at work someone got a local government document with a code on it that none of us had seen before. Now, those codes are always on those documents and usually there's a summary of what they mean as well, which gives us an idea of how to classify it. (I swear this is relevant to the video.) This one didn't have a summary, so we looked it up. The first result and Bing's AI answer were both very obviously wrong in the same way. The summary they gave was for one of the most common codes, possibly THE most common one.
What I think happened is that the AI (I suspect the first result was also AI generated) couldn't find an answer for what this very obscure code meant, and instead of answering "I don't know" it returned the most common answer to similar questions. In this case we knew that answer was wrong, but how often does it do this with questions it can't answer, and how much trouble can those incorrect answers cause people who don't know better?
It's impossible to find any information of value on Google now. Rampant AI articles with wrong information. The internet is dead.
Hahaha 🤣
Chat GPT will then retrain on the wrong information and the next iteration will be even more chaotic.
I don't know about anyone else, but it seems to me like main takeaways here are that
1) a lot of people who have the funds to become major investors in these companies actually aren't very smart (or, at minimum, are very easily led), and
2) the people running these companies have no problem taking advantage of this while pumping out products that range in quality from mediocre to a complete fraud, in order to take advantage of these gullible people with money- and any consumer stupid enough to trust the company's products.
To your first point, I don't think you realize the end goal of what these investors want. They simply want to make money. Even if all the investors involved know it's all bullshit, as long as the bullshit they've invested in shows stock increases, they get paid and they're happy.
it's not stupid individual investors I'm worried about, it's stupid corporate investors... your pension fund is being wasted away right now on some AI scam because some other stupid "AI" is buying whatever stocks are moving
It's a classic pump and dump. Get in early, hype, diminish holdings on way up, when most retail investors and index funds are in and the exponential curve flattens, short the stock and pop the bubble. Profit on the way down as well.
Sasha is not suicidal Sasha is not suicidal Sasha is not suicidal Sasha is not suicidal Sasha is not suicidal
Incredibly based. Only the lowest fake email jobs will see disruption from this. Making up random responses from a semantic search doesn't constitute meaningful thought, planning, or anything someone above room temp intelligence should be worried about.
And that's when the AI is not fake in the first place!
Please check back with your comment by the end of the decade. Promise?
Yeah all this A1 garbage everywhere is very fishy.
Amazon has always been pathetic.
My friend works in digital technology; he was a division head for products. Then he decided to get a job at another company because of the AI role that was offered. He turned up and on his first day realised that their AI division was some bloke sat by himself below the ground floor, with no experience and no clue what to do with AI. It was all a con! This was 3 years ago.
I tuned in for the "OpenAI Spring Update" scheduled to start at 12:00PM Eastern... but they were running late. What should appear in my sidebar of recommended videos - this one, which is totally sinking their battleship. I couldn't agree with you more Sasha, and I'm glad more creators are pointing out these issues. HILARIOUS use of the custom website.
It's weird ... like a glitch in the matrix. First time I see what appear to be real people in the comments in my 6 months on YouTube.