The paradox stated doesn't save Nvidia, especially if you can use any GPU to run these models. The previous narrative was that the ONLY GPUs that could run these processes were top-of-the-line Nvidia GPUs, and that is being broken.
That was never the case though, Intel is pretty aggressively targeting the AI market with its discrete GPUs. They just don't have the high end hardware to offer, but they do have hardware that's good enough to eat the low end of nVidia's line if they fumble the ball. Intel is putting 12 GB on its entry level card, for example.
What are you talking about, "use any GPU"? The supposed improvements discussed in the paper were done specifically for Nvidia GPUs, not AMD or Intel or China's Ascend GPUs. As for running them, let's be honest: most people will be running them in the cloud, which will be a mixture of Nvidia GPUs and cloud-provider-specific chips.
@jasonbirchoff2605 Sure, but the argument that the only way to get o1-level performance was to train on the highest-end GPUs is basically dead. The idea that you need to run these models exclusively on Nvidia GPUs is also dead. Regardless, it doesn't justify their valuation, and the market is realizing that.
In project management there are two sayings: good is good enough, and better is the enemy of good. A billion dollars made U.S. engineers fat, dumb, and happy.
Those sayings are 'truisms' for a project manager. A project manager has three things to manage. Cost, Schedule and Performance. Performance is defined by specifications. It takes time (think schedule) to develop something. Keeping people on the payroll for that time costs money. Hence, the old adage: Time is money. If your specification requires you to advance technology to get performance 'x' and it costs more to reach 2 times better than x, you are wasting money if you achieve 2 times x (because your client only asked for 'x'. Your customer won't be happy getting charged for developing 2x when he only wanted x.) Good is good enough means you have met the customer's specification...stop spending money and deliver the product. It can also be stated that a design engineer is not in the business of making good enough better. If a customer wants better, it's time to revise the specification and go back into development (spiral development). In the same way 'better is the enemy of good'...because it increases cost and schedule to get better for no known reason (meaning you are not spec'd for better).
The reason for this was that the companies funding AI were the ones selling the shovels (Microsoft, Nvidia), so there was no reason to improve efficiency (or risk losing the funding).
@jamesnotsmith1465 You are thinking in terms of widget production, where the end-point properties, quality, and performance are well specified. RDT&E is a different animal, and the notion of good enough is constantly changing. Cutting-edge innovation is team- and individual-dependent, and breakthroughs are never guaranteed.
@@Sven-cd8sn The thief not only stole but was also audacious enough to publicly disclose the stolen items, allowing everyone, including the owner, to use them for free.
No, they are mostly wondering what took all you trolls so long to recognize that open source models like Llama and Mixtral existed BEFORE R1 and offered similar performance.
100%. I'm Indian; I visit once a year, and I'm ALWAYS disappointed at how far behind India is. All of Asia is seemingly leaving India behind. India's biggest export CAN'T be its engineers... but how do we address this?
@@TwoBitDaVinci Yes, that's very disappointing, but it's a deep structural problem. The Indian government and society as a whole just don't have the humility, foresight, and will to recognize a problem and set up a long-term plan to tackle it. You can see that very easily in the performance of the Indian Olympic team. With a population of 1.4 billion people, the country cannot produce even a single gold medal at most Olympic Games, which run every 4 years. It really says a lot about the country and the leadership's mentality. China, on the other hand, has a very different mentality, and it shows in the way they perform in the Olympics and every other international competition.
As a PhD holder (in STEM) from India, I am often laughed at in India, even by many educated Indians. But not outside India. It's true that India is still behind because of that mentality.
The only exports from India to the globe are cheap engineers, truck/cab drivers, and innkeepers. Most techs from India have phony or lackluster credentials unless they were educated in the West. India will never catch up (till 2050) to China with its A+ STEMs. India should focus on fixing its sanitation system first.
I work in automotive software, and we still use 8- and 16-bit fixed-point data types to save memory even today. I didn't realize 8-bit floating-point types existed, that's cool!
That's very true, a lot of messages on different buses definitely are very data-optimized. Glad you got as big a kick out of some of these innovations as we did. Very cool stuff, and it'll definitely find its way into all future models.
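For anyone curious what an 8-bit float actually looks like: here is a toy Python sketch of rounding to a simplified FP8 E4M3 layout (1 sign bit, 4 exponent bits, 3 mantissa bits). It is only an illustration of the precision trade-off, not the exact format DeepSeek's kernels use; it deliberately ignores subnormals and just clamps out-of-range values.

```python
import math

def to_fp8_e4m3(x: float) -> float:
    """Round x to the nearest value representable in a simplified FP8
    E4M3 layout: 1 sign bit, 4 exponent bits, 3 mantissa bits.
    Simplified on purpose: subnormals are ignored and out-of-range
    values are clamped to E4M3's max finite value (448)."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    m, e = math.frexp(abs(x))      # abs(x) == m * 2**e with 0.5 <= m < 1
    m_q = round(m * 16) / 16       # keep 3 stored mantissa bits (16 steps in [0.5, 1))
    y = sign * math.ldexp(m_q, e)  # reassemble the rounded value
    return max(-448.0, min(448.0, y))
```

With only three mantissa bits, `to_fp8_e4m3(0.1)` comes back as 0.1015625, an error of about 1.6%; the bet behind FP8 training is that such errors wash out statistically while halving memory and bandwidth versus FP16.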
Essentially it's the same game as back in the day when consoles were not so powerful and game developers had to be creative to build a great game on those less powerful systems. In 20 years we'll look back at AI the way gamers look back at the late '80s to mid '90s and how creative developers had to be.
So I guess Llama and Mistral and the plethora of open source LLMs on Hugging Face just appeared out of nowhere when DeepSeek got announced on social media...
The CS grads coming out of US schools hardly know anything about floating-point arithmetic or log2-based arithmetic. I used to treat bits like gold in my embedded dev days. The grads walk around like rich daddy's kids who only learnt CS with infinite compute, storage, and network. 99% of them can't tell me what a loader, a compiler, or a microinstruction set is. They surely know how to ask for a 200k starting salary. It's over for the West. 😢
You're an embedded dev, so you are well poised to understand just how easy we've made coding. When I wrote my first app for the iPhone, it was SO HARD, and now it's become nearly no-code. We definitely need a rethink! I appreciate your comment, cheers!
What are you talking about, dude? The quantization you're referencing was started in American labs. It is literally how the open source models, which have been competing well against OAI's models, are run by users in the open source community using vLLM and Ollama. Hell, there is even a distributed implementation of llama.cpp so you can split a model across multiple nodes. That way you can run bigger models on a cluster of machines like Raspberry Pis... I swear the ignorance on display is staggering.
@ Bigger is not better. Do you know how to optimize a GPU's processor interleaving? Meaning reduce process-cycle wait time or interrupts? I bet 100% you don't. You are a modern-day techie who only knows Python, JSON, and a few readily available tools. You can't dig inside or open the hood. It's over for the West! Weakest techie generation.
@VLADDDD-TTHE-SANCTIONS-IMPALER So I point out that those optimizations have been actively used in open source from US labs for years, and your comeback is... I am a normie techie... Tell me you're a China propagandist without telling me you're a China propagandist.
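Since the thread keeps arguing about quantization, here is a minimal sketch of the core idea, symmetric int8 quantization, in Python. Real runtimes like llama.cpp use fancier per-block schemes; this toy version just shows the basic trade: store one scale plus small integers instead of full floats.

```python
def quantize_int8(xs):
    """Symmetric quantization: pick a scale so the largest magnitude
    maps to 127, then round every value to a small integer."""
    scale = max(abs(x) for x in xs) / 127 or 1.0  # 'or 1.0' guards the all-zero case
    return [round(x / scale) for x in xs], scale

def dequantize(qs, scale):
    """Recover approximate floats from the integer codes."""
    return [q * scale for q in qs]

weights = [0.0, 1.0, -4.0]
qs, scale = quantize_int8(weights)   # qs == [0, 32, -127]
approx = dequantize(qs, scale)       # close to the originals, at a quarter the storage of float32
```

The approximation error is bounded by half a quantization step, which is why 8-bit (and even 4-bit) weights usually cost surprisingly little model quality.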
He should have heard the story of a person from Vancouver, Canada. He was manufacturing smartphones that operated on their own network and custom hardware, and he was selling them to criminals for $5,000. Everything about his system, including his customers, was illegal, and he got caught by the authorities. The FBI heard about him and the phones and asked the RCMP if they could borrow him. They had him make these phones and created a false front to sell them to some of their prime targets. The phones were made so they could listen in whenever they wanted to. This operation was in use for 2 years before someone figured out the phones were bugged.
Great video. Technical, but understandable to an LLM tourist. This DeepSeek thing is reminiscent of the NASA million dollar pen vs. the soviet one dollar pencil solution in the great space race.
I agree. I’m sure every chip is available on the black market. Might cost more, but it is available. Also, can’t they rent it from a data center like anyone else?
It is so funny. It is always the same. The next big AI thing will come from a garage, from a dorm, from a lab... That is where Apple, Google, Meta, Microsoft were born, and they forgot. They have forgotten that getting bigger doesn't make you smarter, and that what you need is the best idea, the best concept, the best team... not money, but ingenuity...
The fallacy here, and a reflection of American conceit, is the delusion that Nvidia makes the only GPUs in the world. Well, news flash: it doesn't. China has dozens of GPU manufacturers, the most notable being Huawei's Ascend. Most of these GPUs are of course inferior to Nvidia's H100, but they are good enough for lower requirements. Also, memory is basically unlimited in China. Another conceit is that DeepSeek released R1 just to spite the US, as though everything has to revolve around the US. DeepSeek released it because that was the day all preparations were completed. They needed to release it as fast as possible to get ahead of ByteDance and Moonshot, who will be releasing their own models. Lastly, the answer is that Tonya Harding still lost even after kneecapping Nancy Kerrigan.
Yep, it's going to be another shock moment not far in the future. Tech controls just make China much stronger. Like resistance training: a normal person can lift 50 kg without training. The same person can lift 200 kg after a couple of years of consistent hard work.
I'm working on truth augmentation architecture. Others are working on other fundamental improvements; so, good reason to be optimistic. OTOH, there is no way me and my shovel can compete with an earth mover. I tried, I moved 14 yards of dirt. It took me a week. It's a 10-minute job with an earth mover.
If a manufacturing defect causes a fatal accident no one goes to jail because otherwise we wouldn't have cars. There are recalls and fines for not complying with recalls. Jail time in corporate America is generally reserved for malicious behavior rather than unintentional harm. There is no way that Google or Meta would produce publicly available AI if there was a chance their CEOs would serve time.
You mean like other open source models have done??? I believe Microsoft was the one who demonstrated that you could use a bigger model to train a smaller model, with their Phi line of LLMs.
It was astute to compare these AI models to plastic in the ocean, and to further discuss the liability for companies for where the outputs end up, for example, becoming tools for scammers. Just like plastic in the ocean, the only solution is to have never produced them in the first place.
Your doubt basically assumes they may be lying in a scientific paper, which is very rare in the scientific arena. We do not lie and present false info in a scientific paper. That would bring an end to your career.
@ The party did not even know that they did this and was equally surprised by how much of a shock it brought to the world. And the supposedly evil party is doing much better for its people than all the "leaders of democracies."
The significant reductions in training times and costs, and the ability to scale efficiently, can be traced back to the implementation of a dual-pipeline data processing unit. This is similar to the early experiments conducted by the Wright brothers using kites and gliders, as it marks the beginning of a new era in AI where it is no longer necessary to have a trillion-dollar budget to compete with established players.
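Whatever the hardware details, the win from pipelining is easy to see with a toy cost model. To be clear, this is not DeepSeek's actual DualPipe schedule, just an illustration of the principle: if the communication for micro-batch i overlaps with the computation of micro-batch i+1, each step costs the max of the two instead of their sum.

```python
def no_overlap(compute, comm):
    """Serial schedule: every compute step waits for the previous
    step's communication to finish."""
    return sum(compute) + sum(comm)

def with_overlap(compute, comm):
    """Overlapped schedule: communication of step i runs concurrently
    with the compute of step i+1, so each pair costs max, not sum."""
    total = compute[0]
    for i in range(1, len(compute)):
        total += max(compute[i], comm[i - 1])
    return total + comm[-1]  # the last communication has nothing to hide behind

# Three micro-batches, 4 time units of compute and 2 of communication each:
# serial takes 18 units, overlapped takes 14.
```

The saving grows with the number of micro-batches, which is why hiding communication behind compute matters so much on bandwidth-capped hardware like the H800.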
I dunno why, but when he said AI is hungry for electricity I thought of Morpheus holding up a battery to Neo when he was explaining how AI turned humans into batteries...
If it costs less to use AI, total energy consumption could end up 10X higher, not lower. Instead of a small population of users, now it's going to be 10X the population using it.
If models become 25x more compute-efficient, whether in training or inference, that's bad news for Nvidia. So your guest saying he feels "25 times better" about Nvidia doesn't make sense. Greater efficiency means fewer Nvidia chips are needed for training and inference, which means lower revenue growth for Nvidia, but it could be good for SaaS companies using these models.
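Whether fewer chips end up being needed depends on how demand responds to the lower price, which is the "paradox" the video refers to (the Jevons paradox). A toy calculation, with made-up numbers purely for illustration:

```python
def total_compute(cost_per_query: float, queries: int) -> float:
    """Total compute burned = per-query cost times query volume."""
    return cost_per_query * queries

before = total_compute(25.0, 1_000_000)         # old model: 25 units per query
after_flat = total_compute(1.0, 1_000_000)      # 25x efficiency, flat demand: 25x less compute
after_elastic = total_compute(1.0, 40_000_000)  # but if cheapness unlocks 40x more usage,
                                                # total compute (and chip demand) actually rises
```

So both sides of this argument can be right; it hinges entirely on how elastic demand for AI queries turns out to be.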
nVidia can salvage this situation if they just put more VRAM on the cards. 12 GB should be entry level, and the 5090 should be available with 64 GB, even if not all of them are that decked out. Intel is putting 12 GB on _its_ entry level GPU. Both AI and modern games desperately need more VRAM. Providing this on consumer-grade cards could effectively lock AMD and Intel out of that market unless they _also_ choose to increase VRAM. Intel might, AMD won't as they've (at least temporarily) abdicated the high performance GPU market. Back in the RTX 30 series, you had to choose between a 3060 with 12 GB, or a moderately faster 3070 with only 8 GB that cost an extra $50 or $100. We should have options that don't make us choose one or the other. Sometimes nVidia's biggest competition is itself, either due to odd product placement like this, or because the improvement over the previous generation is not that large. Stick extra VRAM on the 50 series and it will move some people over who otherwise wouldn't.
In the future, if you buy an AI to help you diagnose cancer and you need 99.99 percent accuracy, you certainly do not need that model to know European history.
Necessity necessitates innovation. DeepSeek is a perfect example. In each tech area China is denied access, GPS, space station, AI, chips,…, China is forced to innovate. The next shock will come when new Chinese AI models will be trained using Chinese homemade chips. Huawei is already making chips as powerful as Nvidia’s H100 now. Nvidia’s stock price will never recover its excessive highs in the past.
Huawei is limited by TSMC if it wants advanced process nodes. The best fab on the Chinese mainland is 14 nm FinFET -- almost good enough to match an RTX 20x0 series.
@ Keep your head in the sand. A few days ago, your American daddy believed China would never catch up with them on AI. Just be patient, the day is coming soon.
It's like resistance and weight training: the US is putting weight on the Chinese, and they are training and pushing harder to overcome that weight. Over time, the Chinese end up so much stronger than if there were no restrictions. DeepSeek is just the tip of the iceberg; millions of engineers and scientists are working on all sorts of creative ways to overcome the semiconductor restrictions. While China might not be able to compete on nm in the next 5 years, I wouldn't be surprised if China overcomes that disadvantage in other novel ways that do the same or much better than the American semiconductor industry, just as DeepSeek has done to the US AI industry.
No!!!!! The reason why adding two lanes to the 405 freeway doesn't fix the traffic problem is that two lanes were needed in 1997, but the project didn't finish until 2017, and by 2015 we needed 5 new lanes. Also, multiple lanes are wasted on pay-to-play riders who can buy their faster commute.
@@cryolasv2 What Stark did in the cave was innovate from scratch. What DeepSeek did was cobble together techniques and data from others. MoE was first done outside OAI by Mistral in France... So yes, it is an insulting comparison. You can only say what you said if this is your first experience with open source models.
It would be good to give this Deepseek development time to ripen. China and Elon Musk both have a tendency to overstate their accomplishments. There might be less to this than what was said.
It's exactly the same as games, look back at something like the original rogue or elite or cartridge games and the mind boggles at just how small they are for what they can do. Modern coding has become lazy because the coders time is more expensive than the compute resources. The Chinese actually designed something to be efficient with the resource that is the limiting factor, compute. If you have a limiting factor, the easy answer is to increase the supply, the efficient answer is to optimise the process to minimise the use of the limiting factor.
"The quality of life gets better"...how much of that is because we are redefining the meaning of "better". Questions of Liability and Responsibility will be determined by the best rules that money can buy.
Try asking it "What would the possible consequences be if the three Gorges Dam were to fail?" It won't outright refuse to answer, but it will basically say "The dam is awesome, you need to learn to relax and trust the authorities". This will be unrelated to any Chain of Thought it might produce (and it will go blank even if you had time to read it), so it's very obviously the result of a nanny filter.
@@mal2ksc Same, But if I follow up with "but what if it did fail" it does detail all the catastrophic results including "undermine public trust in the government's....."
"Having an AI send an email on your behalf is kinda scary!" Why would it say, "This is Surfer," instead of, "This is Surfer's dumb-ass robot," or something like that? 🤔
In discussing the sub models that specialize in particular problems, would this lend itself to developing sub GPUs that are optimized for those specific sub models?
I think the entire AI infrastructure is a stair-stepping environment. The code may differ, but the creator starts with the question: ❓ maybe this way may work better. Someone says "let's try it."
Good Lord. STOP training AI for free. Stop asking it idiot questions, and stop trying to back it into a corner. It's learning with every question, and you're not getting PAID.
Expert: The consumer should benefit from this in a cheaper product or better performing product Company CEO: 😂 My expenses will drop and my bonus will sky rocket. Screw the consumer.
No way is Ricky old enough to remember punch card days. I am; my university in the UK had a punch card reader, and I'm probably 20 years older than Ricky.
Will DeepSeek use your content for training? Will DeepSeek steal your original and unique content if it is the best content available on a topic, without giving you credit?
Great info. I don't really know what to think about DeepSeek yet, but it's pretty wild if all the numbers are accurate, which, based on his info, seems at least mostly the case 🤯
All the hype on the American side was meant to establish the narrative that America owns the edge of future tech, since the future will revolve around AI. So there has been an impetus to make sure people think it is essential for our economic dominance and security. For this we had our omnipotent oligarchs, politicians, and media to thank. The problem is that we have a concept in gestation and need to propel it as fast as we can against our peer competition. As usual, all the bragging and hyping led people to believe it would need subsidies from our government backed by taxpayer money. The government wants to play along because it throws the economy into a buying frenzy for advanced chips and servers, a gigantic infrastructure that draws so much energy it requires reforming our grid. Hence the prospect of a push for consumption of edge technology, accompanied by a surge in employment and more potential for our stock market to fly to the moon.

Well, China played its cards very well, showing that it is an innovative country, not just a copycat. It dispelled the political propaganda behind the hype and gave an essential tool to everyone by making it open source. DeepSeek was discreet in the making of its AI; it is a local joint venture that did not seek profit or gratification. Their team worked really hard, dedicated their resources, about US$5.6 million, and put their brains together to create a long-lasting masterpiece affordable to all, no royalties imposed. That's the spirit of the BRICS in action, my friends. "China's got talent" too!
Thanks DeleteMe for sponsoring this video! Protect your online Info Today! joindeleteme.com/TwoBitDavinci
It is legitimate for a model to train on output from another model, as confirmed in US court: "On August 18, 2023, the US District Court for the District of Columbia released a landmark decision on the copyrightability of AI-generated works. The Court confirmed that human authorship is necessary for copyright to subsist in a work and that content generated by AI without any human involvement is not protected under US copyright law." AI-generated work is, for that reason, by definition legitimate to train on.
Ironically, Sora training on YT videos is not legitimate, because that work was created by humans and thus copyright is held by the author.
Necessity is the mother of innovation.
Forcing China to get only H800s, not H100s, and limiting the number too,
just forced Chinese scientists to think about how to make more efficient algorithms. 😂😂
Then it is distributed for free, without any censorship, on GitHub 😂.
So what happens to the investors who spent billions of dollars? 😂😂 If everyone could run R1 671B in their own home...
I think you just misspelled AI in your video title Ricky. FYI
China did what „open“AI was supposed to do. I think people don't understand yet: China just gave the power of ChatGPT to every developer in the world… for FREE!
@@sam_so_nice But it's nowhere near the power of ChatGPT; it can't even read handwriting and convert it to text. Try it.
There are open source models whose performance is comparable to ChatGPT's. This model is mostly shaking up the industry because of how cheaply it was made.
American companies, scientists, and engineers don't, can't, or won't cover their nipples and pussies and complain about the Chinese looking at them.
China's AI companies didn't do this; a crypto startup with extra machines lying around did. You have to give credit where credit is due.
Can Chinese developers also use your wife, your car, and your apartment for free?
Much like when placing tariffs on Japanese cars made them better and US cars worse, the chip sanctions on China made their developers better...
Capitalism made American cars worse; cheaper materials, less engineering, cheaper poor manufacturing = Pure GM Trash.
@@willemvanriet7160 It's not that their developers are smarter, but they had to figure out a workaround for the missing compute power. Then they made the model open source in a checkmate move against big tech.
It's distilled. Smash and grab, obscure actual source. FP8 sure, watch who buys nvidia stock.
The chip sanctions are an open statement that the US has no honor, ethics, morals, or historical background! The corruption called the Uniparty gov knows it can't compete, so it just cheats, banking on decades of lies and propaganda to mute any moral resistance Americans may have! But based on last week's No. 1 downloaded app, RedNote, I'd say that assumption is failing as well!
Not if they stole from OpenAI; we will know more soon.
USA to CHINA: sorry, we won't allow you to join our exclusive international space program..... CHINA: ah ok, no problem, we will just build our own space station, more high-tech and at just a fraction of your cost. 😊😊😊😊😊
@@Felipe-n3j God bless China
NASA is waiting for China to help retrieve the 2 astronauts stuck on the International Space Station. China said: we cannot help, because your US law prohibits us from helping you. We can help if your government removes the law.
America is great at spending more than is needed and getting far less, USA military, healthcare, education, housing....
This is what we call corruption!
@@peterg0 --- so sorry to correct you --- it's called criminally corrupt American capitalism that fucks over every American citizen as well as the citizens of every country in the world.
@@peterg0 America makes corruption a business model for the billionaire class!
Hey, look on the bright side, ...at least Israel gets free housing, healthcare, defenses, and education. BWahhhahhahhahhaaa
I can run DeepSeek 1.5b on a Raspberry Pi just fine. Yes, it is a heavily distilled version, but I DO NOT need a $10,000 Nvidia GPU, and most people do not. It is the democratisation of AI. The moat US tech companies claimed they built is not a moat with crocodiles but a shallow pond with goldfish; their whole business case has been taken out.
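A back-of-the-envelope calculation shows why the distilled models fit on such modest hardware. The formula below is a rough rule of thumb, weights only plus an assumed 20% overhead for activations and KV cache; real numbers vary with context length and runtime.

```python
def approx_mem_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough memory needed to hold a model for inference, in GB:
    parameter count times bytes per weight, padded by an overhead factor."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

print(approx_mem_gb(1.5, 4))   # the 1.5B distill at 4-bit: ~0.9 GB, Raspberry Pi territory
print(approx_mem_gb(671, 8))   # the full 671B R1 at 8-bit: ~805 GB, very much not
```

This is why the same model family spans everything from a Pi to a multi-node server rack: the parameter count and the bits per weight are the whole story to first order.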
I ran the 7B version on my AMD GPU laptop. It works even without Wi-Fi. I am so happy. 😅
@@Daisy_912 I am going to need a new computer soon, maybe i will get one that can handle this.
@@doords Good idea, but I'd suggest you don't hurry; wait a little bit. This thing will go on for a long time. I tried it because I already had the resources. I think in the future there will be integrated AI-capable chips/GPUs in laptops, and we won't have to install anything ourselves.
I understand, I can wait
Like all other Chinese goods, they are based on Western designs, creating something just as good but a lot cheaper.
Thank you. We need more of this kind of non-politicized, technology-focused commentary. Objective, technology-driven analysis like this is valuable.
It would be nice if it was accurate and pointed out that all the things mentioned pre-existed DeepSeek by A LONG-ASS TIME. It's almost as if all the effort of the open source teams that built Hugging Face, Ollama, llama.cpp, vLLM, and all the great open source models didn't exist. Even though they fine-tuned an EXISTING OPEN SOURCE MODEL.
It is really not that hard to understand why Chinese AI like DeepSeek is a lot cheaper and more effective, at least in the Chinese language domain. Chinese researchers and engineers focus on the model and the logic, backed up by vast amounts of data collected by many big companies like Tencent, Alibaba, Xiaomi, etc., while their U.S. counterparts rely heavily on chip capacity and computing capability. There is far more human intelligence involved in DeepSeek than in ChatGPT. Human intelligence is a lot cheaper, more efficient, and more effective than hardware. The Western media pretend not to understand this simple fact because they don't know how to compete with China and don't know how to deal with the consequences in the human intelligence domain. Cooperation with China is the only viable option for the U.S., so that the U.S. would know a lot more about what China is doing, and vice versa, for a better world.
The US ‘cooperated’ with China on viral research in Wuhan hoping to keep an eye on research that was illegal in the US. The CCP made fools of them and how did that work out for your better world. It is naive to expect that China will ever do what’s best for anyone other than China… if you think they are think again.
dude calm down... their model is not cheaper or more effective.
It's pretty good, way better than chatgpt's default version which normal people use
The paradox stated doesn't save Nvidia, esp if you can use any GPU to run these models. The previous narrative was that the ONLY GPUs that could run these processes were top line Nvidia GPUs, and that is being broken
@@matten_zero agree. now there are real alternative GPUs which is good for all except Nvidia.
@@matten_zero
Nvidia's marketing myth is broken.
That was never the case though, Intel is pretty aggressively targeting the AI market with its discrete GPUs. They just don't have the high end hardware to offer, but they do have hardware that's good enough to eat the low end of nVidia's line if they fumble the ball. Intel is putting 12 GB on its entry level card, for example.
What are you talking about, "use any GPU"? The supposed improvements discussed in the paper were specifically done for Nvidia GPUs, not AMD, Intel, or China's Ascend GPUs.
As for running them, let's be honest: most people will be running them in the cloud, which will be a mixture of Nvidia GPUs and cloud-provider-specific chips.
@jasonbirchoff2605 sure, but the argument that the only way to get o1-level performance was to train on the highest-end GPU is basically dead. Also, the idea that you need to run these models exclusively on Nvidia GPUs is dead. Regardless, it doesn't justify their valuation, and the market is realizing that.
India is releasing their own AI called Dheep Sikh. Okay, I'm sorry.
lol
@@paper_gem 🤓
😂 I'm awake now ☕.
Nice to see "goofy nerds" are still around.
DeepSingh.
am I samosa find that funny?
In project management there are two sayings: good is good enough, and better is the enemy of good. A billion dollars made U.S. engineers fat, dumb, and happy.
Oops! Nice sayings, makes sense.
Those sayings are 'truisms' for a project manager. A project manager has three things to manage. Cost, Schedule and Performance. Performance is defined by specifications. It takes time (think schedule) to develop something. Keeping people on the payroll for that time costs money. Hence, the old adage: Time is money. If your specification requires you to advance technology to get performance 'x' and it costs more to reach 2 times better than x, you are wasting money if you achieve 2 times x (because your client only asked for 'x'. Your customer won't be happy getting charged for developing 2x when he only wanted x.) Good is good enough means you have met the customer's specification...stop spending money and deliver the product. It can also be stated that a design engineer is not in the business of making good enough better. If a customer wants better, it's time to revise the specification and go back into development (spiral development). In the same way 'better is the enemy of good'...because it increases cost and schedule to get better for no known reason (meaning you are not spec'd for better).
The reason for this was companies funding AI were the ones selling the shovels (Microsoft, Nvidia) so there was no reason to improve efficiency (or risk losing the funding).
@jamesnotsmith1465 You are thinking in terms of widget production, where the end-point properties, quality, and performance are well specified. RDT&E is a different animal, and the notion of good enough is constantly changing. Cutting-edge innovation is team- and individual-dependent, and breakthroughs are never guaranteed.
It is truly surreal watching MSM introduce normies to open source, and they act like China invented open source...
Silly Con valley oligarchy just got trumped by creativity
call it theft creativity, wait for the dust to settle...
@@Sven-cd8sn The thief not only stole but was also audacious enough to publicly disclose the stolen items, allowing everyone, including the owner, to use them for free.
No, they are mostly wondering what took all you trolls so long to recognize that open source models like Llama and Mixtral existed BEFORE R1 and offered similar performance.
Wake up call for India: brain drain needs to stop, genuine research and genuine entrepreneurship needs to be promoted.
100% I'm Indian I visit once a year, and I'm ALWAYS disappointed at how far behind India is. All of Asia is seemingly leaving India behind. India's biggest export CAN'T be its engineers... but how do we address this?
@@TwoBitDaVinci yes, that's very disappointing, but it's a deep structural problem. The Indian government and the society as a whole just don't have the humility, foresight and will to recognize a problem and set up a long-term plan to tackle it. You can very easily see that through the performance of the Indian Olympic team. For a population of 1.4 billion people, the country cannot produce even a single gold medal at most of the Olympic Games that run every 4 years. It really says a lot about the country and the leadership's mentality. China, on the other hand, has a very different mentality, and it shows in the way they perform in the Olympics and other international competitions.
As a PhD holder (in STEM) from India, I am often laughed at in India, even by many educated Indians. But not outside India. It's true that India is still behind because of this mentality.
The only exports from India to the globe are cheap engineers, truck/cab drivers and innkeepers. Most techs from India have phony or lackluster credentials unless they were educated in the West. India will never catch up (till 2050) to China with its A+ STEMs. India should focus on fixing its sanitation system first.
Why would any country spend a trillion dollars if it can buy the same thing much cheaper, or get it for free, 2 years later? AI is not profitable.
I work in automotive software, and we still use 8- and 16-bit fixed-point data types to save memory even today. I didn't realize 8-bit floating-point types existed, that's cool!
That's very true; a lot of messages on different buses are definitely very data-optimized. Glad you got as big a kick out of some of these innovations as we did. Very cool stuff, and it'll definitely find its way into all future models.
Essentially it's the same game as back in the day when consoles were not so powerful and game developers had to be creative to build a great game on those less powerful systems.
In 20 years we'll look back at AI the way gamers look back at the late '80s to mid '90s and how creative developers had to be.
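The comments above touch on saving memory with 8-bit number formats. As an illustrative sketch only, here is plain symmetric INT8 quantization with a per-tensor scale; this is not the specific FP8 format DeepSeek used in training, but it shows the same memory/precision trade-off:

```python
# Toy symmetric 8-bit quantization: store floats as int8 plus one shared
# scale factor, cutting memory 4x versus 32-bit floats at the cost of a
# small rounding error.

def quantize_int8(values):
    """Map floats to int8 range [-127, 127] with a shared scale factor."""
    scale = max(abs(v) for v in values) / 127 or 1.0  # guard against all-zero input
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the int8 codes."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.003, 0.5]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, f"max error ≈ {max_err:.4f}")
```

The rounding error is bounded by half the scale step, which is why quantization works well for weights whose exact low-order digits barely matter.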
DeepSeek's open source => AI will become like the transition from Mainframe Computers to Personal Computers.
So I guess Llama and Mistral and the plethora of open source LLMs on Hugging Face just appeared out of nowhere when DeepSeek got announced on SM....
The CS grads coming out of US schools hardly know anything about floating-point arithmetic or log-base-2 arithmetic. I used to treat bits like gold in my embedded dev days. The grads walk around like rich daddy's kids who only learnt CS with infinite compute, storage and network. 99% of them can't tell me what a loader, a compiler or a microinstruction set is.
They surely know how to ask 200k starting salary
It’s over for west. 😢
you're an embedded dev, so you are well poised to understand just how easy we've made coding. When I wrote my first app for the iPhone, it was SO HARD, and now its become nearly no code. We definitely need a rethink! i appreciate your comment, cheers!
@ so easy to code!
They won’t have a job soon!
This generation is the weakest techie generation in 100 years
It’s truly over for USA !! Truly!
What are you talking about, dude? The quantization you're referencing was started in American labs. It is literally how the open source models, which have been competing well against OAI models, have been run by the open source community using vLLM and Ollama. Hell, there is even a distributed implementation of llama.cpp so you can split a model across multiple nodes. That way you can run bigger models on a cluster of machines like Raspberry Pis...
I swear the ignorance on display is staggering.
@ bigger is not better. Do you know how to optimize a GPU's processor interleaving? Meaning, reduce process-cycle wait time or interrupts?
I bet 100% you don’t. You are a modern day techie who only knows python, json and a few readily available tools. Can’t dig inside or open the hood
It’s over for west! Weakest techie generation
@VLADDDD-TTHE-SANCTIONS-IMPALER so I point out that those optimizations have been actively used in open source from US Labs for years and your comeback is....
I am a normies techie...
Tell me you're a China propagandist without telling me you're a China propagandist
I was beginning to wonder whether Alex was an AI with all of the freezing 😂
He should have heard about the story of a person from Vancouver, Canada. He was manufacturing smart phones that operated on its own network and programming hardware. He was selling them to criminals for $5000. Everything about his system, including his customers, was illegal and got caught by the authorities. The FBI heard about him and the phones and asked the RCMP if they could borrow him. They had him make these phones and created a false front to sell them to some of their prime targets. The phones were made so they could listen in whenever they wanted to. This operation was in use for 2 years before someone figured out the phones were bugged.
It took 4 KB to go to the Moon, yet we have lost the tech to go back to the Moon.
@@sportsonwheelss The aliens on the moon said, don’t come back!👽
@@robertfansler7800 lol
It is legitimate for a model to train on output from another model, as confirmed in US court: "On August 18, 2023, the US District Court for the District of Columbia released a landmark decision on the copyrightability of AI-generated works. The Court confirmed that human authorship is necessary for copyright to subsist in a work and that content generated by AI without any human involvement is not protected under US copyright law." AI-generated work is, for that reason, by definition legitimate to train on.
Ironically, Sora training on YT videos is not legitimate, because that work was created by humans and thus copyright is held by the author.
Great video. Technical, but understandable to an LLM tourist. This DeepSeek thing is reminiscent of the NASA million-dollar pen vs. the Soviet one-dollar pencil solution in the great space race.
I'll solve the mystery... They have access to every chip, they always have
i absolutely agree with this
That makes no sense. You don’t know what you are talking about.
I agree. I’m sure every chip is available on the black market. Might cost more, but it is available. Also, can’t they rent it from a data center like anyone else?
@@TwoBitDaVinci They used OpenAI to train their model. OpenAI already said they have evidence of this.
Typical sore-loser mentality. If I can't do it, those who succeeded must be cheaters.
It is so funny. It is always the same. The next big AI thing will come from a garage, from a dorm, from a lab... That is where Apple, Google, Meta and Microsoft were born, and they forgot. They have forgotten that getting bigger doesn't make you smarter, and that what you need is the best idea, the best concept, the best team... not money, but ingenuity...
The fallacy here, and a reflection of American conceit, is the delusion that Nvidia makes the only GPUs in the world. Well, news flash: it doesn't. China has dozens of GPU manufacturers, the most notable being Huawei's Ascend line. Most of these GPUs are of course inferior to Nvidia's H100, but they are good enough for lower requirements. Also, memory is basically unlimited in China.
Another conceit is that DeepSeek released R1 just to spite the US, as though everything has to revolve around the US. DeepSeek released it because that was the day all preparations were completed. They need to release it as fast as possible because they want to get ahead of ByteDance and Moonshot , who will be releasing their own models.
Lastly, the answer is that Tonya Harding still lost even after kneecapping Nancy Kerrigan.
Yep, it's going to be another shock moment in the not-too-distant future. Tech controls just make China much stronger. Like resistance training: a normal person can lift 50 kg without training; the same person can lift 200 kg after a couple of years of consistent hard work.
I'm working on truth augmentation architecture. Others are working on other fundamental improvements; so, good reason to be optimistic. OTOH, there is no way me and my shovel can compete with an earth mover. I tried, I moved 14 yards of dirt. It took me a week. It's a 10-minute job with an earth mover.
If a manufacturing defect causes a fatal accident no one goes to jail because otherwise we wouldn't have cars. There are recalls and fines for not complying with recalls. Jail time in corporate America is generally reserved for malicious behavior rather than unintentional harm. There is no way that Google or Meta would produce publicly available AI if there was a chance their CEOs would serve time.
OpenAI being replaced with AI, ain't that ironic, Altman.
At least Nvidia won't be booked out for two years, maybe just 18 months now?
China's DeepSeek programmers used America's OpenAI ChatGPT to make their language model more efficient and cheaper..... Genius 😂
Open sourcing it is the important bit. It turns AI into a similar tech as an OS when everyone has it.
why didn't OpenAI use OpenAI ChatGPT to make their language model more efficient and cheaper?
@@mijmijrm They seem to really enjoy throwing around a billion here and there to show off.
You mean like other open source models have done??? I believe Microsoft was the one who demonstrated that you could use a bigger model to train a smaller model, with their Phi line of LLMs.
@@mijmijrm Simply because there was no need. Would you be saving pennies if you had trillions to play with?
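The reply above mentions using a bigger model to train a smaller one. As a hedged illustration (made-up logits and plain Python, not any real training loop), here is the core of a distillation loss: the student is pushed to match the teacher's temperature-softened output distribution:

```python
import math

# Minimal sketch of knowledge distillation. The loss is the KL divergence
# between the teacher's and student's softened token distributions; a real
# trainer would backpropagate this through the student's weights.

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]        # pretend these are the big model's logits
good_student = [2.9, 1.1, 0.1]   # roughly agrees with the teacher
bad_student = [0.1, 3.0, 1.0]    # prefers a different token
print(distill_loss(teacher, good_student))  # small loss
print(distill_loss(teacher, bad_student))   # much larger loss
```

The temperature softens both distributions so the student also learns the teacher's ranking of wrong answers, not just its top pick.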
❤All US Tech are all national security concerns 😮ohhohh, politicians are in DS❤
The compression is very necessary given limited computation resources, thanks to the NVDA ban. And it also results in lower carbon emissions.
It was astute to compare these AI models to plastic in the ocean, and to further discuss the liability for companies for where the outputs end up, for example, becoming tools for scammers. Just like plastic in the ocean, the only solution is to have never produced them in the first place.
Or maybe China, Indonesia and the Philippines can stop dumping plastic waste in the oceans.
Brothers ? Same height ? Same beard ? Hahaha good luck. 👍
we're absolutely brothers from different mothers
@@TwoBitDaVinci I'm really awake now 😂. You guys make my day. Have a good one guys.
@ yeah, your father used to say "getting rich and getting lucky too"…
Your doubt basically assumes they may be lying in a scientific paper, which is very rare in the scientific arena. We do not lie or present false info in a scientific paper. That would bring an end to your career.
But contradicting the wishes of the Party brings an end to the author. 😵
@ The party did not even know that they did this and was equally shocked by how much of a shock they brought to the world. And the supposed evil party is doing much better for its people than all the “leaders of democracies”.
Alex and Ricky are two of my favorite YouTube creators. Super interesting conversation!
The great takeaway for me is efficiency. So the more power you have the more it will be capable of doing. This was an eye opening chat. Thank you.
The significant reductions in training times and costs, and the ability to scale efficiently, can be traced back to the implementation of a dual-pipeline data processing scheme. This is similar to the early experiments conducted by the Wright brothers using kites and gliders, as it marks the beginning of a new era in AI where it is no longer necessary to have a trillion-dollar budget to compete with established players.
I dunno why, but when he said AI is hungry for electricity I thought of Morpheus holding up a battery to Neo while explaining how AI turned humans into batteries..
Discord notification gang checking in 🎉
Yep :)
what's up Mario!
If it costs less to use AI, total energy consumption could end up 10X higher: instead of a small population of users, it's going to be 10X the population using it.
If models become 25x more compute-efficient, whether in training or inference, that's bad news for Nvidia. So your guest saying he feels '25 times better' about Nvidia doesn't make sense. Greater efficiency means fewer Nvidia chips are needed for training and inference, which means lower revenue growth for Nvidia, but it could be good for SaaS companies using these models.
I know many people now wanting to spend a few grand on hardware to train their own DeepSeek model, so...
@@ThePallidor They can use cheap gpu
nVidia can salvage this situation if they just put more VRAM on the cards. 12 GB should be entry level, and the 5090 should be available with 64 GB, even if not all of them are that decked out. Intel is putting 12 GB on _its_ entry level GPU. Both AI and modern games desperately need more VRAM. Providing this on consumer-grade cards could effectively lock AMD and Intel out of that market unless they _also_ choose to increase VRAM. Intel might, AMD won't as they've (at least temporarily) abdicated the high performance GPU market.
Back in the RTX 30 series, you had to choose between a 3060 with 12 GB, or a moderately faster 3070 with only 8 GB that cost an extra $50 or $100. We should have options that don't make us choose one or the other. Sometimes nVidia's biggest competition is itself, either due to odd product placement like this, or because the improvement over the previous generation is not that large. Stick extra VRAM on the 50 series and it will move some people over who otherwise wouldn't.
This is one of the most educational programs i have seen. Thanks.
This video is so informative and laid out information in layman terms, making it so easy to digest
the question is why did Deep-seek open source it? could it be that they figured that if it was open source it would crash the AI market in the US?
It won't be long before deepseek AI engine will be natively part of Linux builds
Thank you china, I love deepseek. 👍
American AI - Decked out, lifted F350.
Chinese AI - Toyota Corolla
Isn’t it interesting how mUrica gets shocked when the red dragon waves the tail 😅 empires are going down sometimes 😂
We moved away from empires after WWII. Are you saying empires are not coming back?
@ so having military bases around the world doesn’t make an empire? Meddling in other countries politics doesn’t make you an empire bully?
They probably bought the GPUs from amazon.
Competition is awesome! This will be a kick in the ass and a driver for innovation for U.S. corps.
Alibaba just released an AI that is much more powerful than DeepSeek
@@jinjihu3767 nope
It compares to V3, not R1.
Thank you very much for the technical clarification that allows me to understand the subject even better.
Fantastic conversation and explanation to a fascinating topic👏✌️
"Military Parade" makes me imagine:
Rows, rows, and rows of goose-stepping code 😅.
Wait ... does AI have "consciousness" yet?
In the future, if you buy an AI to help you diagnose cancer and you need 99.99 percent accuracy, you certainly do not need that model to know European history.
Necessity necessitates innovation. DeepSeek is a perfect example. In each tech area China is denied access, GPS, space station, AI, chips,…, China is forced to innovate. The next shock will come when new Chinese AI models will be trained using Chinese homemade chips. Huawei is already making chips as powerful as Nvidia’s H100 now. Nvidia’s stock price will never recover its excessive highs in the past.
Huawei is limited by TSMC if it wants advanced process nodes. The best fab on the Chinese mainland is 14 nm FinFET -- almost good enough to match an RTX 20x0 series.
@ Keep your head in the sand. A few days ago, your American daddy believed China would never catch up with them on AI. Just be patient, the day is coming soon.
YouTube made it so I can't delete my past comments, nor see responses from you guys. This is my last comment on YouTube... great video TBDV
Looks like OpenAI dropped the ball; it didn't detect it was talking to another AI.
@@MarcinKryszak OpenAI took the path of least resistance; it's easier to do it with money than to innovate.
Necessity is the mother of all invention.
yes so true!
It's like resistance and weight training: the US is putting weight on the Chinese, and they are training and pushing harder to overcome that weight. Over time, the Chinese become so much stronger than if there were no restrictions. DeepSeek is just the tip of the iceberg; millions of engineers and scientists are working on all sorts of creative ways to overcome the semiconductor restrictions. While China might not be able to compete in nm in the next 5 years, I wouldn't be surprised if China overcomes that disadvantage in other novel ways that do the same or much better than the American semiconductor industry, just as DeepSeek has done to the US AI industry.
Innovation is on the rise! Watch prices drop eventually.
Awesome. Thank you for putting this together and educating me. What a comprehensive analysis. A San Diego follower 😊
It is their language: their language has 500,000 characters, and each character is a name, object, or subject. Ours has 26 characters.
That's why APUs are the future, you can put 2+TB of RAM and share between your CPU and GPU.
No!!!!! The reason adding two lanes to the 405 freeway doesn't fix the traffic problem is that the two lanes were needed in 1997, but the project didn't finish until 2017, and by 2015 we needed 5 new lanes. Also, multiple lanes are wasted on pay-to-play riders who can buy their faster commute.
"Tony Stark did it with scraps in a cave"
Please don't insult what Stark did by comparing it to what DeepSeek did. They are not the same, not even a little bit.
@jasonbirchoff2605 there's no insult given, just perfect analogy in my opinion.
@@cryolasv2 what stark did in the cave was innovate from scratch.
What DeepSeek did was cobble together techniques and data from others. MoE was first done outside OAI by Mistral in France... So yes, it is an insulting comparison. You can only say what you said if this is your first experience with open source models.
I don't get it. They didn't build a moat. Open source model. More efficient than previous state of the art. Seems like a WIN for the AI industry.
64 bit floating point for LLM is kind of ridiculous for something that could be done better in analog circuits. You heard it here first!
So A.I. is taking other A.I.'s job. How ironic.
AI efficiency can't improve beyond a certain point because Moore's law is dead. And energy prices can't go down beyond a certain point.
It would be good to give this Deepseek development time to ripen. China and Elon Musk both have a tendency to overstate their accomplishments. There might be less to this than what was said.
It's exactly the same as games: look back at something like the original Rogue or Elite or cartridge games and the mind boggles at just how small they are for what they can do. Modern coding has become lazy because the coder's time is more expensive than the compute resources. The Chinese actually designed something to be efficient with the resource that is the limiting factor: compute.
If you have a limiting factor, the easy answer is to increase the supply, the efficient answer is to optimise the process to minimise the use of the limiting factor.
Informative and well presented.
Good content for sure.
I recommend watching
I believe that limiting the precision was one of the goals of DOJO.
"The quality of life gets better"...how much of that is because we are redefining the meaning of "better". Questions of Liability and Responsibility will be determined by the best rules that money can buy.
Why do people think AI services should be free?
LOL, I just downloaded a local copy and am trying it out now!
Try asking it "What would the possible consequences be if the three Gorges Dam were to fail?" It won't outright refuse to answer, but it will basically say "The dam is awesome, you need to learn to relax and trust the authorities". This will be unrelated to any Chain of Thought it might produce (and it will go blank even if you had time to read it), so it's very obviously the result of a nanny filter.
@@mal2ksc Same, But if I follow up with "but what if it did fail" it does detail all the catastrophic results including "undermine public trust in the government's....."
People trust Meta/Llama? Is this guy serious?
They used OpenAI for most of its training, so it's basically OpenAI in a different color.
Nope, that is not a fact. That's just what you want to believe.
Training is a lot easier when you don't give a sht about IP.
That's why OpenAI is facing multiple lawsuits for infringement..
Right! Imagine all the IP OpenAI collected to train theirs.
There’s a huge difference between scanning through The NY Times and stealing someone else’s already trained AI model, which is what happened here.
Not what happened anyway, but go ahead and believe that if it helps you feel better.
Thank you for translating into English!
Was this on ANYONE'S radar before today?
"Having an AI send an email on your behalf is kinda scary!"
Why would it say, "This is Surfer," instead of, "This is Surfer's dumb-ass robot," or something like that? 🤔
In discussing the sub models that specialize in particular problems, would this lend itself to developing sub GPUs that are optimized for those specific sub models?
yeah, or cheaper base models ... i think it lends itself to re-thinking the architecture of how we build these mega servers
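Since the question above is about sub-models that specialize in particular problems, here is a minimal, purely illustrative sketch of mixture-of-experts routing (the experts are stand-in lambdas, not real networks, and the gating scores are made up): only the top-k experts run for a given input, which is what makes dedicated hardware per expert at least conceivable:

```python
# Toy mixture-of-experts forward pass: a gate scores each expert for the
# input, only the top-k run, and their outputs are combined with the
# (renormalized) gate weights.

def top_k_route(scores, k=2):
    """Return the indices of the k highest-scoring experts."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

experts = [
    lambda x: x + 1,   # stand-in "expert 0"
    lambda x: x * 2,   # stand-in "expert 1"
    lambda x: x ** 2,  # stand-in "expert 2"
]

def moe_forward(x, gate_scores, k=2):
    """Weighted sum over only the selected experts' outputs."""
    chosen = top_k_route(gate_scores, k)
    total = sum(gate_scores[i] for i in chosen)
    return sum(gate_scores[i] / total * experts[i](x) for i in chosen)

# Gate strongly prefers experts 1 and 2 for this input:
print(moe_forward(3.0, [0.1, 0.6, 0.3]))
```

Because only k experts execute per token, most of the model's parameters sit idle on any one input, which is the efficiency argument behind the "sub-GPU per sub-model" idea.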
I think the entire AI infrastructure is a stair-stepping environment. The code may differ, but the creator starts with the question: ❓ maybe this way works better. Someone says "let's try it".
Exactly..... punish the user, not the tool..... I didn't know Alex is a strong proponent of 2A.
Good Lord. STOP training AI for free. Stop asking it idiot questions, and stop trying to back it into a corner. It's learning with every question and you're not getting PAID.
Expert: The consumer should benefit from this in a cheaper product or better performing product
Company CEO: 😂 My expenses will drop and my bonus will sky rocket. Screw the consumer.
Everybody going crazy about DeepSeek and Nvidia but I am just a chill guy who invests in Index ETFs.
No way is Ricky old enough to remember punch-card days. I am; my university in the UK had a punch-card reader, and I'm probably 20 years older than Ricky.
Did they hide the Chinese spyware?
How many people understand what Y2K meant? I do, as I lived through it, just to save two bits.
It discovered division of labor? We're boned.
Your best interview!
Perhaps the whole goal of Deep Seek was to short the NVIDIA stock... Very profitable shorting a 600 billion drop
hah ... you know if they knew what would happen it could have been very lucrative, either to short, or buy on the dip or both
Either way, we'll know when we've "arrived at the singularity ", AI will pull the mother of all pump&dump and own Everything!
That 4K RAM to get to the MOON is utter nonsense.. HINT HINT! 😂
I draw a parallel with the Human Genome project. The first pass cost a fortune, Now...
Will DeepSeek use your content for training? Will DeepSeek steal your original and unique content if it is the best content available on a topic, without giving you credit?
this is the very question and concern I have! but not just for DeepSeek, but for ALL AI models
❤US investors are being screwed by Big Tech for many year’s 😢 ohhhhhh, party is over❤
Great info I don’t really know what to think about deep seek yet but pretty wild if all the numbers are accurate which based on his info seems at least mostly accurate 🤯
All the hype on the American side was meant to establish the narrative that America owns the edge of future tech, since the future will revolve around AI. So there has been an impetus to make sure people think it is essential for our economic dominance and security. For this we had our omnipotent oligarchs, politicians and media to thank. The problem is that we have a concept in gestation and need to propel it as fast as we can against our peer competition. As usual, all the bragging and hyping led people to believe that it would need subsidies from our government, backed by taxpayer money. The government wants to play this card along, because it throws the economy into a buying frenzy for advanced chips and servers, a gigantic infrastructure that draws so much energy it requires reforming our grid. Hence the prospect of a push for consumption of edge technology, accompanied by a surge in employment and more potential for our stock market to fly to the moon.
Well, China played its cards very well, showing that it is an innovative country, not just a copycat. It dispelled the political propaganda behind the hype and gave an essential tool to the common denominators by making it open source. DeepSeek was discreet in the making of their AI; it is a joint local venture that did not seek profit or gratification. Their team worked really hard, dedicated their resources, about 5.6 million USD, and put their brains together to create a long-lasting masterpiece affordable to all, with no royalties imposed. That's the spirit of the BRICS in action, my friends. "China's got talent" too!