Open source DeepSeek is truly a gift for mankind. You can run the full model with a hardware budget of about $6,000, completely free and with 100% privacy… This is a huge help for scientific research that needs to protect data security. We never thought this could happen back in 2024, when OpenAI was the only option for a reasoning model.
$6000 to run the current R1 model on your own hardware is probably not too bad for a midsize company. But in six months or so, can or will they be able to update it? And what does that mean for the $6000 machine?
How is it $6000? Isn't it free? Sorry, I didn't get it.
@gautamarora7764 This is the cost of the hardware needed to run the full version locally. You can use DS for free in a web browser, but you need to be connected, and there is Chinese censorship.
@gautamarora7764 The web version of DeepSeek is free to use, but the conversation data will be transmitted to servers in China. What he meant was using the API for local deployment. The main drawback of local deployment of large models is that the speed will be very slow if the hardware's computing power is insufficient. The main advantage is that it protects privacy.
100% privacy???
Before, we thought only big tech companies could own AI, but now with DeepSeek even someone in third-world countries can run it on reasonable specs. Truly a gift for humanity❤
DeepSeek is currently experiencing a massive DDoS attack; that's why it sometimes won't load.
Run it locally
DDoS from USA
15 billion attacks per minute at peak. That's about three days' worth of network traffic from the whole of Europe. Ouch, someone who lost money got really mad😂
@tonnyww5ng If that were the case, how come DeepSeek limits the number of registrations in China itself?
Jealous America is doing this attack.
Locally executed deepseek-r1:32b (only an 18.48GB file!!!) on an RTX 4090 ran mostly faster than in your video and was still very verbose and understandable.
question 1: pass
question 2: pass
question 3: pass
question 4: pass
question 5: pass - note: this one took quite a bit longer but answer didn't have any formatting issues
question 6: pass - note: slightly different emphasis on last point - 'she' but explained correctly
question 7: pass
question 8: pass - note: also pass for misspelled word
I didn't bother with comparing privacy policies.
Overall I am very impressed that an 18.49GB model can actually reason better than the average human being and has more specialized knowledge than the average human being... really, it is totally nuts, especially if you consider this thing can read and write in most languages known to man, and I don't need an internet connection to have access to it. It runs on the same GPU as your average path-traced game... yeah, it does need some processing power to be snappy and nice to use. In theory, however, any computer with 32GB RAM should be able to mull through all the tokens and generate similar responses.
Now I really feel like I am living in XXI century.
Only flying cars are missing, and then we'll really be flying, not just talking!
How much does your computer cost? What is the specific R1 model? R1 14B?
@taijistar9052 It's the 32B model; it runs on a Mac (M series) with 32GB RAM.
I ran the Distill Llama 8B version locally and got these results:
1. Pass
2. Pass, though it's the same single answer as o1
3. Pass, note: it got the u≈0.8879c in answer section though the final answer in squares is rounded to 0.89c
4. Fail, it said the chicken came before the egg.
5. Fail, it answered "The missing $3 was taken by the waiter as an unintended profit."
6. Pass
7. Pass
8. Pass
9. Failed, it said there are 3 "r"s, it didn't see the misspelling and counted using "strawberry" instead.
Not the best result, but I also thought it did very well for an 8B model that fits on my 9-year-old RX 470 8G; way better than any other similar-sized model I've tried.
What quantization did you use? And what top-p and temperature?
lol @ ''Now I really feel like I am living in XXI century.''
I have read many papers, but to sum up, R1 has five main advantages: *1) it gives you the reasoning behind its thoughts, so you can find a mistake and tell it to correct it 2) it is much more DEPLORABLE; it's like when they first invented the Personal Computer (PC)!! You don't have to have a huge data center or a large number of GPUs to run it; in fact, you can even run it on your phone without internet 3) it is cheaper and faster, of course 4) most of all, it is free 5) open source, so you can edit it and update it any way you like*
Any of the reasons above would be a game changer by itself, but from the combination of all five you get a stock crash like yesterday's.
Do you mean "Deployable" and not "deplorable"?
yesterday, stocks were up?
How do they make money off it?
This comment is AI generated and, by the looks of it, by the free ChatGPT model 😂
@newworldhello It's a research project, probably supported by the Chinese government, but I'm not 100% sure.
DeepSeek is ideal for students. It walks you through the problem
The company's employees include mathematical geniuses from China's top universities.
Doesn't OpenAI do the same? If not, can't you prompt OpenAI to walk through it?
@@catharsis222 You could. It would just cost four times more per token. Limited access is free for DS while I haven't seen free access to the reasoning model from OpenAI.
DeepSeek has changed my way of life. I gave DeepSeek and ChatGPT-4o an advanced linear algebra problem to solve, and the way DeepSeek took me through its deep forest of a thinking process was an out-of-this-world experience.
It is a fantastic tool for learning complicated concepts: you are walked through its chain of thought, which helps in reinforcing your concepts. Enjoy and learn from DS❤
So who's better ?
@GRIMREAPER-sf9pr DeepSeek is the most game-changing experience I have ever had.
DeepSeek is far better at everything, and its explanations are very good for understanding concepts @GRIMREAPER-sf9pr
@GRIMREAPER-sf9pr It depends on what you do. For math problems, you may get the same results. However, I think AI usage will be more like having someone you can have conversations with. You're not going to get the answer on the first shot. There is always nuance, and you go back and forth. For example, for help with writing, I find Chat more to my liking. It's not just about who is cheaper but who gives the best results. DS is good for a second opinion but definitely not a replacement for Chat. Perhaps in your usage you don't need Chat and can get by with just DS. I'm sure in certain applications you can just run DS and it's more than adequate.
I just played with DeepSeek for hours... and it's just amazing how detailed it was for everything... thanks DeepSeek.
The emphasis on having a clear exit strategy is crucial, especially after experiencing the volatility of previous bull and bear markets. It’s refreshing to see the focus on setting realistic goals and understanding the why behind profit-taking. This approach not only prepares you for unexpected market shifts but also keeps you grounded during the emotional rollercoaster of trading. It’s a valuable reminder that crypto investing should ultimately serve our personal aspirations, rather than just becoming a game of chasing numbers. I have managed to grow a nest egg of around $200k to a decent 7 figures in the space of a few months...Thanks to Laura Brockman insights, daily trade signals, and my dedication to learning, I've been increasing my daily earnings. Kudos to the journey ahead!
She mostly interacts on Telegram, using the username
@LauraBrockman
One thing I know for certain is crypto is here to stay, the only thing that leaves is the people who don't manage their risk. Manage that, or the market will manage it for you. With the right strategies you will survive.
Thank you for sharing your experience. She's helped grow my reserve, despite inflation, from $87k to $246k as of today….. Her insights and daily signals are worth following.
Thank you…. I have searched her up on Google; I think I am satisfied with her experience.
I very much prefer DeepSeek. I love seeing the logic and I am not concerned by the extra time required. The quality is very impressive.
As an Australian, I will say that I have far more trust in the DeepSeek offering because it is open source, it is more transparent with its logic, and the narrative & values seem very decent. Conversely, I feel ChatGPT is hiding something, has multiple agendas, is shadowed by some nasty events and team politics, and is backed by a company I have no trust in; I find the CEO deceptive.
As for cost, DeepSeek is not only a clear winner; it came out with its best foot forward without having to be pushed into doing so.
Yes, I don't really care about the extra time, because we get clear reasoning behind the answer, which ChatGPT lacks.
Yes, open-source models that run locally and offline, private AI-there will be no privacy leaks whatsoever.
I believe you are right
What makes me happy is that if there's competition among AI companies, we consumers don't have to break the bank for something average. Thank you DeepSeek; keep these big companies on their toes.
It's unbelievable! USA's $500 billion AI competing against CHINA's $6 million AI… woooo, not bad at all. It just shows how brilliant those young Chinese engineers, mathematicians & scientists in CHINA 🇨🇳 are today! DEEPSEEK R1 is just the beginning! 😊😊
Maybe the USA did not spend $500 billion, and most of the money is hiding in those techno-oligarchs' pockets.
Or they just copy everything they steal from other countries. It could be that.
Definitely, as long as Americans are busy with gender-switching pronouns and other weird things. This is why the America First and Trump narratives of two genders and other things are needed. DeepSeek is just a start.... The blame is on all Americans that sat back and voted for Democrats.
😂😂 Who told you the US spent $500 billion?
Well, most of the employees at big tech US companies are Chinese as well, so the competition is China vs. China, as US youth is busy with the Kardashians.
DS is being cyber attacked
probably by nvidia :)
@gravelbikemark I saw the security vid of Jensen Huang and Sam Altman throwing fire bottles into the DS office building 🤣
No need to guess who the perpetrators are. The sore losers!
DeepSeek is better because they designed their algorithm to mimic humans: lots of explanation and back-and-forth before it answers.
It depends on your needs and application. I find Chat more to my liking for helping with my writing. I think it's good to have both. I use DS as my 2nd opinion, so they can check on each other. But the idea of replacing Chat with DS is ridiculous. I don't see that.
Nice test. I've been using DeepSeek a lot, and it was much, much faster before the malicious attacks started occurring.
With that being said, it feels like o1 has either been dumbed down or they lowered the thinking time, as that would explain getting questions wrong like the 9.9 vs 9.11 question, which it used to get right. Even 4o would get that right. Seems like they always dumb down their models before they release a new model so you can see the "vast improvement", in this case before o3-mini releases within the next few days.
Yea I can see that. It got it right in my very first test when o1 first came out. Interesting
Yeah, I also agree with this. I have experienced it too. Considering this sh*t from OpenAI, it's a no-brainer to shift to DS.
Now this is a good review; it allows users to decide for themselves. No bias. Thanks.
Breaking News: World leaders are congratulating CHINA and DEEPSEEK for helping the world for FREE! ❤❤❤
The question is, when have world leaders ever benefited regular folks like us? Think Covid and booster shots.....
There is a report that US big tech is adopting DeepSeek.
@@obifinest521 you mean the "safe and effective" which hurt my friends?
'Free'??? You may not be paying for it with hard cash, but you clearly don't understand MACRO political warfare.
@@Kinglife1000 No. Its not warfare its called technical innovation and open-source software. You know the foundations of what we are using this very moment on this app. Its not warfare in fact its very common to make a superior product for cheaper and open source and deliver it to the world. America big tech failed to do so.. so naturally someone else did and did an excellent job doing so. American intelligence operatives and the jewish bankers lost this round of innovation, no different than Bitcoin
Deepseek is better for students and people who like to learn things
Revisit this in six months. I’m betting that the whole landscape looks different.
Yes, Alibaba's will be at the forefront.
Deepseek is a gift to people around the world
So is Chat. I think it's good to have both and I use them to check on each other when I want a 2nd opinion. I don't see Chat going away as I like its responses better in helping with my writing.
Can you build a PC and run the actual R1 model? I mean, if we can do that locally, it will completely wipe out the need for paid ChatGPT,
as there will be zero privacy risk in running the R1 model (the whole R1) locally.
An amd3770+4060 can run the R1 model's 48G local deployment; Janus Pro text-to-image runs as a 20G local deployment.
DeepSeek is currently under a massive DDoS attack, so sometimes it fails to load; running the R1 model locally would be the best choice.
Yes, you can.
You can run 'distillations' of R1 locally but no consumer setup can run the full model...
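For a rough sanity check on which sizes fit on consumer hardware: a model's footprint is roughly parameter count × bytes per weight, plus runtime overhead. Here's a minimal back-of-the-envelope sketch in Python; the 20% overhead factor and the size list are illustrative assumptions, not official requirements:

```python
# Back-of-the-envelope memory estimate: params * bits / 8, plus ~20%
# assumed overhead for the KV cache and runtime.
def approx_memory_gb(params_billion: float, bits_per_weight: float,
                     overhead: float = 1.2) -> float:
    """Estimate the memory needed to load a model, in GB."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

for name, params in [("R1 distill 7B", 7), ("R1 distill 32B", 32),
                     ("full R1 671B", 671)]:
    print(f"{name}: ~{approx_memory_gb(params, 4):.0f} GB at 4-bit, "
          f"~{approx_memory_gb(params, 16):.0f} GB at fp16")
```

At 4-bit the 32B distill lands near the ~18GB file mentioned above, while the full 671B model still needs hundreds of gigabytes even heavily quantized, which is why consumer setups stick to the distillations.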
I was able to run all the models up to the 32B parameter version locally on my rig. Super impressive. I wouldn't bother with the 1.5B or 7B models; I found them fast, but the output quality wasn't anywhere near the 8B.
Hey Sir, I'm not sure if you'll see this, but I just wanted to say thank you. Your super simple tutorials helped me get DeepSeek running locally on my own PC, and it's been great, with no limits like ChatGPT from OpenAI. The only sad thing is it can't seem to look at / describe images like ChatGPT.
*The speed? Maybe their server is not in the United States, so traffic has to travel a longer distance, aside from everybody downloading it right now?*
DeepSeek is currently under a massive DDoS attack, so sometimes it fails to load.
It is under massive cyber attack now.
We Chinese can't even use this software normally anymore, because it has been under attack from abroad, which has upset some people's apple cart.
Latency is measured in milliseconds; it is not the limiting factor.
DeepSeek is more powerful than ChatGPT
When everyone is worried about DeepSeek but you work for a company that’s still stuck in 2005.
It seems R1 is similar to o1 Pro. Do you agree?
Far better, mate, and off the grid as well.
Better, not similar
much better
o1 pro not o1
If you turn on ChatGPT o1's "think", it takes longer to think. Also, I tried DeepSeek counting the r's in "strawberrry" without R1, and it was actually very fast, but with R1 it took a lot longer.
oh interesting. Thank you for that
If someone has sensitive data they don’t want on an online platform, they can likely afford a few GPUs to run the model locally-so the privacy argument doesn’t hold up.
Open source does promote provably private security practices. I wish OpenAI was actually open.
I ran distilled R1 locally on my laptop and the first thing I asked it was, “What song by David Bowie was his biggest hit?”. DeepSeek gave me a bogus song name and a load of hallucinations around years it was released etc. ChatGPT gave an acceptable answer. I downloaded a larger distilled R1 Llama. This time it gave an acceptable detailed answer, only it decided that the song had a co-writer - quite random, but on the surface plausible, but incorrect. Interesting…. 🤷
Don't be fooled by the "longer" thinking of DeepSeek-R1; it depends on compute availability. I would not expect DeepSeek to have enormous compute available for all the users now trying R1. FOR FREE.
Also, at the end you mention the privacy comparison, which is great too. Quick question: don't LLMs keep evolving and improving? So if you download it locally, would it be as effective, or would you have to keep downloading newer versions? And what is the minimum version you think is worth downloading?
You have to keep downloading newer versions to get improvements. Once you download one, it is frozen at whatever "intelligence" it has and will never improve. Models do not evolve on their own, at least not yet. Maybe in the future, someone will come out with one that does, but for now, no.
@@weevil601 Good to know, thanks!
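To make the frozen-weights point concrete: with an ollama-style setup, "updating" just means re-pulling the tag when the publisher ships a newer build. A minimal sketch, assuming the `ollama` Python client (pip install ollama) and a local ollama server:

```python
# Downloaded weights are frozen files; they only change when you
# explicitly re-pull the tag from the registry.
import ollama

ollama.pull("deepseek-r1:8b")  # re-downloads only if a newer build exists
print(ollama.list())           # inspect which models are installed right now
```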
Finally OPENAI vs DEEPSEEK 👑
The battle will be legendary 💪🏻
Okay, looks like ChatGPT was already updated after your video :) because it gave the correct answer:
Version 4o quickly answered:
"9.9 is bigger than 9.11. This is because 9.9 is equivalent to 9.90, and when comparing 9.90 vs. 9.11, 9.90 is clearly larger."
And model o1 took 39 seconds to respond but also gave the correct answer:
"Mathematically, 9.9 (which you can think of as 9.90) is larger than 9.11 (9.110). The integer part (9) is the same, but .90 is greater than .11, so 9.9 is the bigger number."
The iOS DeepSeek app only asks for an email, while ChatGPT insists I give them my phone number - this right there is a win for DeepSeek hands down!
And yes, I'm going to download it and run locally, when I stop playing and need some real work done or have to process any of my data.
An interesting note is that DeepSeek had an error in the misspelled strawberry prompt. It had the right number of letters for that one, but then it went on to say: "Note that the correct spelling of the fruit is "strawberry" (with 2 "r"s)"
It means 2 consecutive "r"s at the end, not 3. It doesn't mean 2 in total.
The final sentence is a correction to help the user spell the word, not a count of the correct number of r's.
I was actually working on an AI model that uses reasoning as opposed to language. Thank you DeepSeek for saving me time. Reasoning is extremely efficient; it's why humans do it. The smart ones, anyway.
What a time to be alive !!! :)
Great video as usual. I saw your video on the privacy settings of DeepSeek, and it worried me a little. What are your thoughts in that regard? Not sure if I should just accept that or stick with the free ChatGPT. Any thoughts?
I think the fact that you can’t opt out of training data is a big problem with deepseek. Mainly if you use it for work. The other privacy issues are personal preference
@ Thanks
How did you compare the two privacy statements? Uploading both files to the tool?
I have seen so many YouTubers testing the model by asking the same question, "How many 'r's are there in 'strawberry'?" Is there a reason for that?
Yea, it was one of the key questions OpenAI used when they released this. LLMs have a hard time figuring that out, so the reasoning is supposed to help them think through it step by step.
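For the curious, the usual explanation is tokenization: the model sees token chunks, not letters. A quick illustrative sketch using OpenAI's tiktoken as a stand-in (pip install tiktoken); DeepSeek's own tokenizer differs, and the exact splits vary by tokenizer:

```python
# Why letter-counting is hard: the model receives token IDs rather than
# characters, so it cannot simply scan the word for r's.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for word in ["strawberry", "strawberrry"]:  # second one intentionally misspelled
    pieces = [enc.decode([t]) for t in enc.encode(word)]
    print(word, "->", pieces)
```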
Great comparison. In my case, o1 answered correctly that 9.9 is more than 9.11. Very weird. I tested Alibaba's new Qwen 2.5, and I really like it for generating free images.
Yea, I don't know why it sometimes gets it wrong. I've tried the Qwen website; I keep getting errors.
DeepSeek now has Janus Pro, which is an image generator.
Qwen 2.5 is not a very good choice at the moment; don't consider it.
These models pass:
Question 1: GPT 3.5, 4o, Sonnet 3.5, Deepseek V3, Qwen R1 32B
Question 2: GPT 3.5 (1/2), Sonnet 3.5 (1/2), Deepseek V3, Qwen R1 32B
Question 3: 4o, Sonnet 3.5, Deepseek V3, Qwen R1 32B
Question 4: 4o, Deepseek V3, Qwen R1 32B
Question 5: GPT 3.5, 4o, Sonnet 3.5, Deepseek V3, Qwen R1 32B
Question 6: GPT 3.5, 4o, Sonnet 3.5, Deepseek V3, Qwen R1 32B
Question 7: 4o, Deepseek V3, Qwen R1 32B
Question 8: 4o, Deepseek V3, Qwen R1 32B
Deepseek V3 did not get a single question wrong, so the reasoning was not even necessary for those questions.
Qwen R1 is the distilled version of Deepseek R1 at 32B.
Interesting - thanks for sharing
Question: if you are testing its reasoning, why not make the prompts as vague as possible and see which one stays closest to the prompt, or which one gives you a more relatable answer?
do you have an example I can use in my next test?
@SkillLeapAI 'make a detailed explanation on the most truly relevant topic of today'
9:01 You said large language models have trouble with decimals? Why?
LLMs sometimes struggle with decimal comparisons because they process numbers as text rather than actual numerical values.
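A two-line demo of that failure mode: treated as decimals, 9.9 is larger, and a model comparing them as text has to resist the reading where the trailing "11" looks bigger than "9":

```python
# As numbers, 9.9 (= 9.90) is greater than 9.11; compared as text
# fragments, the "11" can mislead a model into picking 9.11.
print(float("9.9") > float("9.11"))     # True
print(max(["9.11", "9.9"], key=float))  # 9.9
```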
In short, DeepSeek will show you, teach you, and let you validate the approach.
Locally executed deepseek-r1:32b passes all tests. q4-k-l gguf
Love your videos. Just started with AI stuff, running 8B locally thanks to you 😊 Is there any way to modify it and connect it to the internet? Like a Telegram bot API?
Thanks. I don't know a way to add web access to the local installs
Chrome plugin "Page Assist"
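One way to wire up the Telegram idea: poll the Telegram Bot API and forward each message to the local ollama REST endpoint. A rough sketch with no error handling; the token is a placeholder you'd get from @BotFather, and the model tag and localhost:11434 endpoint assume a default ollama install:

```python
import requests

BOT_TOKEN = "123456:REPLACE_ME"  # hypothetical placeholder token
TG = f"https://api.telegram.org/bot{BOT_TOKEN}"
OLLAMA = "http://localhost:11434/api/generate"  # default local ollama endpoint

def ask_local_model(prompt: str) -> str:
    """Send one prompt to the local model and return its full reply."""
    r = requests.post(OLLAMA, json={"model": "deepseek-r1:8b",
                                    "prompt": prompt, "stream": False})
    return r.json()["response"]

offset = 0
while True:
    # Long-poll Telegram for new messages.
    updates = requests.get(f"{TG}/getUpdates",
                           params={"offset": offset, "timeout": 30},
                           timeout=60).json()
    for u in updates.get("result", []):
        offset = u["update_id"] + 1
        msg = u.get("message") or {}
        if "text" in msg:
            reply = ask_local_model(msg["text"])
            # Telegram caps messages at 4096 chars; truncate to be safe.
            requests.post(f"{TG}/sendMessage",
                          json={"chat_id": msg["chat"]["id"],
                                "text": reply[:4000]})
```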
Good comparison, however, you didn't mention how DeepSeek said the original spelling of "Strawberry" has 2 r's.
More than the answer, we need the route to that answer; that's the only thing that will develop you. I think in that regard DeepSeek is doing a far better job.
Additionally, on your comment about search with R1: couldn't you do the same thing with an MCP server for access to web search? I know this can be done in VS Code. With so many pre-built MCP servers available, a Perplexity clone can be built entirely by AI itself.
Hi, thanks for the awesome video! I just wonder if PROJECT DIGITS is good enough to run DeepSeek locally; I believe it is. It seems like perfect timing.
I'm a big fan of ChatGPT because, at this point, it's so personalized that it almost feels like I'm in the ChatGPT ecosystem; very practical. But I believe that at some point, having something like DeepSeek, the highest/top model, locally might be the answer.
What do you think?
Yep I think so. And you can stack those for more power. So maybe it will take more than one to run the full model locally
@@SkillLeapAI Yeah, so I was curious enough to dig deeper... and if DeepSeek V3 and R1 (if that’s correct) have 671 billion parameters, and two Project DIGITS units can handle 405 billion, then yeah… we’d probably need three to run it smoothly.
Wow, that's a crazy amount of money. Imagine dropping that kind of cash just to get one model running properly. I might have to wait for things to get cheaper; who knows, maybe next year they'll release something new, and this setup will drop to 2000 dollars… or at least that's wishful thinking! 😆
Asked DeepSeek: imagine there's a straight line🤔🤔; cut it only once so that it results in three pieces. The answer is folding it into a U shape and cutting in the middle. The reasoning capacity is great👍👍👍
I tried it on ChatGPT o1, and it gave a correct answer as follows: "Hence, 9.9 is bigger than 9.11 when considering their values as decimal numbers."
I haven't been lucky with R1 so far.
I gave it a block of 40 instructions to craft a Python script to automate my workflow. I gave up on the 11th attempt. o1-mini did it on the first try. This has been happening since day one. Pity, I thought this was the time I would stop wasting my $20 on the overpriced GPT. I was rooting for DeepSeek, but it seems it's not ready for prime time. I will give it one more year.
One thing to NOTE: give R1 the instruction to "limit your thinking/processing of the problem to less than 60 seconds" or similar, and compare its answer to ChatGPT's. I think you will be very surprised by the results.
This is great. I'd love to see a comparison to o3 or o3-mini.
I love these AI comparisons. Gotham Chess just did a ChatGPT vs. Deepseek chess game, with um, interesting results.
link?
@Ian.TF123 It's on Bilibili.
Very interesting. Thanks!
Can’t generate pics or vids tho can it?
DeepSeek can't but it has another model called Janus Pro that can. ChatGPT has DALLE when you use GPT-4o for images
Deepseek is a gift to humanity
Glory to China
DeepSeek R1 can't answer a single question on my phone but works well on my computer. Also, DeepSeek cannot see pictures and only sees the text, but ChatGPT can see other things inside an image too.
So ChatGPT is quite a bit more useful for me right now.
You are talking about multi-modal large models. DS's V3 and R1 are LLMs. DeepSeek has released a new open-source model for image recognition and text-to-image generation.
So I have to code it myself to make it recognize images?
I am using a MacBook M1 with 8GB RAM, and I want to use DeepSeek R1 for my studies. I think seeing the thinking process behind any answer is great for learning faster, but I faced many performance issues: the 1.5B model isn't able to answer my questions as well as the bigger models do, even the 7B, but with those I face too many performance issues. So how do I use a quantized version for a faster and smoother experience?
The 1.5B model might be sufficient for answering simple questions, but if you have a MacBook with more RAM, it's recommended to use at least the 14B R1 model to achieve 80% of the full R1 version's performance. -- translated by R1
😊 How can I invest in DeepSeek? I want to buy their shares, even if it's a small amount.
The young boss of DeepSeek recently stated that DeepSeek will not go public, to maintain business purity, and will not raise funds.
DeepSeek was much faster than GPT, but lately, because people are using it so much, it has become even slower than GPT.
In the ocean of knowledge, we are like drifting ships, trying to steer by the compass of reason, yet constantly blown off course by storms of emotion. Human cognition is like a maze built of countless mirrors, each reflecting a different truth, yet contradicting one another and impossible to pin down. We pursue truth, only to find that truth itself is a many-faceted prism, each face shining with a different color, and what we see is merely an insignificant corner of it. Science, philosophy, art: these crystallizations of human wisdom all seem to approach the ultimate answer from different angles, yet the closer we get, the more the answer's boundary stretches into the distance, as if it can never be touched. We try to string the world's fragments together with chains of logic, only to discover that the chains of logic are themselves made of fragments. Perhaps true wisdom lies not in finding the ultimate answer, but in learning to navigate uncertainty, to find balance amid contradiction, and to piece the fragments into our own incomplete picture.
You could install them locally with ollama and add a simple web frontend to use them. Will it be as fast? Well, I am running pretty beefy gear, and I am not disappointed.
Should I cancel ChatGPT then?
Not from what I've seen. These benchmark questions are ridiculous. How often do you actually ask those in real life? I use Chat to help with my writing, as well as for asking questions about fixing my car and home repairs. It’s like having a friend or expert to consult with, where the conversation flows back and forth.
I tried DS, but I prefer Chat for its tone and writing style. I think both are valuable, not just one. I use DS when I want a second opinion, and likewise, some DS users may want Chat as a second opinion. I don't see Chat going away; both programs are good.
In real life, you don't ask how many "r's" are in "strawberry." You ask how to fix your car, complete a home repair, estimate a roofing cost, negotiate a contract, or find a recipe with a shopping list. I had Chat correct my writing before posting this.
If the performance is on par, why waste your money to feed the greedy?😢
Technically, they could let ChatGPT think as much as they want, but they limited it to save a bit more money, just like "millionaires" do, always squeezing out every penny.
Try Gemini flash thinking
Yea it's pretty good. On my list of videos
I like the concept of a local database with all my data kept local, including the search engine. Clearly Microsoft could do this, and now they can do it with DeepSeek being open source.
Watching this video, I am pretty sure the way DeepSeek reasons is exactly the way a Chinese student reasons, because this is how we were trained in school.
I could literally train an open-source LLM from Hugging Face to "learn like a Chinese person" and tell it to output the steps. It's almost as if that's what they did.
@@CANT_FEAR_YOUR_OWN_WORLD impress me plz
DeepSeek is using older GPU models, and the performance so far is outstanding.
In the coming years, when the Chinese have their own GPUs, they will become even more advanced.
Right now the USA has banned 🚫 exports of high-tech GPU devices to China.
Maybe DeepSeek needs more time for translation, since it is a Chinese app, or they need to train on more English questions. The Chinese version is much faster.
How did o1 miss one (early on)? Your question was "Which animals (and how many of each) did you buy?" You never asked how many variations fit this answer. o1 provided a correct answer.🙄
It is from a test. The answer key had two answers. Every other model gave me two answers as well. Pretty straightforward. Omitting another correct answer makes it incorrect, or at least only half correct.
GPT 4
The number 9.9 is larger than 9.11.
Here's why:
9.9 is the same as 9.90 when written with two decimal places.
9.11 is already in two decimal places.
When comparing 9.90 and 9.11, 9.90 is larger.
So, 9.9 > 9.11.
Which proves that you do not need to finish first if you think right, just as my math teacher taught us a long time ago.
The speed measurement doesn't mean anything because they are hosted on different servers and with different user loads!
Dave's Garage tried to run the full version on his $50k Apple Mac and it output about 1 word per second, or one every couple of seconds. Dylan The Technogizguy tried to run the 5GB 8-billion-parameter Llama model on a Dell 730 server with 28-core CPUs, 64GB RAM, and 2 Nvidia P40 accelerator cards, and it ran pretty blazingly fast. I think 5-10GB, maybe 20GB, models are the max for last-gen servers. The question is, would you spend a couple of grand for an uncensored AI? It depends on what you want to ask or generate. 😂
Seems DeepSeek was patterned on how women think. :)
7:02 That wasn't written correctly. It's supposed to say the waiter gave each guest $1 back and pocketed $2: 3 × $14 = $42, and $42 + $2 = $44, so one dollar seems to be missing. (The waiter's $2 is already inside the $42, so adding it again is the fallacy.)
Unusual for serious business: the DeepSeek model is not able to perform a simple logistics calculation for placing identical pallets in a container.
Look man, just look into your heart: do you really think ChatGPT will keep your data safe????
This is incredible. I could only download and run the 7B, and it's not super intelligent. The 671B is amazing, similar to o1, which is incredible.
Deepseek is free
Regarding the animal question, you didn't say you wanted two answers, so I think ChatGPT o1 solved that one for you.
That was the test. There were two correct answers. I was testing to see if it could answer with all correct answers. The actual test is called "math question with a twist".
@SkillLeapAI You said find a solution, and o1 found a solution. Ask it for all possible solutions, and if it doesn't give you both, then you've got a point.
O1 says you are right, so who am I to argue? 🙂
Short Answer: If the question implicitly wants all possible solutions (which is common in puzzle-like wordings), then providing only one solution would be incomplete. However, if someone only asked "Which animals (and how many of each) did you buy?" without any further clarification, then listing any valid combination is technically correct, but not fully so if multiple solutions exist and you don't mention the others.
---
Longer Reasoning
1. Is one valid solution "correct"?
In a narrow sense, yes, because it meets the stated conditions (4 animals totaling $140). If you only show one set of animals that works, you have not actually made a false statement. The question as phrased does not explicitly say, "List all possible sets of animals."
2. Why do puzzles often imply all solutions?
Typical puzzle or riddle wordings strongly imply the desire for uniqueness. When more than one solution exists, puzzle-setters often highlight that there is more than one correct answer, or phrase it to draw out the multiple solutions.
3. How do we interpret the question?
If the puzzle question appears in a textbook or puzzle forum, the expectation is usually to find all integer solutions, unless it explicitly says "Give an example of a solution." Since we do indeed have two valid solutions, it's common practice to provide both if the puzzle says "Which animals did you buy?", implying a request for a definitive answer.
4. Conclusion
o1's answer is technically correct if it gave a valid set of numbers. However, in puzzle contexts, many people would say it's incomplete because it ignores the existence of another valid solution. In everyday conversation, though, if the question is simply "Which animals could you buy…?", giving a correct set might suffice.
Ultimately, it depends on the context:
Puzzle or academic setting: Usually best to give all solutions.
Everyday scenario: A single valid answer might be enough.
There is one more possibility: 1 horse, 1 goat, 2 chickens.
Is 9.11 bigger than 9.9?
It depends on context. If it's a version number 9.11 could be regarded as bigger than 9.9
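That context-dependence is easy to demonstrate: the same strings order differently as decimals versus dotted version numbers (the helper below is just an illustration):

```python
# As decimals, 9.11 < 9.9; as version numbers, 9.11 comes after 9.9.
print(float("9.11") > float("9.9"))  # False

def as_version(s: str) -> tuple[int, ...]:
    """Interpret '9.11' as version parts (9, 11) for component-wise comparison."""
    return tuple(int(part) for part in s.split("."))

print(as_version("9.11") > as_version("9.9"))  # True
```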
It's the answer for Sam Altman, who said third-world countries can't create AI like ChatGPT.
File upload & internet search make DS far more useful/powerful.
Uhhh ChatGPT has both of those. And ChatGPT lets you upload videos and images for analysis whereas deepseek doesn't.
o1 doesn't have search and has more limited file upload. Like, you can't upload a CSV, even though you can do both with 4o.
@@henrythegreatamerican8136 O1 has no search and only accept image upload, basically useless
@damien2198 Then I must have used a different version of ChatGPT. I was uploading numerous images of homes yesterday for analysis, to see if the rooftops were metal or some other material.
No version of DeepSeek let me do that.
Yea 4o can do all that with vision capabilities and it's really good. You don't need o1 or DeepSeek for that
DeepSeek is easier to install or use
First Huawei, second naval power, third TikTok/Tencent, fourth DeepSeek. Good luck blocking everything...
The light bulbs, ha: all those left turned on are perfect squares: 1, 4, 9, 16, 25...
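That matches the classic puzzle: pass k toggles every k-th bulb, and only perfect squares have an odd number of divisors, so only they end up on. A quick simulation to check:

```python
# Simulate 100 bulbs: pass k toggles bulbs k, 2k, 3k, ...
n = 100
on = [False] * (n + 1)
for k in range(1, n + 1):
    for i in range(k, n + 1, k):
        on[i] = not on[i]
print([i for i in range(1, n + 1) if on[i]])
# -> [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```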
The most important test is which one can design an FTL warp engine the fastest and with the highest FTL speed.
I will just keep using my own brain
$20 a month for a tractor vs. free for a bulldozer. The choice is yours.
OpenAI: my money is gone!
I am able to run DeepSeek R1 7B on my HP Intel i5 PC and it's working great. The installation process:
Step 1: Download Ollama for your PC (it's free).
Step 2: Run: ollama run deepseek-r1:7b
On the first run it installs the model locally, and you're good to go.
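Once those steps work, the same local model can also be called from code. A minimal sketch, assuming the `ollama` Python client package (pip install ollama) alongside the CLI:

```python
# Chat with the locally installed model programmatically.
import ollama

resp = ollama.chat(
    model="deepseek-r1:7b",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(resp["message"]["content"])
```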
Next time, try asking a question related to the previous question, and you will figure out where DeepSeek really stands. It might be useful one day; for now, though, it's trash.
Good idea. I'll do that in the next test.
I really enjoy DeepSeek and how it shows its reasoning. But I still find the answers it provides misleading about 40% of the time.
I found that when I asked it to reason through subject areas where I know the answers, it will still make up answers at an alarming rate.
Other models will do the same, but if I call it out, they will admit they were making things up, and if they don't know how to answer, they will tell me.
DeepSeek tends to admit it as well, but keeps providing incorrect information.
The steps are amazing, but I'm not convinced overall.
I tend to use its reasoning to work through problems and then ask other models.
Anyone else have similar experiences?