What are your thoughts on the new Replit iOS app? Does that just allow you to build web-based apps on your iPhone? How is that different from your method?
Nah, I tried this same thing a week ago on my Redmi Note 13 Pro+ but the token output speed was so terrible I gave up
Gonna get bumped from the App Store since it's using third-party content without disclosure
Phenomenal
You don't deserve to earn anything. AI girlfriend, pffffff, you mean an addiction, manipulation, and value-extraction bot. R1 distill is not R1. Clickbait title. Blocked and not giving you another chance.
Distilled R1 is R1
@chidorirasenganz No, they are much more the base model (not DeepSeek) + extra steps (DeepSeek). I've tried them all and keep the two most useful ones for certain uses: the distills are impressive, but they are not R1. Each of them feels more like its base model than anything else, because that's mostly what they are. The video creator knows this and is clickbaiting; if they had said R1-Distill on mobile there would be no issue with the title. They're a scummy grifter, proven by their own mouth: an AI girlfriend grifter, disgusting.
@ Nope, the distilled versions are still R1. Every content creator I've seen has mentioned that it's the distilled versions running locally. DeepSeek themselves consider the distilled versions part of the DeepSeek family, which is why they released them. From my own testing they do behave similarly; obviously higher-parameter models are better quality, but the core behavior is the same.
@ They're almost all jumping on the hype train (though I've seen a few creators be more honest about the distinction): everyone who does, I've blocked and have no interest in hearing from further; those who refrain, I have more respect for.
I take your point up to a point: DeepSeek did put effort into developing these distills, and they do reason and think things out in a way the original models don't, but each one still acts much more like its base model than like R1 proper, and each shows the same limitations and behaviour as its base model.
R1-Llama-70B codes and writes like Llama and has very short outputs, as a lot of its context gets spent on thinking.
R1-Qwen-32B codes and writes like Qwen-32B.
And so on.
These are impressive and deserve the R1-Distill name, but the way these creators pull this bait and switch is disingenuous and thirsty. No thanks.
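For anyone who wants to check this for themselves, the model configs on Hugging Face make the distinction concrete: the distills declare their base architectures, while R1 proper is a different architecture entirely. A minimal sketch in Python, assuming the transformers library is installed and there is network access to Hugging Face (the deepseek-ai repo names are the official uploads; only the small config files are downloaded, not the weights):

from transformers import AutoConfig

# The distills report their *base* model architectures in their configs.
distill_llama = AutoConfig.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Llama-70B")
distill_qwen = AutoConfig.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-32B")
print(distill_llama.model_type)  # "llama" -- a Llama under the hood
print(distill_qwen.model_type)   # "qwen2" -- a Qwen under the hood

# R1 proper is a different architecture (the DeepSeek-V3 MoE).
# trust_remote_code may be needed on older transformers versions.
r1 = AutoConfig.from_pretrained("deepseek-ai/DeepSeek-R1", trust_remote_code=True)
print(r1.model_type)  # "deepseek_v3"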
Another banger!