DeepSeek’s Privacy Panic, OpenAI’s Hypocrisy, and Authors vs AI: The AI Argument EP42
- Published on 8 Feb 2025
- DeepSeek’s open-source AI model is causing chaos, and everyone’s freaking out. Privacy watchdogs are banning it, OpenAI is crying foul over stolen data, and Justin? He thinks it’s all a bit ridiculous. To him, the backlash is classic geopolitical fear-mongering. The fight over AI dominance isn’t just about better models; it’s about who can convince you their version of control is safer.
There are some valid reasons to worry though, especially when it comes to corporations accidentally handing the Chinese their trade secrets on a silver platter. Frank and Justin may not agree on everything, but they both think companies need to be very careful about using DeepSeek.com.
And if you thought that was spicy, wait until they break down OpenAI’s hilarious hypocrisy over data theft, DeepSeek’s blunder of leaving user data wide open for the world to see, and Justin’s cheeky plan to turn Europe into the AI capital of the world with a clever copyright fix.
From bold tech predictions to battles over creativity and innovation, here’s what else they tackle:
00:40 Will AGI make NVIDIA the most valuable company on Earth?
04:29 Is DeepSeek a data privacy nightmare or overhyped fear?
14:45 Is OpenAI’s outrage over DeepSeek’s data theft hypocritical?
17:46 Can Europe’s AI thrive with copyright carrots, not sticks?
22:30 Will AI like Devin ever replace human software developers?
29:01 Are authors fighting a losing battle against AI giants?
► SUBSCRIBE
Don't forget to subscribe to our channel to stay updated on all things marketing and AI.
► STRATEGIC AND CREATIVE AI FOR SMALL BIZ MARKETING
For my full insights, be sure to subscribe to my emails here:
www.frankandma...
► LINKS TO CONTENT WE DISCUSSED
Irish watchdog contacts DeepSeek amid data concerns
www.rte.ie/new...
Dario Amodei: On DeepSeek and Export Controls
darioamodei.co...
Guess who left a database wide open, exposing chat logs, API keys, and more? Yup, DeepSeek
www.theregiste...
OpenAI has evidence that its models helped train China’s DeepSeek
www.theverge.c...
The "First AI Software Engineer" Is Bungling the Vast Majority of Tasks It's Asked to Do
futurism.com/f...
Authors strike back against AI stealing their books as licensing startup Created by Humans raises $5.5 million seed round
fortune.com/20...
Human Authored Certification
authorsguild.o...
► CONNECT WITH US
For more in-depth discussions, connect with Justin and Frank on LinkedIn.
Justin: / justincollery
Frank: / frankprendergast
► YOUR INPUT
Should countries ban AI models like DeepSeek over privacy concerns, or is that an overreaction?
DeepSeek R1 reveals two things: 1. You can achieve top-tier performance without American tech, namely NVIDIA chips with CUDA. 2. You can achieve top-tier AI research without US-trained experts.
This will lead to disruption of the monopoly pricing model the US tech market currently enjoys.
I'm interested to see how it plays out. Right now the privacy concerns around DeepSeek just keep developing, but that doesn't take away from the points you outlined. So how long before a similar open-source model with better guardrails and privacy appears?
Terms of service don't mean a lot. You can put literally anything you want in your TOS; South Park had an episode about this. What matters is what's actually legal and enforceable.
So my website visitors don't actually owe me 100k for subbing to my mailing list?
Good point, and let's face it: it would be fun to see OpenAI try to enforce the TOS with China claiming fair use and the transformative nature of the process.
NVIDIA is not alone in creating chips. They're first, but others are also developing AI hardware.
NVIDIA A100 GPU can consume up to 400 watts. The human brain consumes 20 watts. Don't take investment advice from these people.
You should definitely and absolutely not take investment advice from these two people.
Thanks for the inside info guys, you are my new financial advisors. Love the show as always
ha ha ha ha... pleeeeease don't say that :D