Llama 3.1 8B Local AI Tested: iPhone vs Mac Running Ollama!
- Published Feb 5, 2025
- In this epic Ollama vs Private LLM showdown, we test Private LLM on an iPhone 15 Pro Max against Ollama on a powerful 64GB M4 Max MacBook Pro. Both are running the same model, Meta Llama 3.1 8B, but with a twist:
Private LLM uses cutting-edge 3-bit OmniQuant quantization.
Ollama relies on 4-bit RTN quantization.
The prompts are identical, the settings are the same (temperature 0.7, top-p 0.95, system prompt: "You are a helpful AI assistant"), but the results? You’ll be shocked!
Watch as we test:
1️⃣ Reasoning skills: Can both solutions handle tricky logic?
2️⃣ Logical consistency: How well do they stick to facts?
3️⃣ Creativity: Do they ace a classic riddle or ramble off course?
Will the iPhone prove mightier than the Mac? Or does sheer hardware power win the day? Find out in this head-to-head comparison that showcases the future of local AI.
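For readers who want to reproduce the Ollama side of this setup, the settings above map onto Ollama's local HTTP API. A minimal sketch follows; the prompt text is our own placeholder, not necessarily one used in the video, and the `llama3.1:8b` tag is assumed to pull Ollama's default 4-bit quantized build:

```python
import json

# Request payload for Ollama's /api/generate endpoint, mirroring the
# settings stated in the video: temperature 0.7, top-p 0.95, and the
# system prompt "You are a helpful AI assistant".
payload = {
    "model": "llama3.1:8b",  # assumed tag; Ollama's default is a 4-bit quant
    "system": "You are a helpful AI assistant",
    "prompt": "Which weighs more: a pound of feathers or a pound of bricks?",
    "options": {
        "temperature": 0.7,
        "top_p": 0.95,
    },
    "stream": False,
}

# With Ollama running locally, POST this to http://localhost:11434/api/generate
print(json.dumps(payload, indent=2))
```

Private LLM exposes no equivalent scripting surface on iOS, so on the iPhone side the same settings are applied in the app's UI.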
🌐 Connect with Us:
👉 Website: privatellm.ai
👉 Discord: / discord
👉 X (Twitter): / private_llm
👉 LinkedIn: / private-llm
👉 Reddit: / privatellm
👍 Like and subscribe for more AI comparisons, reviews, and tips! Let us know in the comments: Which AI solution do you prefer?
#Ollama #PrivateLLM #Llama3 #iPhone #MacBookPro #ai #aitools
Make a video on how to have an offline AI chat on a 4GB RAM Android phone
We are working with Qualcomm to bring this to the Snapdragon platform.
x.com/qualcomm/status/1849875602456670708?s=46
@PrivateLLM But I don't have a Snapdragon
@PrivateLLM Just on Snapdragon 8 Elite CPUs, or also on older architectures like the Snapdragon 8 Gen 2?
We've learned the hard way on the Apple platform that supporting older hardware is a recipe for negative reviews on the App Store. However, we'll see what we can do; it's still too early to discuss it publicly.
Are you planning to release a Windows version?
🔜 th-cam.com/video/kxtrQsDlMgk/w-d-xo.html