Cool review. However based on my experience, it's best to use creative mode for Bing when trying to get quality outputs.
Good tip!
Thank you for the quick video! I will certainly be on the look out for your next videos.
Thank you!
Interesting demonstration. I've seen demonstrations like this in the past few days, and it's funny that everyone is complaining about the speed of GPT-4. I'm reminded of the story of the tortoise and the hare. As I recall, the tortoise won the race.
That’s right))
Fantastic comparison, thank you
Thank you!
GPT-4 doesn't even accept 25,000 words of input like they advertise. Not even 3,000.
Sadly yes
Wonderful analysis as usual!
Thank you!
Could you redo this comparison after a week? So it would be 1 month in.
That way we could see whether they improve or not.
I think a great idea would be to use THE SAME tasks with the same prompts (word for word!) so we could compare apples to apples ;-)
Good idea. Noted!
Cool comparison. If you keep comparing these models, try giving them roles. For example: "Act as an expert in [x]." I think they will answer more correctly that way, though I could be wrong. Also try asking them some hard problems in physics, chemistry, etc.
All valid points! Thank you!
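For readers testing this via an API rather than the chat UI, the role tip above amounts to prepending a system message before the actual question. A minimal sketch of that pattern (the helper name and example strings are mine, not from the thread; chat-style APIs like OpenAI's accept a message list shaped like this):

```python
def role_prompt(role: str, question: str) -> list[dict]:
    """Build a chat message list that assigns the model an expert role
    (the "Act as an expert in [x]" tip) before asking the question."""
    return [
        # System message sets the persona for the whole conversation.
        {"role": "system", "content": f"Act as an expert in {role}."},
        # User message carries the actual task or question.
        {"role": "user", "content": question},
    ]

messages = role_prompt("physics", "Why does entropy increase in a closed system?")
```

The resulting list can be passed as the `messages` argument of a chat-completion call; whether the role framing actually improves accuracy is, as the commenter says, worth testing rather than assuming.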
Good tests. It seems Bing is somewhere between 3.5 and 4, but it should be 4. Did you test Bing on the Precise tone? I'm wondering if that would make a difference.
No, not yet
The word is "write" btw. 😃
Yes, I'd figured)
Chippity
))
Which one is best and easiest for blog writing? Still GPT-3.5?
I'd say GPT-4 has an edge over GPT-3.5 as far as blog writing goes.
Bing seems worse than ChatGPT Plus with GPT-4. Perhaps it's been modified to run faster, or having internet search turned on is making its output less accurate. Or maybe Microsoft is just lying to us about it currently running on GPT-4.
I have mixed feelings about Bing as well)
@@wordsatscale Hmm, I did a test with a chess game. On Precise it finished the game making only one illegal move, but Balanced couldn't make it past move 8. Although it could also be that I used Bing Unchained for the Balanced test and the Edge sidebar for the other one.
Great stuff! Maybe you should let every tool give you at least 2 outputs. It's easier to compare that way...
Good idea, I'll give it a try.