Forgot exactly how I phrased the question to chatgpt, but it involved splitters with 1 input and 1-3 outputs where the outputs were equally divided from the input, and mergers with 1 output and 1-3 inputs where the output is equal to the sum of the inputs and how to construct a sequence of splitters and mergers to end up with two outputs with 80% and 20% of the original source input. It said to split the first input with a 1-2 splitter (50%/50%) and split one of those outputs with a 1-2 splitter (25%/25% of the original input). then merge the remaining 50% with the 25% and that will equal the requested 80% output, and the remaining 25% equals the requested 20%. In summary, it thinks that 50% plus 25% equals 80% and 25% equals 20%. So, yeah, ChatGPT can't math.
I have found that WolframGPT is better at Maths than the standard ChatGPT. That said, both often require additional prompting to achieve desired results. Then again it could just be human error on the prompter side. Cheers! ^.^
When I did this practice paper I got the same thing as you for question 2, about how the difference either increases or stays the same at each point, so if it is 1 at 2024 then it must be 1 at 1, because each term is an integer. But I was confused when looking at the mark scheme so wasn't sure it was right. Thanks for explaining the mark scheme, it helped me understand it better😁👍
the reason why the "diagram" it drew was such complete nonsense is that the model for generating images is completely different from the one used to generate text, so all the image generator is given is a text description from the gpt model, none of the text model's internal "understanding" of the question
Hey Tom, Thanks for the video. BUT! ;) OpenAI will release the full o1 “reasoning model” soon. Currently we only have access to the preview. It would be fantastic to see a professional mathematician evaluate its performance, ideally with a problem set that isn’t on the internet or in books or has only been put on the internet recently.
Hi Dr Tom! I am a fan from Singapore and I would like to inform you about the Singapore A level, which is known to be harder than the IB HL maths paper. I think that you would probably enjoy doing that paper
@@ramunasstulga8264 If you can't even attempt both papers before making a valid criticism, you shouldn't comment. I find it baffling that someone like you is even watching this video.
First, you did not use o1-preview, which would have been more interesting. Also, 0-shot is not how humans work: in a university exam I don't have to give my first thought, I typically have an hour or more per question. So do the tests with o1, and give natural rather than leading critique, e.g. just try to convince it to take another look at its own arguments. That would be simple to do and might give better results. When will they become properly better? No answer there from me. Great channel Tom!
@@TomRocksMathsThat's fair, but you should definitely do a video where you compare the two. Or see if you can beat 4o1 at chemistry, physics, or some other subject that isn't your speciality
ChatGPT's "proof" for the first question was wrong. According to its "step 2" the answer should be 2^2 * 3^7, which is false. Also, the possible positions are wrong, since the n-th letter can be in any of the 1,...,n+1 positions (except for the last letter, which is in 1,...,n). I have no idea why it needed to mention Young tableaux in step 3, since even if they are related somehow, this is a simple problem that doesn't need anything advanced in order to solve it. Finally, in step 3, without a proper explanation it suddenly only gives 2 possibilities for each letter, and for some reason the letter 'L' has either 2 or 3 possible positions. Even if you ignore this and give 2 positions for each letter, you get 2^9 and not the 2^8 correct answer.
@@FlavioGaming You are right. Without knowledge of the positions of the previous letters, the n-th letter can be in 1,...,n+1 positions (which seems to be what step 2 meant to say), and after you assume that you have placed the previous n-1 letters, you only have 2 possibilities (which should have been step 4), except for the last letter, which only has one possibility. In any case, while somehow ChatGPT managed to give the final right answer, everything in between seems like guesses. This sort of proof is something I would expect from a student who saw the answer before the exam, didn't understand it, and tried to rewrite it from memory; which, granted, is how ChatGPT works. I would not call this "mathematics", and I have yet to see ChatGPT answer any math problem correctly, unless it is very standard and elementary and it's the type of question you expect to see in basic math textbooks.
On the unreliable typist: I feel ChatGPT mischaracterized the possible positions of letters (or I'm drastically misunderstanding the rules). In steps 1 & 2, it said 'S' can only be in the last 2 positions. But 'SOLYMPIAD' appears to fit the rules ('S' is way early, and each other letter is 1 late). It may have gotten the right answer, but its argument was flawed.
On the polygon: Step 1 is false. Convex with equal sides does *not* imply the vertices lie on a circle. A rhombus is convex and all its sides are equal, but the vertices are *not* on a circle. This alone invalidates all the rest of the proof, which relies on the circle. Also, in step 4 part 'n=5', the 3 diagonals do *not* form an equilateral triangle. Nor would it "ensure … a regular polygon" if they did.
The important thing to remember is that LLM "AI" isn't *reasoning* at all. It's just stringing a series of tokens together based on how often it has seen those words strung together before, plus a bit of randomness.
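The rhombus counterexample above is easy to verify numerically; here's a quick sketch with made-up coordinates (the specific vertices are just an illustration):

```python
import math

# A rhombus: all four sides equal, yet not cyclic unless it is a square.
verts = [(2, 0), (0, 1), (-2, 0), (0, -1)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

sides = [dist(verts[i], verts[(i + 1) % 4]) for i in range(4)]
assert all(abs(s - sides[0]) < 1e-12 for s in sides)  # equilateral

# By symmetry the only candidate circumcentre is the origin, but the
# vertices sit at distances 1 and 2 from it, so they are not concyclic.
radii = sorted({round(dist(v, (0, 0)), 12) for v in verts})
print(radii)  # [1.0, 2.0]
```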
@20:00 : as a Euclidean geometry addict... I like the diagram a lot ;-) "Power of a point" on its own is of course not an accurate name; I only know the "power of a point with respect to a circle". "Please draw a sheep". I tried some months ago to get a generated picture but no way. They must be taught the compass and ruler techniques.
Question 1, step 2, doesn't "SOLYMPIAD" fit the constraints? Same with "OLSYMPIAD"? At least some cases with a letter appearing at least 2 slots early seem omitted. D should not be restricted to 7 or later and S should be allowed before 8, for instance.
I use it to study some theoretical stuff, it’s good at explaining theorems and definitions and producing good examples. It can even prove things pretty well, because it’s not actually doing the proof but just taking it from its database and pasting it to you. Of course it makes mistakes now and then, but they’re so dumb they’re easy to catch. And by “using it” i mean: as i’m studying from my notes or books i ask from time to time chatgpt things in order to understand the mind bogglingly abstract stuff i have to understand. Overall it has proven to be a fairly useful tool to learn math, at least for me, as i’m pursuing my bachelor degree in math.
Yes, you can copy the diagram. That's no issue at all. You can just copy and paste an image into ChatGPT (or click the image button) as long as you have access to full 4o; after a few prompts it'll downgrade to 3 unless you pay accordingly, though.
It works with the math needed for engineering but not what we come up with in physics (theory): we rely on concepts freshly come out of pure math and a mathematician's mind. How about showing ChatGPT o1 getting literally tossed in the storm with G(n) 😅 20:05 Yeah, Sabine and the rest don't like it either. Mathos is pretty decent compared with o1 but also fails later.
Whenever I am asking chatgpt for help with math questions, I almost always notice something went wrong. So I guess a tool made for helping me get the question right, made me help myself in knowing when things are wrong instead :3 (this makes sense in my head okay)
One time I asked it what an abelian group was as a test and it told me all abelian groups are dihedral groups and spit out a bunch of complete nonsense math and i was so sad because at first i saw all the math and thought it might be actually real
@@TomRocksMaths Makes sense. :) However, like I mentioned earlier, given the title of the video, it might be apt to include a discussion on o1 or drawn a comparison with o1. Damn. I sound like a reviewer now. 😅
@@IsZomg This is probably the most accurate way to think about ChatGPT... Yes, its answers seem like it tries to remember and rewrite an answer that it had seen before but never understood; however, as someone who has checked many math exams, that's not too far from the average student's answers. So in this sense, ChatGPT does exactly what it's supposed to do: answer like a human...
@@eofirdavid o1 scores 120 on IQ tests which means it's beating more than half of humans now. There's no reason to think the progress will stop either.
@@IsZomg Then create a reply video demonstrating that o1 can solve all the math problems that ChatGPT failed at in Tom's video. This would be very instructive for the Tom Rocks Maths audience
@35:00 : what??? an equilateral convex polygon is NOT necessarily inscribable in a circle. A typical equilateral kite (i.e. a rhombus that is not a square) cannot be inscribed in a circle. ......only one word is coming to my mind: "bluff" ;-)
Power of a point is very much a real theorem. It is involved, for example, in Geometrical Inversion through a circle. ChatGPT completely misapplied it though, and the formula it provided has nothing to do with it.
It feels like ChatGPT is still quite a way from being able to solve these sorts of problems. I made a similar video recently putting it up against this year's (2024) Senior Maths Challenge and I found its results quite surprising! th-cam.com/video/crMeD37Q49U/w-d-xo.html
A while back I saw a "research" paper written by ChatGPT about an issue in game theory. It was absolute nonsense: the vocabulary and sentence structure were all OK, but the "logical steps" were all outright nonsense.
Comparing Gemini vs ChatGPT: for the time being Gemini is worse than ChatGPT. However, Gemini doesn't limit the number of questions you may ask, but ChatGPT does. That could be a decisive factor in the dominance of Gemini vs ChatGPT, depending upon how many of us start teaching Gemini or ChatGPT to do Maths properly. Do you want to be redundant? That is the main question!
These models don't do any background reasoning (essentially thinking before answering). Definitely recommend trying out o1-mini which does do this. Currently o1-mini does better at maths than o1-preview, but o1-preview has better general knowledge reasoning. o1 when it's finally released should be just downright better than o1-mini at everything including maths.
Highly recommend trying some of these out on that model :)
This uses ChatGPT 3 which is outdated. The latest free-tier model is ChatGPT 4o and the top model is o1. Both of these are much better at math than ChatGPT 3, which is TWO YEARS OLD now.
Chatgpt doing black magic instead of geometry.
It sees the world differently
@@asiamies9153 it doesn't see the world at all
@@delhatton that's still different from how humans see the world. 🙄
“Narn, flëmadoch, F’Tadn ygsorath, loqgawtygsdryr!”
Seems to be a Deep Language learning model…
ChatGPT invoked the Illuminati on the Geometry question 😂
the geometry drawing it produced had me gasping for air 🤣
I once asked ChatGPT to prove that π is irrational. It gave back the proof for √2, discussed the squaring-the-circle problem, and in the final conclusion wrote "hence π is irrational".
Wow, it independently (re)discovered the Chewbacca defence!
The problem is that ChatGPT, or any LLM, is not applying formal logic or arithmetic to a problem; instead they regurgitate a solution they tokenized from their training set, and try to morph the solution and the answer into the context of the question being asked. Therefore, just like a cheater, it can often give a correct result confidently because it has memorised that exact question; sometimes it can even substitute values into the result to appear to have calculated it, but in the end it's all smoke and mirrors. It didn't do the math, it didn't think through the problem. That's why LLMs crumble when never-before-seen questions get asked, because an LLM has no understanding, only memorisation. LLMs also crumble when irrelevant information is fed alongside the question, because the irrelevant information impacts the search space being looked at, so accuracy of recall is reduced.
LLMs do not think; they do not process information logically. Rather, they process input and throw out the most likely output, and use some value substitution in the result to appear to be answering your exact question.
LLMs cannot do mathematics; at best they can spit out likely solutions to your questions where similar or those exact questions and their solutions have been fed to them in their training set. An LLM knows everything and understands nothing.
I wish everyone understood this.
Try o1 brother
@@Eagle3302PL it's even in the name Large Language Model. I don't get how anyone thinks they have any understanding
New o1 model can 'show its work' and reason in multiple steps. If you think LLMs won't beat humans at math soon you are mistaken.
They _might_ process information logically, we actually don't know. Since they generate it word by word (or token by token), after enough training it might have learned some forms of logic because it turns out those are very good at predicting the next token in logical proofs. Logic is useful for many different proofs, just memorizing the answer is only useful for a single one (i.e. it would be trained out pretty quickly); this doesn't guarantee it knows logic, but it makes it plausible.
It is a common misconception that these programs work by searching the dataset, 3Blue1Brown has an excellent video series I would recommend that shows just how complex its underlying mechanics actually are.
I feel like ChatGPT may have taken your first message to be meant as a compliment rather than as a prompt that it should pretend to be you.
Question 3 the geometry one ends up much better when you give it the graph with the instructions. I tried it and got a much better result. To do this I used the snipping tool to make an image of both the question and the graph. Then I saved it to desktop as screenshot.jpg and dragged that into the ChatGPT window. It read them both fine.
after using snipping tool you can directly Ctrl C + Ctrl V in chat gpt
It becomes obvious that the language model is essentially a separate module to the image generator. I bet even if the solution had been flawlessly found, the drawing of a diagram would be completely bonkers
20:42 power of a point is a basic geometry theorem...
Cool video and all but are you aware of o1-mini and o1-preview???
yes of course. the plan here was to use the free version as it is what most people will have access to, so I wanted to warn them to be careful when using it.
@@TomRocksMaths 4o is the best 'free' model, not ChatGPT 3
Would love to know if you could test with the Stephen Wolfram add-in! To see how good the add-in makes ChatGPT at maths.
@@9madness9 are there plugins???
@@TomRocksMaths ChatGPT 3 is TWO YEARS OLD now lol you didn't do your research.
Hey Dr Crawford - thank you for your video and insight. It seems that you are using the basic GPT-4 model to solve these BMO questions. There is a different model ChatGPT provides called o1-preview, which is specifically designed for complex and advanced reasoning and for solving difficult mathematical questions like this. If you use the o1-preview model, it takes much longer (sometimes even more than a minute) before giving you a response, and it thinks in a far deeper way than the model you have used here. With that model, I've tried feeding it questions 5 and 6 on the BMO1 paper, and it could solve them perfectly.
Therefore I would encourage you to try again with that specific model. I do believe that you have to have a ChatGPT subscription to access that model, but I think that they are going to release a free version of it. Anyway, thank you so much!
P.S. It would have been better if you had simply uploaded a screenshot of the question, as diagrams could have been included, and ChatGPT would be able to read the question from the image (probably better than it being retyped with a different syntax)
ChatGPT has, on multiple occasions, told me that odd numbers were even and vice versa
On an unrelated note, I remember sitting this BMO paper last year and struggling but enjoying it. I recently started uni in Canada and have been training for putnam, and now I’m looking back at these questions both cringing and being proud at how much I’ve grown in just a year, how I’ve gone from finding these questions tough, to now being able to solve them without much struggle. This is what I love about maths, how I can always continue with just some practice. P.s, great video Tom, really enjoyed watching it.
25:43 Obviously it just used the power of a point theorem
Tom is not locked in. Every uni maths student knows if you take a picture of the question it will always give you the right answer
Facts, but for some reason it has a really hard time with topology
19:35 LOL, the diagram drawing looks like equal parts 1) M.C.Escher, 2) Indian Head test pattern from the early days of television, 3) steampunk, 4) Vitruvian Man. It's all sorts of incorrect, its confidence is a barrel of laughs, but it's lovely to look at and fun to contemplate how ChatGPT may have come up with that. My favorite part is the top center A with the additional 'side shield' A, and honorable mention to how the matchsticks of the equilateral triangle have three-dimensional depth and shadows.
In Q1 there seems to be an error in chatgpt's explanation. For example, it says "D" must be in position 7, 8 or 9 but "DOLYMPIAS" is a valid misspelling...every letter is one late, except for D (early) and S (correct).
Yeah, its mistaken assumption that a letter must be within one position of its original location (in either direction) actually limits the number of possible permutations to 55.
So, it definitely didn't properly pair up its explanation with its answer.
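For what it's worth, the 55 figure checks out by brute force; here's a quick sketch of the count under that (mistaken) within-one-position reading:

```python
from itertools import permutations

# Count orderings of 9 distinct letters in which every letter ends up
# within one position of where it started (the mistaken reading above).
count = sum(
    all(abs(perm[j] - j) <= 1 for j in range(9))
    for perm in permutations(range(9))
)
print(count)  # 55
```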
You caught it first. I'm surprised GPT could pull out the correct number while misunderstanding the terms along the way.
@@coopergates9680 it literally did 2^9=2^8=256
Why didn’t you use OpenAI’s new model o1, which is designed for these types of problems? Would be interesting to see the performance of o1-preview with these.
I've only watched up to the first question so far, but I came up with a different solution that's interesting enough to mention. Another way to think of the problem is dividing the characters into 2 subsets, one of them is the characters that were typed 1 late and the other is all the others that weren't. If all the characters are different, these 2 sets give enough information to reconstruct any possible spellings. Therefore, we just need to count all the ways to make these subsets.
We know that in an n character long word the last character can never be 1 late. So we only have n-1 letters left to work with. [n-1 choose k] will give us a k sized subset. To get all possible subsets, we need to sum up for every case of k.
[sum(k = 0..n-1)(n-1 choose k)]
This is the n-1st row of Pascal's triangle. We know that the sum of the n-1st row of it is 2^(n-1). The word "OLYMPIADS" has 9 letters, therefore the answer is 2^8 which is 256.
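The 2^8 = 256 answer can also be confirmed by brute force over all 9! orderings; a quick illustrative sketch:

```python
from itertools import permutations

WORD = "OLYMPIADS"  # 9 distinct letters
n = len(WORD)

# Rule: each letter may appear at most one position late; any amount
# early is fine. perm.index(i) is the position where letter i lands.
total = sum(
    all(perm.index(i) <= i + 1 for i in range(n))
    for perm in permutations(range(n))
)
print(total)  # 256
```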
You can add the shape through the attachment icon in the left corner of the prompt box: just take a screenshot of the figure and attach it like that.
Power of a point is actually real and while I’m usually bad in geometry at olympiads, some of my friends used it several times.
ok. to use this at the point Z you need a line through Z which cuts the circle in two points. Say this circle is centered at B with radius BA. You can conclude:
ZX*ZY = ZW*ZW' (W and W' are the two points where line ZB meets the circle)
Since ZW = ZB-BA and ZW' = ZB+BA
we get
ZX*ZY = ZB*ZB - BA*BA.
This looks almost like what chatGPT wrote. I'd give it a pass 😂
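The power-of-a-point identity is easy to sanity-check numerically; here's a minimal sketch, with a made-up circle (centre B at the origin, radius 2) and external point Z = (5, 1):

```python
import math

def secant_product(cx, cy, r, zx, zy, dx, dy):
    """Distance product ZX*ZY for a secant from Z in direction (dx, dy)."""
    n = math.hypot(dx, dy)
    dx, dy = dx / n, dy / n            # unit direction
    px, py = zx - cx, zy - cy          # shift circle to the origin
    # |P + t d|^2 = r^2  ->  t^2 + 2(P.d)t + (|P|^2 - r^2) = 0
    b = 2 * (px * dx + py * dy)
    c = px * px + py * py - r * r
    disc = b * b - 4 * c
    if disc < 0:
        raise ValueError("line misses the circle")
    t1 = (-b + math.sqrt(disc)) / 2
    t2 = (-b - math.sqrt(disc)) / 2
    return abs(t1) * abs(t2)

# Power of Z w.r.t. the circle: ZB^2 - BA^2 = 25 + 1 - 4 = 22,
# and it comes out the same for every secant direction.
power = 5**2 + 1**2 - 2**2
for d in [(-5.0, -1.0), (-1.0, -0.3)]:
    assert abs(secant_product(0, 0, 2, 5, 1, *d) - power) < 1e-9
print(power)  # 22
```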
I asked the o1-preview the geometry question and it approached the problem very analytically: by setting up a coordinate system, finding the points X, Y and Z by solving systems of equations for the lines and the circle, and finally showing BZ is perpendicular to AC using vectors and the dot product BZ⋅AC. I can't fully evaluate whether it's perfect, but I still think its solution was way better.
@@bigbluespike5645 How does it do on the other problems that ChatGPT made a mess of?
@bornach I didn't test yet, but i'll update you when i do
You did not use the latest o1 series of models. I was trying to search for where you mention which model you were using, but couldn't find an exact response: you have cropped the part where it mentions the model and also haven't shown the footage of the answer generation, which would give away the model you were testing. o1 cannot generate images, which was the giveaway.
Do the same tests with o1-preview.
yeah, this is all moot if it's not o1, which is OpenAI's first reasoning model; all the other OpenAI LLMs are just level-1 chatbots by OpenAI's own definition
The second problem reminds me of Euclid's algorithm, and most notably the Chinese usage of such a method. If you've got 2 vessels of volumes a and b, the lowest volume which you can measure is the greatest common divisor of a and b.
By using this logic and the fact that any a_i and a_{i-1} are some linear combinations of a_0 and a_1, it follows that gcd(a_i, a_{i-1}) = gcd(a_0, a_1), hence if they are consecutive they both have gcd of 1.
Our jobs are safe, ChatGPT can’t do maths at all.
20:00 I think it might have misinterpreted "two apart" as "has two dots in between", despite the question being very clear about this.
0:24 maybe i'm too panicky but the mere mention of the MAT sends a shiver down my spine... hoping for a non-disaster tomorrow 🙏
@2:41 There seems to be a problem in your definition of the problem.
It is said a letter can appear at most one position late, but as many positions early as you wish.
So the third letter Y can also appear in first position, am I wrong ?
Like MATHS can be typed TMASH where you see 3rd letter appears in 1st position ...
7:09 The second rule is incorrectly rewritten: what ChatGPT wrote is just the rewritten first rule in flipped order and negated. The correct rewritten second rule would be:
a_i - a_{i-1} = 2 * (a_{i-2} - a_{i-1})
this is impossible if a_i and a_{i-1} are consecutive (2n can never be ±1), so by induction the first case must hold for all i
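That rearrangement is easy to verify mechanically. A tiny check iterating the second rule a_i = 2*a_{i-2} - a_{i-1} from arbitrary starting values:

```python
# Arbitrary starting values, for illustration only.
a = [3, 7]
for i in range(2, 12):
    a.append(2 * a[i - 2] - a[i - 1])   # the second rule

# The correct rewrite: a_i - a_{i-1} == 2 * (a_{i-2} - a_{i-1}),
# i.e. the new difference is twice the old one, so it is always even
# and can never equal +1 or -1.
for i in range(2, 12):
    assert a[i] - a[i - 1] == 2 * (a[i - 2] - a[i - 1])
print("identity holds for all i")
```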
I love the way you approach problems. You should try a Sri Lankan A/L paper.
That image had me dying for 2 minutes straight😂😂
19:50 - “Wull there’s yer prablem!”
Thanks for coming to my school (I was one of the year 10s), the presentation was very interesting!
the new QwQ 32B preview model amazingly does better at these hardcore math questions than bigger models; it outputs over 3k tokens for each question as it tries to brute-force a solution
Me who also can’t do math: “Maybe I am ChatGPT”
Hi Tom, I really like the video! 😀 If you want to see a good performance in logic and reasoning from GPT, using GPT o1-preview seems to be the best at the moment. It would be interesting to repeat the same with that more advanced model. It thinks before answering, which allows it to check its own answers before saying the first thing that comes to mind.
ooooo this is exactly the kind of thing I was thinking it needs!
I forget exactly how I phrased the question to ChatGPT, but it involved splitters with 1 input and 1-3 outputs (the outputs equally divided from the input), and mergers with 1 output and 1-3 inputs (the output equal to the sum of the inputs), and how to construct a sequence of splitters and mergers to end up with two outputs carrying 80% and 20% of the original input. It said to split the input with a 1-2 splitter (50%/50%), then split one of those outputs with another 1-2 splitter (25%/25% of the original input), then merge the remaining 50% with one 25% to get the requested 80% output, with the remaining 25% being the requested 20%.
In summary, it thinks that 50% plus 25% equals 80% and 25% equals 20%. So, yeah, ChatGPT can't math.
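The arithmetic error is easy to pin down with exact fractions. In fact, if I'm reasoning correctly, without feedback loops every output of such a network is a sum of fractions whose denominators divide 2^a * 3^b, so an exact 80/20 split is impossible in the first place. A quick check:

```python
from fractions import Fraction

# ChatGPT's proposed construction: a 1-2 split, then split one branch again.
half = Fraction(1, 2)
quarter = half / 2

merged = half + quarter          # ChatGPT claims this is 80%
print(merged)                    # 3/4, i.e. 75%, not 80%
assert merged != Fraction(4, 5)
assert quarter != Fraction(1, 5)  # and the leftover 25% is not 20%

# 4/5 in lowest terms has denominator 5; splitters only ever divide by
# 2 or 3, so (absent feedback loops) every achievable fraction has a
# denominator of the form 2^a * 3^b, which can never be 5.
assert Fraction(4, 5).denominator == 5
```

Using `Fraction` keeps everything exact, so there is no floating-point wiggle room to hide behind.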
I have found that WolframGPT is better at maths than the standard ChatGPT. That said, both often require additional prompting to achieve the desired results. Then again, it could just be human error on the prompter's side. Cheers! ^.^
When I did this practice paper I got the same thing as you for question 2: the difference either increases or stays the same at each step, so if it is 1 at term 2024 then it must be 1 at term 1, because each term is an integer. But I was confused when looking at the mark scheme, so I wasn't sure it was right. Thanks for explaining the mark scheme, it helped me understand it better 😁👍
the reason why the "diagram" it drew was such complete nonsense is that the model for generating images is completely different from the one used to generate text, so all the image generator is given is a text description from the gpt model, none of the text model's internal "understanding" of the question
Hi Dr, can you do a lecture series on any math course you like, similar to the ones you did on calculus and linear algebra?
Chatgpt got really creative in geometry🤣
Hey Tom,
Thanks for the video. BUT! ;) OpenAI will release the full o1 “reasoning model” soon. Currently we only have access to the preview.
It would be fantastic to see a professional mathematician evaluate its performance, ideally with a problem set that isn’t on the internet or in books or has only been put on the internet recently.
Hi Dr Tom! I am a fan from Singapore and I would like to inform you about the Singapore A level, which is known to be harder than the IB HL maths paper. I think that you would probably enjoy doing that paper
Nah jee advanced is easier than IB HL, lil bro 💀
@@ramunasstulga8264 If you haven't even done both papers, you can't make a valid criticism and shouldn't comment. I find it baffling that someone like you is even watching this video.
First, you did not use o1-preview, which would have been more interesting. Also, 0-shot is not how humans work: in a university exam I don't have to give my first thought; you typically have an hour or more per question. So test with o1 and give neutral critique rather than leading prompts; for example, just try to convince it to examine its own arguments. That would be simple to do. Would it give better results? No answer from me there, but when will they become better? Great channel, Tom!
Try using GPT-o1-preview - It uses advanced reasoning.
Yeah I was gonna say that he will be shocked
I found it works a lot better when you upload a photo of the question. Just use a screenshot or snipping tool and paste.
1. The Prompt is definitely upgradable 😂
2. You should use the new preview model o1; it is quite a lot better than 4o.
Try using GPT o1 preview. Unlike GPT 4o, it excels at STEM questions due to its "advance reasoning"
a british man saying math instead of maths is a thing i never thought id see in my life
Is this GPT-4o or o1?
I think this is a relevant question. O1 is probably a better "thinker"
the plan here was to use the free version as it is what most people will have access to, so I wanted to warn them to be careful when using it.
@@TomRocksMaths That's fair, but you should definitely do a video where you compare the two. Or see if you can beat o1 at chemistry, physics, or some other subject that isn't your speciality.
ChatGPT's "proof" for the first question was wrong.
According to its "step 2" the answer should be 2^2 * 3^7, which is false. Also, the possible positions are wrong, since the n-th letter can be in any of positions 1,...,n+1 (except for the last letter, which can be in 1,...,n). I have no idea why it needed to mention Young tableaux in step 3; even if they are related somehow, this is a simple problem that doesn't need anything advanced to solve.
Finally, in step 3, without a proper explanation it suddenly only gives 2 possibilities for each letter, and for some reason the letter 'L' has either 2 or 3 possible positions. Even if you ignore this, and give 2 positions for each letter you get 2^9 and not the 2^8 correct answer.
L has 2 possibilities AFTER placing O in position 1 or 2. Y has 2 possibilities AFTER placing the letters O and L in their positions and so on...
For the last letter S, it should've said that since we've placed 8 letters in 8 positions, there's only 1 place left for S.
@@FlavioGaming You are right. Without knowledge of the positions of the previous letters, the n-th letter can be in positions 1,...,n+1 (which seems to be what step 2 meant to say), and once you assume the previous n-1 letters are placed, you only have 2 possibilities (which should have been step 4), except for the last letter, which has only one possibility.
In any case, while chatGPT somehow managed to give the right final answer, everything in between seems like guesses. This sort of proof is something I would expect from a student who saw the answer before the exam, didn't understand it, and tried to rewrite it from memory; which, granted, is how chatGPT works. I would not call this "mathematics", and I have yet to see chatGPT answer any math problem correctly unless it is very standard and elementary and it's the type of question you expect to see in basic math textbooks.
On the unreliable typist: I feel ChatGPT mischaracterized the possible positions of letters (or I'm drastically misunderstanding the rules). In steps 1 & 2, it said 'S' can only be in the last 2 positions. But 'SOLYMPIAD' appears to fit the rules ('S' is way early, and each other letter is 1 late). It may have gotten the right answer, but its argument was flawed.
On the polygon: Step 1 is false. Convex with equal sides does *not* imply the vertices lie on a circle. A rhombus is convex and all its sides are equal, but the vertices are *not* on a circle. This alone invalidates all the rest of the proof, which relies on the circle. Also, in step 4 part 'n=5', the 3 diagonals do *not* form an equilateral triangle. Nor would it "ensure … a regular polygon" if they did.
The important thing to remember is that LLM "AI" isn't *reasoning* at all. It's just stringing a series of tokens together based on how often it has seen those words strung together before, plus a bit of randomness.
@20:00: as a Euclidean geometry addict... I like the diagram a lot ;-) "Power of a point" is of course not an accurate name; I only know the "power of a point with respect to a circle". "Please draw a sheep": I tried some months ago to get a generated picture, but no way. They must be taught compass-and-ruler techniques.
You can input images into the prompt by copy-pasting a screenshot or placing an attachment onto the prompt :)
The way chatgpt makes Tom wonder is the same way I make my maths teacher wonder about my answers in exams 😂
Looked for something like this after I got frustrated it was getting algebra and calculus wrong 😅 Thanks for the vid!
Question 1, step 2, doesn't "SOLYMPIAD" fit the constraints? Same with "OLSYMPIAD"? At least some cases with a letter appearing at least 2 slots early seem omitted. D should not be restricted to 7 or later and S should be allowed before 8, for instance.
I use it to study some theoretical stuff, it’s good at explaining theorems and definitions and producing good examples. It can even prove things pretty well, because it’s not actually doing the proof but just taking it from its database and pasting it to you. Of course it makes mistakes now and then, but they’re so dumb they’re easy to catch. And by “using it” i mean: as i’m studying from my notes or books i ask from time to time chatgpt things in order to understand the mind bogglingly abstract stuff i have to understand. Overall it has proven to be a fairly useful tool to learn math, at least for me, as i’m pursuing my bachelor degree in math.
Yes, you can copy the diagram; that's no issue at all. You can just copy and paste an image into ChatGPT (or click the image button) as long as you have access to full 4o, though after a few prompts it'll downgrade you to 3 unless you pay accordingly.
I think Numberphile did a video on the Power of the Point Theorem and the counterintuitive properties of the Perpenuncle.
You can UPLOAD PDFS
I have this test coming up on the 20th, these questions are brutal.
Tom can you try the TMUA entrance exam paper 1 and 2
Yesss I've been asking this too
It works with the math needed for engineering but not what we come up with in Physics (theory)-we do rely on concepts freshly come out of pure math and a mathematician’s mind.
How about showing ChatGPT o1 getting literally tossed in the storm with G(n) 😅 20:05 Yeah, Sabine and the rest don't like it either.
Mathos is pretty decent compared with o1 but also fails later.
You didn't use the newest model, o1, which is significantly better at mathematics in every way.
Whenever I am asking chatgpt for help with math questions, I almost always notice something went wrong. So I guess a tool made for helping me get the question right, made me help myself in knowing when things are wrong instead :3 (this makes sense in my head okay)
ChatGPT can't draw a simple cardioid, even after I gave it the formula.
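For anyone wanting to try it themselves: the usual polar form is r = a(1 - cos θ). A minimal sketch (assuming that is the formula meant) that generates the curve's points, with the plotting step omitted, and checks the expected range of r:

```python
import math

a = 1.0  # scale parameter, chosen arbitrarily
thetas = [2 * math.pi * k / 1000 for k in range(1001)]

# Cardioid in polar form: r = a * (1 - cos(theta)); convert to (x, y).
rs = [a * (1 - math.cos(t)) for t in thetas]
points = [(r * math.cos(t), r * math.sin(t)) for r, t in zip(rs, thetas)]

print(min(rs), max(rs))  # r runs from 0 (the cusp) up to 2a
```

Feeding `points` to any plotting library should produce the familiar heart shape with its cusp at the origin.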
Would have been much more interesting with o1 preview model instead of 4o
One time I asked it what an abelian group was as a test and it told me all abelian groups are dihedral groups and spit out a bunch of complete nonsense math and i was so sad because at first i saw all the math and thought it might be actually real
I tried getting Gemini to draw its 'solution' to 3) and it responded with a link to the solutions XD
Did you consider trying their o1 model
Have you ever had a question that used the arc length of equal sized circles to solve the question?
Can you repeat the exercise with o1-preview?
You should try o1 Preview, which is supposed to be very good at logic and reasoning.
O1? O1 preview?
This oversight makes no sense, is he not aware these models exist???
the plan here was to use the free version as it is what most people will have access to, so I wanted to warn them to be careful when using it.
@@TomRocksMaths Makes sense. :) However, like I mentioned earlier, given the title of the video, it might be apt to include a discussion on o1 or drawn a comparison with o1.
Damn. I sound like a reviewer now. 😅
The newlines might confuse it slightly
7:08 Rewriting a(i)=2 a(i-2) - a(i-1) as a(i-2) - a(i-1) = a(i-1) - a(i) doesn't look right.
"Cirbmcircle and Perpenimctle" is the title of a lost work by Rabelais. Unfortunately we will never read it because it is lost.
The new Chat GPT o1 doesn't have this problem, it can reason about math on the research level
Not if it's an LLM
@@mattschoolfield4776 lol then neither can 80% of humans
@@IsZomg This is probably the most accurate way to think about chatGPT... Yes, its answers seem like it tries to remember and rewrite an answer it had seen before but never understood; however, as someone who has marked many math exams, I can say that's not too far from the average student's answers. So in this sense, chatGPT does exactly what it's supposed to do: answer like a human...
@@eofirdavid o1 scores 120 on IQ tests which means it's beating more than half of humans now. There's no reason to think the progress will stop either.
@@IsZomg Then create a reply video demonstrating that o1 can solve all the math problems that ChatGPT failed at in Tom's video. This would be very instructive for the Tom Rocks Maths audience
I once asked chatGPT to use my algorithm to find the number of primes from 1 to 173. It said 4086.99.
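For the record, the correct count is 40 (and it is an integer, of course). A quick sieve of Eratosthenes confirms it:

```python
def prime_count(n):
    """Count primes in [1, n] with a sieve of Eratosthenes."""
    if n < 2:
        return 0
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Cross off multiples of p starting at p*p.
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return sum(sieve)

print(prime_count(173))  # 40 (173 itself is the 40th prime)
```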
The way ChatGPT answers questions makes us laugh, but it does have the capability to understand hints and solve problems.
@35:00: what??? An equilateral convex polygon is NOT necessarily inscribable in a circle. A typical equilateral kite, i.e. a non-square rhombus, cannot be inscribed in a circle. ...only one word comes to mind: "bluff" ;-)
Power of a point is very much a real theorem. It is involved, for example, in Geometrical Inversion through a circle. ChatGPT completely misapplied it though, and the formula it provided has nothing to do with it.
It feels like ChatGPT is still quite a way from being able to solve these sorts of problems. I made a similar video recently putting it up against this year's (2024) Senior Maths Challenge and I found its results quite surprising! th-cam.com/video/crMeD37Q49U/w-d-xo.html
Hi @TomRocksMaths, will you upload celeberation video of 200k subscribers?
it's coming before the end of the year :)
A while back I saw a "research" paper written by ChatGPT about an issue in game theory. It was absolute nonsense: the vocabulary and sentence structure were all OK, but the "logical steps" were outright nonsense.
power of points is a niche set of tricks for olympiads
now time for the o1-mini model if you have premium
7:21 - Surely this is meta. How can AI deal with maths involving Ai?
Comparing Gemini vs ChatGPT:
for the time being Gemini is worse than ChatGPT. However, Gemini doesn't limit the number of questions you may ask, while ChatGPT does. That could be a decisive factor in the dominance of Gemini vs ChatGPT, depending upon how many of us start teaching Gemini or ChatGPT to do maths properly. Do you want to be redundant? That is the main question!
No AI is as good as humans when it comes to Mathematics.
AIs failed so many prompts I've given them
Math so hard even chatgpt ain't mathing
The obvious problem confusing ChatGPT is your use of terms involving letters "a_i" when describing the equations :)
Do you ever mark igcse papers?
Hi, please try Singapore's H2 math and H2 further math A level papers
Khan's Academy explains the "power of a point theorem".
Next time I'll have to argue with anything, I'll say it's "by the power of a point theorem!!"
Thanks chatgpt!!!