I can only think of one other case of someone writing “Wow!” in the margin and some people think that was aliens, so your method must’ve been pretty unusual 😂
@@jonasdaverio9369 Sorry! Got my terminology mixed up. It's supposed to be the characteristic equation. Or is it the same thing? It's been 20 years. In short, substitute a_{n}=cr^n and find the roots. I came up with the recurrence relation P_{n}=P_{n-1}+3n-2, P_{1}=1. The associated homogeneous equation is P_{n}=P_{n-1}. Its solutions are of the form P^{(h)}_{n}=\alpha·1^n. For the particular solution I assumed P_{n}=an^2+bn+c, ended up with the equation an^2+bn+c=a(n-1)^2+b(n-1)+c+3n-2, simplified to (-2a+3)n+(a-b-2)=0, got a=1.5, b=-0.5 and finally P^{(p)}_{n}=1.5n^2-0.5n. P_{n}=P^{(h)}_{n}+P^{(p)}_{n}=1.5n^2-0.5n+\alpha·1^n. P_{1}=1 gives \alpha=0.
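A quick numeric sanity check of that closed form against the recurrence (a Python sketch; the function name is mine):

```python
# Closed form derived above: P_n = 1.5n^2 - 0.5n
def P(n):
    return 1.5 * n**2 - 0.5 * n

# It should satisfy P_n = P_{n-1} + 3n - 2 with P_1 = 1
assert P(1) == 1
for n in range(2, 50):
    assert P(n) == P(n - 1) + 3 * n - 2
print([int(P(n)) for n in range(1, 6)])  # → [1, 5, 12, 22, 35]
```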
I was "taught" this in 5th grade (New York public school), where the teacher simply asked us to determine the formula for pentagonal numbers. We did it. Best math teacher ever. HERE'S THE KEY: He didn't tell us it was a tough problem.
This looks suspiciously like a discretised version of a Taylor series. Taking one order of difference is like taking a derivative. And just like how the n-th derivative of a degree n polynomial is a constant, the n-th order difference of a degree n polynomial is also constant. In fact it seems that differentiating a polynomial and taking the difference between two values work in exactly the same way, same expanding the bracket, same cancellations, except in differentiation, you take the limit afterwards.
It's not really suspicious. It's one of the foundations of numerical analysis. You can look at the finite difference method, finite element method, Z-transform, discrete Fourier transform, etc. All of which are of critical importance in engineering and science (simulation, control theory, signal theory).
That's exactly what it is. Babbage's difference engine used (or would have used, had it been built in his lifetime) truncated power series of various special functions to calculate their values to high precision and thus tabulate them, primarily for navigation, which used a lot of trig functions for example. We know sin x is x - x^3/6 + x^5/120 - ... which is amenable to iteration using Newton's method of differences. Babbage's engine was capable of evaluating polynomials up to degree seven to 31 decimal digits of precision.
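For anyone curious how that tabulation works mechanically, here's a toy sketch of the idea (my own code, not Babbage's actual mechanism): the engine only ever adds, propagating each difference into the row above.

```python
def tabulate(initial_differences, steps):
    """Tabulate a polynomial with additions only, the way a difference
    engine does: start from [f(0), Df(0), D^2 f(0), ...] and repeatedly
    add each difference into the value above it."""
    diffs = list(initial_differences)
    values = []
    for _ in range(steps):
        values.append(diffs[0])
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return values

# Pentagonal numbers: f(1) = 1, first difference 4, constant second difference 3
print(tabulate([1, 4, 3], 6))  # → [1, 5, 12, 22, 35, 51]
```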
It actually is a derivative, because, in the end, it is a rate of growth. The first one is the first derivative, aka the rate of growth of our series. The second one is the second derivative, aka the rate of growth of the rate of growth, etc. In the example where the polynomial is a second degree one, it means that the second rate of growth is constant, meaning that the initial sequence keeps increasing with a "speed" that keeps increasing (accelerating) by a constant value. It becomes more obvious when you take the third derivative, which would be 0, i.e. the second derivative doesn't change, it's constant. The third differences are exactly that: zeros. It is exactly a rate of growth, simplified over every value of x.
I think like many others I've never heard anyone talk about this, but I feel slightly proud that I figured this method out myself during job interviews that had those find-the-pattern style questions
*This actually works for ANY of those number sequence questions!* By merely exercising the finite differences, then adding the next number up, you instantly PROVE that your calculated number IS in fact the next number following that logic! The person who asked you to figure out the missing number might've had a different logic in mind... but that's not your problem, is it now? 😌
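As a sketch of that idea in Python (my own function name; it assumes the differences really do bottom out at a constant):

```python
def next_term(seq):
    """Predict the next term: take successive differences until the
    bottom row is constant, then sum the last entry of every row."""
    rows = [list(seq)]
    while len(set(rows[-1])) > 1:
        row = rows[-1]
        rows.append([b - a for a, b in zip(row, row[1:])])
    return sum(row[-1] for row in rows)

print(next_term([1, 5, 12, 22]))    # pentagonal numbers → 35
print(next_term([0, 1, 4, 9, 16]))  # squares → 25
```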
John Conway and Richard Guy explain how this method can be further extended in their book _'The Book of Numbers'._ They share 2 of them, the first works for simple exponential functions as well (such as 4^n - 3^n), and the second works for sequences where each term is a sum of previous terms (like Fibonacci's sequence). For anyone interested: Wolfram has info on both of them, they're called _"Jackson's Difference Fan"_ and _"Quotient-Difference Table"._
As a numerical analyst, I'm glad to see this material being covered more by the edutainment community on TH-cam! This same idea can be used to find approximate solutions to partial differential equations! A finite difference of order N gives you an exact derivative of polynomials up to order N (this can be seen through Taylor remainders theorem), and so you use this technique to find polynomial or piecewise polynomial (splines) approximations of the solution to a PDE.
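A tiny illustration of that exactness property (a sketch; `central_diff` and `q` are just names I picked): the second-order central difference recovers the derivative of a quadratic exactly, no matter how coarse the step.

```python
def central_diff(f, x, h):
    # second-order central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

q = lambda x: 3 * x**2 - 5 * x + 1   # q'(x) = 6x - 5
# Exact for polynomials up to degree 2, even with a large step h
print(central_diff(q, 2.0, 0.5))     # → 7.0
```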
Yours is a peerless presentation platform. Triangular numbers get an inordinate amount of air time in the US. They now have a pentagonal competitor! Your reference to Babbage engine makes this video an upper percentile elite viewing experience!!
Even if a sequence cannot be represented by a polynomial, it is sometimes possible to find an explicit formula for the n-th term if you know a recurring pattern and use generating functions or the Z-transform (which also involves power series). For example, the Fibonacci sequence is such a sequence.
I think we were taught a variation on this in 4th grade or something - though I don't remember being taught it, I do remember using it extensively to figure out sequences back then and for a long time, often also trying it out on sequences it just strictly didn't work on. The difference, I believe, is that we didn't look at the polynomial expansion but rather, after reaching the constants, started backsolving from bottom to top by inserting the coefficients at each level and recalculating the differences. Really interesting to see it stems from Newton!
It's very interesting to see this here. I was never taught this but I somehow stumbled upon it myself; I didn't realize it was ever an official thing though. Edit: Turns out the way I did it in the past was different: I would have reached the constant of 3 and then antiderived it twice, then plugged in the numbers of the sequence to calculate the coefficients, whereas here you turn the differences into variables to work them out, which is probably more applicable than what I did. Edit2: Yeah, I need to watch the whole video before commenting. Love how it can be generalized, makes me wonder how many other tricks I have seen that can be expanded upon.
This makes me incredibly happy, in a sort of holistic mathematics kind of way. I wasn't very old when I noticed that the difference of two consecutive squares is all odd numbers, in order. Now, you may say I stumbled across a special case of binomial formula, but on the other hand, you can take it as a first stepping stone towards differentiation, and, as we see here, polynomial interpolation. (I really really enjoy these kinds of "yes, this is the garden path in The Shire, but look where it leads you" things.)
Nice! Finite differences are a very useful tool, although the way I learned it was from a very different perspective. We want to replace actual derivatives in a differential equation with something algebraic, which can be calculated, so we take the expansion of the Taylor series, truncate it at the term with the desired degree of derivative, then shove it to one side of the equation and everything else to the other side. If the local gradient is not very steep, we get a system of iterative equations which can be calculated, generally depending on the values of the neighboring nodes and the neighbors' neighbors, up to the degree of the derivative.
I was taught the finite difference method in university, in the subject of numerical analysis. We only used it to interpolate functions from a sample of known points of it; honestly, I did not find it interesting beyond that use. Later on, I realized this is, in fact, a very interesting tool with more uses than just routinely interpolating a function, as you have just shown :). As a *suggestion* on this method, I think you could talk about its use in time series. In the context of ARIMA models one has to apply finite differences to the series until it is (or is close enough to) a stationary series (stochastic process, i.e. make the series as uniform as possible). This is the first step to start the ARIMA modeling of the series, so you do not need to dive into the details of the model or the statistics themselves. There are even statistical tests to advise you when to stop differencing. So it is quite a different point of view.
We studied this in Year 10 GCSE Mathematics when determining expressions for quadratic sequences 🙂 I never looked into it for higher order polynomials, but it seemed intuitively obvious that it would work, and indeed it seems that the end result is just a discrete analogue of the Taylor series!
I also learned this in high school but never saw the generalized differences that James showed (on the left) as a way to determine the coefficients. I was taught to write and solve a three-variable system of linear equations for quadratic models. This seems much easier. Also, after taking Calculus, I always looked at the differences as being applications of the Mean Value Theorem between consecutive terms. Since the derivative of a polynomial decreases in degree by 1 each time, the differences ALWAYS resolved into a sequence of constants.
I recently did a project on the finite difference time domain method and got excited this was an explanation of that. This was way more interesting anyway hahah
It blew my mind when he showed that Babbage's difference engine worked by calculating finite differences. It makes so much sense, because it is an iterative process, which makes it perfect for a machine to do.
Always done a variation of this. What I've always done (for quadratics as an example) is taken the second derivative and divided it by 2 to get a, then I'd take away the ax^2 sequence from the sequence I'm evaluating, find the nth term of the new sequence I've made and add that to ax^2.
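That peel-off approach can be sketched in a few lines of Python (function name is mine; it assumes the sequence is a quadratic evaluated at x = 1, 2, 3, ...):

```python
def quadratic_from_sequence(seq):
    """The peel-off method for a*x^2 + b*x + c: the constant second
    difference is 2a; subtract the a*x^2 part and read b, c off the
    remaining linear sequence."""
    d2 = (seq[2] - seq[1]) - (seq[1] - seq[0])
    a = d2 / 2
    rest = [y - a * x * x for x, y in enumerate(seq, start=1)]
    b = rest[1] - rest[0]   # linear remainder: common difference is b
    c = rest[0] - b         # rest[0] = b*1 + c
    return a, b, c

print(quadratic_from_sequence([1, 5, 12, 22]))  # pentagonal → (1.5, -0.5, 0.0)
```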
I'm sure all of the mathematically inclined recognized this phenomenon when they were first introduced to x^2. 0,1,4,9,16 maps to a linear increase of differences, of course. It betrayed some deep seed of derivatives years before I learned them.
I've never gone as far as formalizing it, but I've frequently had a series of numbers where I found differences of the series (and sometimes differences of those). Sometimes I'd find a constant and sometimes I didn't. Never knew I might have been able to apply a formula to them. Interesting.
The formula for finding simplex numbers is one of the most useful things in math. Triangular numbers are used more than any other simplex numbers. If you ever wondered about the notation for the sum of consecutive natural numbers starting with one, it's just the triangular number for n (the highest value). simplex-2 (triangular numbers): Tₙ = C(n+1, 2) simplex-3 (tetrahedral numbers): Teₙ = C(n+2, 3) and so on. These are listed on Wikipedia under Triangular Numbers.
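Those binomial formulas are one-liners with `math.comb` (a quick sketch; `simplex` is my own name for it):

```python
from math import comb

def simplex(d, n):
    """n-th simplex number in dimension d: C(n + d - 1, d).
    d = 2 gives triangular numbers, d = 3 tetrahedral, and so on."""
    return comb(n + d - 1, d)

print([simplex(2, n) for n in range(1, 7)])  # triangular: [1, 3, 6, 10, 15, 21]
print([simplex(3, n) for n in range(1, 7)])  # tetrahedral: [1, 4, 10, 20, 35, 56]
```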
The finite difference method was invented by a French mathematician from Lyon, François de Regnauld, and published in 1670 by a friend of his, Gabriel Mouton. Leibniz had also discovered it, but he acknowledged the priority of Regnauld. Regnauld's purpose was interpolation, because Mouton needed it for astronomical computations.
2n^2 - n for the nth hexagonal numbers is my best guess. I remember being intrigued by this sort of thing when I started paying attention to the amount of exp needed to level a pokemon in the "medium fast" exp group. The formula for the level of a pokemon is the (floor of the) cube root of its exp value, so the amount of exp needed to get from level n to n+1 is (n+1)^3 - n^3. I wanted a general formula for the amount of experience I would need to get from level n to n+1 that didn't involve cubing two numbers and taking the difference every time, so I started looking into the first few terms of the sequence and specifically the difference between them. I quickly realized this was a simple quadratic, then I looked into the same sort of polynomial for other, higher powers of the form (m+1)^n - m^n. I found patterns that helped me get to n=12, but I never went as far as Newton to make so many more useful general equations.
When I was in highschool, the first time I encountered this kind of problem, I discovered for myself the pattern but written in summation notation. Never knew you do most series like this! my formula for a polygon with points P is: Sum from n=0 to x of ( (P-1) + (P-2)(n-1) ) Triangle have P = 3 Square have P = 4 Pentagon have P = 5 And so on… Looking back, it has more computations making it worse time-wise. Not bad for your typical highscool student.
I remember doing something similar when I was younger, looking at differences in sequences. Glad to see it's an actual thing, might read up on it so I can understand how it actually works.
Did you try working it out yourself, along with the video? I found my understanding came from expanding the sequence to the left and making a rough plot of the graph of all the levels of differences. Reading is one thing, but true understanding comes from doing it yourself.
The only time I've ever seen this before was in a (Martin Gardner?) book, where he used it to create a mathematical trick. The trick was to ask a friend who knew a bit of maths to think of any polynomial of the form ax^2 + bx + c. You'd then ask them to give you the values of the polynomial at x=0, x=1 and x=2. You'd then be able to name the polynomial using finite differences, which with a bit of practice was reasonably easy to do in your head for order 2 polynomials.
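The trick itself is just three subtractions; here it is as a Python sketch (the function name is mine):

```python
def recover_quadratic(f0, f1, f2):
    """Given f(0), f(1), f(2) for f(x) = a x^2 + b x + c,
    recover (a, b, c) via finite differences."""
    d1, d2 = f1 - f0, f2 - f1   # first differences
    dd = d2 - d1                # constant second difference = 2a
    a = dd / 2
    b = d1 - a                  # since f(1) - f(0) = a + b
    c = f0
    return a, b, c

print(recover_quadratic(1, 6, 17))  # f(x) = 3x^2 + 2x + 1 → (3.0, 2.0, 1)
```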
Oh my goodness. I stumbled upon this in middle school. Bored one day in math class, I was thinking about taking the difference of a series. I saw that all the differences were off by a very specific number. I tried it with a few other series and found the same thing happened. Thinking that's how all series worked, I promptly ignored the result. LOL I guess that's the difference between myself and Newton. He wanted to know why that happened. I just took it as an oddity of series and never thought of it again.
I knew you would mention Babbage :) This sort of thing is also used a lot in computer graphics to generate cubic spline surfaces etc. It's a very useful and satisfying set of mathematical tools!
Can you guys help me with a problem? I was actually wondering about something similar on my own and miraculously this video popped up for me. I've noticed (and I don't know how to prove it) that if you have a sequence let's say 1^6, 2^6, 3^6, ... , you'd get a constant on the 7-th step and for 1^n, 2^n, 3^n... you'd get a constant on the (n+1)-th step. Also by writing a little program I've noticed that for n=2 the constant is 2, for n=3 the constant is 6, for n=4 it's 24 and for any n the constant seems to be (the constant of n-1)*n, which is mindblowing to me. If anyone knows anything about that problem or if you can work it out on your own, please share it with me. Edit: Just realised that if this is true, then the constant difference would just be n!... THAT'S EVEN MORE AWESOME
Yes, this video is perfect for you. The sequence 1^n, 2^n, 3^n, 4^n, etc. comes from plugging the values x = 1, x = 2, x = 3, x = 4, etc. into the polynomial f(x) = x^n. You can then look at what James does in this video from 4:30 - 5:46. The nth order differences (which I imagine you are considering to be the (n+1)st step) will result in a constant sequence where the constant is n! times the leading coefficient of the polynomial. Indeed, since the leading coefficient of f(x) = x^n is just 1, you will, indeed, get n!. So yeah, this video completely explains the pattern you have been noticing. Quite astonishing how that works out :)
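The pattern is easy to verify by brute force (a quick sketch in Python):

```python
from math import factorial

def diff(seq):
    # one round of finite differences
    return [b - a for a, b in zip(seq, seq[1:])]

# n-th order differences of 1^n, 2^n, 3^n, ... are constantly n!
for n in range(1, 8):
    seq = [k**n for k in range(1, n + 10)]
    for _ in range(n):
        seq = diff(seq)
    assert all(v == factorial(n) for v in seq)
print("checked n = 1..7")
```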
I had found this exact phenomenon a long time ago as well and was blown away! Good job on finding it -- I proved it many years later using a bipartite graph matching argument to prove a combinatorial identity: sum_{i=0}^n [(-1)^(n-i)*(n choose i)*(k+i)^n] = n! for any k, n -- turns out it's a very similar problem. That said, I think this video gives a much simpler analytical argument at: th-cam.com/video/scQ51q_1nhw/w-d-xo.html. Here your sequence is just f(x) = a_n*x^n, with a_n = 1. Thus after the n-th difference, you're left with D^n(x^n) = n!
The basic differences method for sequences is in the textbooks here in Scotland which I discovered as a tutor but can’t remember from my own school days. Would have helped me a lot as I hated sequences haha
Here I was thinking that FDM is just for solving differential equations. But, it's not too crazy; kind of like how the definition of convergence can be applied to sequences and to functions, so we can apply a Finite Difference to sequences and DEs. Thanks for this
Even if I've never seen this application exactly, it all looks very familiar to me. In my day job, I do a lot with software that solves partial differential equations on a finite grid of points (as part of numerical weather prediction), and this is sort of the same idea.
0 1 6 15 28 ; 1 5 9 13 ; 4 4 4. So x + 4x(x-1)/2. Or if you clean it x + 2x(x-1), or x + 2x² - 2x, or 2x² - x. If we add another four at the end of the second differences, it makes 17 on the first differences, and then that makes the next number 45. :)
This feels like some sneaky calculus, since taking the difference of each step is like taking the difference in the y value at a fixed X interval, which is calculating the gradient. You go down n steps (where n is the highest power in the final equation) before getting a constant difference, which is the straight line
Newton's Forward Difference Formula 🔥🔥🔥 The formula makes sense when you think about starting from an order 0 polynomial and building up from there. Each time you make the polynomial 1 degree higher by raising the degree of all terms by 1, and then adding the next starting difference term.
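The formula can be sketched in a few lines of Python (names are mine): read the left edge of the difference table as the coefficients D^k f(0), then evaluate f(n) = sum_k D^k f(0) * C(n, k).

```python
from math import comb

def newton_forward(values):
    """Build an evaluator from the first few terms of a polynomial
    sequence, via Newton's forward difference formula."""
    # leading entries of each difference row are the coefficients D^k f(0)
    coeffs, row = [], list(values)
    while row:
        coeffs.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    return lambda n: sum(c * comb(n, k) for k, c in enumerate(coeffs))

# Pentagonal numbers starting at f(0) = 0: 0, 1, 5, 12
f = newton_forward([0, 1, 5, 12])
print([f(n) for n in range(8)])  # → [0, 1, 5, 12, 22, 35, 51, 70]
```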
This is fantastic, but my only question is when you truly have a "constant" sequence at n-th order: how do we know it will be constant for all time? I guess if we prove that the sequence must take on integer values of some polynomial, then the polynomial must have constant n-th order differences, but taking further differences from 3,3,3,... will give 0,0,0,... and if that ever becomes nonzero then we will never have constant differences, a contradiction. Is that the idea?
Eventually you will get down to one - the point of the downward triangle. That will automatically be constant with itself, and you will have a polynomial of the order of the number of items in the sequence. Note that you could extend the sequence with any one number in fact. All that would happen is that the coefficients would change.
Someone asked a similar question in another comment, and I thought about it and came up with some answers. "when you truly have a "constant" sequence at n-th order: how do we know it will be constant for all time?" Yeah, the problem is you don't. I was able to devise a method to come up with a sequence where the nth order differences have an initially constant string of length k, for any positive integers n and k you want, but the sequence doesn't satisfy a polynomial.

To do this, you can fix a polynomial f(x) of degree n. Then take the beginning of your sequence to be f(0), ..., f(n+k-1). From there, you pick some other thing, like a different polynomial g(x) (which doesn't agree with f(x) at x = 0, x = 1, ..., or x = n+k-1), then you choose the rest of your sequence to be g(n+k), g(n+k+1), g(n+k+2), ... The sequence cannot possibly satisfy a polynomial relationship. If it did, since that polynomial would agree with g on infinitely many points, it would have to be equal to g. But the full sequence cannot match with g, by how the sequence is chosen. Now, since the first n+k terms of the sequence come from a degree n polynomial, when you get to the nth order differences, you will get a constant string of length k.

So yeah, you can't tell whether the sequence actually satisfies a polynomial just by getting a really long constant string, because you can cook up non-polynomial sequences with arbitrarily long constant strings. But it seems you expected that! After all, you suggest: "I guess if we prove that the sequence must take on integer values of some polynomial, then the polynomial must have constant n-th order differences." I decided to look into this too! If you can prove that the sequence must satisfy a polynomial relationship, it's still _not_ good enough, because you can use the same sort of recipe to create counterexamples here, too.
What I mean by this is you can create a polynomial sequence where the nth order differences has a constant string of length k, but changes in the kth position. Being a polynomial sequence, it will, eventually, reach a constant sequence, but it will be further along. To do this, you start again with a polynomial f(x) of degree n, and you choose the beginning of your sequence to be f(0), ..., f(n+k-1). Just like in the previous example, this guarantees that with the nth order differences, you get a constant string of length k. But then what you do is you pick any number c, other than f(n+k), and make it the next term of the sequence. You now have n+k+1 points: (0,f(0)), ..., (n+k-1,f(n+k-1)), (n+k,c). These n+k+1 points pin down a polynomial g(x) of degree n+k passing through them. But by this construction, we have g(0) = f(0), ..., g(n+k-1) = f(n+k-1), and g(n+k) = c, so all terms of the sequence constructed so far satisfy g. Then you just use g to get the rest of the sequence. So the sequence _is_ a polynomial sequence, but it has a sort of "false start" of length k at the nth order differences, since g(x) agrees with a degree n polynomial f(x) for the first n+k terms of the sequence. And, of course, you can create multiple "false starts" by continuing with g for a while, then repeating the process with a new polynomial h, etc. So you can create a polynomial sequence with an arbitrary number of arbitrarily long initial constant strings. So, even _knowing_ that you have a polynomial sequence, just getting several terms constant in a row in some difference sequence is not enough to conclude that you have found the polynomial relationship. Proving it's polynomial is not enough! But if you can prove your sequence satisfies a polynomial of degree n, then yeah, that's good enough. Because you know the nth order differences _must_ be constant then. 
In practice, how I imagine this works, is that you have some scenario you're trying to find a sequence for (like the pentagonal numbers, as in the video). You can try the finite difference method and get to a seemingly constant sequence, and then develop a polynomial from there. But then you have to prove that your polynomial really works for the full sequence in the scenario you care about. Having a candidate polynomial actually gives you a *_monumental_* leg up on proving what you want :)
So what this exercise does is basically take the derivative of the underlying formula. I'm not sure if you have any calculus under your belt, but imagine that he did this exercise on a sequence of numbers like 1,4,7,10,13,16... The first order difference is always 3. If you imagine those points as a graph, it will be a straight line pointed up to the right. So in my example the rate of change is 3. But because the polynomial that describes a line is consistent from infinity to negative infinity, we can know that the rate of change won't change. It will stay a line with the same slope. In his example he has to get to the 2nd order differences, because the rate of change (1st order) is also changing. However, if you can get to a difference that remains constant, you can know it will stay that way because it describes a line. To my knowledge this only works with polynomials though, due to the way the derivatives are calculated.
I learned this method in my first or second year of high school. I applied it before you talked about it - I was wondering what it was!! I use it often to figure out sequences :)
I’ll do you one better than a general equation for pentagonal numbers, here’s a general equation for all nested shape equations. For nested shapes with a number of sides n that is greater than or equal to 2, fₙ(x) = x + (x² - x)(n-2)/2, where x is the greatest side length among the nested shapes.
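A quick check of that formula against the known triangular, square and pentagonal numbers (a sketch; `polygonal` is my name for it):

```python
def polygonal(n, x):
    # x-th n-gonal number, from the formula above: x + (x^2 - x)(n-2)/2
    return x + (x * x - x) * (n - 2) // 2

assert [polygonal(3, x) for x in range(1, 6)] == [1, 3, 6, 10, 15]   # triangular
assert [polygonal(4, x) for x in range(1, 6)] == [1, 4, 9, 16, 25]   # square
assert [polygonal(5, x) for x in range(1, 6)] == [1, 5, 12, 22, 35]  # pentagonal
print("formula checks out for n = 3, 4, 5")
```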
True. And you can get that from the Newton Formula in this video. Because D^(0)(0) = 0, D(0) = 1, and D^2(0) = n -2, and the Newton formula does the rest.
Awesome, love it! Finding patterns is always an interesting activity. Now if I want to be naughty, I can make a long sequence that looks like a simple quadratic function but turns out it has x^13, just for fun.
I've heard about this before in relation to the difference engine :) I think it's pretty neat, basically a discrete version of derivatives and Taylor series.
Great video! I've had a little familiarity with finite differences and finite calculus, but I never realized that Taylor polynomials also carry over to the finite calculus world in the exact same form as their continuous sibling! I'm now curious, are those Taylor polynomials also "good" approximations to non-polynomial sequences? What sufficient conditions are known under which this discrete version of the Taylor series converges to the sequence in question?
It converges point-wise, since for any n and all K >= n, a(n) = sum(k=0..K)[(∆^k a)(0) * {n choose k}]. I think uniform convergence only holds for polynomial sequences though. This is still a nice tool to get some identities involving sums of binomial coefficients if we utilize patterns in infinite sequences of differences:
(∆^k a)(0) = 1, 1, 1, ... gives a(n) = 2^n, so sum(k=0..n)[{n choose k}] = 2^n
(∆^k a)(0) = 1, x, x^2, ... gives a(n) = (x+1)^n, so sum(k=0..n)[{n choose k} * x^k] = (x+1)^n
(∆^k a)(0) = 1, -1, 1, -1, ... gives a(n) = 0^n, so sum(k=0..n)[{n choose k} * (-1)^k] = 0^n
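Those identities are easy to spot-check numerically (a small sketch):

```python
from math import comb

# spot-check the three binomial identities for a modest n and x
n, x = 10, 3
assert sum(comb(n, k) for k in range(n + 1)) == 2**n
assert sum(comb(n, k) * x**k for k in range(n + 1)) == (x + 1)**n
assert sum(comb(n, k) * (-1)**k for k in range(n + 1)) == 0  # 0^n for n >= 1
print("identities hold for n =", n)
```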
Oh my god I actually figured out the pattern in the differences in square numbers in 5th grade by looking for patterns in the times table chart, asked my teacher about it, and she had _no idea_ it was a thing. I became obsessed with it and I ended up asking all my math teachers about this throughout middle school and high school and none of them knew about this, it was _wild._ I wasn't able to actually _prove_ it until I learned what polynomials were in high school (and figured out how to get the degree of a polynomial by just looking at differences of differences of differences until I reached a constant), and I didn't understand the general formula until calculus and learning about Taylor series. I even used it to stumble into the right answer in some math puzzles in Star Wars: Knights of the Old Republic long before I knew what a polynomial was, it was hilarious.
That's amazing! I stumbled upon this when I was looking for a fast circle drawing algorithm. I was really proud of myself up until the point when my research uncovered the Bresenham algorithm. Turns out he discovered a circle drawing algorithm *in addition to* his line drawing algorithm! Previously, I had to use calculus to solve it. So it's an arithmetic problem, after all.
Interesting - I always found myself looking at the difference between numbers on sequences, to get a bit of a feeling of what's happening. I never thought that method could be used like that :)
I remember figuring out that the number of times we had to find the difference was the degree of the polynomial that could be used to generate that sequence
I solved this a different way that ended up with a general formula for polygonal numbers (with g(0) = 0): g(n) = 1 + (v-1)(n-1) + (s-2)((n-1)(n-2)/2) where n = iteration number (also the number of points on one side from vertex to vertex, inclusive), 1 = common point for all iterations (lower left in your diagram), v = # of vertices of the polygon (each level adds the same amount of vertices), s = # of sides of the polygon. For the second term, (v-1) is multiplied by (n-1) because the additional vertices only appear at n = 2. The third term, (s-2) * [(n-1)(n-2)/2], represents the fact that the number of points on each side, excluding the vertices ("interior points"), grows by one each iteration. Two of the sides - bottom and left in your diagrams - are accounted for by the growth in vertices, so it's (s-2) instead of s. The total number of interior points for any particular iteration is equal to the sum (0 + 1 + 2 + 3 + ... + m), where m = (n-2) because there are no interior points on a side until n = 3. This sum is equal to (m * (m+1)) / 2, which then becomes (n-2)(n-1) / 2. Multiply the cumulative interior points per side, (n-2)(n-1) / 2, by the number of sides missing interior points - (s-2) - and you have the third term. This results in a similar - though not the same - representation as using the D^(0) method you show.
So, triangular numbers have a constant difference of 1 at D^2. Working out the hexagonal numbers gets us 4 at D^2. Therefore, the polygon a sequence is representing will have a constant difference of the number of sides - 2 at D^2. Does it also hold that the constant occurring at D^n also corresponds to the sides/faces/higher dimensional "edge" of said shape and 'n' is the dimension of the shape? I.e. triangular numbers' constant is at D^2 so triangles are 2-dimensional shapes.
I once derived a formula for the nth k-gonal number. So you could plug in, say, n = 3 and k = 5 to get the third pentagonal number. Sadly lost it in a move, but sure I could do it again.
I don't have enough brain power at the moment to work out the general formula right now, but the constant second difference for the sequence o' n-gonal numbers is n-2.
First I think to fill in the pentagons, because those layers are just nonsense. Full pentagons only! (Though only AFTER I did this did I discover that the pentagon points couldn't actually ever fit on a grid.)
1+0 = 1, 5+0 = 5, 12+1 = 13, 22+3 = 25
The differences here are 4, 8, 12; multiples of 4, which means the pentagonal numbers are 1 + 4*tri(a) - tri(b), where tri gets a triangular number; tri(n), being a triangle with n points on the bottom, is n(n+1)/2 points.
1 + 4 * tri(0) - tri(-1) = 1
1 + 4 * tri(1) - tri(0) = 5
1 + 4 * tri(2) - tri(1) = 1 + 12 - 1 = 12
...
So, expanding: 1 + 4(n-1)n/2 - (n-2)(n-1)/2 = 1 + 2n^2 - 2n - n^2/2 + 3n/2 - 1 = 3n^2/2 - n/2 = (3n-1)n/2
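That derivation checks out numerically (a quick sketch; function names are mine):

```python
def tri(n):
    # triangular number with n points on the bottom row
    return n * (n + 1) // 2

def pentagonal(n):
    # fill the pentagon (1 + 4 triangles), then remove the extra triangle
    return 1 + 4 * tri(n - 1) - tri(n - 2)

print([pentagonal(n) for n in range(1, 7)])  # → [1, 5, 12, 22, 35, 51]
```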
It's funny when someone gives you a challenge sequence that they intended to have some other pattern, and you just interpolate a polynomial to it.
Also with high enough degree you could have any number as the next, funny too
"What do you think the next number is?"
"A billion."
"What? No, it isn't!"
"Sure it is. Here's the formula."
*shocked pikachu face*
I see that a lot on Quora. Some people find it funny. (I don't)
Overfitting... 😈
@@volodyadykun6490 Easiest example: (f(n)-f(n)*(-1)^(5-n))/2 to gobble up everything past that point, and by the same logic you could add any value at any specific place in the sequence (by a second constant function g(x) = k, cut off at both ends the exact same way). You could get to the formula yourself even without too much education, just some practice doing math magic. 'Tho higher education would definitely help.
I was taught this in high school without realising how under-the-radar it is! whenever I see a non linear sequence, I immediately look at the differences. glad to see word being spread!
Same here and I didn't know it was Newton!
Michael Scott is a top salesman, best boss, AND a mathematical genius?
Michael Scott.. You..?
I was an actuarial student in the early 1970s. Part of the studies was a semester long subject called Finite Differences which covered this topic in great detail. I have used the processes many times, but this is the first time I have seen anything of it on TH-cam. Thank you very much for the trip down memory lane.
ohh was that topic on Finite Differences by any chance related to time series analysis?
This was the final exercise on our discrete mathematics test twenty years ago. My solution was to describe the pentagonal numbers as a nonhomogeneous recurrence relation, which I then solved using the characteristic equation (assuming a second-degree polynomial because of the second differences all equalling 3). Apparently, this was unusual because someone wrote "wow!" in the margin.
I did it too with the recurrence relation, except I don't really know what a characteristic polynomial is in that context. I wrote u_{n+1} = u_n + 3n +1, and I recognized that it would be a sum of an arithmetic sequence (with r=1) and an arithmetic series (with r=3).
You then have u_n = n + n*3*(n-1)/2 = 3/2n^2 - 1/2n as we know
I can only think of one other case of someone writing “Wow!” in the margin and some people think that was aliens, so your method must’ve been pretty unusual 😂
@@jonasdaverio9369 Sorry! Got my terminology mixed up. It's supposed to be the characteristic equation. Or is it the same thing? It's been 20 years.
In short, substitute a_{n}=cr^n and find the roots.
I came up with the recurrence relation P_{n}=P_{n-1}+3n-2, P_{1}=1.
The associated homogeneous equation is P_{n}=P_{n-1}. Its solutions are of the form P^{(h)}_{n}=\alpha1^n.
For the particular solution I assumed P_{n}=an^2+bn+c, ended up with the equation an^2+bn+c=a(n-1)^2+b(n-1)+c+3n-2, simplified to (-2a+3)n+(a-b-2)=0, got a=1.5, b=-0.5 and finally P^{(p)}_{n}=1.5n^2-0.5n.
P_{n}=P^{(h)}_{n}+P^{(p)}_{n}=1.5n^2-0.5n+\alpha1^n. P_{1}=1 gives \alpha=0.
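The end result of this derivation can be checked mechanically (a small sketch, assuming the recurrence P_n = P_{n-1} + 3n - 2 with P_1 = 1 from above):

```python
# P_n = 1.5n^2 - 0.5n, kept in integers as (3n^2 - n)/2
def P(n):
    return (3 * n * n - n) // 2

assert P(1) == 1
# the closed form satisfies the recurrence P_n = P_{n-1} + 3n - 2
assert all(P(n) == P(n - 1) + 3 * n - 2 for n in range(2, 100))
```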
I was "taught" this in 5th grade (New York public school), where the teacher simply asked us to determine the formula for pentagonal numbers. We did it. Best math teacher ever. HERE'S THE KEY: He didn't tell us it was a tough problem.
what kind of school were you in?? algebra didnt start till 6th grade in most schools
This looks suspiciously like a discretised version of a Taylor series. Taking one order of difference is like taking a derivative. And just like how the n-th derivative of a degree-n polynomial is a constant, the n-th order difference of a degree-n polynomial is also constant. In fact, differentiating a polynomial and taking the difference between two of its values work in exactly the same way: the same expanding of brackets, the same cancellations, except that in differentiation you take the limit afterwards.
It's not really suspicious. It's one of the foundations of numerical analysis. You can look at the finite difference method, finite element method, Z-transform, discrete Fourier transform, etc. All of which are of critical importance in engineering and science (simulation, control theory, signal theory)
That's exactly what it is. Babbage's difference engine used (or would have used, had it been built in his lifetime) truncated power series of various special functions to calculate their values to high precision and thus tabulate them, primarily for navigation, which used a lot of trig functions for example. We know sin x is x - x^3/6 + x^5/120 - ..., which is amenable to iteration using Newton's method of differences. Babbage's engine was capable of evaluating polynomials up to degree seven to 31 decimal digits of precision.
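The engine's trick, as described here, is that once the difference columns are seeded, every further value needs only additions. A rough sketch of that idea (not Babbage's actual mechanism, just the arithmetic):

```python
# Tabulate a polynomial using only addition: seed the registers with f(0)
# and the forward differences at 0, then repeatedly fold each register
# into the one above it.
def tabulate(initial_differences, steps):
    regs = list(initial_differences)   # [f(0), Δf(0), Δ²f(0), ...]
    values = []
    for _ in range(steps):
        values.append(regs[0])
        for i in range(len(regs) - 1):
            regs[i] += regs[i + 1]     # addition only, as in the Engine
    return values

# Pentagonal numbers starting at n = 1: f(1) = 1, Δf = 4, Δ²f = 3
assert tabulate([1, 4, 3], 5) == [1, 5, 12, 22, 35]
```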
You will probably find this link useful. :) en.wikipedia.org/wiki/Umbral_calculus#Umbral_Taylor_series
It actually is a derivative, because, in the end, it is a rate of growth. The first one is the first derivative, aka the rate of growth of our series. The second one is the second derivative, aka the rate of growth of the rate of growth, etc.
In the example where the polynomial is a second-degree one, it means that the second rate of growth is constant, meaning that the initial sequence keeps increasing with a "speed" that itself increases (accelerates) by a constant amount. It becomes more obvious when you take the third derivative, which would be 0, i.e. the second derivative doesn't change, it's constant. The third differences are exactly that. Zeros.
It is exactly the rate of growth, evaluated at every value of x.
You're correct. The analog of antiderivative is the falling factorial
I think like many others I've never heard anyone talk about this, but I feel slightly proud that I figured this method out myself during job interviews that had those find the pattern style questions
Mathologer has a video on it: m.th-cam.com/video/4AuV93LOPcE/w-d-xo.html
*This actually works for ANY of those number sequence questions!*
By merely working out the finite differences, then adding the next number up, you instantly PROVE that your calculated number IS a valid next number following that logic! The person who asked you to figure out the missing number might've had a different logic in mind... but that's not your problem, is it now? 😌
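That parlor trick is easy to mechanize. A small sketch (the function name `next_term` is mine): build the difference table until a row looks constant, then sum back up the right-hand edge.

```python
# Extend any sequence by assuming the deepest difference row is constant.
def next_term(seq):
    rows = [list(seq)]
    # keep differencing until a row is constant (or has a single entry)
    while len(rows[-1]) > 1 and any(d != rows[-1][0] for d in rows[-1]):
        rows.append([b - a for a, b in zip(rows[-1], rows[-1][1:])])
    # sum the last entry of every row, bottom to top
    total = 0
    for row in reversed(rows):
        total += row[-1]
    return total

assert next_term([1, 5, 12, 22]) == 35      # pentagonal numbers
assert next_term([1, 4, 9, 16, 25]) == 36   # squares
```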
John Conway and Richard Guy explain how this method can be further extended in their book _'The Book of Numbers'._
They share 2 of them: the first works for simple exponential functions as well (such as 4^n - 3^n), and the second works for sequences where each term is a sum of previous terms (like the Fibonacci sequence).
For anyone interested: wolfram has info on both of them, they're called _"Jackson's Difference Fan"_ and _"Quotient-Difference Table"._
Thanks. I've added this to the description.
that's crazy, i've never seen this before despite doing a ton of stuff with sequences in school. pretty dang useful
Mathologer also did a video about this some time ago, very glad that this gets some more screen time.
Us engineers use this quite often. It is used in simulation software for solid mechanics and fluid mechanics.
As a numerical analyst, I'm glad to see this material being covered more by the edutainment community on TH-cam! This same idea can be used to find approximate solutions to partial differential equations! A finite difference of order N gives you an exact derivative of polynomials up to order N (this can be seen through Taylor remainders theorem), and so you use this technique to find polynomial or piecewise polynomial (splines) approximations of the solution to a PDE.
His videos look exactly the same as a decade ago, love it.
paused after just three minutes to comment .. so elegant! I like your style of presentation. Clear .. concise .. swank ..
another very good trick for finding out the formula for a series of integers: searching for it in the on-line encyclopedia of integer sequences 👍
Yours is a peerless presentation platform. Triangular numbers get an inordinate amount of air time in the US. They now have a pentagonal competitor!
Your reference to Babbage engine makes this video an upper percentile elite viewing experience!!
That sure was a lot of words and also some numbers. I'm just glad to be part of the team, really.
Even if a sequence cannot be represented by a polynomial, it is sometimes possible to find an explicit formula for the n-th term if you know a recurring pattern and use generating functions or the Z-transform (which also involves power series).
For example, the Fibonacci sequence is such a sequence.
I discovered this by myself but I never went the next step of generalizing the method and always solved by using first principles. This is amazing
I knew the name and even used this some times, but i had no idea of the intuition and the general formula! So interesting and well explained!
This is like taking a discrete derivative and then integrating at every step back to get the original function. Neat trick.
I think we were taught a variation on this in 4th grade or something - though I don't remember being taught it, I do remember using it extensively to figure out sequences back then and for a long time after, often also trying it out on sequences it strictly didn't work on. The difference, I believe, is that we didn't look at the polynomial expansion; instead, after reaching the constants, we'd back-solve from bottom to top by inserting the coefficients at each level and recalculating the differences. Really interesting to see it stems from Newton!
Excellent! Your explanations are always clear.
It's very interesting to see this here, I was never taught this but I somehow stumbled upon it myself, I didn't realize it was ever an official thing though
Edit: Turns out the way I did it in the past was different; I would reach the constant of 3 and then antidifferentiate it twice, then plug in the numbers of the sequence to calculate the coefficients, whereas here you turn the differences into variables to work them out, which is probably more applicable than what I did.
Edit2: Yeah I need to watch the whole video before commenting, love how it can be generalized, makes me wonder how many other tricks I have seen that can be expanded upon
This makes me incredibly happy, in a sort of holistic mathematics kind of way. I wasn't very old when I noticed that the difference of two consecutive squares is all odd numbers, in order. Now, you may say I stumbled across a special case of binomial formula, but on the other hand, you can take it as a first stepping stone towards differentiation, and, as we see here, polynomial interpolation.
(I really really enjoy these kinds of "yes, this is the garden path in The Shire, but look where it leads you" things.)
This is a really nice intuitive way to introduce derivatives in calculus
Nice! Finite differences are a very useful tool, although the way I learned it was from a very different perspective. We want to replace actual derivatives in a differential equation with something algebraic, which can be calculated, so we take the expansion of the Taylor series, truncate it at the term with the desired degree of derivative, then shove it to one side of the equation and everything else to the other side. If the local gradient is not very steep, we get a system of iterative equations which can be calculated, generally depending on the values of the neighboring nodes and the neighbors' neighbors, up to the degree of the derivative.
I could follow this one; it wasn't over my head this time. Thank you so much, that was fun to see.
How delightful--I understood this one! Thanks for a very clear guide through a new topic to me.
I'm glad!
I was taught the finite difference method in university, in the subject of numerical analysis.
We only used it to interpolate functions from a sample of known points of it; honestly, I did not find it interesting beyond that use.
Later on, I realized this is, in fact, a very interesting tool with more uses than just routinely interpolating a function, as you have just shown :).
As a *suggestion* on this method, I think you could talk about its use in time series.
In the context of ARIMA models one has to apply finite differences to the series until it is (or is close enough to) a stationary series (stochastic process, i.e. make the series as uniform as possible).
This is the first step to start the ARIMA modeling of the series, so you do not need to dive in the details of the model or the statistics themselves.
There are even statistical tests to advise you when to stop differencing. So it is quite a different point of view.
We studied this in Year 10 GCSE Mathematics when determining expressions for quadratic sequences 🙂 I never looked into it for higher-order polynomials, but it seemed intuitively obvious that it would work, and indeed it seems that the end result is just a discrete analogue of the Taylor series!
Mathologer made a brilliant video on this
Love you man! Make more of these videos, really helpful and useful too.
I also learned this in high school but never saw the generalized differences that James showed (on the left) as a way to determine the coefficients. I was taught to write and solve a three-variable system of linear equations for quadratic models. This seems much easier.
Also, after taking Calculus, I always looked at the differences as being applications of the Mean Value Theorem between consecutive terms. Since the derivative of a polynomial decreases in degree by 1 each time, the differences ALWAYS resolved into a sequence of constants.
I recently did a project on the finite difference time domain method and got excited this was an explanation of that. This was way more interesting anyway hahah
It blew my mind when he showed that Babbage's difference engine worked by calculating finite differences. It makes so much sense, because it is an iterative process, which makes it perfect for a machine to do.
I was taught this in secondary school, never realised it was relatively niche! It always seemed the logical thing to do to me!
Always done a variation of this. What I've always done (for quadratics as an example) is taken the constant second difference and divided it by 2 to get a, then subtracted the ax^2 sequence from the sequence I'm evaluating, found the nth term of the new sequence I've made, and added that to ax^2.
Wow, so many uploads in a short time. Is there a place to recommend topics to cover, like Sangakus?
Just in the comments like this. That Sangakus stuff is interesting.
(4n^2 - 2n)/2 for the 6-gon
The general formula for the k-gon is:
((k-2)n^2 + (4-k)n)/2
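This general formula is easy to spot-check against the familiar figurate sequences (a quick sketch; `kgonal` is an illustrative name):

```python
# nth k-gonal number: ((k-2)n^2 + (4-k)n) / 2
def kgonal(k, n):
    return ((k - 2) * n * n + (4 - k) * n) // 2

assert [kgonal(3, n) for n in range(1, 6)] == [1, 3, 6, 10, 15]   # triangular
assert [kgonal(4, n) for n in range(1, 6)] == [1, 4, 9, 16, 25]   # square
assert [kgonal(5, n) for n in range(1, 6)] == [1, 5, 12, 22, 35]  # pentagonal
assert [kgonal(6, n) for n in range(1, 6)] == [1, 6, 15, 28, 45]  # hexagonal
```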
It's taken me a long time to get the reference to broadcaster John Ebdon at the end of every one of James's videos.
Haha! About 15 years?
@@singingbanana Well, I've only been watching for about half of that! 😆
th-cam.com/video/lxV0JfFNuis/w-d-xo.html
I think the relevant youtube link is at v=lxV0JfFNuis at time code 13:28
Thanks for these videos, from one of the math teachers of Myanmar.
I'm sure all of the mathematically inclined recognized this phenomenon when they were first introduced to x^2. 0,1,4,9,16 maps to a linear increase of differences, of course. It betrayed some deep seed of derivatives years before I learned them.
I've never gone as far as formalizing it, but I've frequently had a series of numbers where I found differences of the series (and sometimes differences of those). Sometimes I'd find a constant and sometimes I didn't. Never knew I might have been able to apply a formula to them. Interesting.
The formula for finding simplex numbers is one of the most useful things in math. Triangular numbers are used more than any other simplex numbers. If you ever wondered about the notation for the sum of consecutive natural numbers starting with one, it's just the triangular number for n (the highest value).
simplex-2 (triangular numbers):
Tₙ = C(n+1, 2)
simplex-3 (tetrahedral numbers):
Teₙ = C(n+2, 3)
and so on.
These are listed on Wikipedia under Triangular Numbers.
So cool!
Another fascinating video. Thank you, James!
The finite difference method was invented by a French mathematician from Lyon, François de Regnauld, and published in 1670 by a friend of his, Gabriel Mouton. Leibniz had also discovered it, but he acknowledged the priority of Regnauld. Regnauld's purpose was interpolation, because Mouton needed it for astronomical computations.
2n^2 - n for the nth hexagonal numbers is my best guess.
I remember being intrigued by this sort of thing when I started paying attention to the amount of exp needed to level a Pokémon in the "medium fast" exp group. The formula for the level of a Pokémon is the (floor of the) cube root of its exp value, so the amount of exp needed to get from level n to n+1 is (n+1)^3 - n^3. I wanted a general formula for that amount that didn't involve cubing two numbers and taking the difference every time, so I started looking into the first few terms of the sequence, and specifically the differences between them. I quickly realized this was a simple quadratic, then I looked into the same sort of polynomial for other, higher powers of the form (m+1)^n - m^n. I found patterns that helped me get to n = 12, but I never went as far as Newton to make so many more useful general equations.
When I was in high school, the first time I encountered this kind of problem, I discovered the pattern for myself, but written in summation notation. Never knew you could do most series like this!
My formula for a polygon with P points is:
Sum from n=0 to x of
( (P-1) + (P-2)(n-1) )
A triangle has P = 3
A square has P = 4
A pentagon has P = 5
And so on…
Looking back, it takes more computations, making it worse time-wise. Not bad for your typical high school student, though.
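The summation version does agree with the usual closed forms (a quick sketch; the helper name is illustrative, and note the sum from n = 0 to x gives the (x+1)-th P-gonal number):

```python
# (x+1)-th P-gonal number as the sum from n = 0 to x of (P-1) + (P-2)(n-1)
def polygonal_sum(P, x):
    return sum((P - 1) + (P - 2) * (n - 1) for n in range(x + 1))

assert [polygonal_sum(5, x) for x in range(5)] == [1, 5, 12, 22, 35]  # pentagonal
assert [polygonal_sum(3, x) for x in range(5)] == [1, 3, 6, 10, 15]   # triangular
```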
You can also pick a preferred number as the next in the sequence and apply this method to justify, with a polynomial, whatever continuation you want.
I remember doing something similar when I was younger, looking at differences in sequences. Glad to see it's an actual thing, might read up on it so I can understand how it actually works.
You can try the Mathologer video - m.th-cam.com/video/4AuV93LOPcE/w-d-xo.html
Did you try working it out yourself, along with the video? I found my understanding came from expanding the sequence to the left and making a rough plot of the graph of all the levels of differences.
Reading is one thing, but true understanding comes from doing it yourself.
I regularly utilize the finite difference method in Boolean algebra where exclusive or (XOR) becomes the difference operation.
i remember being taught this in high school (9th grade) and being fascinated by this
The only time I've ever seen this before was in a (Martin Gardner?) book, where he used it to create a mathematical trick. The trick was to ask a friend who knew a bit of maths to think of any polynomial of the form ax^2 + bx + c. You'd then ask them to give you the values of the polynomial at x=0, x=1 and x=2. You'd then be able to name the polynomial using finite differences, which with a bit of practice was reasonably easy to do in your head for order 2 polynomials.
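The trick is mechanical enough to sketch in a few lines (an assumed reconstruction, using the fact that for f(x) = ax^2 + bx + c the second difference f(2) - 2f(1) + f(0) equals 2a):

```python
# Recover a, b, c of f(x) = ax^2 + bx + c from f(0), f(1), f(2).
def recover_quadratic(f0, f1, f2):
    a = (f2 - 2 * f1 + f0) / 2   # second difference is 2a
    c = f0                       # f(0) = c
    b = f1 - f0 - a              # f(1) - f(0) = a + b
    return a, b, c

# Example: f(x) = 3x^2 - 2x + 7 gives f(0) = 7, f(1) = 8, f(2) = 15
assert recover_quadratic(7, 8, 15) == (3.0, -2.0, 7)
```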
I saw that, and considered presenting this video as a magic trick. But decided against it in the end.
This brings me back to differential equations
Oh my goodness. I stumbled upon this in middle school. Bored one day in math class, I was thinking about taking the difference of a series. I saw that all the differences were off by a very specific number. I tried it with a few other series and found the same thing happened. Thinking that's how all series worked, I promptly ignored the result. LOL
I guess that's the difference between myself and Newton. He wanted to know why that happened. I just took it as an oddity of series and never thought of it again.
The differences don’t have to be the same… it depends on the sequence. See the Mathologer video: m.th-cam.com/video/4AuV93LOPcE/w-d-xo.html
I knew you would mention Babbage :) This sort of thing is also used a lot in computer graphics to generate cubic spline surfaces etc. It's a very useful and satisfying set of mathematical tools!
Can you guys help me with a problem?
I was actually wondering about something similar on my own and miraculously this video popped up for me. I've noticed (and I don't know how to prove it) that if you have a sequence let's say 1^6, 2^6, 3^6, ... , you'd get a constant on the 7-th step and for 1^n, 2^n, 3^n... you'd get a constant on the (n+1)-th step.
Also by writing a little program I've noticed that for n=2 the constant is 2, for n=3 the constant is 6, for n=4 it's 24 and for any n the constant seems to be (the constant of n-1)*n, which is mindblowing to me.
If anyone knows anything about that problem or if you can work it out on your own, please share it with me.
Edit: Just realised that if this is true, then the constant difference would just be n!... THAT'S EVEN MORE AWESOME
Yes, this video is perfect for you.
The sequence 1^n, 2^n, 3^n, 4^n, etc. comes from plugging the values x = 1, x = 2, x = 3, x = 4, etc. into the polynomial f(x) = x^n.
You can then look at what James does in this video from 4:30 - 5:46.
The nth order differences (which I imagine you are considering to be the (n+1)st step) will result in a constant sequence where the constant is n! times the leading coefficient of the polynomial. Indeed, since the leading coefficient of f(x) = x^n is just 1, you will, indeed, get n!.
So yeah, this video completely explains the pattern you have been noticing. Quite astonishing how that works out :)
I had found this exact phenomena a long time ago as well and was blown away! Good job on finding it -- I proved it many years later using some bipartite graph matching argument to prove a combinatorial identity: \Sum_{i=0}^n [(-1)^(n-i)*(n choose i)*(k+i)^n] = n! for any k,n -- turns out it's a very similar problem. That said, I think this video gives a much simpler analytical argument at: th-cam.com/video/scQ51q_1nhw/w-d-xo.html. Here your sequence is just f(x) = a_n*x^n, with a_n = 1 for all n. Thus after the n^th difference, you're left with D^n(x) = n!
I think you can use binomial expansion on (n+1)^m and n^m and see the difference, should get you a recurrence relation for lower m.
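Both the n! pattern and the identity above can be checked numerically (a quick sketch):

```python
from math import comb, factorial

# nth-order forward differences of a sequence
def nth_differences(seq, order):
    for _ in range(order):
        seq = [b - a for a, b in zip(seq, seq[1:])]
    return seq

# the nth differences of 1^n, 2^n, 3^n, ... are the constant n!
for n in range(1, 8):
    seq = [m ** n for m in range(1, n + 10)]
    assert all(d == factorial(n) for d in nth_differences(seq, n))

# the equivalent identity: sum_{i=0}^{n} (-1)^(n-i) C(n,i) (k+i)^n = n!
assert all(sum((-1) ** (n - i) * comb(n, i) * (k + i) ** n for i in range(n + 1))
           == factorial(n) for n in range(1, 8) for k in range(5))
```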
The basic differences method for sequences is in the textbooks here in Scotland which I discovered as a tutor but can’t remember from my own school days. Would have helped me a lot as I hated sequences haha
Wow this is awesome, great video
Here I was thinking that FDM is just for solving differential equations. But, it's not too crazy; kind of like how the definition of convergence can be applied to sequences and to functions, so we can apply a Finite Difference to sequences and DEs. Thanks for this
Even if I've never seen this application exactly, it all looks very familiar to me. In my day job, I do a lot with software that solves partial differential equations on a finite grid of points (as part of numerical weather prediction), and this is sort of the same idea.
0 1 6 15 28 ; 1 5 9 13 ; 4 4 4. So x + 4x(x-1)/2. Or if you clean it x + 2x(x-1), or x + 2x² - 2x, or 2x² - x. If we add another four at the end of the second differences, it makes 17 on the first differences, and then that makes the next number 45. :)
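This worked example checks out (a tiny sketch):

```python
# 2x^2 - x reproduces the sequence 0, 1, 6, 15, 28 and predicts 45 next
def hexagonal(x):
    return 2 * x * x - x

assert [hexagonal(x) for x in range(6)] == [0, 1, 6, 15, 28, 45]
```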
This feels like some sneaky calculus, since taking the difference of each step is like taking the difference in the y value at a fixed X interval, which is calculating the gradient. You go down n steps (where n is the highest power in the final equation) before getting a constant difference, which is the straight line
Newton's Forward Difference Formula 🔥🔥🔥 The formula makes sense when you think about starting from an order 0 polynomial and building up from there. Each time you make the polynomial 1 degree higher by raising the degree of all terms by 1, and then adding the next starting difference term.
This is fantastic, but my only question is when you truly have a "constant" sequence at n-th order: how do we know it will be constant for all time? I guess if we prove that the sequence must take on integer values of some polynomial, then the polynomial must have constant n-th order differences, but taking further differences from 3,3,3,... will give 0,0,0,... and if that ever becomes nonzero then we will never have constant differences, a contradiction. Is that the idea?
Eventually you will get down to one - the point of the downward triangle. That will automatically be constant with itself, and you will have a polynomial of the order of the number of items in the sequence. Note that you could extend the sequence with any one number in fact. All that would happen is that the coefficients would change.
Someone asked a similar question in another comment, and I thought about it and came up with some answers.
"when you truly have a "constant" sequence at n-th order: how do we know it will be constant for all time?"
Yeah, the problem is you don't. I was able to devise a method to come up with a sequence where the nth order differences has an initially constant string of length k, for any positive integers n and k you want, but the sequence doesn't satisfy a polynomial. To do this, you can fix a polynomial f(x) of degree n. Then take the beginning of your sequence to be f(0), ..., f(n+k-1). From there, you pick some other thing, like a different polynomial g(x) (which doesn't agree with f(x) at x = 0, x = 1, ..., or x = n+k-1), then you choose the rest of your sequence to be g(n+k), g(n+k+1), g(n+k+2), ... The sequence cannot possibly satisfy a polynomial relationship. If did, since that polynomial would agree with g on infinitely many points, it would have to be equal to g. But the full sequence cannot match with g, by how the sequence is chosen. Now, since the first n+k terms of the sequence come from a degree n polynomial, when you get to the nth order differences, you will get a constant string of length k.
So yeah, you can't tell whether the sequence actually satisfies a polynomial just by getting a really long constant string. Because you can cook up non-polynomial sequences with arbitrarily long constant strings.
But it seems you expected that! After all, you suggest: "I guess if we prove that the sequence must take on integer values of some polynomial, then the polynomial must have constant n-th order differences"
I decided to look into this too! If you can prove that the sequence must satisfy a polynomial relationship, it's still _not_ good enough. Because you can use the same sort of recipe to create counterexamples here, too. What I mean by this is you can create a polynomial sequence where the nth order differences has a constant string of length k, but changes in the kth position. Being a polynomial sequence, it will, eventually, reach a constant sequence, but it will be further along.
To do this, you start again with a polynomial f(x) of degree n, and you choose the beginning of your sequence to be f(0), ..., f(n+k-1). Just like in the previous example, this guarantees that with the nth order differences, you get a constant string of length k. But then what you do is you pick any number c, other than f(n+k), and make it the next term of the sequence. You now have n+k+1 points: (0,f(0)), ..., (n+k-1,f(n+k-1)), (n+k,c). These n+k+1 points pin down a polynomial g(x) of degree n+k passing through them. But by this construction, we have g(0) = f(0), ..., g(n+k-1) = f(n+k-1), and g(n+k) = c, so all terms of the sequence constructed so far satisfy g. Then you just use g to get the rest of the sequence. So the sequence _is_ a polynomial sequence, but it has a sort of "false start" of length k at the nth order differences, since g(x) agrees with a degree n polynomial f(x) for the first n+k terms of the sequence. And, of course, you can create multiple "false starts" by continuing with g for a while, then repeating the process with a new polynomial h, etc. So you can create a polynomial sequence with an arbitrary number of arbitrarily long initial constant strings.
So, even _knowing_ that you have a polynomial sequence, just getting several terms constant in a row in some difference sequence is not enough to conclude that you have found the polynomial relationship. Proving it's polynomial is not enough!
But if you can prove your sequence satisfies a polynomial of degree n, then yeah, that's good enough. Because you know the nth order differences _must_ be constant then.
In practice, how I imagine this works, is that you have some scenario you're trying to find a sequence for (like the pentagonal numbers, as in the video). You can try the finite difference method and get to a seemingly constant sequence, and then develop a polynomial from there. But then you have to prove that your polynomial really works for the full sequence in the scenario you care about. Having a candidate polynomial actually gives you a *_monumental_* leg up on proving what you want :)
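Here's a tiny concrete "false start" of the kind described above (assumed example data): the first six terms come from x^2, so the second differences begin as a deceptive constant run before breaking.

```python
seq = [0, 1, 4, 9, 16, 25, 100]  # x^2 for x = 0..5, then an arbitrary deviation

def diff(s):
    return [b - a for a, b in zip(s, s[1:])]

second = diff(diff(seq))
# a constant run of 2s, then a break: [2, 2, 2, 2, 66]
assert second[:4] == [2, 2, 2, 2] and second[-1] != 2
```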
So what this exercise does is basically take the derivative of the underlying formula. I'm not sure if you have any calculus under your belt, but imagine that he did this exercise on a sequence of numbers like 1, 4, 7, 10, 13, 16... The first order difference is always 3. If you imagine those points on a graph, it will be a straight line pointed up to the right.
So in my example the rate of change is 3. And because the polynomial that describes a line is consistent from negative infinity to infinity, we can know that the rate of change won't change. It will stay a line with the same slope.
In his example he has to get to the 2nd order differences, because the rate of change (1st order) is also changing. However, if you can get to a difference that remains constant, you can know it will stay that way because it describes a line.
To my knowledge this only works with polynomials though, due to the way the derivatives are calculated.
I don’t know any math just here to witness this absolute wicked lad of a chad
Super-cool video. Thanks.
Thank you! Never made the connection between Method and Engine. Cool!
Good Job Mr. Grime.
I saw the pattern at first glance but didn't think of the polynomial formula, thx
I learned this method in the first or second year of high school. I applied it before you talked about it - I was wondering what it was!! I use it often to figure out sequences :)
Even if we omit the zeroth hexagonal number, we will still get 45 as the next number after 28.
The function would then be y=2x²+3x+1
I’ll do you one better than a general equation for pentagonal numbers, here’s a general equation for all nested shape equations. For nested shapes with a number of sides n that is greater than or equal to 2, fₙ(x) = x + (x² - x)(n-2)/2, where x is the greatest side length among the nested shapes.
True. And you can get that from the Newton Formula in this video. Because D^(0)(0) = 0, D(0) = 1, and D^2(0) = n -2, and the Newton formula does the rest.
Awesome, love it! Finding pattern is always an interesting activity.
Now if I want to be naughty, I can make a long sequence that looks like simple quadratic function but turns out it has x^13, just for fun.
I've heard about this before in relation to the difference engine :) I think it's pretty neat - basically a discrete version of derivatives and Taylor series.
lovely scanned handwriting!
Great video! I've had a little familiarity with finite differences and finite calculus, but I'd never realized that Taylor polynomials also carry over to the finite calculus world in the exact same form as their continuous siblings!
I'm now curious, are those Taylor polynomials also "good" approximations to non-polynomial sequences? What sufficient conditions are known for which this discrete version of the Taylor series converges to a sequence in question?
It converges point-wise, as for any n and all K >= n, a(n) = sum(k=0..K)[(∆^k a)(0) * {n choose k}].
I think uniform convergence only holds for polynomial sequences though.
This is still a nice tool to get some identities involving sums of binomial coefficients if we utilize patterns in infinite sequences of differences:
(∆^k a)(0) = 1, 1, 1, ... a(n) = 2^n
sum(k=0..n)[{n choose k}] = 2^n
(∆^k a)(0) = 1, x, x^2, ... a(n) = (x+1)^n
sum(k=0..n)[{n choose k} * x^k] = (x+1)^n
(∆^k a)(0) = 1, -1, 1, -1, ... a(n) = 0^n
sum(k=0..n)[{n choose k} * (-1)^k] = 0^n
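All three identities can be checked numerically; a small Python sketch (the helper name is my own):

```python
from math import comb

def identity_sum(leads, n):
    """sum_{k=0..n} C(n, k) * leads[k] -- Newton's formula evaluated at n."""
    return sum(comb(n, k) * leads[k] for k in range(n + 1))

for n in range(8):
    assert identity_sum([1] * (n + 1), n) == 2 ** n                       # leads 1,1,1,...
    assert identity_sum([3 ** k for k in range(n + 1)], n) == 4 ** n      # x = 3
    # 0**0 == 1 in Python, matching the 0^n convention used here
    assert identity_sum([(-1) ** k for k in range(n + 1)], n) == 0 ** n
```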
Very nice. Was there also a mathologer video about this some time ago? Good times.
Yes - m.th-cam.com/video/4AuV93LOPcE/w-d-xo.html
Oh my god I actually figured out the pattern in the differences in square numbers in 5th grade by looking for patterns in the times table chart, asked my teacher about it, and she had _no idea_ it was a thing. I became obsessed with it and I ended up asking all my math teachers about this throughout middle school and high school and none of them knew about this, it was _wild._ I wasn't able to actually _prove_ it until I learned what polynomials were in high school (and figured out how to get the degree of a polynomial by just looking at differences of differences of differences until I reached a constant), and I didn't understand the general formula until calculus and learning about Taylor series.
I even used it to stumble into the right answer in some math puzzles in Star Wars: Knights of the Old Republic long before I knew what a polynomial was, it was hilarious.
That's amazing! I stumbled upon this when I was looking for a fast circle drawing algorithm. I was really proud of myself up until the point when my research uncovered the Bresenham algorithm. Turns out he discovered a circle drawing algorithm *in addition to* the line drawing algorithm! Previously, I had to use calculus to solve it. So, it's an arithmetic problem, after all.
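For anyone curious, this is roughly what the finite-difference trick buys you in circle drawing. Here's a minimal sketch of the midpoint variant of Bresenham's circle algorithm (one octant only; the function name is mine, and this may not be the exact variant from that research): the decision variable is maintained purely by small incremental additions instead of recomputing x² + y² each step.

```python
def circle_octant(r):
    """Lattice points of one octant of a radius-r circle, midpoint method.
    The decision variable d is updated by additions only."""
    pts = []
    x, y = r, 0
    d = 1 - r
    while y <= x:
        pts.append((x, y))
        y += 1
        if d < 0:
            d += 2 * y + 1        # midpoint still inside the circle
        else:
            x -= 1
            d += 2 * (y - x) + 1  # midpoint outside: step x inward
    return pts

print(circle_octant(3))  # [(3, 0), (3, 1), (2, 2)]
```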
Now try it with the sequence of prime numbers.
The difference table is also Pascal's triangle on its side. But since Pascal's triangle also contains the polynomials, that is no surprise.
Interesting - I always found myself looking at the difference between numbers on sequences, to get a bit of a feeling of what's happening. I never thought that method could be used like that :)
How do we know we've reached a constant sequence? Maybe it's just constant for the first 100 terms.
I remember figuring out that the number of times we had to find the difference was the degree of the polynomial that could be used to generate that sequence
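Both points can be illustrated in a few lines of Python (my own sketch): repeated differencing gives an *apparent* degree, which is only certified for the terms actually sampled, per the caveat above.

```python
def apparent_degree(seq):
    """Difference the sequence until the remaining terms are all equal.
    Only certifies the terms you sampled -- a longer sequence could
    still break the pattern."""
    depth = 0
    while len(set(seq)) > 1:
        seq = [b - a for a, b in zip(seq, seq[1:])]
        depth += 1
    return depth

assert apparent_degree([n * n for n in range(10)]) == 2               # squares
assert apparent_degree([n * (3 * n - 1) // 2 for n in range(10)]) == 2  # pentagonal
assert apparent_degree([n ** 3 for n in range(10)]) == 3
```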
More discrete maths pleeeease! Computer scientists love discrete mathematics :D
n-gonal numbers are fun. The xth n-gonal number is (x*(x-1)*n)/2 - x^2 + 2x.
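A quick check of that formula against two known cases (squares are the 4-gonal numbers), in Python:

```python
def ngonal(n, x):
    # x-th n-gonal number, as written above (integer-safe: x*(x-1) is even)
    return (x * (x - 1) * n) // 2 - x * x + 2 * x

assert [ngonal(4, x) for x in range(1, 6)] == [1, 4, 9, 16, 25]   # squares
assert [ngonal(5, x) for x in range(1, 6)] == [1, 5, 12, 22, 35]  # pentagonal
```

Expanding it gives ((n-2)x² - (n-4)x)/2, the standard polygonal-number formula.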
I solved this a different way that ended up with a general formula for polygonal numbers (with g(0) = 0):
g(n) = 1 + (v-1)(n-1) + (s-2)((n-1)(n-2)/2)
where
n = iteration number (also the number of points on one side from vertex to vertex, inclusive)
1 = common point for all iterations (lower left in your diagram)
v = # of vertices of the polygon (each level adds the same amount of vertices)
s = # of sides of the polygon
For the second term, (v-1) is multiplied by (n-1) because the additional vertices first appear at n = 2.
The third term, (s-2) * [(n-1)(n-2)/2], represents the fact that the number of points on each side, excluding the vertices, ("interior points") grows by one each iteration. Two of the sides - bottom and left in your diagrams - are accounted for by the growth in vertices, so it's (s-2) instead of s.
The total number of interior points for any particular iteration is equal to the sum (0 + 1 + 2 + 3 + ... + m), where m = (n-2) because there are no interior points on a side until n = 3. This sum is equal to (m * (m+1)) / 2, which then becomes (n-2)(n-1) / 2.
Multiply the cumulative interior points per side, (n-2)(n-1) / 2, by the number of sides missing interior points - (s-2) - and you have the third term.
This results in a similar - though not the same - representation as using the D^(0) method you show.
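A quick numeric check of g(n) (Python, my own sketch) against the pentagonal and triangular numbers:

```python
def g(n, v, s):
    # common point + vertex growth + interior points on the s-2 growing sides
    return 1 + (v - 1) * (n - 1) + (s - 2) * ((n - 1) * (n - 2) // 2)

assert [g(n, 5, 5) for n in range(1, 6)] == [1, 5, 12, 22, 35]  # pentagon
assert [g(n, 3, 3) for n in range(1, 6)] == [1, 3, 6, 10, 15]   # triangle
```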
Awesome video! 💯💪
So, triangular numbers have a constant difference of 1 at D^2. Working out the hexagonal numbers gets us 4 at D^2. Therefore, a sequence representing a polygon with a given number of sides will have a constant of (sides - 2) at D^2.
Does it also hold that the constant occurring at D^n also corresponds to the sides/faces/higher dimensional "edge" of said shape and 'n' is the dimension of the shape? I.e. triangular numbers' constant is at D^2 so triangles are 2-dimensional shapes.
I once derived a formula for the nth k-gonal number. So you could plug in, say, n = 3 and k = 5 to get the third pentagonal number. Sadly lost it in a move, but sure I could do it again.
I don't have enough brain power at the moment to work out the general formula right now, but the constant second difference for the sequence of n-gonal numbers is n-2.
😮 that's really cool
Wow, I used to do this to figure out sequences. Didn't know it had a name :)
Very interesting!!
first i think to fill in the pentagons, because those layers are just nonsense. full pentagons only! (though only AFTER i did this did i discover that the pentagon points couldn't actually ever fit on a grid)
1+0 = 1
5+0 = 5
12+1 = 13
22+3 = 25
the differences here are 4, 8, 12; multiples of 4
which means the pentagonal numbers are 1 + 4*tri(a) - tri(b), where tri gets a triangular number
tri(n) being a triangle with n points on the bottom is n(n+1)/2 points.
1 + 4 * tri(0) - tri(-1) = 1
1 + 4 * tri(1) - tri(0) = 5
1 + 4 * tri(2) - tri(1) = 1 + 12 - 1 = 12
...
so, expanding:
1 + 4 * (n-1)n/2 - (n-2)(n-1)/2
= 1 + 2n^2 - 2n - n^2/2 + 3n/2 - 1
= 3n^2 /2 - n/2 = (3n-1)n/2
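That derivation checks out numerically; a short Python sketch (the tri/pent names are mine):

```python
def tri(n):
    # triangular number; 0 for n <= 0, matching tri(0) = tri(-1) = 0 above
    return n * (n + 1) // 2 if n > 0 else 0

def pent(n):
    # pentagonal number as 1 + 4*tri(n-1) - tri(n-2)
    return 1 + 4 * tri(n - 1) - tri(n - 2)

assert [pent(n) for n in range(1, 7)] == [1, 5, 12, 22, 35, 51]
# matches the expanded closed form (3n - 1)n / 2
assert all(pent(n) == (3 * n - 1) * n // 2 for n in range(1, 50))
```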
Funny to me that I knew this going into the video, when I thought I wouldn't. Goes to show how much public school really does teach you if you just pay attention.
Really interesting! I'm surprised I'd never heard of it
Thanks! I love your videos!