I have a question. What if the Big O is not clear?
I am trying to find the big O analysis of a DP algorithm, but there is not a lot of information on this.
Let's say, that I have a program that prints all prime numbers from 2 to N.
The program is a simple brute force solution. It checks whether num % i == 0 for each candidate divisor i. If it is 0, then num has a factor and must not be prime, so don't print it. If no factor is found, it must be prime, so print it.
def bruteForce(num):
    for i in range(2, num):
        if num % i == 0:
            return False
    return True

for i in range(2, N + 1):
    if bruteForce(i):
        print(i)
Obviously this is O(N^2), as it has a nested loop. The worst case is that you check every divisor from 2 up to num before concluding it is prime, meaning the inner loop runs nearly all the way to N.
Another solution is to only check divisors up to sqrt(num), assuming that sqrt() is an O(1) math function.
So it would look like this.
import math

def bruteSqrtForce(num):
    # range() needs integers, so truncate the square root
    # (math.isqrt(num) would also work and avoids floats)
    for i in range(2, int(math.sqrt(num)) + 1):
        if num % i == 0:
            return False
    return True

for i in range(2, N + 1):
    if bruteSqrtForce(i):
        print(i)
My guess is that this is O(N*sqrt(N)). However, some people on the internet are telling me that this only improves the running time and that the Big O doesn't actually change. And assuming that sqrt(n) is an O(1) math function is a big assumption, because that may not always be the case for built-in math functions.
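One way to sanity-check the O(N*sqrt(N)) guess is to instrument both versions and count modulo operations as N grows. This sketch is not from the original comment; the helper names (count_ops_bruteforce, count_ops_sqrt) are made up for illustration:

```python
import math

def count_ops_bruteforce(n):
    """Count modulo operations for naive trial division up to num."""
    ops = 0
    for num in range(2, n + 1):
        for i in range(2, num):
            ops += 1
            if num % i == 0:
                break  # found a factor, stop early
    return ops

def count_ops_sqrt(n):
    """Count modulo operations when trial division stops at sqrt(num)."""
    ops = 0
    for num in range(2, n + 1):
        for i in range(2, int(math.sqrt(num)) + 1):
            ops += 1
            if num % i == 0:
                break
    return ops

print(count_ops_bruteforce(1000))
print(count_ops_sqrt(1000))
```

Plotting these counts for increasing N should show the sqrt version growing much more slowly, which is the empirical signature of N*sqrt(N) versus N^2.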
Another way to improve this is to use a DP approach. One DP approach is the Sieve of Eratosthenes. But another approach is to just use memoization: keep track of previous primes and check whether the next number is divisible by any of them. And the big question is: what is the Big O of this algorithm? Because I have no idea, and the internet doesn't seem to know exactly.
res = [2]

def DpPrime(num):
    for p in res:
        if num % p == 0:
            return False
    return True

for num in range(3, N + 1):
    if DpPrime(num):
        res.append(num)
print(res)
Basically, the res list keeps track of previous answers, and DpPrime iterates over past primes to see if we have found a new prime; if we have, we append it to res. So in the end, we don't have to iterate through all possible factors, we just need to check whether num has a prime factor, which is a smaller range of possibilities to iterate through.
I cannot figure out the exact Big O of this. ChatGPT isn't helping, and when I search for DP for prime numbers I get the Sieve of Eratosthenes, which is another DP algorithm but doesn't seem to be related to mine.
My guess is that this DP algorithm for primes is O(N*M), where N is the user input and M is the number of primes up to N that we have to iterate through. I can't figure out a better way of finding the Big O of this algorithm.
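For what it's worth, M can be measured: it is π(N), the prime-counting function, which grows like N/ln(N). A quick sketch (the helper name dp_prime_iterations is mine, not from the thread) re-runs the memoized approach while counting total inner-loop iterations, to compare against the N^2 upper bound:

```python
def dp_prime_iterations(n):
    """Run memoized trial division and count inner-loop iterations."""
    primes = [2]
    iterations = 0
    for num in range(3, n + 1):
        is_prime = True
        for p in primes:
            iterations += 1
            if num % p == 0:
                is_prime = False
                break  # composites stop at their smallest prime factor
        if is_prime:
            primes.append(num)
    return len(primes), iterations

num_primes, iters = dp_prime_iterations(10_000)
print(num_primes)   # pi(10000) = 1229
print(iters)
print(10_000 ** 2)  # the N^2 bound, for comparison
```

The measured iteration count sits far below N^2, which illustrates why O(N*M) feels tighter in practice even though M is itself bounded by N.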
The simple answer is that the algorithm you describe is O(N^2), because the number of primes up to N is bounded by N. It just has lower overhead. Considering it to be O(N*M) is a valid way to look at it, but problematic in that we don't know what M actually is.
Remember that Big O is an upper bound, so we can generally use it even when it's not tight, though if we KNOW that it's not also a lower bound, we do want to use little o instead of big O.
@@maryelainecaliff I see. That makes sense. My simple DP algorithm for prime numbers does not have a better Big O than the brute force, unless I can mathematically prove a tighter bound for it.
I am currently in a rabbit hole of prime generating algorithms, and I just found out that the Sieve of Atkin has a time complexity of O(N). It was published in 2003, and there aren't many resources that cover this algorithm in depth.
So, I think it would be awesome if you decided to make a video on all the different prime generating algorithms, with the pros and cons of each. Just a video idea.
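Since the Sieve of Eratosthenes keeps coming up in this thread, here is a minimal sketch of it for comparison; it runs in O(N log log N) time, already better than any of the trial-division variants above:

```python
def sieve_of_eratosthenes(n):
    """Return all primes up to n by crossing off multiples of each prime."""
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Start at p*p: smaller multiples of p were already
            # crossed off by smaller primes.
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(sieve_of_eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The key difference from the memoized trial-division approach is that the sieve never performs a division at all; it only marks multiples, which is where the better complexity comes from.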
One thing to remember is that reducing overhead can matter, even when it doesn't change the computational complexity. I might have two different O(n^2) algorithms and still have one take 5 times as long as the other, leading me to prefer the faster one despite the fact that they have the same complexity.
Sorry if this is an unrelated comment, but about DP: is it essential to learn recurrence relations, proofs, and some discrete mathematics stuff before DP, or should I just practice more DP problems? I'm trying to get good at DP. Ty.
@@A57278 The best way to learn DP is to practice DP.
First learn how to do problems with brute force, and then learn the more elegant DP solutions.
An example of an easy problem to start with is computing the first 100 Fibonacci numbers, or something similar to that.
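To make that concrete, here is a minimal memoized Fibonacci sketch (not from the original comment): caching turns the naive O(2^n) recursion into O(n), which is the core DP idea in miniature:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """nth Fibonacci number; memoization makes this O(n) instead of O(2^n)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print([fib(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Without the @lru_cache decorator, fib(50) would take minutes; with it, the call is instant because each subproblem is solved exactly once.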
Recurrence relations are related to DP, but they're more for proving the time complexity of functions. Same with discrete mathematics: it's for proving time complexity and also proving that a function actually solves the problem.
Just understand DP on simple problems first, then try to use DP for more complex problems that can actually benefit from it, as not all problems require DP.
I think the professor can help give pointers to this.