"To be or quit to be." -Best AI ever
Wow, I am happy to share that I wrote the code for this Shakespeare Monkey entirely in Python after watching your first 2-3 videos. I already had very optimised code. And since I wasn't able to deal with the space complexity of 1000 initial variations and the pool, I did implement this accept-reject kind of thing, but not exactly the way you did. What I did was the same: I picked one and checked whether its probability was smaller. Moreover, on the first fitness check of the initial population I discarded the ones with 0 fitness. So you can see why I had 1000 more initial variations, but it converged very fast; "to be or not to be" converged in, I think, around 10-20 generations.
What I've found the most difficult part so far (unless you reveal it in the next videos) is how we can convert real-life things (like you've shown in the path problem) into fitness. It would be awesome if you gave us some more examples of how to "fitnessize" real-life problems. BTW, awesome videos. Thanks.
alrik11es It all depends on what you're trying to do. If you're using the algorithm to evolve a Pac-Man AI, for example, your fitness function would be the number of pellets he ate before he died. Actually, if you're interested in this stuff, you should look up "evolved antennas." NASA has actually used genetic algorithms to design a better antenna for a satellite. The algorithm generated a population of random antennae, and the ones that were selected and bred were those that most closely matched the specifications and performance they needed. That's a pretty cool real-world example, because it actually went to space, in 2006 I believe.
Finally this enormous memory consumption has been fixed; Skynet can finally develop without performance issues.
Sir, great video.
But remember, the Sinclair ZX81 was a very economical computer that allowed many people to be introduced to programming back in the day.
Could also do:
Pick 1, 2, or 3.
Multiply by the total of all elements (11 in this example: 8+2+1=11), divide by the number of different elements (3 in your example).
If the number is lower than or equal to the amount of the first element type (8), pick A; else if lower than or equal to first+second (8+2=10), pick B; else if lower than or equal to first+second+third (8+2+1=11), pick C.
This seems simpler and also scales better: if you are working with a ton of different elements, let's say 1,000,000, you run the risk of having to run the "step 1+2" function up to 1 million times,
while the other approach is always only one run-through (although with 1M numbers added together and the result divided by 1M, the if statements would need to be some generic function, especially with a high number of elements), which should be vastly faster.
What if there are 1 million different fitness values? How do you create the if-else structures then?
Well, it increases your time complexity polynomially (or use dynamic programming to make it linear), and it also increases space complexity linearly, because for the n-th element you need to add up all n-1 elements by iterating over them. This might be a little better in terms of space complexity, yes, but it comes at the cost of time complexity. So accept-reject is more efficient, and in this case better.
Can someone explain something to me? What is the difference/benefit of creating functions on an object in the constructor...
constructor() {
  this.acceptReject = function() { ... }
}
vs creating it after the constructor:
constructor() {
}
acceptReject() {
}
I have moved functions back and forth and haven't been able to find a benefit one way or another. Thanks.
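For anyone wondering, here is a minimal sketch of the difference in plain JS (an illustrative class, not the video's actual code): a method declared in the class body lives on the prototype and is shared by every instance, while a function assigned inside the constructor is recreated per instance, which costs a little more memory but lets it close over variables local to the constructor.
class WithOwnFunction {
  constructor() {
    const secret = 42; // local to the constructor
    // a brand-new function object is created for every instance,
    // and it can capture `secret` via closure
    this.acceptReject = function() { return secret; };
  }
}
class WithPrototypeMethod {
  // one shared function on WithPrototypeMethod.prototype
  acceptReject() { return 42; }
}
const a1 = new WithOwnFunction(), a2 = new WithOwnFunction();
console.log(a1.acceptReject === a2.acceptReject); // false: separate copies
const b1 = new WithPrototypeMethod(), b2 = new WithPrototypeMethod();
console.log(b1.acceptReject === b2.acceptReject); // true: same prototype method
For small sketches the difference is negligible, which is why moving the functions back and forth doesn't change the behaviour.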
You are using the fitness value directly to pick elements. Studies have shown that an indirect way works better. Step 1: order the pool based on the fitness values. Step 2: the chance that the first element is chosen is p, the 2nd (1-p)*p, the 3rd (1-p)^2*p, and the last (1-p)^(n-1). Step 3: repeat step 2 until your new pool is full.
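A minimal sketch of that rank-based scheme in JavaScript (the toy population and the value of p are illustrative, not from the video):
// toy population; in practice these come from your GA
let population = [
  { dna: "to be or nox", fitness: 11 },
  { dna: "ta bq or not", fitness: 8 },
  { dna: "xxxxxxxxxxxx", fitness: 1 },
];
// sort best-first so rank 0 is the fittest member
population.sort((a, b) => b.fitness - a.fitness);
function rankSelect(pop, p) {
  // walk from best to worst, keeping each member with probability p,
  // so rank i is chosen with probability (1-p)^i * p
  for (let i = 0; i < pop.length - 1; i++) {
    if (Math.random() < p) return pop[i];
  }
  // the last member absorbs the leftover probability (1-p)^(n-1)
  return pop[pop.length - 1];
}
const newPool = [];
while (newPool.length < population.length) {
  newPool.push(rankSelect(population, 0.2)); // p = 0.2 is just an example value
}
A nice property of rank selection is that it only depends on the ordering, so one runaway fitness value can't completely dominate the new pool.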
Hi - you could select a random number up to the sum of the fitness scores (8+2+1=11) and select the first if it is at most 8, the second if it is larger than 8 and at most 10, and the third if it is 11.
The problem is that there really aren't only three parents ever. There are usually hundreds of them. How would you assign those numbers to them?
Another simple thing would be to have an array with the same length as the population, accumulate the probabilities, and then use a for loop to find your element (I come from Python and am trying to write this Java-style; btw I'm German and auto-correction keeps capitalising words). A: 40%, B: 20%, C: 10%, D: 30% (I don't know where these numbers come from :D), so prop = [40, 60, 70, 100] -> x = random(100); for (int i = 0; i < prop.length; i++) if (prop[i] > x) return pop[i]; (so return your individual that should be reproduced). // edit: you can also say that the random number goes from 0 up to your highest value, i.e. the final value in that list.
I used a similar algorithm in my project. But I did it with a while loop and subtraction so that you don't need to create an array first.
The Sinclair is magic, yeah!
I know I already posted about this, but I believe that this method is even faster (perhaps not, I don't really know), since there is no selection loop and the only loop goes up to the size of the population:
for (int i = 0; i < populationSize; i++) {
    int select = 0;
    double selector = Math.random();
    /* scores[] is the table containing the selection probability of each element,
       for example, if element 3 has a 12 percent chance of being selected, scores[3] = 0.12 */
    do {
        selector -= scores[select];
        select += 1;
    } while (selector > 0); // do-while also covers the rare case where Math.random() returns exactly 0
    select -= 1;
    // Here, add the element at index select to the new population
}
Sorry for posting again about it, but the "hack" thing kind of bothers me for a reason... Still a really cool video !
I was about to post a very similar algorithm. And I agree that it's much better. No infinite loop; no need for the "besafe" variable, and each random number *always* picks a new member for the new population. I would think this would be the optimal way to do this.
This is great, thank you. I am going to make a note on github to cover this in the next live stream.
+Spectron Isn't that example the same as sinclairzx81's, but done with a while loop?
This algorithm isn't the Monte Carlo algorithm; in the Monte Carlo algorithm, you pick a random number and decide whether you want to use it or not, whereas with my algorithm every random number is used to select an element, so I think it is fundamentally different in that sense :)
The way I came up with this idea (although I'm obviously not the first to think of it) was to imagine all the population elements as line segments arranged end to end--each segment with a length equal to its fitness score. Now throw a dart at the whole string of segments (pick a random number between 0 and the total length of them all) and select the element you hit. That's all this algorithm does--might be a nice way to explain it.
I have worked on exactly the same problem separately from you, and I have to say that you were not using the roulette wheel algorithm correctly. You should make an array of the fitnesses of all elements in the population, and while you are traversing that array you can sum all the fitnesses into a variable, in time complexity N. Then simply generate a number between 0 and 1 and start summing up the fitness[i] / fitnessSum values; when you exceed the generated number, you know that the i-th element is the one to select, according to its fitness-based probability. My solution always finds the best solution around the 130th generation, with a mutation probability of 0.04 (4%).
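A minimal sketch of that roulette-wheel selection in JavaScript (assuming a population array whose members have a fitness property; the names are illustrative, not the video's code):
function rouletteSelect(population) {
  // one pass to get the total fitness
  let fitnessSum = 0;
  for (const member of population) fitnessSum += member.fitness;
  // pick a point on the wheel and walk until the running share passes it
  const r = Math.random(); // between 0 and 1
  let running = 0;
  for (const member of population) {
    running += member.fitness / fitnessSum;
    if (running >= r) return member;
  }
  return population[population.length - 1]; // guard against floating-point round-off
}
Each call is O(n), but there is no rejection loop and no extra array, so calling rouletteSelect(population) twice for the two parents stays predictable even when most fitness values are tiny.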
I implemented this code in Python and I have a question. Say I pick 8 as the random number and the randomly picked cell contains an individual with fitness = 7: I don't pick that individual, I repeat the process, and this time I pick 2 as the random number and the newly picked cell contains an individual with fitness = 3, so I pick it. Is that right? Should the algorithm work like this? Because initially I thought the random number should be picked only once, and then you search for a fitness greater than that number, so that several individuals get the chance to be picked against the same requirement (the number). I hope I was clear, because English is not my first language.
I know nothing about JS; I can't understand why there is a while loop in the function acceptReject.
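For context, a rough sketch of the accept-reject idea in plain JS (illustrative only, not the video's exact code; population and maxFitness are assumed to exist): you keep drawing random members until one passes a probability test, which is why a loop is needed at all.
function acceptReject(population, maxFitness) {
  let besafe = 0; // guard so we never loop forever
  while (besafe < 10000) {
    // pick a random member and a random threshold
    const index = Math.floor(Math.random() * population.length);
    const candidate = population[index];
    const r = Math.random() * maxFitness;
    // accept only if the member's fitness beats the threshold,
    // so fitter members are accepted more often
    if (r < candidate.fitness) return candidate;
    besafe++; // otherwise reject and draw again
  }
  return null; // nothing accepted within the safety limit
}
The number of rejected draws is random, which is exactly what the cumulative-sum approaches elsewhere in this thread avoid.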
Hello my friends!! I have an assignment in an Artificial Intelligence course. It's about scheduling staff using genetic algorithms. As the individual (chromosome) we consider a 2-D array in which each row represents an employee and each column a particular day. What's your opinion/suggestion about the selection function I should use, and why?
Thank you.
It may sound silly, but I want to ask: why not just take as parents the two elements with the highest fitness?
What about a coding challenge where you try to place as FEW ellipses (with a fixed size) as possible in a canvas? I know you've done a brute force kinda thing where you try loads of places, but what about placing as few as possible?
Zero ellipses.
+Kyle Amoroso But with the whole screen filled
Now that's interesting. I'd imagine you'd have the largest possible ellipse in the center, then recursively fill in the four corners with the largest possible ellipse there, until you reach ellipses a single pixel in radius.
At last he's trying to say that my method was perfect 🤔😂
Why not just use a logarithmic mapping? I just derived one: let steepness = s, let the random number be between zero and n, and let the result be between zero and r; the equation becomes r*x^s/n^s. So you can take n = 100, x = random(n), r = 3, s = 2, and the chosen one among (A, B, C) is 3*x^2/100^2.
yeah, mapping from numbers to indices makes more sense
New to this algorithm;
When they pick one out of the population for mating, is that then removed from the array in order to prevent an item mating with itself?
In a genetic algorithm, a partner A can mate with the same partner B... partner A and partner B are chosen randomly according to their fitness values. In another loop, partner A and partner B are chosen again and assigned accordingly.
Elitism. No need to prevent it as long as it is a probability and not a fixed event.
if(r
wow, do you really have a screen that big!!!
😀 It isn't just a green screen 😁
Why not binary search on prefix sums? Much easier.
I was screaming that since the beginning.
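A minimal sketch of that approach in JavaScript (fitness is an array of non-negative scores; the helper names are illustrative): build the prefix sums once per generation, then each pick is a binary search.
// prefix[i] = fitness[0] + ... + fitness[i]
function buildPrefixSums(fitness) {
  const prefix = [];
  let running = 0;
  for (const f of fitness) {
    running += f;
    prefix.push(running);
  }
  return prefix;
}
// pick an index with probability proportional to its fitness, in O(log n)
function select(prefix) {
  const r = Math.random() * prefix[prefix.length - 1];
  let lo = 0, hi = prefix.length - 1;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (prefix[mid] < r) lo = mid + 1;
    else hi = mid;
  }
  return lo; // first index whose prefix sum reaches r
}
// usage: const prefix = buildPrefixSums([8, 2, 1]); const parentIndex = select(prefix);
So filling a new population of size n costs O(n log n) per generation instead of an unpredictable number of accept-reject attempts.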
A way to optimize the algorithm:
You could say that r = random(maxFitness/2, maxFitness);
This essentially kills off the members of the population that have bad fitness.
Of course, you can fiddle with the lower bound and find an optimal one. (divide by a lower or higher number or subtract a number)
Thanks for the tip!
This isn't great for all GA implementations. Cutting off a portion of the population from the breeding space, even the "unfit" portion, can lead to variation issues; in some cases, such as problems with a large, seemingly uncorrelated search space, low-fitness individuals can produce high-fitness offspring after the breeding process.
Matrix input ... matrix function ... matrix output ... how would you train this to learn the matrix function, by feeding it the matrix input and output and getting the matrix function as the result? Maybe for forecasting?
I'd say you're taking it in the right direction, but you can certainly go further with this. For instance, using "hacks" like adding 0.01 to each fitness score and looping until 10000 is all very messy. There is a way to work around these and make a very clean solution. I'm half tempted to post my code somewhere to show you what I came up with that handles the cases you've made note of (all-zero fitnesses, infinite looping).
Yes, I agree there are many improvements I could make. There's a nice discussion here that I plan to talk about more too: github.com/CodingRainbow/Rainbow-Topics/issues/119
I know this is old, but: it's kind of a useless comment if you don't post the code (or at least explain the concept of the improvement, which would probably be better, since people reading this can likely already code quite a bit), instead of the "oh look at me, I've got code which is much better, but I won't show it to you guys" attitude. There are always ways to improve a setup, and if you want to share your thoughts on how, then do that instead of this "look at me" stuff ^^
Why not implement elitism? When creating your next generation, you sort the old population by fitness and pick your crossover partners from the best 20%. Usually anything between 10% and 30% works well, but if you're unsure how to set the elitism value, make a different GA that finds the optimal elitism factor.
Alex Tuduran This can lead to variation issues, especially in large search fields. I did just that: I used the top 50% of the population for the breeding process, and after 5-6 generations my GA became stuck on a local maximum. In some problems, elitism in the form of passing the top 10% of the population directly to the next generation can be useful; however, it definitely can cause problems.
My browser crashes as soon as I have more than like 360 characters; I must be doing something wrong.
Why not pick the highest fitness straight away?
To increase variation and prevent the population from getting stuck at a good local fitness; the objective is the global optimum.
I had this issue. When implementing my GA I just assumed the top 50% of the population would be the ideal choice to produce offspring with; after 4-5 generations the GA would stagnate and wouldn't converge on an optimal result at all. GAs can be tricky to get right, especially when you don't have a simple/perfect fitness function or there's a wide, uncorrelated search field; you need to keep variation high in order to converge on an optimal result.
That is an inefficient way to select/reject. Why not just use a random number between 0 and 10 (11 numbers): if the number is 0, choose the first parent; if the number is 1 or 2, choose the 2nd parent; and if the number is 3-10, choose the 3rd parent.
But there are hundreds of parents; how would you decide which random number goes to which of them without time-consuming sorting by fitness?
@@Glademist You don't sort them. You use a larger range of numbers. E.g., instead of 3 parents, increase it to 6. Then use a random number r between 0 and 50: if r = 0-1, choose parent 1; if 2-4, use parent 2; if 5-8, use parent 3; if 9-13, use parent 4; if r = 14-19, use parent 5; if r = 20-50, use parent 6.
@@georgechristoforou991 OK, I seem to understand it now, or at least know how to try to implement it. The next part of the series helped with that too. Thank you.
Populations can contain hundreds of individuals, and the size should be able to change; hard-coding else-if statements is not a solution.
Wanna try this?
#include <iostream>
#include <vector>
#include <numeric>
#include <cstdlib>
// stand-in random generator (the original one wasn't shown): returns a value between 0 and max
size_t your_random_generator(size_t max) { return std::rand() % (max + 1); }
int main()
{
    // array of fitness values
    std::vector<size_t> fitnesses;
    fitnesses.push_back(2050); // 0
    fitnesses.push_back(1800); // 1
    fitnesses.push_back(1500); // 2
    fitnesses.push_back(789);  // 3
    fitnesses.push_back(456);  // 4
    fitnesses.push_back(123);  // 5
    fitnesses.push_back(12);   // 6
    // for logging results
    std::vector<size_t> summary(fitnesses.size());
    // Sum of all fitnesses
    size_t total = std::accumulate(fitnesses.begin(), fitnesses.end(), 0);
    while (true)
    {
        // Offset for the next interval
        size_t accum = fitnesses[0];
        // Pick a random number between 0 and the sum of all fitnesses
        size_t val = your_random_generator(total);
        // flags
        size_t selected = 0;
        bool found = false;
        // If the random value is in the range of the first value, select the first value
        if (val <= accum) { found = true; }
        // (everything below this line is a reconstruction; the original post was cut off here)
        // Otherwise keep extending the interval until it contains the random value
        for (size_t i = 1; !found && i < fitnesses.size(); ++i)
        {
            accum += fitnesses[i];
            if (val <= accum) { selected = i; found = true; }
        }
        // Count how often each index was selected and print the running totals
        summary[selected]++;
        for (size_t i = 0; i < summary.size(); ++i) std::cout << i << ":" << summary[i] << "  ";
        std::cout << std::endl;
    }
}
+Damian Reloaded thanks for this!
Daniel Shiffman It's kinda rough but it works ^_^