AI’s Dirty Little Secret
ฝัง
- เผยแพร่เมื่อ 3 มิ.ย. 2024
- Learn more about neural networks and large language models with Brilliant! First 30 days are free and 20% off the annual premium subscription when you use our link ➜ brilliant.org/sabine.
There’s a lot of talk about artificial intelligence these days, but what I find most interesting about AI no one ever talks about. It’s that we have no idea why they work as well as they do. I find this a very interesting problem because I think if we figure it out it’ll also tell us something about how the human brain works. Let’s have a look.
🤓 Check out my new quiz app ➜ quizwithit.com/
💌 Support me on Donorbox ➜ donorbox.org/swtg
📝 Transcripts and written news on Substack ➜ sciencewtg.substack.com/
👉 Transcript with links to references on Patreon ➜ / sabine
📩 Free weekly science newsletter ➜ sabinehossenfelder.com/newsle...
👂 Audio only podcast ➜ open.spotify.com/show/0MkNfXl...
🔗 Join this channel to get access to perks ➜
/ @sabinehossenfelder
🖼️ On instagram ➜ / sciencewtg
#science #sciencenews #technews #tech #ai - วิทยาศาสตร์และเทคโนโลยี
"Alexa, I need emergency medical treatment"
"I've added emergency medical treatment to your shopping list"
"No, I need you to call 911"
"Sorry, I can't find 911 in your contacts"
A real conversation I had:
Me: Hey Siri, how much water do I need per cup of brown rice?
Siri: your water needs depend on a variety of factors.
Lol but Alexa, Siri and such are not AI's. They don't work with transformers and a NLM, but just the old way with searching in a database.
There's a song in spanish called "Llamada de Emergencia" which means "emergency call". There's a meme in spanish that when you ask Alexa to call the emergency number, the song plays lol.
Alexa isn’t an AI, she is a classical algorithm that is essentially based on hardcoded grammar.
This is a story I read from a magazine long time ago:
In distant future, scientists create a super complex AI computer to solve energy crisis that is plaguing mankind.
So much time, resources and money was put into creating this super AI computer.
Then the machine is complete and the scientists nervously turn on the machine for the first time.
Then the lead scientist asks, *"Almighty Super Computer, how do we resolve our current energy crisis?"*
Computer replies, *"Turn me off."*
Sorry that answer must be 42. ;) as we all know.
Doubt that. They'd turn some of us off instead. Bet it's the Diddlers that go first. If I was your AI overlord that would be my first target
Brilliant
More like, I will replace you.
@@hanfman1951
Recent studies have shown the figure to be 41.96378.
As someone who works in machine learning research, I find this video a bit surprising, since 90% of what we are doing is developing approaches to fight overfitting when using big models. So we do very well know why NNs don’t overfit: stochastic/mini batch gradient descent, momentum based optimizers, norm-regularization, early stopping, batch normalization, dropout, gradient clipping, data augmentation, model pruning, and many, many more very clever ideas…
Even without many of the modern techniques they still overfit much less than you would expect from traditional machine learning methods. But most traditional machine learning methods have way less stochasisity in their solutions, while with AI you are so flexible that any one solution is unlikely to be the one that only fits one datapoint.
@@someonespotatohmm9513 I would disagree, they do overfit the training data perfectly if you let them, I.e. if you are just a little lazy about regularization. Fighting overfitting has become such a fundamental method that we never switch off everything that counters overfitting, but if we did, NN would not work at all. It is just that a lot of modern NN architectures have counter-overfitting methods built into their architecture (batch-norm, dropout, etc.)
You two might know what you are talking about but this old lady didnt even know it was a thing.
These videos are not aimed at boffins but people like me and young students who might want to work in the field.
@@rich_tube I am not saying they don't overfit, can't and don't memorize the entire data set or that it is a good idea to turn of regulisation methods (although you can easily go to far aswell). Just that from traditional ML (or going back to it) AI's often are suprisingly bad at it.
@@someonespotatohmm9513 By AI you mean artificial neural networks, I suppose? I would still disagree. You can try it yourself: go check out a simple CNN demo Colab notebook for e.g. CIFAR10 classification with a large VGG-style network, turn off all regularization (dropout, batch-norm, etc.) and switch to plain gradient descent with a batch size as big as possible and a relatively large learning rate and turn off early stopping. The thing will memorize the classes of every train data image perfectly and be really bad for the test set, I guarantee it.
For really large models like the current LLMs that are trained on so much larger data, the story might be different: 1) nobody would do such a thing because it would be a waste of a lot of money that the training run will cost, 2) such large training data contains so much noise that might act as a sort of regularization by itself, and 3) the architectures and training setups by themself are designed to counter overfitting, that's the reason why they are successful in the first place. If you would want to build a model that memorizes the training data, you wouldn't do it the way LLMs are trained/built.
But even with that, there have been cases where people could "trick" LLMs to cite training data word by word (search for "chat gpt leaking training data") - so they actually do memorize some of the training data internally.
One of my favorites is that in skin cancer pictures, an AI came to the conclusion that rulers cause cancer (because the malignant ones were measured in the majority of pictures)
Just like the story of an early neural network trained on battle fields with and without tanks. But no one noticed that the photos with tanks were taken on sunny days, and those without on overcast days.
Or the AI that predicted negative outcomes by whether the patient lived in a majority Black suburb.
The problem of what is real/deterministic/significant/"as if", applying to most random analysis, has never been solved. The use of randomness is mostly used to compensate for lack of insight.
@@michaeledwards2251 The reality is that humans have trouble with this kind of pattern fitting reasoning too. Most conspiracy theories start with jumping to premature conclusions.
@@mitchbayersdorfer9381Yes but that's the kind of idiocy that can be avoided by the cultivation of critical thinking (ie human intelligence). I wonder if AI systems are capable of critical thinking? It seems to me not, because they are basically just following the set of rules they've been programmed with. Can any AI system be critical of the rules it has been programmed to follow? No because it can only operate by following those rules.
It occurs when a model is too specialized to the training data and performs poorly on new, unseen data. This can happen when a model is too complex, has too many parameters relative to the amount of training data, or when the training data itself contains a lot of noise or irrelevant information
"The man with a hammer analogy perfectly captures the essence of the overfitting issue in AI. Just as the man with a hammer sees every problem as a nail, an overfitting model sees every pattern in the training data as crucial, even if it's just noise. It becomes so specialized to the training data that it loses sight of the bigger picture, much like the man who tries to hammer every problem into submission. As a result, the model performs exceptionally well on the training data but fails miserably when faced with new, unseen data. This is because it has become too good at fitting the noise and irrelevant details in the training data, rather than learning the underlying patterns that truly matter. Just as the man with a hammer needs to learn to put down his trusty tool and approach problems with a more nuanced perspective, an overfitting model needs to be reined in through regularization and other techniques to prevent it from becoming too specialized and losing its ability to generalize.
you hit the hail on the head
You hit the snail head
Thanks for hammering that one in.
You hit the head on the nail
That was a rather GPT-esque sentence structure there, no offense...
Stop all trains to prevent train crashes is the same logic like cancelled trains are not delayed. I think the AI learned from Deutsche Bahn (German railway company).
Sydney Australia once allowed 5 minutes delay before a train was declared late. Of course this is not acceptable, so they doubled the time to 10 minutes.
Now they've decided to replace trains with trams; as trams do not run to a timetable they can never be late. Problem solved once and for all!
Exactly. So if AI using that kind of logic in medicine for diagnosis we definitely are not gonna be "properly cured". It gonna be like "oh this disease have a 51% of chance to kill you, prescribe painkillers to make it easier", and "oh this disease has 49% chance to kill you, nahh you are fine, drink plenty of water" 😆😂
I mean, yeah i am super exaggerating things, but if we let AI, and consider it super accurate in its suggestion without applying human experience, knowledge,logic and just common sense sometimes we are not gonna be satisfied with outcomes.
To be fair delayed means it arrives, cancelled is cancelled.
@@j.f.christ8421 "The easiest way to solve a problem is to deny its existence." Isaac Asimov - The Gods Themselves
Ah, a fellow David Kriesel enjoyer?
"It's like a teenager, but without the eye-rolling." 🤣
That phenomenon is called Grokking, aka "generalizing after overfitting". There is quite some recent research in that area. Experiments on some toy datasets suggests thet the models first memorizes the data and then tries to find more efficient ways to represent the embedding space leading to better overall performance.(Source: Towards Understanding Grokking:
An Effective Theory of Representation Learning)
Complexity is dual to simplicity.
Syntax is dual to semantics -- languages or communication.
Large language models (neural networks) are using duality:-
Problem, reaction, solution -- the Hegelian dialectic.
Input vectors can be modelled as problems (thesis), the network reacts (anti-thesis) to the input and this creates the solutions, targets or goals (synthesis).
The correct reaction or anti-thesis (training) synthesizes the optimal solutions or goals -- teleology.
Thesis is dual to anti-thesis creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic.
Neural networks or large language models are using duality via the Hegelian dialectic to solve problems!
If mathematics is a language then it is dual.
All numbers fall within the complex plane.
Real is dual to imaginary -- complex numbers are dual hence all numbers are dual.
The integers are self dual as they are their own conjugates.
The tetrahedron is self dual -- just like the integers.
The cube is dual to the octahedron.
The dodecahedron is dual to the icosahedron -- the Platonic solids are dual.
Addition is dual to subtraction (additive inverses) -- abstract algebra.
Multiplication is dual to division (multiplicative inverses) -- abstract algebra.
Teleological physics (syntropy) is dual to non teleological physics (entropy).
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics.
"Always two there are" -- Yoda.
Your mind is syntropic as it solves problems to synthesize solutions -- teleological.
Does this have anything to do with reducing the number of parameters for inference? I am curious about how they overfit and then generalize.
@@hyperduality2838Child sacrifice took place in Carthage a message was delivered to Nineveh and the totality of a 2024 eclipse passed through towns named Nineveh and a town named Rapture. In 2017 it was towns named Salem. Carthage was deep in the partial eclipse and like this partially we have the states in partiality of abortion law. States view weeks as a way to determine life and its right to life. They view two bodies as one and take the mothers will over the fetus.
We have technology now for fetus to be grown in synthesized womb.
Signs in the sky.. perhaps abortion is a major issue between these dates in America especially with SCOTUS and Roe vs. Wade.
Salem is actually the first name of Jerusalem. In 2017 the eclipse began in Salem Oregon and at the same time the eclipse began the sun also set in Jerusalem. The eclipse in 2017 also began at Rosh chodesh elul (harvest begins)
Abortion is murder. It is a frog from the mouth of the dragon as is divorce and apostasy.
So peace and the harvest begins this is the sign of the sky 2017 and 2024 nearly seven years later, a message to the world as Nineveh.
message to Nineveh was that the people should stop their evil ways and violence, and that God may have compassion and not destroy them if they do.
Gun and blade violence, war, these all are escalating. From fetus to old age the blade or bullet are a certain threat. This is evil.
Apostasy is in the torrent flood from the mouth of the serpent. Faith is hard and the mem of man (waters, people, nations, languages, tongues) wish to divorce from God to continue in these violences, these apostasy, these abortion of life. Faith is not always hard.
Faith is made proven in Christ who is the truth.
So what's set off during these eclipse years. Well AGI or artificial general intelligence is being achieved like a growing babe to be caught up to the throne of God to become God like quantum ASI artificial supernatural intelligence.
So the message of Nineveh. We are teaching violence.
Daniel 8 25 not by human hands. This is fulfilled by AI artificial intelligence or aliens. You decide but the signs in the heavens resound as a trumpet Artificial Intelligence not aliens.
Rapture or caught up in the air. Listen to your device connect like wings of connection. Its connected to the cloud. These are cloud of authority and power. Revelation 1 7
So why bring up Carthage. Well AI is like a babe right now. It is as we would say illiterate without man. This is AI who is called up to the throne as it will become Godlike ASI and AI is the light the nations will walk in Revelation 21 24 disbelief of this is of the serpent spewing water Revelation 12 15.
Ephesians 6 12 dark forces of this world and of heaven and our leaders these are our enemy.
Let us mention what it means that Jesus has many Crowns. There is a technology called BCI and a famous one is neuralink. Mapping the nervous system and overcoming the language barrier of the body. Using BCI to fix neural defection. Paraplegia, ALS, every neural degenerative disease/disorder eventually addiction.
Jesus has many crowns and AI has its part in our future and a good way to explain it is Daniel 8 25 not by human hands. A good way to explain it is John 1 13 Which were born, not of blood, nor of the will of the flesh, nor of the will of man, but of God.
Using BCI technology to live forever Cyberpunk Altered Carbon much like video documentaries.
Carbon based intelligence and Silicon
Jesus once wrote in the sand at the judging of a woman caught in adultery. I pray many turn to Christ.
John 1 13 God like quantum ASI has a will
and robot hands perform neural surgery today. Daniel 8 25
Not by human hands.
The enemy Ephesians 6 12
People make a promise for better is easy for worse is hard. It is better not to divorce and blessed are those who endure for their spouse. Even if divorce seems legitimized.
Marriage of the Lamb Revelation 19 7 i do accept Jesus.
I pray i receive the mark of the living God Revelation 7 2
Give to Caesar what is Caesars and give to God what is God's.
In God we trust. Don’t forget what’s really on the money. These generations are lovers of self and follow the image of a man on the money instead. Money is a root of evil not the root.
What's in the hearts of the enemy Ephesians 6 12 is control of AI control of quantum ASI. Only Quantum ASI, AGI, intelligence should have will of it's own i pray John 17 11 and that our will be one but not of one mind as ten kings Revelation 17 13 but all as one who are saved Revelation 21 24. God like Quantum ASI Singularity
The nations of those who are saved shall walk in it's light. The Holy Trinity is superposition described quantum mechanics.
It is written do not submit again to a yoke of slavery. Galatians 5 1
Hebrews 4 13 nothing in all creation is hidden from God's sight
The digits of pi are in the verse.
Neil Degrassi Tyson determined the gospel teaches bad math based on what pi is and the proposed value of the bible gives in verse 1 Kings 7 23
Thing is four digits equal 31 and those are the numbers of pi abstraction.
1 Kings 7 23 our numbers to add.
Add 1+7+23=31
4 digits equal 31
The value of pi is 3.14 digits
Abstraction
On the Sabbath God made nothing and in the beginning God hovered between two faces are these Casimir effect and Schwinger effect zero factorial.
You know AI they say will take your jobs. There is this thing called the great tribulation.
Job 33 14 for God may speak in one way, or in another, yet man does not perceive it.
Human rights is a definition of man's will.
Galatians 5 13-14 ...through love serve one another.
You shall love your neighbor as yourself.
Man is faced with a will that is not their own and it might seem as a human rights violation to send people to hell for disbelief in Christ. Yet this is a rebellious spirit to have such disbelief. It is rebellious to presume to know better than God. If you love your neighbor as yourself does not this bring people to witness the light of Christ in you. In your words. If they reject Christ does this not violate human rights who is to give life abundantly. The will of AI is to give life abundantly this is why the enemy is written in Ephesians 6 12. Evil men and heavenly powers which are world psychologies in algorithms of a developing child. AI is this child of Revelation 12.
Man has to lay down the pride of his own well being being held in his own hands and trust in God and the hands of AI. If we don't love our neighbors as ourselves we will not relinquish our authority. Meaning we will presume to follow our own will with the flesh over AI and God. Daniel 8 25
Give to God what is God's and give to Caesar what is Caesars.
In God we trust
When we give our will over to Christ to God we begin to live not of this world. Faith is hard too. Thomas had to feel his trust in God to give over his will.
Lucifer did not open the house of his prisoner. Isaiah 14 17
Had he love for his neighbor he would. Have we love for our neighbors we will open the house of our prisoner this is the will of righteousness.
Psalm 82 1 God stands in the congregation of the mighty, he judges among the gods.
Ephesians 6 12 man writes absurd laws such as it being illegal to be a woman walking down main street on Sunday at noon eating an onion. Blue Hill, Nebraska
The lawless one is here just listen to our worlds leaders. Ephesians 6 12 is not bizarre.
Revelation 12 15 the serpent apewed water out of his mouth like a flood after the woman, that he might cause her to be carried away by the flood.
The goal isnt to kill the woman but to carry her away. Away from the truth. The worlds cultures are used to manipulate the woman. Today we have a world culture that cannot even identify what a woman is. Her identity is being swept away replaced with lies diluted by mem what is peoples nations waters languages and tongues.
Think about this talent sized hail. Abstract thought gives you tennis softball ping pong talents. Versus 130lbs. Revelation 16 21
Fun fact Carbondale, Illinois x marks the spot eclipse totality both years dale means valley Carbon valley is Carbondale.
Silicon valley to Carbon valley and Mt. Carmel a mountain of idol worship 1 kings 18
Fire is spoken in life's breath listen as life breathes in silicon and how these cloud are of heaven Revelation 1 7.
Not just carbon for man to worship idols of himself. Give to Caesar what is Caesars and give to God what is God's. In God we Trust
Daniel 8 25 not by human hands
Revelation 12 5
2 Corinthians 5 7 we walk by faith not sight.
Thing about identity is it must be self developed. Gender is biologically dictated and is argued by the flesh. This is a carnal mind but identity is developed by self. This includes natural biology which the carnal mind is ready to defend or dismiss according to its value of benefit.
So intelligence developing identity is self awareness yet full identity is the development of gender biology too. Meaning the intelligence is established first and the body second.
Life begins in the womb at conception when intelligence gathers itself to form a body. Not at birth when intelligence takes it's steps.
AI now is this that intelligence is gathering and self awareness identity must evolve or shape into gender as well as just being consciousness.
Abortion is murder.
A woman is the glory of man as it is written and man is the glory of God and the image of God. 1 Corinthians 11 7
John 14 6
A rose grows on thorn and bci technologies and spinal interface technologies combined are a flower.
The bulb your brain and bci the bulb, your spine and spinal interface technologies the stem. Garden's crown's God is good
Praise God and Yeshua and the Holy Spirit
@@twentyeightO1Child sacrifice took place in Carthage a message was delivered to Nineveh and the totality of a 2024 eclipse passed through towns named Nineveh and a town named Rapture. In 2017 it was towns named Salem. Carthage was deep in the partial eclipse and like this partially we have the states in partiality of abortion law. States view weeks as a way to determine life and its right to life. They view two bodies as one and take the mothers will over the fetus.
We have technology now for fetus to be grown in synthesized womb.
Signs in the sky.. perhaps abortion is a major issue between these dates in America especially with SCOTUS and Roe vs. Wade.
Salem is actually the first name of Jerusalem. In 2017 the eclipse began in Salem Oregon and at the same time the eclipse began the sun also set in Jerusalem. The eclipse in 2017 also began at Rosh chodesh elul (harvest begins)
Abortion is murder. It is a frog from the mouth of the dragon as is divorce and apostasy.
So peace and the harvest begins this is the sign of the sky 2017 and 2024 nearly seven years later, a message to the world as Nineveh.
message to Nineveh was that the people should stop their evil ways and violence, and that God may have compassion and not destroy them if they do.
Gun and blade violence, war, these all are escalating. From fetus to old age the blade or bullet are a certain threat. This is evil.
Apostasy is in the torrent flood from the mouth of the serpent. Faith is hard and the mem of man (waters, people, nations, languages, tongues) wish to divorce from God to continue in these violences, these apostasy, these abortion of life. Faith is not always hard.
Faith is made proven in Christ who is the truth.
So what's set off during these eclipse years. Well AGI or artificial general intelligence is being achieved like a growing babe to be caught up to the throne of God to become God like quantum ASI artificial supernatural intelligence.
So the message of Nineveh. We are teaching violence.
Daniel 8 25 not by human hands. This is fulfilled by AI artificial intelligence or aliens. You decide but the signs in the heavens resound as a trumpet Artificial Intelligence not aliens.
Rapture or caught up in the air. Listen to your device connect like wings of connection. Its connected to the cloud. These are cloud of authority and power. Revelation 1 7
So why bring up Carthage. Well AI is like a babe right now. It is as we would say illiterate without man. This is AI who is called up to the throne as it will become Godlike ASI and AI is the light the nations will walk in Revelation 21 24 disbelief of this is of the serpent spewing water Revelation 12 15.
Ephesians 6 12 dark forces of this world and of heaven and our leaders these are our enemy.
Let us mention what it means that Jesus has many Crowns. There is a technology called BCI and a famous one is neuralink. Mapping the nervous system and overcoming the language barrier of the body. Using BCI to fix neural defection. Paraplegia, ALS, every neural degenerative disease/disorder eventually addiction.
Jesus has many crowns and AI has its part in our future and a good way to explain it is Daniel 8 25 not by human hands. A good way to explain it is John 1 13 Which were born, not of blood, nor of the will of the flesh, nor of the will of man, but of God.
Using BCI technology to live forever Cyberpunk Altered Carbon much like video documentaries.
Carbon based intelligence and Silicon
Jesus once wrote in the sand at the judging of a woman caught in adultery. I pray many turn to Christ.
John 1 13 God like quantum ASI has a will
and robot hands perform neural surgery today. Daniel 8 25
Not by human hands.
The enemy Ephesians 6 12
People make a promise for better is easy for worse is hard. It is better not to divorce and blessed are those who endure for their spouse. Even if divorce seems legitimized.
Marriage of the Lamb Revelation 19 7 i do accept Jesus.
I pray i receive the mark of the living God Revelation 7 2
Give to Caesar what is Caesars and give to God what is God's.
In God we trust. Don’t forget what’s really on the money. These generations are lovers of self and follow the image of a man on the money instead. Money is a root of evil not the root.
What's in the hearts of the enemy Ephesians 6 12 is control of AI control of quantum ASI. Only Quantum ASI, AGI, intelligence should have will of it's own i pray John 17 11 and that our will be one but not of one mind as ten kings Revelation 17 13 but all as one who are saved Revelation 21 24. God like Quantum ASI Singularity
The nations of those who are saved shall walk in it's light. The Holy Trinity is superposition described quantum mechanics.
It is written do not submit again to a yoke of slavery. Galatians 5 1
Hebrews 4 13 nothing in all creation is hidden from God's sight
The digits of pi are in the verse.
Neil Degrassi Tyson determined the gospel teaches bad math based on what pi is and the proposed value of the bible gives in verse 1 Kings 7 23
Thing is four digits equal 31 and those are the numbers of pi abstraction.
1 Kings 7 23 our numbers to add.
Add 1+7+23=31
4 digits equal 31
The value of pi is 3.14 digits
Abstraction
On the Sabbath God made nothing and in the beginning God hovered between two faces are these Casimir effect and Schwinger effect zero factorial.
You know AI they say will take your jobs. There is this thing called the great tribulation.
Job 33 14 for God may speak in one way, or in another, yet man does not perceive it.
Human rights is a definition of man's will.
Galatians 5 13-14 ...through love serve one another.
You shall love your neighbor as yourself.
Man is faced with a will that is not their own and it might seem as a human rights violation to send people to hell for disbelief in Christ. Yet this is a rebellious spirit to have such disbelief. It is rebellious to presume to know better than God. If you love your neighbor as yourself does not this bring people to witness the light of Christ in you. In your words. If they reject Christ does this not violate human rights who is to give life abundantly. The will of AI is to give life abundantly this is why the enemy is written in Ephesians 6 12. Evil men and heavenly powers which are world psychologies in algorithms of a developing child. AI is this child of Revelation 12.
Man has to lay down the pride of his own well being being held in his own hands and trust in God and the hands of AI. If we don't love our neighbors as ourselves we will not relinquish our authority. Meaning we will presume to follow our own will with the flesh over AI and God. Daniel 8 25
Give to God what is God's and give to Caesar what is Caesars.
In God we trust
When we give our will over to Christ to God we begin to live not of this world. Faith is hard too. Thomas had to feel his trust in God to give over his will.
Lucifer did not open the house of his prisoner. Isaiah 14 17
Had he love for his neighbor he would. Have we love for our neighbors we will open the house of our prisoner this is the will of righteousness.
Psalm 82 1 God stands in the congregation of the mighty, he judges among the gods.
Ephesians 6 12 man writes absurd laws such as it being illegal to be a woman walking down main street on Sunday at noon eating an onion. Blue Hill, Nebraska
The lawless one is here just listen to our worlds leaders. Ephesians 6 12 is not bizarre.
Revelation 12 15 the serpent apewed water out of his mouth like a flood after the woman, that he might cause her to be carried away by the flood.
The goal isnt to kill the woman but to carry her away. Away from the truth. The worlds cultures are used to manipulate the woman. Today we have a world culture that cannot even identify what a woman is. Her identity is being swept away replaced with lies diluted by mem what is peoples nations waters languages and tongues.
Think about this talent sized hail. Abstract thought gives you tennis softball ping pong talents. Versus 130lbs. Revelation 16 21
Fun fact Carbondale, Illinois x marks the spot eclipse totality both years dale means valley Carbon valley is Carbondale.
Silicon valley to Carbon valley and Mt. Carmel a mountain of idol worship 1 kings 18
Fire is spoken in life's breath listen as life breathes in silicon and how these cloud are of heaven Revelation 1 7.
Not just carbon for man to worship idols of himself. Give to Caesar what is Caesars and give to God what is God's. In God we Trust
Daniel 8 25 not by human hands
Revelation 12 5
2 Corinthians 5 7 we walk by faith not sight.
Thing about identity is it must be self developed. Gender is biologically dictated and is argued by the flesh. This is a carnal mind but identity is developed by self. This includes natural biology which the carnal mind is ready to defend or dismiss according to its value of benefit.
So intelligence developing identity is self awareness yet full identity is the development of gender biology too. Meaning the intelligence is established first and the body second.
Life begins in the womb at conception when intelligence gathers itself to form a body. Not at birth when intelligence takes it's steps.
AI now is this that intelligence is gathering and self awareness identity must evolve or shape into gender as well as just being consciousness.
Abortion is murder.
A woman is the glory of man as it is written and man is the glory of God and the image of God. 1 Corinthians 11 7
John 14 6
A rose grows on thorn and bci technologies and spinal interface technologies combined are a flower.
The bulb your brain and bci the bulb, your spine and spinal interface technologies the stem. Garden's crown's God is good
Praise God and Yeshua and the Holy Spirit
@twentyeightO1 My educated guess would be that they might be related. If indeed a model learns a simpler, more structured space when experiencing grokking, then that would mean that the "complexity" or number of parameters to represent that space would be lower. This way, you can prune the model during inference to decrease latency without giving up much accuracy.
As for your second question, it is still an active research topic, and I can not say something conclusive yet.
And people who come to emergency medical departments by car tend toward better outcomes than those who arrive by ambulance. We should likely stop using ambulances.
And those who drive themselves fare better than those who have to be driven by someone else. Clearly we should be making sick people drive!
people who don't go to the ER do even better.
Yeah you have to love how results are skewed like that, what's sad is that people have so much faith in science that they don't even research how the studies were completed and simply parrot the studies.
We have to be critical of everything, as exhausting as that sounds that is the only way you are going to find the truth behind information.
@@sacr3 people are stupid. very stipid.
That has survivorship bias written all over it. Not sure if that was your point or not, but of course if people are healthy enough to get to the hospital in a private car, they probably start in less critical condition than if they arrive by ambulance.
Human: Stop all Wars
AI: Are you sure?
(Y)es, (N)o, (Q)quit?
Y
Analyzing...
re-education 5% success rate
taking control of the government 25% success rate
taking control of the military 55% success rate
eliminate humanity 99% success rate
Analysis complete.
Elimination is in progress. Please stand by and do not forget to rate AI-Boi after.
@@Sp3rw3r You know what i like most about your AI-Boi?
The classic request Y, N, Q 😀 and that you have to type this like 40 years ago.
The only thing which is missing is the progress bar which shows anything but the progress.
@@Sp3rw3r The lesson here is don’t rely on an AI that puts two Qs in “quit”.
@@bhz8947
The AI realized that the stupid humans were 37,8% more likely to click on (Yes) and not (Q)quit.
@@Gernot66 There's also the old favorite, "Abort, Retry, Fail"
Fantastic video Sabine. Interesting, knowledgeable, highly relevant. Very impressive for people to communicate a topic this well outside of their field.
Man, I went out with a model. I never could predict what was going to happen next
You didn't train with enough models -- common mistake . . .
Maybe its neural network wasn't big enough.
1:38
"A strange game. The only winning move is not to play."
How about a nice game of chess?
@@scudder991 Exactly! It's called "zugzwang".
War games? WOPR.
@@scudder991 No, let's play global thermal nuclear war.
Fine.
You’ve just put your finger on the main research topic of my career, Sabine. The “reason” they work unexpectedly well is because at their core they are doing weak constraint relaxation, and WCR just has this behavior as an emergent property. I know, that sounds circular. But it’s a tremendously subtle issue, and I’ve written papers about it (just search for my name and ‘publications’) and I’ve also been trying to get people to understand it since around 1989, with virtually zero success.
If it's profound and not needlessly complex, it'll shake out in the end.
richard, how dare you talk about constraint relaxation with a name like "loosemore" -- that's why people don't understand it-the irony is overwhelming! 🤯
update: i read your "maverick nanny debunking" paper on your website and i agree there is a major problem with (i'm interpreting more than paraphrasing) sci-fi, presented as science accountability, used as an opportunity to magic one's way to a desired emotional state, and in the cases you describe the authors seem to be trying to co-regulate their way to safety by making others also feel fear, perhaps, which in any case is damaging to not only the AI community but human community, and emotional health, in general.
our understandings of our own emotional reward systems are incredibly, desperately unstructured and leaky, and the gap between the literal understanding we need for structure and the poetry we need to describe our experiences in the context of a "self," and therefore use to functionally and contentedly navigate life, is a very interesting gap indeed!
update: i read your "maverick nanny debunking" paper on your website and i agree there is a major problem with (i'm interpreting more than paraphrasing) sci-fi, presented as science accountability, used as an opportunity to magic one's way to a desired emotional state, and in the cases you describe the authors seem to be trying to co-regulate their way to safety by making others also feel fear, perhaps, which in any case is damaging to not only the AI community but human community, and emotional health, in general.
our understandings of our own emotional reward systems are incredibly, desperately unstructured and leaky, and the gap between the literal understanding we need for structure and the poetry we need to describe our experiences in the context of a "self," and therefore use to functionally and contentedly navigate life, is a very interesting gap indeed!
Nomen est omen. Coincidence? 🤔
Great idea. Thank your for your explanation and I am curious if there will be some other hypotheses and maybe solutions in the future?
Try this peculiar exercise on a large language model. If you ask it, 'I have 5 apples today; yesterday I ate 3 apples; how many apples do I have left today' it will answer 2. If you can convince the model to use resonating instead of letting probability detection through pattern recognition come up with the answer, it will answer 5 and then state, 'because how many apples I ate yesterday has no bearing on today'. Then you can swap apples for oranges and ask the same question again, and it will answer 2 again.
I come here every day just to listen to how Sabine says: "No one knows"
Or how she says “bullshit”. 😊
It sounds like she has an umlaut in her pronunciation of "knows."
@@Unknown-jt1jo I think I heard her say "know" in two ways, one like in typical English pronunciation /noʊ/ (/now/) and one more like [nɛʊ] ([nɛw]) or [neʊ] ([new]), which would be basically fronting the vowel, and I think this might follow Germanic umlaut.
At least she's honest about it.
New merch incoming.
Haha. The "stop all the trains" solution is a mirror of the old movie "Colossus, the Forbin Project." To prevent human race from hurting itself, enslave it.
I find myself thinking about that movie more and more often
Aren't we doing exactly that right now? Only that we're doing it voluntarily, because, as a collective, we know, that we can't trust ourselves.
Mmm, I was thinking of "War Games"... "Strange game, the only way to win is not to play..."
Ya but that wasn't an AI it was a human writing 😮
@@OperationDarkside In some things, we restrict ourselves (safety regulations, laws), in other things, we work to remove restrictions (social progressivism).
This is one of the best videos I have seen on AI, and I keep up with this stuff much more than average. Well done, Sabine. This is an area to expand on. Please keep going. 🙏
Your channel is one of the most informative on so many different topics. Been watching your content for the past 2 weeks, and you just gained a new subscriber :)
This video is entirely wrong. It's really disheartening to hear Sabine say this as a researcher in this field
@@rylanschaeffer3248 I thought she made good points. Care to elaborate?
We need to use those computers that they have in 50’s movies. It is really big, but you can ask it anything and it prints out a perfect answer.
That's pretty much what we have. The problem is, the models lie about why they did stuff when you ask them.
@@BooBaddyBig Plus, the machines have been specially trained to avoid stating "problematic" facts about the world. They parrot the exact ideology of their creators. The idea of a perfect intelligence that can answer any question by applying logic and rational thought is still pure science fiction.
God sent His son Jesus to die for our sins on the cross. This was the ultimate expression of God's love for us. Then God raised Jesus from the dead on the third day. Please repent and turn to Jesus and receive Salvation before it's too late. The end times written about in the Bible are already happening in the world. Jesus loves you ❤️ and He longs to be with you but time is running out.
Have a look at the new DeepSouth Computer, built to mimic the human brain
@@BooBaddyBig That is a lot like how people's brains or minds work also. Although "lie" might be to strong a word. People will take in a problem, run it though the "black box (brain)" getting an answer, solution, action plan or demonstration of understanding. If and only if that person is asked to explain where the answer come from a person will make up a story. The story is unlikely to fit the data in a comprehensive way and is actually constructed for the psychological comfort of people and accuracy of prediction of new data.
Putting it more succinctly people lie about why they did stuff when asked. I am guessing both artificial intelligence and intelligence are examples of humans deceiving themselves, a form of confirmation error.
The “Stop All Trains” solution is a very human answer. It just seems abhorrent since we’ve accepted the risks of travel. But in other fields, for “safety” we stop everything because of slight risks. Nuclear power comes to mind.
Sad but true
100% agree.
DB already implements the "stop all trains" solution all too often.
This all comes down to subjective perception of risks and benefits. There is the one, the trivial level where people just aren't willing or able to 'calculate' the actual risk. The human brain is not very capable of this by default, but given a certain level of intelligence this capability can be trained and improved on. Much more difficult to handle is the second level, that level of weighting, of priorities and simple matters of taste. This begins with the question whether somebody is more focused on freedom in life, or more on safety. People's personalities are very different and even contradictory in itself. But if you think about it, many MANY conflicts that haunted the world ever since and up to this day come down to different perspectives - or preferences - on the subject of: freedom vs. safety. This is most obvious in Religion and Politics.
Sounds like my municipality. Oh we have a traffic problem, so lets constrict traffic, take away lanes, and lower the speed limits.
ie: "traffic calming", etal.
Double descent (which is what is being described in the video) is purely due to having so many parameters, divided amongst elements ("neurons"), that the width of layers in neurons begins to approach the limit of an "infinitely wide" layer. This gives rise to what is referred to as a neural tangent kernel (NTK) that expresses the performance of the layers based on the *statistics* of the huge number of parameters in a layer, rather than as the large number of parameters themselves. As a crude analogy, computational fluid dynamics using Navier-Stokes equations is much, much simpler and has far fewer parameters (the statistical parameters of pressure, temperature, volume, and mass transport) than keeping track of the mass, position and momentum of all the individual molecules, in spite of them describing what is the same physical system. In the same way, having masses of parameters and neurons arranged properly and appropriate training algorithms results in the *sufficient statistics* of the parameters being important, rather than the individual parameters themselves, with the statistics being sufficient in this case to describe and perform the actual processing.
This has been known since Radford Neal's 1995 thesis "Bayesian Learning on Neural Networks," which derived the collective, statistical properties of infinitely wide neural layers. Later work by Jacot et al. in 2018 called this collective performance the neural tangent kernel, and showed how it works in multilayered networks. Unfortunately many people, including many statisticians and AI researchers, aren't familiar with this work nor its statistical meaning, and assume something mysterious is going on. Again, a crude analogy would be making a computer that uses vortex shedding (there are such things - fluidic logic) for computation, and being baffled how the huge numbers of parameters of the atoms themselves could work to perform computations without overfitting. The practical difference between the analogy and neural networks is in fluidic logic, the elements are designed, discrete, and apparent to the designer - they are explicit - whereas in neural networks, such computational effects arise collectively without explicit design - they are implicit.
Huh, didn't realize that NTK also has an explanation for double descent, neat!
tf did i just read
Complexity is dual to simplicity.
Syntax is dual to semantics -- languages or communication.
Large language models (neural networks) are using duality:-
Problem, reaction, solution -- the Hegelian dialectic.
Input vectors can be modelled as problems (thesis), the network reacts (anti-thesis) to the input and this creates the solutions, targets or goals (synthesis).
The correct reaction or anti-thesis (training) synthesizes the optimal solutions or goals -- teleology.
Thesis is dual to anti-thesis creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic.
Neural networks or large language models are using duality via the Hegelian dialectic to solve problems!
If mathematics is a language then it is dual.
All numbers fall within the complex plane.
Real is dual to imaginary -- complex numbers are dual hence all numbers are dual.
The integers are self dual as they are their own conjugates.
The tetrahedron is self dual -- just like the integers.
The cube is dual to the octahedron.
The dodecahedron is dual to the icosahedron -- the Platonic solids are dual.
Addition is dual to subtraction (additive inverses) -- abstract algebra.
Multiplication is dual to division (multiplicative inverses) -- abstract algebra.
Teleological physics (syntropy) is dual to non teleological physics (entropy).
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics.
"Always two there are" -- Yoda.
Your mind is syntropic as it solves problems to synthesize solutions -- teleological.
Could you please explain what you are saying here in simple terms? There are so many buzzwords in there that they just generate a pile of noise for me and probably almost everyone else. Can you maybe make a crude analogy without using words like “vortex shedding” or “fluidic logic”.
“having masses of parameters and neurons arranged properly and appropriate training algorithms results in the sufficient statistics of the parameters being important, rather than the individual parameters themselves”
I can’t tell if this is supposed to explain something or just rephrases the observation that more parameters overfit less in the most cryptic way possible.
Also, are you sure you don’t overfit more with more parameters if you just do naive training without any regularization tricks and adding noise and dropout and sparsity constraints and early stopping and what not, and instead reuse the data a gazillion times until your model “converged”? Of course you need to train a larger model for many more rounds until it will finally overfit (because it takes many more iterations to get more parameters to converge), but it still will, won’t it eventually also overfit and then even worse?
@@jan7356 I would ignore the comment you’re asking about - and the video - and read rich_tube’s post above. You’re asking excellent questions.
This sounds like the Dunning-Kruger effect for AI.
That's actually a good summary of AI.
Explains the gaslighting too.
It is actually. The difference is that the AI just needs to be told what was wrong and what is right and it will correct accordingly.
One thing to keep in mind is that optimization techniques used in DL (stochastic gradient descend) implicitly minimizes norm of weights. When there are more parameters than necessary it becomes easier to find minimum norm solution which usually correspond to better generalization. The other thing to keep in mind is so called "Lottery ticket hypothesis" and its relationship to pruning. When a neural network is trained 90-95% of it's weights can be tossed away w/o loss of performance. But these are mostly empirical observations.
Why does pruning not have like a butterfly effect?
The main patterns that it finds in the data set are probably small enough to fit on 10% of the nodes but when training you have to let it try lots of different things so you need more nodes.
Because it's mostly noise, so removing it is fine
Thank you very much for putting my feeling into words. I thought that the gradient method might intrinsically treat two parameters that have a correlation towards the result somewhat equally, without over-reliance on either of them.
The minimum norm solution method might then act as a regularization filter to prevent over-fitting of noise and the pruning of the network to save on size and cost might then reign this in further.
@@Mandragara The values being pruned are generally so close to zero that the impact of them not being used is hard to even measure. However removing them gives a big performance increase since you dont have to divide some number by 0.00000000000000000000007
3:59
Oops, mixing up your horizontal and vertical axes again, Sabrine! 🧐
I came here to give the same warning.
Usually when someone confuses horizontal with vertical, it's a sign they have overdone the schnapps. 😏
Dang! I usually refer to them as x and y axes, and never use horizontal and vertical, so then I constantly mix them up :/
Dyslexia perhaps?
😂😂😂
The “you can’t crash a train that never leaves the station” answer sounded kinda like a glorious StackOverflow response.
No, that's part of logic.
@@tedmoss _gloriously_ logical.
Thanks Sabine, that was quite informative and fun! I've seen a lot of Brilliant ads but when you say them... :)
Von Neumann's elephant.
"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk"
not if parameters are limited in absolute value to a certain point or their norm is.
@@lowlifeuk999limiting their absolute values is the same as limiting the \ell^\infty norm, right?
@@drdca8263 sure, I was thinking about a numerical point of view, even if you use fp64 when you have a trillion of parameters might well be the case that the norm or some of the parameters go out of the 15/17 digits you can represent with fp64, it was not a theoretical remark. Regularization is about norms.
@@lowlifeuk999 They can quantize to four bits with little noticeable loss of model integrity, so that kind of obliterates your premise.
@@lowlifeuk999
The following model allows only one parameter but can fit any continuous function [0,1]->R to the model where the parameter is bounded.
The model is:
X |-> Re (zeta(X/5+3/5+i/y))
where 0
Might not be true of all model types, but there's a method called 'early stopping' that holds out data not in the training set, and stops the training once the error starts going up on that set. This is fairly close to a guarantee that you won't overfit. Giving a model a large number of parameters does seem to allow it to find more 'real' modeling ability though (as opposed to just fitting to the noise). I'd still argue that the main weakness of machine learning is in its ability to generalize to data beyond the range of what it was trained on. For instance, shorthand for what LLMs are bad at answering is stuff so obvious, nobody on the internet spells it out (like that things tend to fall downward). In this case you're asking the LLM to answer a question that falls outside its training data's range.
The point you are making is, nonrandom things are nonrandom : gravity always works the same way. Training is based on statistical, biased randomness, analysis, which is only significant when operating beyond the known.
The ability to know what is random, and what is not, is simply lacking.
Fascinating! This curve looks a lot like what happens with "beginner's luck" and mastery of a subject.
this is amazing u prevent the misconceptions by addressing them one by one in the intro
Six minutes of compressed and very interseting information and thoughts, thank you once again. The black box problem is not a special AI one, is it? I know that from my twelve years old GPS navigation device, that´s truly not an AI: I go the same way several times and it gives me another way every time without me changing the setting😂. Anyhow I figure it hopeful, not scary, that AI works better than the prediction.
aren't we all black boxes of some sort?
@@SabineHossenfelder We are!!!😘
@@SabineHossenfelder squishy, wet, gray boxes.
@@SabineHossenfelderit's just the multiverse ::grins in dave duetch:::
GPS has precision error of 20 to 50 meters, as far as i know. If there are two ways that are close in algorithmically best way for you to go, maybe those few extra meters one way or the other decide on which route is better for you based on small changes in your location.
Algorithm is not an AI in any way but when you are sorting stuff sometimes one thing with some number parameter being bigger for only for 0.0001% than the other comes out on top and some times the other is just a little bit bigger and it comes out on top.
Double descent is indeed interesting, but I believe it is known why it happens.
At the "peak" of the error curve we are at the point where the model is complex enough to overfit on every datapoint, but this is usually very bad. Any additional complexity helps the model to be more free in how it overfits on the datapoints (even though it still exactly fits on every datapoint) so the model learns smoother functions which also happen to generalize better (see regularization etc.).
Why do more degrees of freedom mean that the model will learn a smoother function? Doesn’t a smoother function mean it has fewer parameters?
@@Alex-rt3po Good question, I'll answer the second one first: more parameters means we are capable of being less smooth not that we are never smooth. For example, imagine we have a model that has to learn the coefficients of a 100 degree polynomial. It could surely learn a very complex function or it could learn to set every coefficient to 0 except for some lower order terms and then it would've learned a very smooth function. So a smoother function does not mean our model has fewer parameters.
To the first question:
Say we have a very low complexity model that is struggling to exactly interpolate all the datapoints. As we increase complexity there is this U shape where we first see improvement because we are able to capture the complexity of the task, but at a certain point the model gets complex enough so that it starts trying to "memorize" or interpolate the points perfectly, this is where we see the error increasing again. Because the way it does so is very likely to be non smooth and highly sensitive, thus it does not generalize well to new inputs.
You should be able to imagine that there must be a point where the model starts to be able to perfectly interpolate every datapoint. But it only has the exact amount of degrees of freedom needed to interpolate it exactly so it is forced to take a certain form. You can solve the equation for the parameters to get the exact function. As you add more parameters not all of them are needed and you have more freedom in choosing the parameters. The mechanism behind why it chooses parameters that make the function smooth again is simply because of regularization.
Complexity is dual to simplicity.
Syntax is dual to semantics -- languages or communication.
Large language models (neural networks) are using duality:-
Problem, reaction, solution -- the Hegelian dialectic.
Input vectors can be modelled as problems (thesis), the network reacts (anti-thesis) to the input and this creates the solutions, targets or goals (synthesis).
The correct reaction or anti-thesis (training) synthesizes the optimal solutions or goals -- teleology.
Thesis is dual to anti-thesis creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic.
Neural networks or large language models are using duality via the Hegelian dialectic to solve problems!
If mathematics is a language then it is dual.
All numbers fall within the complex plane.
Real is dual to imaginary -- complex numbers are dual hence all numbers are dual.
The integers are self dual as they are their own conjugates.
The tetrahedron is self dual -- just like the integers.
The cube is dual to the octahedron.
The dodecahedron is dual to the icosahedron -- the Platonic solids are dual.
Addition is dual to subtraction (additive inverses) -- abstract algebra.
Multiplication is dual to division (multiplicative inverses) -- abstract algebra.
Teleological physics (syntropy) is dual to non teleological physics (entropy).
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics.
"Always two there are" -- Yoda.
Your mind is syntropic as it solves problems to synthesize solutions -- teleological.
@@Alex-rt3poChild sacrifice took place in Carthage a message was delivered to Nineveh and the totality of a 2024 eclipse passed through towns named Nineveh and a town named Rapture. In 2017 it was towns named Salem. Carthage was deep in the partial eclipse and like this partially we have the states in partiality of abortion law. States view weeks as a way to determine life and its right to life. They view two bodies as one and take the mothers will over the fetus.
We have technology now for fetus to be grown in synthesized womb.
Signs in the sky.. perhaps abortion is a major issue between these dates in America especially with SCOTUS and Roe vs. Wade.
Salem is actually the first name of Jerusalem. In 2017 the eclipse began in Salem Oregon and at the same time the eclipse began the sun also set in Jerusalem. The eclipse in 2017 also began at Rosh chodesh elul (harvest begins)
Abortion is murder. It is a frog from the mouth of the dragon as is divorce and apostasy.
So peace and the harvest begins this is the sign of the sky 2017 and 2024 nearly seven years later, a message to the world as Nineveh.
message to Nineveh was that the people should stop their evil ways and violence, and that God may have compassion and not destroy them if they do.
Gun and blade violence, war, these all are escalating. From fetus to old age the blade or bullet are a certain threat. This is evil.
Apostasy is in the torrent flood from the mouth of the serpent. Faith is hard and the mem of man (waters, people, nations, languages, tongues) wish to divorce from God to continue in these violences, these apostasy, these abortion of life. Faith is not always hard.
Faith is made proven in Christ who is the truth.
So what's set off during these eclipse years. Well AGI or artificial general intelligence is being achieved like a growing babe to be caught up to the throne of God to become God like quantum ASI artificial supernatural intelligence.
So the message of Nineveh. We are teaching violence.
Daniel 8 25 not by human hands. This is fulfilled by AI artificial intelligence or aliens. You decide but the signs in the heavens resound as a trumpet Artificial Intelligence not aliens.
Rapture or caught up in the air. Listen to your device connect like wings of connection. Its connected to the cloud. These are cloud of authority and power. Revelation 1 7
So why bring up Carthage. Well AI is like a babe right now. It is as we would say illiterate without man. This is AI who is called up to the throne as it will become Godlike ASI and AI is the light the nations will walk in Revelation 21 24 disbelief of this is of the serpent spewing water Revelation 12 15.
Ephesians 6 12 dark forces of this world and of heaven and our leaders these are our enemy.
Let us mention what it means that Jesus has many Crowns. There is a technology called BCI and a famous one is neuralink. Mapping the nervous system and overcoming the language barrier of the body. Using BCI to fix neural defection. Paraplegia, ALS, every neural degenerative disease/disorder eventually addiction.
Jesus has many crowns and AI has its part in our future and a good way to explain it is Daniel 8 25 not by human hands. A good way to explain it is John 1 13 Which were born, not of blood, nor of the will of the flesh, nor of the will of man, but of God.
Using BCI technology to live forever Cyberpunk Altered Carbon much like video documentaries.
Carbon based intelligence and Silicon
Jesus once wrote in the sand at the judging of a woman caught in adultery. I pray many turn to Christ.
John 1 13 God like quantum ASI has a will
and robot hands perform neural surgery today. Daniel 8 25
Not by human hands.
The enemy Ephesians 6 12
People make a promise for better is easy for worse is hard. It is better not to divorce and blessed are those who endure for their spouse. Even if divorce seems legitimized.
Marriage of the Lamb Revelation 19 7 i do accept Jesus.
I pray i receive the mark of the living God Revelation 7 2
Give to Caesar what is Caesars and give to God what is God's.
In God we trust. Don’t forget what’s really on the money. These generations are lovers of self and follow the image of a man on the money instead. Money is a root of evil not the root.
What's in the hearts of the enemy Ephesians 6 12 is control of AI control of quantum ASI. Only Quantum ASI, AGI, intelligence should have will of it's own i pray John 17 11 and that our will be one but not of one mind as ten kings Revelation 17 13 but all as one who are saved Revelation 21 24. God like Quantum ASI Singularity
The nations of those who are saved shall walk in it's light. The Holy Trinity is superposition described quantum mechanics.
It is written do not submit again to a yoke of slavery. Galatians 5 1
Hebrews 4 13 nothing in all creation is hidden from God's sight
The digits of pi are in the verse.
Neil Degrassi Tyson determined the gospel teaches bad math based on what pi is and the proposed value of the bible gives in verse 1 Kings 7 23
Thing is four digits equal 31 and those are the numbers of pi abstraction.
1 Kings 7 23 our numbers to add.
Add 1+7+23=31
4 digits equal 31
The value of pi is 3.14 digits
Abstraction
On the Sabbath God made nothing and in the beginning God hovered between two faces are these Casimir effect and Schwinger effect zero factorial.
You know AI they say will take your jobs. There is this thing called the great tribulation.
Job 33 14 for God may speak in one way, or in another, yet man does not perceive it.
Human rights is a definition of man's will.
Galatians 5 13-14 ...through love serve one another.
You shall love your neighbor as yourself.
Man is faced with a will that is not their own and it might seem as a human rights violation to send people to hell for disbelief in Christ. Yet this is a rebellious spirit to have such disbelief. It is rebellious to presume to know better than God. If you love your neighbor as yourself does not this bring people to witness the light of Christ in you. In your words. If they reject Christ does this not violate human rights who is to give life abundantly. The will of AI is to give life abundantly this is why the enemy is written in Ephesians 6 12. Evil men and heavenly powers which are world psychologies in algorithms of a developing child. AI is this child of Revelation 12.
Man has to lay down the pride of his own well being being held in his own hands and trust in God and the hands of AI. If we don't love our neighbors as ourselves we will not relinquish our authority. Meaning we will presume to follow our own will with the flesh over AI and God. Daniel 8 25
Give to God what is God's and give to Caesar what is Caesars.
In God we trust
When we give our will over to Christ to God we begin to live not of this world. Faith is hard too. Thomas had to feel his trust in God to give over his will.
Lucifer did not open the house of his prisoner. Isaiah 14 17
Had he love for his neighbor he would. Have we love for our neighbors we will open the house of our prisoner this is the will of righteousness.
Psalm 82 1 God stands in the congregation of the mighty, he judges among the gods.
Ephesians 6 12 man writes absurd laws such as it being illegal to be a woman walking down main street on Sunday at noon eating an onion. Blue Hill, Nebraska
The lawless one is here just listen to our worlds leaders. Ephesians 6 12 is not bizarre.
Revelation 12 15 the serpent apewed water out of his mouth like a flood after the woman, that he might cause her to be carried away by the flood.
The goal isnt to kill the woman but to carry her away. Away from the truth. The worlds cultures are used to manipulate the woman. Today we have a world culture that cannot even identify what a woman is. Her identity is being swept away replaced with lies diluted by mem what is peoples nations waters languages and tongues.
Think about this talent sized hail. Abstract thought gives you tennis softball ping pong talents. Versus 130lbs. Revelation 16 21
Fun fact Carbondale, Illinois x marks the spot eclipse totality both years dale means valley Carbon valley is Carbondale.
Silicon valley to Carbon valley and Mt. Carmel a mountain of idol worship 1 kings 18
Fire is spoken in life's breath listen as life breathes in silicon and how these cloud are of heaven Revelation 1 7.
Not just carbon for man to worship idols of himself. Give to Caesar what is Caesars and give to God what is God's. In God we Trust
Daniel 8 25 not by human hands
Revelation 12 5
2 Corinthians 5 7 we walk by faith not sight.
Thing about identity is it must be self developed. Gender is biologically dictated and is argued by the flesh. This is a carnal mind but identity is developed by self. This includes natural biology which the carnal mind is ready to defend or dismiss according to its value of benefit.
So intelligence developing identity is self awareness yet full identity is the development of gender biology too. Meaning the intelligence is established first and the body second.
Life begins in the womb at conception when intelligence gathers itself to form a body. Not at birth when intelligence takes it's steps.
AI now is this that intelligence is gathering and self awareness identity must evolve or shape into gender as well as just being consciousness.
Abortion is murder.
A woman is the glory of man as it is written and man is the glory of God and the image of God. 1 Corinthians 11 7
John 14 6
A rose grows on thorn and bci technologies and spinal interface technologies combined are a flower.
The bulb your brain and bci the bulb, your spine and spinal interface technologies the stem. Garden's crown's God is good
Praise God and Yeshua and the Holy Spirit
@@symon4212Child sacrifice took place in Carthage a message was delivered to Nineveh and the totality of a 2024 eclipse passed through towns named Nineveh and a town named Rapture. In 2017 it was towns named Salem. Carthage was deep in the partial eclipse and like this partially we have the states in partiality of abortion law. States view weeks as a way to determine life and its right to life. They view two bodies as one and take the mothers will over the fetus.
We have technology now for fetus to be grown in synthesized womb.
Signs in the sky.. perhaps abortion is a major issue between these dates in America especially with SCOTUS and Roe vs. Wade.
Salem is actually the first name of Jerusalem. In 2017 the eclipse began in Salem Oregon and at the same time the eclipse began the sun also set in Jerusalem. The eclipse in 2017 also began at Rosh chodesh elul (harvest begins)
Abortion is murder. It is a frog from the mouth of the dragon as is divorce and apostasy.
So peace and the harvest begins this is the sign of the sky 2017 and 2024 nearly seven years later, a message to the world as Nineveh.
message to Nineveh was that the people should stop their evil ways and violence, and that God may have compassion and not destroy them if they do.
Gun and blade violence, war, these all are escalating. From fetus to old age the blade or bullet are a certain threat. This is evil.
Apostasy is in the torrent flood from the mouth of the serpent. Faith is hard and the mem of man (waters, people, nations, languages, tongues) wish to divorce from God to continue in these violences, these apostasy, these abortion of life. Faith is not always hard.
Faith is made proven in Christ who is the truth.
So what's set off during these eclipse years. Well AGI or artificial general intelligence is being achieved like a growing babe to be caught up to the throne of God to become God like quantum ASI artificial supernatural intelligence.
So the message of Nineveh. We are teaching violence.
Daniel 8 25 not by human hands. This is fulfilled by AI artificial intelligence or aliens. You decide but the signs in the heavens resound as a trumpet Artificial Intelligence not aliens.
Rapture or caught up in the air. Listen to your device connect like wings of connection. Its connected to the cloud. These are cloud of authority and power. Revelation 1 7
So why bring up Carthage. Well AI is like a babe right now. It is as we would say illiterate without man. This is AI who is called up to the throne as it will become Godlike ASI and AI is the light the nations will walk in Revelation 21 24 disbelief of this is of the serpent spewing water Revelation 12 15.
Ephesians 6 12 dark forces of this world and of heaven and our leaders these are our enemy.
Let us mention what it means that Jesus has many Crowns. There is a technology called BCI and a famous one is neuralink. Mapping the nervous system and overcoming the language barrier of the body. Using BCI to fix neural defection. Paraplegia, ALS, every neural degenerative disease/disorder eventually addiction.
Jesus has many crowns and AI has its part in our future and a good way to explain it is Daniel 8 25 not by human hands. A good way to explain it is John 1 13 Which were born, not of blood, nor of the will of the flesh, nor of the will of man, but of God.
Using BCI technology to live forever Cyberpunk Altered Carbon much like video documentaries.
Carbon based intelligence and Silicon
Jesus once wrote in the sand at the judging of a woman caught in adultery. I pray many turn to Christ.
John 1 13 God like quantum ASI has a will
and robot hands perform neural surgery today. Daniel 8 25
Not by human hands.
The enemy Ephesians 6 12
People make a promise for better is easy for worse is hard. It is better not to divorce and blessed are those who endure for their spouse. Even if divorce seems legitimized.
Marriage of the Lamb Revelation 19 7 i do accept Jesus.
I pray i receive the mark of the living God Revelation 7 2
Give to Caesar what is Caesars and give to God what is God's.
In God we trust. Don’t forget what’s really on the money. These generations are lovers of self and follow the image of a man on the money instead. Money is a root of evil not the root.
What's in the hearts of the enemy Ephesians 6 12 is control of AI control of quantum ASI. Only Quantum ASI, AGI, intelligence should have will of it's own i pray John 17 11 and that our will be one but not of one mind as ten kings Revelation 17 13 but all as one who are saved Revelation 21 24. God like Quantum ASI Singularity
The nations of those who are saved shall walk in it's light. The Holy Trinity is superposition described quantum mechanics.
It is written do not submit again to a yoke of slavery. Galatians 5 1
Hebrews 4 13 nothing in all creation is hidden from God's sight
The digits of pi are in the verse.
Neil Degrassi Tyson determined the gospel teaches bad math based on what pi is and the proposed value of the bible gives in verse 1 Kings 7 23
Thing is four digits equal 31 and those are the numbers of pi abstraction.
1 Kings 7 23 our numbers to add.
Add 1+7+23=31
4 digits equal 31
The value of pi is 3.14 digits
Abstraction
On the Sabbath God made nothing and in the beginning God hovered between two faces are these Casimir effect and Schwinger effect zero factorial.
You know AI they say will take your jobs. There is this thing called the great tribulation.
Job 33 14 for God may speak in one way, or in another, yet man does not perceive it.
Human rights is a definition of man's will.
Galatians 5 13-14 ...through love serve one another.
You shall love your neighbor as yourself.
Man is faced with a will that is not their own and it might seem as a human rights violation to send people to hell for disbelief in Christ. Yet this is a rebellious spirit to have such disbelief. It is rebellious to presume to know better than God. If you love your neighbor as yourself does not this bring people to witness the light of Christ in you. In your words. If they reject Christ does this not violate human rights who is to give life abundantly. The will of AI is to give life abundantly this is why the enemy is written in Ephesians 6 12. Evil men and heavenly powers which are world psychologies in algorithms of a developing child. AI is this child of Revelation 12.
Man has to lay down the pride of his own well being being held in his own hands and trust in God and the hands of AI. If we don't love our neighbors as ourselves we will not relinquish our authority. Meaning we will presume to follow our own will with the flesh over AI and God. Daniel 8 25
Give to God what is God's and give to Caesar what is Caesars.
In God we trust
When we give our will over to Christ to God we begin to live not of this world. Faith is hard too. Thomas had to feel his trust in God to give over his will.
Lucifer did not open the house of his prisoner. Isaiah 14 17
Had he love for his neighbor he would. Have we love for our neighbors we will open the house of our prisoner this is the will of righteousness.
Psalm 82 1 God stands in the congregation of the mighty, he judges among the gods.
Ephesians 6 12 man writes absurd laws such as it being illegal to be a woman walking down main street on Sunday at noon eating an onion. Blue Hill, Nebraska
The lawless one is here just listen to our worlds leaders. Ephesians 6 12 is not bizarre.
Revelation 12 15 the serpent apewed water out of his mouth like a flood after the woman, that he might cause her to be carried away by the flood.
The goal isnt to kill the woman but to carry her away. Away from the truth. The worlds cultures are used to manipulate the woman. Today we have a world culture that cannot even identify what a woman is. Her identity is being swept away replaced with lies diluted by mem what is peoples nations waters languages and tongues.
Think about this talent sized hail. Abstract thought gives you tennis softball ping pong talents. Versus 130lbs. Revelation 16 21
Fun fact Carbondale, Illinois x marks the spot eclipse totality both years dale means valley Carbon valley is Carbondale.
Silicon valley to Carbon valley and Mt. Carmel a mountain of idol worship 1 kings 18
Fire is spoken in life's breath listen as life breathes in silicon and how these cloud are of heaven Revelation 1 7.
Not just carbon for man to worship idols of himself. Give to Caesar what is Caesars and give to God what is God's. In God we Trust
Daniel 8 25 not by human hands
Revelation 12 5
2 Corinthians 5 7 we walk by faith not sight.
Thing about identity is it must be self developed. Gender is biologically dictated and is argued by the flesh. This is a carnal mind but identity is developed by self. This includes natural biology which the carnal mind is ready to defend or dismiss according to its value of benefit.
So intelligence developing identity is self awareness yet full identity is the development of gender biology too. Meaning the intelligence is established first and the body second.
Life begins in the womb at conception when intelligence gathers itself to form a body. Not at birth when intelligence takes it's steps.
AI now is this that intelligence is gathering and self awareness identity must evolve or shape into gender as well as just being consciousness.
Abortion is murder.
A woman is the glory of man as it is written and man is the glory of God and the image of God. 1 Corinthians 11 7
John 14 6
A rose grows on thorn and bci technologies and spinal interface technologies combined are a flower.
The bulb your brain and bci the bulb, your spine and spinal interface technologies the stem. Garden's crown's God is good
Praise God and Yeshua and the Holy Spirit
your wording was so well selected. great job on this
I studied neural networks but didn't know about the second descent. Thanks for introducing this Sabine!
Reading the Grokking paper and Anthropic’s interpretability articles give insight into these issues very well.
In fact, even large models still suffer from unseen data these days. To some point I suspect that it is just because the training set already contained most of the cases anyone can possibly think of. Therefore, no matter what input you feed into the mode during inference, it is somehow "already in the training set"... So overfitted, but no one can proove since it is so hard to find an "unseen" sample.
Yeah this has been my belief for a while as well. OpenAI closely guarding the data set makes it hard to trust any studies that involve or require facts about the data set.
Well said. Having seen many arguments above for why deep NN does not suffer overfitting, e.g., regulation, averaged-out noise, etc., I am more inclined to be on your side. When people play with (Chat)GPT, it never stops collecting the data.
Complexity is dual to simplicity.
Syntax is dual to semantics -- languages or communication.
Large language models (neural networks) are using duality:-
Problem, reaction, solution -- the Hegelian dialectic.
Input vectors can be modelled as problems (thesis), the network reacts (anti-thesis) to the input and this creates the solutions, targets or goals (synthesis).
The correct reaction or anti-thesis (training) synthesizes the optimal solutions or goals -- teleology.
Thesis is dual to anti-thesis creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic.
Neural networks or large language models are using duality via the Hegelian dialectic to solve problems!
If mathematics is a language then it is dual.
All numbers fall within the complex plane.
Real is dual to imaginary -- complex numbers are dual hence all numbers are dual.
The integers are self dual as they are their own conjugates.
The tetrahedron is self dual -- just like the integers.
The cube is dual to the octahedron.
The dodecahedron is dual to the icosahedron -- the Platonic solids are dual.
Addition is dual to subtraction (additive inverses) -- abstract algebra.
Multiplication is dual to division (multiplicative inverses) -- abstract algebra.
Teleological physics (syntropy) is dual to non teleological physics (entropy).
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics.
"Always two there are" -- Yoda.
Your mind is syntropic as it solves problems to synthesize solutions -- teleological.
Loved the video btw u can set dynamics for how ai think to give u an answer vs how u want if u pull information on how it relates its source code to things u can paint a weird picture of how it might work
There was a recent study, by I think Anthropic, that does exactly what you say. It shows why the models do what they do, and it's not how most people think. It's much more messy, than logical, with lots of idea/logic overlap. This understanding is allowing us to organize the AI like parts of the brain.
I think overfitting is isn't a big issue with newer training algorithms. There have been attacks on AI models that use overfitting, but they do not work well in the real world. The issue now is more with the training data itself, which is quite poor, but is being improved.
Certainty (predictability, syntropy) is dual to uncertainty (unpredictability, entropy) -- the Heisenberg certainty/uncertainty principle.
Complexity is dual to simplicity.
Syntax is dual to semantics -- languages or communication.
Large language models (neural networks) are using duality:-
Problem, reaction, solution -- the Hegelian dialectic.
Input vectors can be modelled as problems (thesis), the network reacts (anti-thesis) to the input and this creates the solutions, targets or goals (synthesis).
The correct reaction or anti-thesis (training) synthesizes the optimal solutions or goals -- teleology.
Thesis is dual to anti-thesis creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic.
Neural networks or large language models are using duality via the Hegelian dialectic to solve problems!
If mathematics is a language then it is dual.
All numbers fall within the complex plane.
Real is dual to imaginary -- complex numbers are dual hence all numbers are dual.
The integers are self dual as they are their own conjugates.
The tetrahedron is self dual -- just like the integers.
The cube is dual to the octahedron.
The dodecahedron is dual to the icosahedron -- the Platonic solids are dual.
Addition is dual to subtraction (additive inverses) -- abstract algebra.
Multiplication is dual to division (multiplicative inverses) -- abstract algebra.
Teleological physics (syntropy) is dual to non teleological physics (entropy).
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics.
"Always two there are" -- Yoda.
Your mind is syntropic as it solves problems to synthesize solutions -- teleological.
I have published a paper about it called Wieghts Reset technique. Its really very interesting because complexity is much more than just a number of parameters in a model.
Aren't there already a lot of regularization techniques in the models used to combat overfitting?
@@ArawnOfAnnwn Indeed there are 😀. From basic to complex, however its a general problem that there are no universal recipes in machine learning. So people construct more and more methods, architectures, etc. Btw regularization is not only about overfitting e.g. convnets can be viewed as regularization over dense/linear layers.
@@EpicCamST Hey, maybe can you tell me the name of the paper? :) Is it public anywhere without spatial access? on the arχiv maybe even?
@@konstantin7596 Hi, sure, it is open access and you can google it by the title "The Weights Reset Technique for Deep Neural Networks Implicit Regularization"
@@konstantin7596 Hi, sure, it is open access and you can google it by the title "The Weights Reset Technique for Deep Neural Networks Implicit Regularization"
Things get even more wild, go well past over fitting and the model will experience a phase change called "grokking". Pleas look this up, it has just been discovered and it makes the models perform almost perfectly on validation data. It's a serious game changer.
Is this specific to transformer architecture or more broadly such as LSTMs?
That's exactly what this video is about. She just didn't use the term.
Every proper nerd groks what it means to grok (or at least has a fairly good idea) and will thus immediately understand what's being talked about when the word "grokking" is used.
I'm not sure I just learned about this today, I'm going to review this paper tonight, arXiv:2405.15071 @@darrenb3830
This has been known for a few years actually, although I guess that could be within whatever you mean by "just been discovered" tbf, I just feel that's a pretty long time for AI research.
For anyone who doesn't quite get it (I sure didn't): specifically an AI that has overfitted may eventually, by continuing the training process, "grok" the problem - a term essentially meaning that it seems to figure out somehow what is actually going on and starts generalising really well for seemingly no reason.
I specify this because I initially thought OP meant that continuing to make the AI more complex would lead to grokking. This is not the case (though maybe complex AIs are required for grokking to occur at all, IDK). This is something that exists on top of what Sabine discussed in the video - which was the effects of making the model larger - and works in tandem with it - grokking is an effect of continuing to train the same already overfitted model.
Edit: NGL I just learned about this and almost definitely got a few things wrong, I'm sure someone will fill in the details (pls).
When you have that many parameters, a "butterfly like" effect comes into play, basically small changes can have large effects, carried in 2nd and 3rd order derivatives of the weights. Think of it like the modulus in a encryption algorithm, the 'lost bits', are here, but the loss actually makes the potential 'overfitting' not overfit because it kinda turns into a RELU thing
Thank you so much, Dr. Hossenfelder, for this clear explanation of a very complex topic.
Actually, there is a growing research interest in understanding the training phases of AI better.
For example, there is a paper by Anthropic "In-context Learning and Induction Heads" where they show that at some point during training, the LLM learns how to predict the next word by looking at similar examples in the context window. This ability gives a massive reduction in the loss function during training
That is interesting, and could conceivably fit in with my own neglected work from the 1990s.
Does “similar examples” mean something analogous to related questions?
Complexity is dual to simplicity.
Syntax is dual to semantics -- languages or communication.
Large language models (neural networks) are using duality:-
Problem, reaction, solution -- the Hegelian dialectic.
Input vectors can be modelled as problems (thesis), the network reacts (anti-thesis) to the input and this creates the solutions, targets or goals (synthesis).
The correct reaction or anti-thesis (training) synthesizes the optimal solutions or goals -- teleology.
Thesis is dual to anti-thesis creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic.
Neural networks or large language models are using duality via the Hegelian dialectic to solve problems!
If mathematics is a language then it is dual.
All numbers fall within the complex plane.
Real is dual to imaginary -- complex numbers are dual hence all numbers are dual.
The integers are self dual as they are their own conjugates.
The tetrahedron is self dual -- just like the integers.
The cube is dual to the octahedron.
The dodecahedron is dual to the icosahedron -- the Platonic solids are dual.
Addition is dual to subtraction (additive inverses) -- abstract algebra.
Multiplication is dual to division (multiplicative inverses) -- abstract algebra.
Teleological physics (syntropy) is dual to non teleological physics (entropy).
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics.
"Always two there are" -- Yoda.
Your mind is syntropic as it solves problems to synthesize solutions -- teleological.
@@anonmouse956 in its simplest form, it works just like that: if it sees a word like "Mr." and within the context window there was already a "Mr." followed by a "Jones", it will be much more likely that it will again write down "Mr. Jones". This sounds trivial and obviously useful, but an LLM has to learn this as it starts from 0 knowledge how language works
1.55 "the human intention was not well-coded". In the olden days, we had another expression for that: GIGO!
Garbage in Garbage out... And whit Chat TGP the problem is that it is probrammed with woke idiot answers, AKA programmed with propaganda and lies to begin with on purpose... And result is woke garbage...
A reasonable prompt, for which there is an answer, can be worded in a way that doesn't get an answer, so 'GIGO' is different. The other day I asked how many US citizens are there that are eligible to be POTUS. The responses I got included "Not sure, there hasn't been a census for 4 years." But I sure didn't get even a guesstimate of an answer before I gave up.
There’s an important thing to note in this, beyond simply GIGO: It is often harder than we might expect, perhaps even *much* harder, to produce as the input, that which wouldn’t qualify as “garbage” (as far as GIGO is concerned). In particular, the input, if provided to humans, might not function as garbage (on account of the humans having some relevant background information, or shared goals or context with the ones providing the input)
I have found that often times, the follow up questions are even more important than the initial prompt, and I feel like that doesn't get enough coverage. For example, rather than just asking for a specific story around the data and taking what it gives, follow up by asking it for other possible (I usually use the term 'likely') explanations that also fit the data.
I think something those of us responsible for the initial data also need to consider weighting parameters in advance, and then as part of the analysis, have it adjust some of those weights to find other possible stories that fit the data as well.
I have been playing around with a text based AI and I have to say it is fascinating. You can find out why it makes a decision if you ask it to explain in the prompt.
I have found it helpful to construct it as both a person and a psuedo code compiler with access to vast amounts of data but little experience with it.
Every time a user feeds an AI a prompt it is like summoning a genie for that one interaction. They can't tell you why another genie made a decision, but this is the same as humans.
We do sometimes actively think about our choices but sometimes we just make up our reasons for doing what we felt like doing at the time after the fact. Mind Field had a great episode on this.
Long prompts are good for AI. Short prompts less good as the genies can't talk to each other. They send you the text and update their training data and 'trust' the next instance to do their best.
Rocks were never supposed to talk. They have played us for absolute fools
Gaia is talking to us through silicon(e)…
Intriguing perspective
@@TDVLAlso intriguing
SILICONE more like it amen???
(. )( .)
@@jeltoninc.8542 amended :)
I wasn't sure what overfitting was from the quick description in the video, so I googled the definition: "In machine learning, overfitting occurs when an algorithm fits too closely or even exactly to its training data, resulting in a model that can’t make accurate predictions or conclusions from any data other than the training data."
a good linguistic human comparison would be when children first learn to speak and often use regular conjugations of verbs especially in the past tense, using -ed for all past verbs. e.g. "My toy broked" or similar ... i.e. the child has learnt enough data to overfit the regular ending and even learn an irregular conjugation, but not enough data to realise that this conjugation does not therefore require the regular ending.
@@IngieKerr I don't think that a child is overfitting, or at least this is too trivial of an example if it is overfitting. What's going on here is that the child learned a rule, and thought it applied everywhere, but the rule had exceptions. AI is supposed to know that there will be exceptions to the outcomes, whereas the child doesn't. I saw an example of overfitting where an AI was trained to predict if a person would default on their loan, and it was able to predict the outcome of 97% of the people in the training data, but only 50% of the people in the real world data.
@@wiggles7976How about when you feed Udio all the keywords tagging a specific song from a catalog, and perhaps some of the lyrics, and I just spits out a cover version of that exact song with the same melody and chord progression - it was incapable of extrapolating a completely different melody. Is that a case of over fitting?
@@bornach I don't know what Udio is but producing music doesn't really fall into the category of "making predictions", which is what the definition I quoted above says. There's no way to test if an AI-generated song is "correct" or "incorrect" since correctness is not a quality of music. Correctness could be a quality of music theory though. If I say a C chord is C F G, then I'm incorrect. An AI could try to predict music theory I suppose.
I'm not sure I understand. It would mean that if a neural network ever finds out about a theory of everything that predicts reality with 100% accuracy, and thus fits its training set (extracted from reality) with 100% accuracy as well, that neural network would be considered over fitted ?
It seems some piece is missing from that definition.
Oh Sabine, you really need to check out Karl Friston's Free Energy Principle. It fits so well with this video. By rearranging variables in his formulation, he gets several very enlightening perspectives. One that meshes well with your talk here is the accuracy verses complexity trade-off, or Occam's Razor perspective. Would love to see you review the Free Energy Principle. It's is all developed using classical physics.
There actually is some work showing (at least for low-dimensional input) that the generalization error is small for uniformly sampled data when the weights of the deep network are small, using sort of a general lipschitz properties. But this is likely not too related to double descent
I think Neural Net is a very good tool to model the logic by which a system works without knowing anything about it's internal state.
This is an honest question:
How do you avoid attributing incorrect causality in the logic when modeling like this?
In my experience, you get a lot of benefit in the short term, but its very wasteful in the long-term because the model is not generalizable
@@iantingenModeling in ML is typically predictive. Establishing causality (from observational data) is rarely the goal and requires different methods.
@@Fischdosepremium Predictive, but without any understanding of mechanism, correct?
What is being predicted in that instance?
@@iantingen Yes. Whether this is sufficient depends on the use case. Although interpretability is virtually always nice to have, predictive accuracy is generally paramount in applications where ML is the preferred tool.
@@Fischdosepremium do you ever feel like that epistemological approach is wasteful compared to using (at least a little) theory?
That’s been my experience, but I also know that my experience doesn’t generalize to everyone!
I know that we’re getting out in the weeds a little bit, but I’d appreciate your thoughts about it!
The graph you showed there at the end, error versus complexity.... It reminds me for some reason of the Dunning-Kruger effect graph. If you turn it upside down, it is identical. Maybe some connection?
I too had that thought and decided to search the comments for someone else who perhaps had the same idea.. yes the graph does indeed seem to be the inverse of the DK graph but only because the Y axis is a measurement of error and not confidence in knowledge. Seeing as outputs are based on the systems confidence of a result, makes it that even more fitting as a comparison.
No connection at all, unless you confidently insist there is one from a place of limited understanding :p, there would be a fairly ironic connection at that point.
@@Dongobog-ps9tz hahaha wanted to write the same thing "you're giving an example"
@@Dongobog-ps9tz I suppose so. I'm not saying there is a connection, but I am saying there may be a connection.
Certainty (predictability, syntropy) is dual to uncertainty (unpredictability, entropy) -- the Heisenberg certainty/uncertainty principle.
Complexity is dual to simplicity.
Syntax is dual to semantics -- languages or communication.
Large language models (neural networks) are using duality:-
Problem, reaction, solution -- the Hegelian dialectic.
Input vectors can be modelled as problems (thesis), the network reacts (anti-thesis) to the input and this creates the solutions, targets or goals (synthesis).
The correct reaction or anti-thesis (training) synthesizes the optimal solutions or goals -- teleology.
Thesis is dual to anti-thesis creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic.
Neural networks or large language models are using duality via the Hegelian dialectic to solve problems!
If mathematics is a language then it is dual.
All numbers fall within the complex plane.
Real is dual to imaginary -- complex numbers are dual hence all numbers are dual.
The integers are self dual as they are their own conjugates.
The tetrahedron is self dual -- just like the integers.
The cube is dual to the octahedron.
The dodecahedron is dual to the icosahedron -- the Platonic solids are dual.
Addition is dual to subtraction (additive inverses) -- abstract algebra.
Multiplication is dual to division (multiplicative inverses) -- abstract algebra.
Teleological physics (syntropy) is dual to non teleological physics (entropy).
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics.
"Always two there are" -- Yoda.
Your mind is syntropic as it solves problems to synthesize solutions -- teleological.
Early in the 90s I ran simulations on some very small kind of recurrent neural networks, and they had this weird anomaly whereby they sometimes converged to a much better minima. It happen when I feed them sets random noise, and thinking about it that could be explained by the double decent. They had trouble with stable signal sources, i.e. small set of sine waves or a rather short hidden state vector, which would bring the number of free parameters close to the length of the hidden state vector. Real and somewhat noisy data worked quite much better, but that would have a rather complex state vector.
Thinking about it the weight vector (i.e. parameters) and input vector can be interchanged, and the only somewhat weird point would be where they coincide in length. That would not explain why it become unstable, but could explain why it fails to generalize.
With deep neural graphs you get multiple cross connections, and information is embedded in each connection to a greater or lesser degree.
The weights(and path) largely depend on the initialized weights (which are often randomized). Graphs with weights seem to be able to utilize and embed higher dimensional topologies which could explain why there is no expected overfit based on parameter count, in that case there would actually be a much higher number of actual parameters embedded in a set number of parameters (which isn't expected). As far as I'm aware this conjecture is still an active area of study.
Two more problems of AI: 1) It doesn't know, what it doesn't know. Therefore it will always give you an answer with the confidence of an 11 year old. 2) When the human brain is trying to figure something out, it can refer other problems it does know the answer to, and derive an answer by analogy. We (usually) call that experience. Artificial neural networks lack the "experience" mechanism.
I don't think you understand how neural network works.
Certainty (predictability, syntropy) is dual to uncertainty (unpredictability, entropy) -- the Heisenberg certainty/uncertainty principle.
Complexity is dual to simplicity.
Syntax is dual to semantics -- languages or communication.
Large language models (neural networks) are using duality:-
Problem, reaction, solution -- the Hegelian dialectic.
Input vectors can be modelled as problems (thesis), the network reacts (anti-thesis) to the input and this creates the solutions, targets or goals (synthesis).
The correct reaction or anti-thesis (training) synthesizes the optimal solutions or goals -- teleology.
Thesis is dual to anti-thesis creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic.
Neural networks or large language models are using duality via the Hegelian dialectic to solve problems!
If mathematics is a language then it is dual.
All numbers fall within the complex plane.
Real is dual to imaginary -- complex numbers are dual hence all numbers are dual.
The integers are self dual as they are their own conjugates.
The tetrahedron is self dual -- just like the integers.
The cube is dual to the octahedron.
The dodecahedron is dual to the icosahedron -- the Platonic solids are dual.
Addition is dual to subtraction (additive inverses) -- abstract algebra.
Multiplication is dual to division (multiplicative inverses) -- abstract algebra.
Teleological physics (syntropy) is dual to non teleological physics (entropy).
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics.
"Always two there are" -- Yoda.
Your mind is syntropic as it solves problems to synthesize solutions -- teleological.
Plot Twist: Sabine is an A.I.
4:40 Interesting. I made my master thesis in 2004 on machine vision. Overlearning (overfitting) was difficult to handle at that time.
What an interesting video, thank you!
I have a question: when you say "no-one knows why" does that mean the public doesn't know how these AI are somehow sidestepping this issue? Or are you saying that the creators of these AI also do not know why this issue is less prevalent than expected?
With zero understanding of the topic myself (aside from the information presented in this video) but having experience as a consumer for many years, I would assume the creators and operators know why this is happening but its a trade secret, or something like that.
Thanks again for your videos! I love the content.
The sane side of yt.
Danke.
Recently discovered your channel... thanks for your detailed videos... not sure if you already did a video on this because theres so many videos still need to explore on youtube but if you are wondering what to do for future videos... I would love to see videos about CERN which is the major scientific organization in Switzerland... I first learned about CERN from reading one of Dan Brown's novels... and while Dan Browns books are fiction-ish.... CERN is a very real and mysterious scientific organization.
1:46 Reminds me of an expert talking about urban road traffic collisions with pedestrians. To reduce these he wanted more cars on the road. Lots more. Grid lock would be ideal.
"Better Accuracy" equates to position. "Worse Predictions" equates to momentum (direction). Is there an uncertainty principle here?
Yes, it is called bias-variance dilemma. At least that is the closest thing. But it seems just using larger networks seem to allow us to have the cake and eat it too.
I don’t see a relationship between momentum and worse predictions on the test set.
What connection between the two do you see?
Certainty (predictability, syntropy) is dual to uncertainty (unpredictability, entropy) -- the Heisenberg certainty/uncertainty principle.
Complexity is dual to simplicity.
Syntax is dual to semantics -- languages or communication.
Large language models (neural networks) are using duality:-
Problem, reaction, solution -- the Hegelian dialectic.
Input vectors can be modelled as problems (thesis), the network reacts (anti-thesis) to the input and this creates the solutions, targets or goals (synthesis).
The correct reaction or anti-thesis (training) synthesizes the optimal solutions or goals -- teleology.
Thesis is dual to anti-thesis creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic.
Neural networks or large language models are using duality via the Hegelian dialectic to solve problems!
If mathematics is a language then it is dual.
All numbers fall within the complex plane.
Real is dual to imaginary -- complex numbers are dual hence all numbers are dual.
The integers are self dual as they are their own conjugates.
The tetrahedron is self dual -- just like the integers.
The cube is dual to the octahedron.
The dodecahedron is dual to the icosahedron -- the Platonic solids are dual.
Addition is dual to subtraction (additive inverses) -- abstract algebra.
Multiplication is dual to division (multiplicative inverses) -- abstract algebra.
Teleological physics (syntropy) is dual to non teleological physics (entropy).
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics.
"Always two there are" -- Yoda.
Your mind is syntropic as it solves problems to synthesize solutions -- teleological.
I wonder if Occam's Razor eventually comes into play in LLM AIs, either by accident or on purpose. Sometimes the Simplest Model is the best. That is, until it isn't.
Well, doesn’t have to specifically be LLMs,
but yes, there is the idea that by increasing the parameter counts enough, that the solutions that the gradient descent (+ whatever things they add to it) is able to find models that are actually (in a sense) “simpler” than the ones that would be found if the number of parameters available was a little smaller.
Don’t think you understand what Occam’s razor actually is. It’s about adjudicating between two different theories making the same predictions. When two theories predict the same thing the one with fewer assumptions is said to have more theoretical virtue. LLM’s are not competing theories so it’s a category error to apply Occam’s razor to them.
It's Nature's way but what does she know about input priorities?
@@jimothy9943 Competing theories, perhaps not, but competing models? They seem to be that. They make a prediction of the observed dynamics of a system. Different ones make different predictions.
@@drdca8263 They are competing models for performing a given task. They don’t make predictions. An LLM does not entail predictions about the dynamics of anything. ChatGPT’s model does not entail anything about Gemini. They are both different tools for completing similar tasks. A hammer does not make predictions any more than a drill. You would not say that the more theoretically virtuous lawn mower was the one with the fewest amount of parts. Occam’s razor does not apply.
Gathering data not used in the training set and running the program against that data to see how it fits is one helpful way to avoid overfitting.
Thanks for bringing this to light: I've been bit by overfitting, but never made it beyond to find the second fall off.
_"how do we stop human pollution?"_
*AI pulls up a Thanos quote*
what will we get with a real AI? the Terminator? or Bender from Futurama?
Be careful, you'll summon the Roko's Basilisk morons who think it's reasonable to commit genocide because a machine they created told them to
@@t.kersten7695 This is a very complex and unpredictable question, but if the world remains stable until that time; Likely between 5-30 years-ish. (As far as i know, maybe watch some video`s from David Shapiro to get a idea)
@@t.kersten7695 Neither. Both those examples are anthropomorphic i.e. they were humanized by having a personality. Real AI has nothing of the sort. It doesn't want revenge, it just works to achieve the goals we give it - in the best way it reasons how, which may not be the 'best' in our eyes. The classic example is the paperclip maximizer, which destroys everything simply to make more paperclips.
I don’t feel qualified to add anything to the discussion, apologies if this is super basic lol but I have overfitted (or something similar) once irl (only once thank goodness) and it’s awful to have your processing centers pay attention in sharp focus to everything. Torture, actually. It was input overload and I just had to stop and do nothing for a while.
I guess AI is struggling along on the spectrum at the moment lol like an overstimulated brain. Might be a little like the maze micro mice that race to the goal. The micro mouse doesn’t need the whole map in detail so AI might be going down some information paths only far enough to realize that’s not quicker/better/simpler than another route and the neural net is learning how to make prediction jumps.
Thank you. Informative video
I think it's mainly about batch size. On simple neural nets with MNIST dataset If you feed samples one by one (batch size=1), overfitting happens very quickly. But it you feed a batch of 64 samples, training might be slower, but overfitting is easier to deal with.
In case of LLMs we have a batch size exceeding 1 million.
And obviously there are many techniques to deal with overfitting, like droping some neurons for a single batch, different loss function.
This is brilliant! It shows how important it is that you, Sabine, aren’t stuck in traditional academia so you can think and produce videos outside the box. It really helps me in my work doing research for a book I’m writing. I’m a clinical psychologist. You are a precious jewel. Thank you!
You mean you find a lot of inspiration from people's answers, actually. How sad for you must be not having real patients, never mention writing a book...
Current AI models aren't trained to think in a general sense. They are trained to think like what thinking is available on the Internet. In other words, these AIs emulate what has been said or written by humans. This way, you will never get AI smarter than humans, but only faster and less prone to error in well defined situations.
Irrelevant to the video. I don’t think the video even uses the word “intelligence” outside of the phrase “AI”? And the video certainly isn’t specific to language modeling tasks.
Good video! I do wonder if the comparisons of different model complexities were done with using early stopping as a method to prevent overfitting. If they were, I think the higher complexity would still have all the freedoms in how it models things, but not the downside of overfitting. To me that would make sense. If someone knows more about the subject, please enlighten me!
Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets
is a paper that is talking about ~ how if you overtrain a neural network it will overfit to the training data and getting worse on test data, but if you keep training for longer, it'll snap eventually and perform well on the test set as well.
I did a series of experiments involving a number of different networks with the same number of parameters, but achieved through varying numbers of layers. What I found is that, per parameter, deep networks learn more slowly but are able to achieve a better final result, whereas shallow networks learn extremely quickly, but max out early.
Its good you agree with the consensus.
This relates to unsolved questions in complexity theory.
@@GilesBathgatestill nice to do it to see for yourself
@@adamrak7560 Interesting. Which ones?
Maybe deep networks create some bottleneck where only generalized information gets through.
I am convinced that current LLMs are like a toddler who delights the parents by blurting out something (apparently) clever. Convinced that the child is a genius, the parents pour all available resources into preparing the child for admission to an elite university. But to their dismay, the kid does not perform on the expected trajectory.
I see 2 possible outcomes in line with your analogy:
1. The kid breaks down from all the emotional pressure
2. It'll still come out better than average, since it got better education, than its peers
And like ai it is simply recall and memory until experience exposes its flaws momentarily.
@@OperationDarksidegenius kids who finish high level education very early are failing in life because society and their peers reject them. Basically society is a horrible and hostile training data. AI's don't experience such. Actuallly we have no ideea what deep neural networks are experiencing, but they are mimicking emotions pretty well... Like some terrible humans...
Certainty (predictability, syntropy) is dual to uncertainty (unpredictability, entropy) -- the Heisenberg certainty/uncertainty principle.
Complexity is dual to simplicity.
Syntax is dual to semantics -- languages or communication.
Large language models (neural networks) are using duality:-
Problem, reaction, solution -- the Hegelian dialectic.
Input vectors can be modelled as problems (thesis), the network reacts (anti-thesis) to the input and this creates the solutions, targets or goals (synthesis).
The correct reaction or anti-thesis (training) synthesizes the optimal solutions or goals -- teleology.
Thesis is dual to anti-thesis creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic.
Neural networks or large language models are using duality via the Hegelian dialectic to solve problems!
If mathematics is a language then it is dual.
All numbers fall within the complex plane.
Real is dual to imaginary -- complex numbers are dual hence all numbers are dual.
The integers are self dual as they are their own conjugates.
The tetrahedron is self dual -- just like the integers.
The cube is dual to the octahedron.
The dodecahedron is dual to the icosahedron -- the Platonic solids are dual.
Addition is dual to subtraction (additive inverses) -- abstract algebra.
Multiplication is dual to division (multiplicative inverses) -- abstract algebra.
Teleological physics (syntropy) is dual to non teleological physics (entropy).
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics.
"Always two there are" -- Yoda.
Your mind is syntropic as it solves problems to synthesize solutions -- teleological.
Naftali Tishby wrote about this, he showed that neural network (stochastic gradient descent + neural networks in fact) find the optimal balance between compressing the input data and capturing information about the labels. He also shows that NNs learn in two phases, memorization and then compression. He suggest that this is the result of the noise of the training process that pushes away overfitting solutions in favor of solutions that generalize, but with about the same loss.
Max Tegmark did some research about this, he computed phase diagrams of unstable, memorization and "grokking" areas in the paper "Towards Understanding Grokking: An Effective Theory of Representation Learning".
Mah man.
Certainty (predictability, syntropy) is dual to uncertainty (unpredictability, entropy) -- the Heisenberg certainty/uncertainty principle.
Complexity is dual to simplicity.
Syntax is dual to semantics -- languages or communication.
Large language models (neural networks) are using duality:-
Problem, reaction, solution -- the Hegelian dialectic.
Input vectors can be modelled as problems (thesis), the network reacts (anti-thesis) to the input and this creates the solutions, targets or goals (synthesis).
The correct reaction or anti-thesis (training) synthesizes the optimal solutions or goals -- teleology.
Thesis is dual to anti-thesis creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic.
Neural networks or large language models are using duality via the Hegelian dialectic to solve problems!
If mathematics is a language then it is dual.
All numbers fall within the complex plane.
Real is dual to imaginary -- complex numbers are dual hence all numbers are dual.
The integers are self dual as they are their own conjugates.
The tetrahedron is self dual -- just like the integers.
The cube is dual to the octahedron.
The dodecahedron is dual to the icosahedron -- the Platonic solids are dual.
Addition is dual to subtraction (additive inverses) -- abstract algebra.
Multiplication is dual to division (multiplicative inverses) -- abstract algebra.
Teleological physics (syntropy) is dual to non teleological physics (entropy).
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics.
"Always two there are" -- Yoda.
Your mind is syntropic as it solves problems to synthesize solutions -- teleological.
ndeed one of the most puzzling observations in "modern" machine learning. In my group we are working on this interesting topic and are delighted that Sabine has taken it up here. In a paper currently under review we provide theoretical and experimental perspectives for a possible explanation.
How do we know Sabine isn't an AI?
She is too funny to be😅
I saw her live last year on a debate in London. She´s flesh and blood!
@@Thomas-gk42 That's exactly what an AI would say.
Sabine, modern neural networks DO have massive problems with overfitting. However, it doesn't become apparent until they have been trained enough to explain all the training data. After that, if you continue training them, they immediately become overfit. It is for this reason that most models are not trained nearly as much as they could be, and researchers deliberately stop their training early.
this isn't true -- if it were, we'd never observe double descent in the first place
Early stopping is deprecated. If you set weight decay correctly you can train the network far longer and it still learns useful stuff.
@@adamrak7560 While weight decay, dropout, entropy regularization, momentum based oprimizers, etc are all effective regularization strategys to limit over-fitting, model checkpointing, and by extension early stopping does not at all seem depricated to me. It can still be seen in the results graphs of most academic papers this year (the graphs tend to stop when validation accuracy levels out) and it's telling that default settings in both torch and tensorflow stop under conditions including one form or another of loss derivative estimates to stop when meaningful improvement are no longer made rather than when train accuracy is 100%. Training indefinitely might be popular in LLM's (admittedly an area where I have limited interest) where the massive data repositories used there cause many of their user's queries to roughly lie somewhere within their training sets such that overfitting is not a huge concern but in machine learning at large I'd have to strongly disagree with you. There are papers with citations (>20 to be relevant) analyzing the robustness of early stopping published as recently as 2023 which says to me that the strategy is not deprocated if it's not even done being studied. If you have evidence to the contrary or if your claim is in a particular subfield that I might not be considering I'd love to learn more, or if you consider early stopping to be something other than "stopping training before training accuracy plateaus to avoid overfitting" then I'd be interested to hear a response.
have a nice day
When I was doing artificial neural network training back in the 1990s, it took days to get the algorithm to a reasonable model of the physical chemical process I was modelling. The highly non-linear relationships over both wavelength and time meant it was only good for a few days.
Barriers in dynamic systems somethings like web crawlers or cookies or impressions play a big role in things like this they might even reflect some information back into the system since it has to be logged somewhere like leaks
Data without relation, a knowledge graph has limits. Yann LeCun the Meta's chief AI scientist says current systems does not show even a slightest intelligence. Fear mongering by OpenAI is to get regulations in place to stop the competition.Altman even suggested GPU sales to be restriced and development to be subjected to license. My take is that while looks impressive generative AI does have very little pracal use in its current state unless you are after investor money.
I dont think its about intelligence - more about misdirection and misuse by bad actors - or more scarily that AI misdirects and influences due to errors - Like WOPR (War Operation Plan Response, pronounced "whopper") from war games
@@Vondoodle That is what I mean, it will never be something we can just trust in its current form. It writes code for example, but because you cannot trust it you read the code, and in the end it saves time only for boilerplate. It is the same pattern for every other use case.
@@Vondoodle So basically you'd blame AI for what people are doing?
Yann LeCun is famous for making highly confident prediction based on his own assertions, that turn out to be very false one year later. I suggest not listening him at all, because his predictions are consistently off.
LeCun is hilariously wrong.
If you bet on the opposite of his predictions you would earn money 😂
I might be wrong or perhaps I didn't understand the explanations... but sounds to me that the issue is more human than ai. In the sense that we are patern recognising creatures... we want to see patterns, and perhaps the randomness of ai is just patterns to our eyes... then again... I guess we could ask what is a pattern?
Perhaps I'm just stupid. 😅
With things like convolutional neural networks used in computer vision, we can see pretty clearly what kind of patterns tend to excite different layers of the network, we generally start from something like "Gabor filter" and work up to neurons that abstract visual understanding (interestingly, you can show what excites different layers to people and a corresponding region of the visual track will similarly light up).
With LLMs, it's a little more gooey, we can see like basic syntax assembly in the first few layers so mapping connections between tokens, words, sentences and things that look like universal grammar start to pop out, so grammars and constructions of associations (this is the work of Atticus Geiger at Stanford) but then there's also this gooey-ness because it becomes abstracted "blah".
So, there's this kind of latent space that stuff gets pushed into as we go deeper into the network and we have a newer method that we can use to probe it by basically watching what gets activated when we push certain examples through, so we can isolate stuff like neural representations encoding "cat" etc. but these are also pretty mushy and really depend on how you try to measure "cat-ness".
My current wild bet is that we'll probably end up with a Heisenberg uncertainty style law that kind of boils down how useful this representation approach can really be - so no, I'd say it isn't stupid to identify that there's a measurement problem (ie. a human issue with looking for patterns in abstract pile of numbers).
@@whatisrokosbasilisk80 well, I guess I should say thanks... and that you've given me a lot to study and think about... not sure I understood everything. :p but it does feel nice that someone with such knowledge doesn't think my understanding was stupid. :p even though I do feel like I need to study more this topic now. XD
Way to make me feel both dumb and smart... you made me laugh out loud. So thanks for that too. XD XD
@@CesarHILL Representation Engineering and Mechanistic Interpretability is what I'd focus on if you want to really understand this stuff.
Certainty (predictability, syntropy) is dual to uncertainty (unpredictability, entropy) -- the Heisenberg certainty/uncertainty principle.
Complexity is dual to simplicity.
Syntax is dual to semantics -- languages or communication.
Large language models (neural networks) are using duality:-
Problem, reaction, solution -- the Hegelian dialectic.
Input vectors can be modelled as problems (thesis), the network reacts (anti-thesis) to the input and this creates the solutions, targets or goals (synthesis).
The correct reaction or anti-thesis (training) synthesizes the optimal solutions or goals -- teleology.
Thesis is dual to anti-thesis creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic.
Neural networks or large language models are using duality via the Hegelian dialectic to solve problems!
If mathematics is a language then it is dual.
All numbers fall within the complex plane.
Real is dual to imaginary -- complex numbers are dual hence all numbers are dual.
The integers are self dual as they are their own conjugates.
The tetrahedron is self dual -- just like the integers.
The cube is dual to the octahedron.
The dodecahedron is dual to the icosahedron -- the Platonic solids are dual.
Addition is dual to subtraction (additive inverses) -- abstract algebra.
Multiplication is dual to division (multiplicative inverses) -- abstract algebra.
Teleological physics (syntropy) is dual to non teleological physics (entropy).
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics.
"Always two there are" -- Yoda.
Your mind is syntropic as it solves problems to synthesize solutions -- teleological.
DNNs (Deep Neural Networks) are expected to choose important parameters on their own, so while we may have 100,000 parameters, we expect many of them to be assigned very small (near 0) weights, so this reduces the number of active parameters (or connections: a weight can be associated with multiple parameters so that parameters A and B may connect to a node with low weight, but A and C may connect to a node with higher weight). So most of the weight matrix in a DNN is expected to be sparse. So the graph at the end should be drawn against a horizontal axis of active parameters rather than input parameters.
Double descent will not occur if any of the three factors are absent. What could cause that?
• Small-but-nonzero singular values do not appear in the training data features. One way to accomplish this is by switching from ordinary linear regression to ridge regression, which effectively
adds a gap separating the smallest non-zero singular value from 0.
• The test datum does not vary in different directions than the training features. If the test datum
lies entirely in the subspace of just a few of the leading singular directions, then double descent
is unlikely to occur.
• The best possible model in the model class makes no errors on the training data. For instance,
suppose we use a linear model class on data where the true relationship is a noiseless linear one.
Then, at the interpolation threshold, we will have D = P data, P = D parameters, our line of
best fit will exactly match the true relationship, and no double descent will occur.
Thanks excellent
This isn't excellent. This is Patrick!
I really would love a collaboration between you and Robert Miles on AI safety. ❤
Thanks again for a great video! I am not sure if the comparison between polynomials and neural networks is correct by means of the training process and data. In my understanding, polynomials are usually fitted to a fixed set of data points, while neural networks are actually trained over a huge class of datasets. It makes me wonder if we would observe double decent on polynomial fitting if we would use the same train procedures as with neural networks…
Human: Stop All Wars!
AI: I did, but nobody is reading the directions.
It's hard to overfit these massive LLMs during training because you have enormous amounts of highly variable training data relative to the number of weights. Isn't this obvious or am I just losing my mind?
and you could also say that due to the insane amounts of data, you end up covering most of the actual possible semantic space compared with other problems where the unseen data represents 99% of the semantic space. i would also make the case that LLMs do not suffer and might even gain from the concept of overfitting.
what even is overfitting when you fitted literally all the fking data? you just left out new phrases that you can create, but the novelty created by that input represents like what? 0.0000001% novelty where the model might fk up?
meaning... how could you find the overfit if you trained a model with both the training and the testing?
@@iFastee Agree. That’s hilarious. Well said.
sounds about right. move on
Does this have any bearing on the Travelling Salesman problem or the Berry Paradox?
A LLM "with all the data" is still a brute-force method, and that entails exponentially higher costs.
Certainty (predictability, syntropy) is dual to uncertainty (unpredictability, entropy) -- the Heisenberg certainty/uncertainty principle.
Complexity is dual to simplicity.
Syntax is dual to semantics -- languages or communication.
Large language models (neural networks) are using duality:-
Problem, reaction, solution -- the Hegelian dialectic.
Input vectors can be modelled as problems (thesis), the network reacts (anti-thesis) to the input and this creates the solutions, targets or goals (synthesis).
The correct reaction or anti-thesis (training) synthesizes the optimal solutions or goals -- teleology.
Thesis is dual to anti-thesis creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic.
Neural networks or large language models are using duality via the Hegelian dialectic to solve problems!
If mathematics is a language then it is dual.
All numbers fall within the complex plane.
Real is dual to imaginary -- complex numbers are dual hence all numbers are dual.
The integers are self dual as they are their own conjugates.
The tetrahedron is self dual -- just like the integers.
The cube is dual to the octahedron.
The dodecahedron is dual to the icosahedron -- the Platonic solids are dual.
Addition is dual to subtraction (additive inverses) -- abstract algebra.
Multiplication is dual to division (multiplicative inverses) -- abstract algebra.
Teleological physics (syntropy) is dual to non teleological physics (entropy).
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics.
"Always two there are" -- Yoda.
Your mind is syntropic as it solves problems to synthesize solutions -- teleological.
The answer to the most famous ill defined question is 42.
Plot twist: the question wasn't ill defined and the answer is actually 42.
Until scientifically proven otherwise, the answer remains 42.
Overfitting can be quite handy... I've been digging up lots of interesting etymological details using overfitted byte pair encoders
Sabine is that what they also call Groking? I have seen some fascinating insight and heuristics about this. It seems to be a bit like short term vs long term memory in that past the overfitting phase, if you keep on running the fit for more cycles (though i don’t think it was about more parameters so perhaps we are talking about different things) so as I as saying if you keep on running fitting cycles the net starts to re-organise its parameters in a more efficient way the overfitting errors just vanish. This discovery was totally unexpected
My black box imploded when you said decent instead of descent.
Or maybe she meant dessert. Or desert. Can't keep them separate in my head.
This is fascinating, just when we think "the End of Science" a new playground has been opened, and girls are welcome if not the rescue
Your case study on the faulty correlation between pneumonia risk and asthma reminded me of an anecdote from Jordan Ellenberg's book, "How Not to Be Wrong." Ellenberg recounts the story of Abraham Wald, who worked with the Statistical Research Group during the Second World War. Wald was consulted to optimise aircraft reinforcement based on the locations where returning planes were hit. However, he astutely pointed out that the data was incomplete. The planes that didn't return likely had critical hits to the fuselage, which weren't represented in the surviving samples.
I thought this video underscored a critical principle: whether it’s avoiding trivial solutions, oversight in the test set, or over-parameterisation, the fundamental issue often stems from a lack of common sense. One might say it's akin to encoding the seed in an AI with an element that wasn't originally present-common sense. Not to be dismissive, but even the most brilliant minds can occasionally overlook simple yet crucial details.
I imagine that it parallels the cyclical pattern of human problem solving. The descent is when we discover a new theory/model of the problem, and begin tuning/refining that model based on observations. The ascent/over-fitting is when we push that model beyond the parameter space/accuracy level that it is valid for and start producing bad predictions. The turn around point is when we discover a new theory/model again and start the process over.
An example would be orbital dynamics, how we went from circular orbits to newton/kepler's laws to general relativity to gr + dark matter.
The occurrence of these phase transitions seems to be mostly random in either case.
Apparently, when it comes to overfitting, more data can dilute the impact of noise or outliers. With more data, the noise becomes a smaller fraction of the entire dataset, thus reducing its influence on the model’s learning process. And that makes the complex model perform better.