The concept makes sense for things like industrial processes and aerospace, where the simulation is based on well-known physical laws, but it's increasingly being applied to social systems. If you expect your model of human behaviour to be a faithful simulation without taking account of culture, collective organisation and the like, you really need to expect the predictions to have massive error ranges. But far too often a senior executive will leap onto the buzzword and, as with "AI", expect "Digital Twin" to be a box in a diagram that might as well say "and then a miracle happens".
And if someone says "I'd like you to clarify that step", they're going to answer "I'd like _you_ to clarify that step", because he's your boss and you're the data scientist.
Not necessarily... humans tend to operate in a chaotic or unpredictable fashion individually, but taken collectively they tend to become more predictable. The example I'd give is heating of a gas: if you model this, you won't model the individual molecules being heated, because that becomes too unpredictable; you model simplified systems that give approximations at the micro level but usable data at the macro level.
@@eddy66t6 True, people can be predictable, but I'd argue Freakonomics-style modern behavioural economics suggests we're pretty terrible at making those predictions.
@@philroo1 The same is true of physics simulations. "All models are wrong, but some are useful" is something you commonly hear in engineering organizations (including those using digital twins for manufacturing excellence).
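On the gas-heating analogy above, a quick sketch of the micro/macro point (purely illustrative numbers): each "molecule" is random on its own, but the aggregate is stable and predictable.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Micro level: each "molecule" gets a random kinetic energy.
molecule_energies = rng.exponential(scale=1.0, size=1_000_000)

# One molecule is essentially unpredictable...
print(molecule_energies[0])

# ...but the macro quantity (mean energy, temperature-like) is stable:
print(molecule_energies.mean())  # ~1.0 on every run with enough molecules
```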
It is something we are working on within electricity distribution, but I have never heard it referred to as a digital twin. We already have the SCADA and control system, and we also have offline planning tools. By creating the digital twin we will be able to experiment with reinforcement and network configuration, and use it to drive investment decisions.
@letsgocamping88 can you help me with this
In medical devices, co-simulation often happens on the device: the software that runs the procedure and a model run together, and the fault codes come from them not matching. The hardware and software can be really complex because you're making a different assertion, that the device won't diverge from the model. The model can be developed and partially validated by surgeons and the like before the instrument even exists.
Also, how they diverge in practice is useful proprietary information that doesn't have to be shared with anyone outside the company. Users only need to know how to continue (and "use another one" is basically always possible in a hospital).
What you're talking about is testing, but this is more about real world events.
I don't know much about this field directly, but a lot of the terms he's using remind me of the work of theoretical biologist Robert Rosen on the modeling relation and anticipatory behavior. In particular, the diagrams he's drawing, the word "effector," the notion that models are necessarily lossy...
So a digital twin is a subset of IoT, another extremely hand-wavy buzzword that just means having a controlling interface to some application of mechanical engineering?
Seems like it. Honestly, I've only seen one comment on this entire video that seemed like an actual application that made it kinda make sense: the Berlin airport one. Even in the video I feel like they just kinda handwave it without actually properly explaining it.
Digital twins are an advanced form of closed loop control
I would say a complex form of CLC rather than advanced.
You should make a video on Kalman filtering, it's a related topic!
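Seconded. For anyone curious, here's a minimal 1D Kalman filter sketch (constant-state model; the noise values are made up for illustration). It's related because it does exactly the twin-like job of keeping a model estimate in sync with noisy sensor readings:

```python
def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Minimal 1D Kalman filter with a constant-state model.

    q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0            # state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q            # predict: uncertainty grows over time
        k = p / (p + r)      # Kalman gain: trust in the new measurement
        x = x + k * (z - x)  # update: blend prediction and measurement
        p = (1 - k) * p
        estimates.append(x)
    return estimates

print(kalman_1d([1.2, 0.9, 1.1, 1.0, 0.8]))
```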
A simulation or decision model with a stream of real-time data and the ability to do what-if analysis.
Sensors can be a difficult thing to do right because their calibration can drift off course. Now you need people to check and calibrate the sensors, and they need some kind of standard or other test equipment to calibrate against. And then you also have to figure out what to do with the poor data from any sensors that had drifted away from reality prior to calibration.
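A toy sketch of one common mitigation (the sensor names and tolerance here are invented): periodically compare each sensor's recent readings against a calibrated reference and flag the ones whose bias has drifted too far.

```python
def flag_drifted(readings, reference, tolerance=0.5):
    """Flag sensors whose mean offset from a calibrated reference
    exceeds a tolerance. readings: {sensor_id: [recent values]}."""
    drifted = {}
    for sensor_id, values in readings.items():
        bias = sum(values) / len(values) - reference
        if abs(bias) > tolerance:
            drifted[sensor_id] = bias
    return drifted

# Hypothetical check: "temp-3" reads ~0.9 degrees high.
print(flag_drifted({"temp-1": [20.1, 19.9], "temp-3": [20.9, 20.9]},
                   reference=20.0))
```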
Very interesting, but you mostly focused on the sensors and mostly left out the actuators/effectors; would love to learn more!
Could a real-world system and its digital twin be encapsulated with another "external" digital twin applied to it? Would such an architecture add any value or utility, or is it turtles all the way down from there?
You could encapsulate multiple twins into a single system that gives an overview, for sure. Multiple power stations with their individual twins could be part of a larger system that shows the whole electricity grid, for example.
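Something like this minimal sketch (all class and field names invented): each station twin mirrors its own station, and a grid-level twin just composes them.

```python
from dataclasses import dataclass

@dataclass
class StationTwin:
    name: str
    output_mw: float  # latest value mirrored from the real station

    def update(self, sensor_reading_mw: float) -> None:
        self.output_mw = sensor_reading_mw

class GridTwin:
    """A higher-level twin composed of per-station twins."""

    def __init__(self, stations):
        self.stations = stations

    def total_output_mw(self):
        return sum(s.output_mw for s in self.stations)

grid = GridTwin([StationTwin("north", 120.0), StationTwin("south", 85.5)])
grid.stations[0].update(118.0)  # new reading arrives from the real station
print(grid.total_output_mw())   # 203.5
```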
I love how _2001: A Space Odyssey_ already mentions digital twins in 1968
Where?
@@magicknight8412 I assume they just mean Hal. I don't think they specifically say the words "digital twin"
Creating buzz words decades ahead of time.
Yeah, but unless I understood wrong, the digital twin takes inputs from the real thing and acts accordingly. HAL is independent, but has copies or twins @@Cman21921
Sometimes they build digital twins of huge structures like an airport. I wonder how much it costs to simulate every stone and every cable. It would still have saved the Berlin Airport a lot of money though. The digital twin would have shown that the escalators are too short, for example.
Your DT only needs to be as granular as your specific use case prescribes. For instance, you don't need to model "every stone and every cable" if you're only interested in clash detection. In fact, because it's so expensive to model everything, as you implied, you usually create different DTs for different purposes and then sort of interconnect them.
All the examples he provided are just data pipelines with data analytics and modelling. And modelling allows you to do what he calls what-if analysis, which is not unique to digital twins. Don't forget building such a huge software system requires maintenance :)
Suggestion: the wizard "Gerald Jay Sussman" and low-resource computing.
I wonder if this would be good for people who have rare medical conditions: I know someone who has a fairly common birth deformity but it is much more extreme and doctors don't often want to treat it. I doubt many people with this extreme of a condition survive into adulthood... So it'd be interesting to simulate a set of human bodies and then give it these extreme conditions in order to create what amounts to a larger sample size. Is it possible to simulate humans or biological systems at a level that this would be helpful?
How do you use a multi-agent framework to implement a DT?
Would a heater/AC paired with a thermostat be a very basic form of a digital twin? The thermostat triggers the heat/AC depending on the measured temperature. Or is there a more basic term for a yes/no or on/off system?
Or does the digital twin need to know that it's representing something?
Totally seems like it could be. But maybe a targeted central heating system would be more like a digital twin? Like if it monitored each room's temperature and routed more air to the ones that need it?
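For what it's worth, the more basic term for that yes/no behaviour is bang-bang (on/off) control, usually with hysteresis so it doesn't chatter. A minimal sketch (the setpoint and band values are invented):

```python
def thermostat_step(temp, heating, setpoint=21.0, band=0.5):
    """Bang-bang (on/off) control with hysteresis: heat turns on below
    setpoint - band, off above setpoint + band, else holds its state."""
    if temp < setpoint - band:
        return True
    if temp > setpoint + band:
        return False
    return heating  # inside the dead band: no change, avoids chatter

heating = False
for temp in [19.8, 20.3, 21.6, 21.2]:
    heating = thermostat_step(temp, heating)
    print(f"{temp:.1f} C -> heater {'ON' if heating else 'OFF'}")
```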
This sounds like a great idea for mechanical systems, and horrible for anything involving humans managing other humans. I can't imagine anyone being capable of using such a system well for that, especially not someone who would typically be in a position to have access to such a system in order to attempt to do so.
Thanks for the explanation!
How do you ensure that the digital twin is aligned (like in AI alignment)?
Checking your axioms I suppose
tldr; it's just a buzzword
So it is like giving real-world objects (or organisations, structures) a "brain"?
Then there's the legal framework: who owns the twin, and if it controls the behaviour of both the real object/system and the digital representation, which takes precedence? The rights? The regulations and APIs? The usual suspects, of course...
Ugh, my employer has to jump on every hype train, so we've developed a digital twin. Except our model is just a 3D representation you have to walk around to see the same sensor readouts that are normally shown on a SCADA/HMI 🤦
I think nobody knows what digital twins are yet. OpenUSD is only just becoming an ISO standard and there is much yet to evolve.
The way this guy talks about Digital Twins being used to drive policy is _extremely_ worrisome.
His language is consistently abstract: it's "socio-technical systems" instead of "people", and when computers are used to create a dehumanizing buffer between decision makers and the people their decisions will affect, it almost never ends well.
This sort of thing is how we end up critically understaffed in nursing and teaching positions - because the models don't account for long-term burnout, and provide a clean numeric abstraction so all management sees is that reduced staffing levels aren't projected to unacceptably impact metrics over the near term.
The W3C defined some standards under the umbrella of Web of Things that makes a lot of sense in this context.
I always teach the concept as a taxonomy of DTs. The taxonomy considers the time resolution, the time scope, etc., but maybe more importantly, it classifies DTs in terms of their connection to the real device: delayed, open loop, closed loop. Of the latter I have almost no examples (autonomous robots, of course)... Definitely not a new concept.
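To make that taxonomy concrete, here's a minimal sketch (the axes follow the comment above; the field names and example values are invented):

```python
from dataclasses import dataclass
from enum import Enum

class Connection(Enum):
    DELAYED = "delayed"          # synced in batches, after the fact
    OPEN_LOOP = "open loop"      # live data in, no control back out
    CLOSED_LOOP = "closed loop"  # live data in, actuation back out

@dataclass
class TwinSpec:
    time_resolution_s: float  # how finely time is modelled
    time_scope_s: float       # how far the model looks ahead/back
    connection: Connection

# e.g. an autonomous robot's twin, the comment's closed-loop example:
robot = TwinSpec(time_resolution_s=0.01, time_scope_s=60.0,
                 connection=Connection.CLOSED_LOOP)
print(robot.connection.value)
```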
buzzwords have a way of disappointing me the moment they are described.
This is just yet another case of the Lost Generations, having been fed nothing but reboots and recycles and buzzwords and other mind-numbing nonsense, desperately _rehashing_ (#!)...
Absolutely nothing new besides the typically dopey _buzzword_.
Yes, as the description here asks immediately... the answer is indeed: new buzzword.
Untie the mane. Feel your inner lion.
Big Tron vibes
a simulation of a simulation?
Interesting topic. It would have been a better video if he hadn't spent so much time getting defensive about whether it's new or not, and had instead just dug into what it is, what it's useful for today, and what it could be useful for in the future. He spent the first 8 minutes generalizing, and then even when getting into specifics he was still defensive.
If the Apollo 13 accident happened today, they could use a digital twin to solve the CO2 scrubber connection problem.
Nah, you're/he's pushing the concept of digital twin into a specific specialization. That's not the general concept it represents. Your/his definition is just your vision of it. I think the majority of researchers don't agree with your/his description.
So what is your description?
@@WenirR Yeah, maybe I should be more specific. I was responding to the second half of the video, where his description gets out of hand IMHO. His earlier "vague" description is one I find myself more in agreement with: it's a model/simulation/etc. of a real system in a computer, mimicking its behavior by adapting to measured aspects. I don't think that control output (direct or indirect) is even required (although it would be a reason/motivation for having a digital twin at all). And AI is definitely not required. A simple self-learning model would also qualify, depending on what detail level is required or what's being mimicked.
@@jhbonarius Some would say any self-learning model fits under the umbrella of AI, although the usage of the term "AI" is changing over time, of course.
@@MNbenMN yeah, AI is definitely a buzzword (like digital twin, smart, cloud, web 3.0, etc,etc.)
Digital twin in real life?
I wonder if it’s possible to create a digital twin of me 😂
"TH-cam_name = thahleel" There is your digital Twin. Where may I send the bill to?
Not with current tech. But who knows, maybe 60 years from now we can scan a human with such high resolution that it's possible to recreate someone.
@@VikingTeddy Wouldn't it have to be an infinitely high resolution for a 1-to-1 copy? I know that isn't possible with current sensors, but who knows what the future holds.
You don't need to model every cell in your body, because, even though it may behave independently from others, it is the combined effect of sets of cells that truly matters. For instance, suppose you want to model your heart. For most practical purposes, it may be enough to just treat it as a literal pump with specific valves at specific places.
@@dimitrismantas6767 I guess it depends on your expectations of how closely the digital twin should match the behavior of the original. We are rather chaotic systems, after all, so substitutions and approximations have a huge potential for vastly different results over time.
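On the heart-as-a-pump point above, a minimal sketch of that lumped-parameter idea, in the spirit of a two-element Windkessel model (all constants here are invented): the whole arterial tree collapses to one resistance and one compliance, no cells required.

```python
def windkessel_pressure(inflow, r=1.0, c=1.5, p0=80.0, dt=0.01):
    """Two-element Windkessel-style model: C * dP/dt = Q_in - P/R.
    The heart is just an inflow source; arteries are one R and one C."""
    p = p0
    pressures = []
    for q in inflow:
        p += (q - p / r) / c * dt  # Euler step of the pressure ODE
        pressures.append(p)
    return pressures

# Crude pulse: a burst of inflow, then none (hypothetical units).
pulse = [300.0] * 30 + [0.0] * 70
trace = windkessel_pressure(pulse)
print(round(max(trace), 1), round(min(trace), 1))
```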
OMG NEW COMPUTERPHILE YISSS
So basically it IS just a horribly overrated buzzword for "model fed with up-to-date data".
No, not really. The digital twin system we develop for customers gives you the ability to walk around a real world site with AR glasses that overlay information about any part you happen to be looking at. Overlaying the 3D representation (created from CAD diagrams, 3D models, laser scans, etc) allows you to see where the implementation deviates from the design, for example. Even without the AR headset, you can pull up part numbers, design schematics, real-time status information on any browser-capable device either on site, or from anywhere in the world.
@@-_James_- What you're describing sounds to me just like a really fancy, advanced UI for accessing data.
@@NosyMo85 Well, you could say that about anything that isn't a book of diagrams and tables. But as with most user interfaces, they serve a purpose and exist for a reason. The hardware needed for modelling and rendering hugely complex sites has only become commercially viable in recent years, and so in that regard digital twins are still an emerging technology.
@@-_James_- I never said it wasn't useful or that there was no reason for it to exist. But to me your description didn't sound like it has any simulation capabilities; it just maps available data to existing machines or consoles via 3D models or AR. That makes it an impressive piece of UI, just not what I understood would classify as a digital twin.
@@NosyMo85 That's all a digital twin is though. A digital representation of a real world asset that can display current real-world state, be used for simulations, be the basis for future changes/upgrades, or provide the user with access to schematics and other related documentation.
I am doing academic research on applied digital twins for security 🔥🎉😊
Is there existing literature on this already?