"all the ai researchers are offline and the energy bills are 20% higher, across entire states, maybe there will be a huge reaction to that about what the fuck is going on" 🤣
One of the analogy's major weaknesses is the fact that viral pandemics leave pretty unambiguous physical traces to allow for high-confidence judgements about their existence, origins and the effects attributable to them. In the case of a first-instance AGI, on the other hand, there simply are no artifacts of similar evidentiary status/attributability. Even worse, all plausible candidates will be competing against - even preceeded by - a whole bunch of conspiracy theories about AGI's supposed existence, origins and real-world effects. This type of inescapable "epistemic helplessness" trap doesn't really have a useful analogue in the historical COVID scenario (it may be more akin to how the pandemic response could have played out in the pre-germ theory era).
In the West, there was considerable distrust of reports from China in the early weeks of its pandemic public health efforts. It wasn't until it hit Italy in February that some of us were able roughly to calculate what we were facing. Had China been transparent about its situation and had it built up trust in preceding years, lives would have been saved and vaccine efforts would have commenced sooner. This was more a case of epistemic caution than epistemic helplessness. As to AI, so far the leading companies are fairly transparent in revealing the capabilities of their systems. They're driven to this by competitive pressure--both for market share and for talent recruitment. For now, I think we're okay on this front. I also think it's an unstable equilibrium. A major sign of this instability is that ever increasing amounts of AI research is not being published, it's becoming proprietary. The darkness is spreading. The other problem is that there's actually no bright line between AGI and ASI. And even the "owner" of an ASI probably will not realize the dimensions of his new pet.
What camera/lens was used to film this? The talk is intriguing, I just find myself constantly appreciating how beautiful it is to look at. The shallow depth of field and overall softness of the footage is wonderful.
Bear in mind that I am not a US citizen! I believe that the current political "spectrum" is not really operating at the level of the problems. When "AI happens", we should not try to see in it "what we have always said". Probably the biggest challenge will be what the implications of what happens then are. A factual and as sober as possible attitude will then probably be a good idea.
I kinda like the idea of a digital lockdown. Where everyone has to go offline as a lockdown to keep AI from spreading. I bet that digital lockdown would be harder on people than a physical lockdown.
Spent the first minute wondering why they posted a 4 year old video.
"all the ai researchers are offline and the energy bills are 20% higher, across entire states, maybe there will be a huge reaction to that about what the fuck is going on" 🤣
One of the analogy's major weaknesses is that viral pandemics leave fairly unambiguous physical traces, allowing high-confidence judgements about their existence, origins, and attributable effects. In the case of a first-instance AGI, on the other hand, there simply are no artifacts of similar evidentiary status or attributability. Even worse, all plausible candidates will be competing against - even preceded by - a whole bunch of conspiracy theories about AGI's supposed existence, origins, and real-world effects. This type of inescapable "epistemic helplessness" trap doesn't really have a useful analogue in the historical COVID scenario (it may be more akin to how the pandemic response could have played out in the pre-germ-theory era).
In the West, there was considerable distrust of reports from China in the early weeks of its pandemic public health efforts. It wasn't until the virus hit Italy in February that some of us were roughly able to calculate what we were facing. Had China been transparent about its situation and built up trust in preceding years, lives would have been saved and vaccine efforts would have commenced sooner. This was more a case of epistemic caution than epistemic helplessness.
As to AI, so far the leading companies are fairly transparent in revealing the capabilities of their systems. They're driven to this by competitive pressure--both for market share and for talent recruitment. For now, I think we're okay on this front. I also think it's an unstable equilibrium. A major sign of this instability is that an ever-increasing amount of AI research is not being published; it's becoming proprietary. The darkness is spreading. The other problem is that there's actually no bright line between AGI and ASI. And even the "owner" of an ASI probably will not realize the dimensions of his new pet.
Very cool conversation
hell yeah brother
Lots of great points from the crowd makes for a great conversation.
2:50 People talk about car crashes and FSD not working, so people aren't excited about it.
Dylan Patel with the cameo in the crowd haha
How do I get to go to the next Manifold Markets event like this?
What camera/lens was used to film this? The talk is intriguing, I just find myself constantly appreciating how beautiful it is to look at. The shallow depth of field and overall softness of the footage is wonderful.
Any current-day Canon, Sony, or Nikon camera.
You can get this quality with a relatively cheap (for a video camera) Sony a6600, for example (~$1000).
you are welcome 🤡
damn, these people are smart. Never seen a talk with such good questions and discussion
Bear in mind that I am not a US citizen! I believe the current political "spectrum" is not really operating at the level of these problems. When "AI happens", we should not try to see in it "what we have always said". Probably the biggest challenge will be working out the implications of what happens then. A factual and as-sober-as-possible attitude will probably be a good idea.
gigachadesh patel
I like this style of talk
Brilliant
I kinda like the idea of a digital lockdown, where everyone has to go offline to keep AI from spreading. I bet a digital lockdown would be harder on people than a physical lockdown.
Hey, we might actually have to talk to other people! :D
@@squamish4244 yeah right!
before march 2026
What will July 2024 for Nanotechnology look like?
O3 is Jan 2020
That moment will happen by 2030.
When AI starts to become cheap enough to be considered free.
21:06 Patel looks really confused when confronted with facts... it clearly does not fit his right-wing free-market world model lol
his hair is a different color here wtf
It's the combination of the sun plus the warm-colour-temperature lights that are pointed towards the top/back of his head.
The weights don't need to leak when Meta's giving shit away; all hail open source. This is fine.