After finishing recording I realised I hadn't asked: have you read this book? If so, please do comment below as I'm very curious to hear what other people thought of this book.
Cheers,
Oli
Wow, you are right. I honestly didn't think of it that way. I haven't read the book; I only found out about it just now, searched for a summary, and found myself here. I am glad I got to hear your opinion on the book and on its idea of what we should focus on. You just made me want to study Philosophy more than I already wanted to.
Hi Ru, thanks! Really glad you enjoyed the review and, yeah, there is so much interesting philosophy to learn. Good luck with your studies.
If Philosophy is the love of wisdom, then, much like the urban myth of all searches on Wikipedia ending at philosophy, it is the king of subjects. Maths should have philosophy envy, just as physics has maths envy and chemistry has physics envy. And biology... well, one might wisely say that without life the rest is... just... philosophy!
I loved this book. I know for a fact I am going to love your channel. Can you also review Thinking, Fast and Slow by Daniel Kahneman? It is a brilliant book.
But there are lots of fundamental flaws in the idea of Effective Altruism and in the solutions the book presents to fix our future. From my reading over the years, I've come to the conclusion that if we don't change abusive capitalism, all other solutions are like throwing dirt in the wound instead of treating it.
Hi thanks! And, yes, I've read Thinking Fast and Slow, which is a great book. Despite the rather dull terminology, I find the System 1 / System 2 analogy a useful way to think about how we think. So, I will definitely do a review of Kahneman's book someday 'soon', thanks for the suggestion.
I too think that we need a radical overhaul of the way that our economies work, which, in time, is another area that I hope to discuss on this channel too. And I think the advent of increasingly advanced AI is going to accelerate the need for change, so the question of what comes next is an increasingly urgent issue for us all to be discussing.
@Go-Meta Yes. Kahneman is not a writer lol. But the ideas he presents are a useful way to think, especially in decision making.
So what does MacAskill say about the issue of "discounting the future"?
I mean, long-termism seems a perfectly sensible principle, but it obviously needs to be tempered by the recognition that our certainty about the effects of our actions diminishes the further forward we look. There is some horizon beyond which it's more or less impossible to hold any reasonable belief as to whether the consequences of an action are a net positive or negative.
I could decide to try to have as many children as possible on the assumption that this maximizes the probability that one of my descendants is a great benefactor to humanity. But it also maximizes the chance that one of my descendants is a mass murderer. And given that I have no reason for assuming the first rather than the second, I can't claim a utilitarian justification for my campaign of mass impregnation.
The child drowning in front of me has no greater moral worth than the child starving across the world. But the probability that my action will definitively save him is higher, and my knowledge of this fact is far more sound. The expected utility of the act is therefore higher.
Now of course this reasoning is used all the time in bad faith (I won't give to charity because I can't trust the charity not to steal the money) but at these time scales such caveats legitimately come into play.
Yeah, I thought about covering this point in the video, especially because at one point in the book MacAskill bemoans that there aren't enough people doing useful longer-term predictions of what is going to happen! To me the obvious link to make here is with the book Superforecasting, in which Tetlock and Gardner say that beyond about 10 years most predictions are no better than a random guess. So the idea that we could plausibly do the calculations out to 10,000 or millions of years is clearly nonsense (and, yes, I really should have made this point!). The only reason we can have vaguely plausible long-term climate modelling is that those processes at scale are all based on physics, not on human psychology and creativity.
As for the pond analogy, here too I was trying to present it as "this is what Singer and the effective altruists say" rather than as an endorsement of those positions, and without having the time to present my view on the ways that this kind of thinking, and indeed utilitarianism in general, fails to capture what is morally important to many people.
Indeed, I plan to do a separate video about utilitarianism someday, in which I'll note the important irony that Singer ended up paying for the care of his elderly mother, in obvious contradiction to his previous stance. When questioned about this he apparently said, "Perhaps it is more difficult than I thought before, because it is different when it's your mother." Yeah, exactly!
(see last sentence of: www.independent.co.uk/arts-entertainment/ethics-man-1127735.html)
And, just to add: I don't recall MacAskill actually talking about the need to discount the future when doing the calculation, but he does say things like "A 10% chance of humans surviving 500 million years gives us a life expectancy of 50 million years" (an edited quote), so that at least acknowledges the need to account for our uncertainty. But mostly the book talks in very hand-wavy theoretical ways rather than looking at specific examples of how you would actually do the calculations for this kind of thinking. And I suspect that one reason for this is that as soon as you tried an actual calculation, the uncertainty would so obviously swamp the usefulness of the exercise. So, in the end, I think it could be argued that this whole approach is only pretending to be calculative.
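Just to make the arithmetic behind that (edited) quote concrete: the expected-value calculation is a single multiplication, and a quick sketch shows why the exercise is so fragile. The function name and all figures below are illustrative, not from the book.

```python
# Sketch of the expected-value arithmetic behind MacAskill's (edited) quote.
# The function name and all numbers are illustrative, not from the book.

def expected_lifespan(p_survival: float, years_if_survived: float) -> float:
    """Expected value: probability of survival times the lifespan in that case."""
    return p_survival * years_if_survived

# A 10% chance of lasting 500 million years => 50 million expected years.
print(f"{expected_lifespan(0.10, 500_000_000):,.0f} years")

# The trouble: the answer scales linearly with a probability nobody can
# actually estimate, so small disagreements swamp the conclusion entirely.
for p in (0.01, 0.10, 0.50):
    print(f"p={p}: {expected_lifespan(p, 500_000_000):,.0f} years")
```

The point of the sketch is how little work the maths does: every ounce of the conclusion sits in the choice of `p_survival`, which is exactly the number we have no principled way to estimate.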
@Go-Meta Yeah. I mean, I'm not against the principle of utilitarianism as a way of giving justification to the moral intuitions we have. It's a good insight. Singer is a good corrective to our tendency to just fix the problems in front of our noses. We SHOULD give a lot more to the people we can't see. And more importantly, we should make that a habit. Utilitarianism can justify and explain that.
But obviously we have to apply a discount for uncertainty. Otherwise you can justify any activity you like simply by concocting a scenario where it leads to some far future positive outcome.
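One hedged way to picture that discount is to multiply any predicted utility by a confidence factor that decays with the forecast horizon. The exponential form and the ~10-year scale below are my own illustrative choices (loosely inspired by Tetlock's finding that forecasts degrade beyond about a decade), not anything proposed in the book.

```python
import math

def discounted_utility(utility: float, years_ahead: float,
                       horizon_years: float = 10.0) -> float:
    """Discount a predicted utility by how far ahead it lies.

    The exponential decay and the ~10-year horizon are illustrative
    assumptions, not claims from MacAskill or Tetlock.
    """
    return utility * math.exp(-years_ahead / horizon_years)

# A huge far-future payoff contributes almost nothing once discounted,
# so it can no longer justify arbitrary present-day behaviour.
print(discounted_utility(1_000_000, years_ahead=5))    # most value survives
print(discounted_utility(1_000_000, years_ahead=100))  # ~45: effectively gone
```

Any concocted far-future scenario then has to clear a confidence bar that grows exponentially with distance, which is exactly the check that blocks the self-serving claims.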
(In the case of climate, I think the real issue is that what's being talked about ISN'T that "long term". It's largely issues that are already hurting people and likely to hurt people a lot more within the next 100 years. That's "long term" thinking by the standards of corporations who are worried about quarterly profits, or their managers who might hold the job for 5-10 years. But it's very small compared to the predictions long-termism wants to appeal to.)
I guess my main point though is that you don't have to give up what's good about utilitarianism or even effective altruism / long-term thinking to avoid people making gratuitous self-serving claims using it. Most of the self-evidently bad claims can be rejected just on the grounds that they're based on predictions we don't have good reasons for. Even if the underlying framework for reasoning is sound.
The argument that "AI makes things very different" is kind of meh. Sure. AI is a big change. So was the printing press. And the steam engine. And electricity. Every age has crucial deciders of the future. And the idea that we can sufficiently predict how AI development is going to roll out, thousands of years in the future, to steer it now, is laughable. We don't even have much idea where it will be by the end of this year. Misaligned AI might kill a billion people before 2024 if we're not careful.