Morality is Opinion (and That's Not a Problem)
On Awakening, Sam Harris, and the Fallacies of Moral Realism and Consequentialism
There is nothing either good or bad,
but thinking makes it so.
-Hamlet
“Once you see the Ground, once you relax in the openness of being present,
what would be the motivation for causing harm?”
-James Low
This essay is a much-belated (20 months of gestating) digest and response to the Making Sense podcast, episode #305: “Moral Knowledge.” The Making Sense podcast is produced and presented by Sam Harris1 (his Substack publication is found under his name as well). His guest for the episode was neuroscientist and author Erik Hoel, who also publishes on Substack. The topic of conversation was the moral philosophy of consequentialism and the Effective Altruism movement that has grown out of it. Both are excellent authors and I very much enjoy reading their work. It was Harris who introduced me to Hoel’s work.

Most academic philosophy strikes me as a complete waste of time and energy; mere cogitation that serves no end other than attempting to justify its own activity. Of course those of us outside the ivory tower can and do benefit from the “takeaways” of academic philosophy. For example, Popper’s contributions to the philosophy of science. But those takeaways comprise a small fraction of the ponderous bulk of the enterprise. Normally I wouldn’t bother to write a rebuttal to an academic philosopher. But Harris’s moral realism strikes me as deeply at odds with his advocacy of the examined life through mindfulness and spirituality-without-religion. Since the latter interests me far more than the former, a critique seems appropriate.
BACKGROUND: “UNREALITY” of CONCEPTS
A key aspect of spiritual awakening is the thorough understanding of the difference between concepts and nonconceptual experience. The word “nonconceptual” is obviously a concept. But that concept “nonconceptual” is an indicator - a pointer - trying to hint at something more fundamental than mere concepts.
Many examples have been historically offered to illustrate this point. Consider the difference between the word/concept “water” and the liquid you drink every day in order to remain alive. You can swim in water, but the word “water” can’t make you wet.
Korzybski offered this famous contrast: “The map is not the territory.”
“Chocolate” is a concept. Experiencing the taste of chocolate is not. Reality itself is not a concept, although we can apply a conceptual label to it for easy reference when we employ language. Doing exactly that, Carl Sagan said “The Cosmos is all that is, or ever was, or ever will be.” The label “The Cosmos” is a concept. But The Cosmos itself is not a concept. Experience is not a concept. What you fundamentally are is not a concept.
When one becomes well-acquainted with the “nondual2” aspect of direct experience one can realize all conceptualizations “fall short” of apprehending reality or direct experience. One can realize not only that concepts do fall short, but that they must fall short. They are representations of artificial atomizations of what is a unified reality.
Further, this “nondual” nature of reality can lead one to realize there is no such thing as “subjective experience;” there is only experience itself. One can realize there is no such thing as “objective reality;” there is only reality itself. The only reality that can be known is one’s direct experience; the two are inseparable and cannot be pulled apart. This parallels the way relativity and quantum mechanics (as understood in some quarters) have imbued science with a proper sense that “observer” and “observed” cannot be fundamentally disentangled.
In other words, there is no such thing as a “God’s eye view” of anything; no privileged frame of objective reference. We literally are The Cosmos knowing itself, as Sagan famously said.
Within that “knowing” there are the facts of science in particular, and there are concepts in general. But all concepts are forever abstractions; mere “clouds” in the totality of the “sky” that is the instantiated reality we experience directly. Concepts make up a small (but important) portion of our experience and understanding. To illustrate, I might say to someone: “Do you know my friend Tom? He’s hilarious! I love that guy.” Communicating facts about Tom is not intended in such comments. Concepts are merely employed in the service of communicating something that is far more fundamental and far more important than the concepts themselves.
In order to proceed here I must “spill the beans” as to the potential dangers of spiritual awakening3. When it’s fully manifested, one realizes ALL concepts are abstract and that they are not fundamental to reality/experience — at all! Intelligent beings do develop concepts and employ them; but always to the aims of increasing the effectiveness of their manipulation of the circumstances in which they find themselves. Once one transcends the conceptual, thinking mind (a culmination in the awakening experience), one is free to employ concepts - or not! - as however seems best to the individual, moment by moment. At that point ALL concepts become “optional” including the ones that are still “vital” to those who have not yet crossed that threshold. The concepts on-the-chopping-block include: good, bad, evil, right, wrong, justice, truth, identity, and many others.
Embarking on this path is not for the faint of heart!
BACKGROUND: MORAL REALISM
It strikes me as very strange to find an advocate of mindfulness and awakening to be a moral realist. “Moral Truth” is just a concept, and an erroneous one, just like “objective reality.” There is no “truth” in an ontological sense. The only thing “ontologically true” is The Cosmos itself — whatever it is, as it is. And we can never fully know or say what it is.
Our direct experience is our “birthright” in the reality that is The Cosmos. “Should” or “ought” are mere concepts. Nothing “should” be other than exactly how it already is. How could anything be “other” - right now - than it already is, right now? Of course we can attempt to steer experience in certain directions going forward. But “right now” things simply are what they are and they can be no other way.
If someone advocates for the reality of moral truth, then the burden of proof lies on the one making the claim4. Just as if a theist advocates for the ontological existence of a god, the theist must present the evidence to support the claim. The rest of us must begin by adopting the null hypothesis. Best not to presume too much.
The only moral truths I’ve ever heard of were things apparently dictated by divinity. In particular, I only know of the code that was supposedly underwritten by Yahweh5. But once it became clear to me that Yahweh was actually a human invention - a fiction like Zeus - the foundation of that moral “truth” evaporated. I have never come across a convincing case for moral truth since. Again, in either case — the question of the ontological reality of a god or of moral truth — one should not accept such a claim until sufficient evidence is presented by the claim’s advocate.
However, none of the above precludes us from making moral judgments. If circumstances are currently not how we’d like them to be, then we are free to take action to try to change things moving forward. As moral agents we have, as Harris rightly says, “a navigation problem” to solve. We need to figure out how to get from ‘here’ to ‘that peak over there’ on “the moral landscape.”
If circumstances are to your liking, then there is no reason to consider “moral action” to impact those circumstances. However, if circumstances are not to your liking, then you must decide what “moral action” to carry out. There is no need for thoughts of “should” or “justice.” Those are mere concepts, and rarely helpful ones. In the end it always comes down to a judgment call. “How good is X?” “How bad is Y?”
“Good,” “bad,” and “evil” are only concepts. And the act of judging something in terms of them is nothing more than conceptual thinking activity. Such judgments are all abstractions and exist only in the cramped confines of the conceptual, thinking mind. The real world is not abstract, it simply is what it already is. Inherently it’s neither good nor bad — unless you think about it.
Of course we do think about it and form judgments. There’s nothing wrong with that. But a judgment is an opinion, not a fact. Consequently, all morality boils down to opinion. And all conceptual descriptions of reality (that are not mundane facts) are also opinion.
“Opinion” is not pejorative. Opinions can be highly reasonable/rational. But they can also be highly unreasonable (as we will see most academic moral philosophy is6) or highly irrational. Opinions can have a very high probability of harmonizing with “life in the world,” or a very low probability of harmonizing with “life in the world,” or anything in between.
Our life of conceptual thinking has only three categories: (ostensible) facts (which can be relatively true or false), opinions (which are neither true, nor false, nor are they equally useful/meaningful), and poetry (which, paraphrasing Alan Watts, is “using words to say what can’t be said”).
At the outset of the conversation Harris says we are making claims about “moral truth.” In response, I say the only reasonable claim is:
“There is no such thing as ‘moral truth,’ there is only moral opinion.”
PODCAST: SEMANTICS, and ORIGIN of EA
They open by clarifying terms - Effective Altruism, consequentialism, and utilitarianism - at 22 minutes in. Hoel had previously offered critiques of the Effective Altruism (EA) movement (initially at their public request)7. In this conversation with Harris, his critique is that the EA movement had its roots in academic moral philosophy, and that people motivated by those roots might take/have taken things too far. Within the EA movement he says there were people taking academic consequentialist and utilitarian philosophies “too seriously” and “…trying to bring it too much into the real world.” [0:24:30]. He then offers definitions for the moral philosophical terms in question.
Broad Consequentialism [0:24:40]: “When your theory of morality is based around the consequences of actions.”
Strict Consequentialism [0:24:50]: Morality is reducible… to only the consequences of actions.
Utilitarianism [0:25:00]: “A specific form of consequentialism… it’s saying the consequences that impact the… happiness or pleasure of individuals is sort of all that matters for morality.”
Hoel offered that EA had many admirable qualities, but that its origins are in moral thought experiments around how to “maximize” the properties of these academic philosophies. He said those elements of EA should be the ones we take “least seriously.” [0:25:50]
Harris responds to the definitions. He agrees with the thought-experiment origin of EA. He then uses Peter Singer’s “girl drowning in the pond8” thought experiment as an example. [0:26:30]
He then goes on to firm up his definition of consequentialism: “… is the claim that moral truth - which is to say the questions of right and wrong and good and evil - is totally reducible to talk about consequences” actual or potential. [0:29:45]
I completely disagree with the idea of “strict consequentialism.” By Harris’ use of the words “totally reducible” it sounds like that is the type of consequentialism he is interested in. Certainly some of our moral conversations can center around discussing consequences. But this need not be the majority of our moral consideration, much less the totality, as I hope will be made clear in what follows.
CRITIQUES: MORAL “TRUTH,” and CONSEQUENTIALISM
I’m not sure why people think there is a need for moral realism in the deliberations of right/wrong and good/evil. Even if there were a “gold standard” bedrock truth in these matters (which, I argue, there is not), our evaluation and judgment of specific actions/consequences in light of those gold standards would still be provincial. Our judgments are always mere opinions, irrespective of being based on “facts.” So we can jettison the notion of “moral truth” because it does not add to nor detract from the moral claims being made.
For instance I can argue that rape is bad (and that is my opinion) any number of different ways. I don’t need a god or “moral truth” to support my arguments in the least. If someone disagrees with my arguments/opinion, then that is their opinion and their business. We can both put our arguments forward in the marketplace of ideas and see which garners more support. The concept of “moral truth” is completely superfluous to the effort. Consequently, I advocate for simplicity; eliminate the concept of moral truth from the conversation altogether.
Regarding utilitarianism: if “utility” is employed in terms of happiness or pleasure, it is guaranteed to lead to undesirable conclusions. It’s completely childish to think morality should try to maximize happiness/pleasure. Life consists in the mutual contrast of pleasure and pain, happiness and sorrow, joy and grief, etc. You cannot “maximize” the good for yourself without also creating great harm elsewhere. Only a fool would attempt to do so. Even if one “succeeded” in that endeavor, it would amount to nothing more than mere hedonism, which, for millennia, countless sages have assured us will never lead to a fulfilling life. One can conduct personal experiments to verify this viewpoint, if desired.
Some people take “utility” to be something less “sugary” than mere happiness. They might take it to be “well-being.” That’s a more reasonable position. But it completely ignores the obvious and unavoidable interconnectedness of all living systems. You can’t increase the well-being of one creature without impacting the well-being of every other creature living in the same ecosystem. It’s impossible to predict the results of even a small number of such interactions, and there would effectively be an infinite number of them in even a single ecosystem. (Good luck finding just one ecosystem in isolation, by the way.)
Therefore, it’s fundamentally impossible to “maximize well-being” in general. You can only attempt to maximize it for an individual or a group; but doing so is virtually guaranteed to have a negative impact on the well-being of the remainder of lifeforms in the ecosystem. Utilitarianism, if put into practice, necessarily must result in a deeply flawed system of morality.
The perspectives of academic moral relativism are correct in as much as they declare there are no universal moral truths (i.e. they say judgments of good/evil, right/wrong are necessarily provincial). But those schools of thought still fall far short of reasonable moral conclusions. In essence they claim we have no grounds to criticize the moral beliefs of others. This is absurd.
If our moral beliefs are our opinions (and they are) then we can easily form the opinion that the moral beliefs of others (which are nothing more than their opinions) are wrong. In other words, it is our opinion that their opinion isn’t as good as our opinion. In fact, that secondary opinion (that my opinion is better than theirs) should basically be a foregone conclusion; if I was of the opinion that their opinion was better than mine, I’d have changed my opinion to bring it more in alignment with theirs already.
Again, just because we are dealing with opinions, that does not mean all opinions are equally reasonable, useful, helpful, etc.
CRITIQUE: THOUGHT EXPERIMENTS
Thought experiments can potentially be helpful9. But far, far too often they are too decontextualized and oversimplified to offer any meaningful insight. They should always be employed and considered with healthy skepticism, especially on matters of moral concern. Harris mentioned Peter Singer’s “girl drowning in a shallow pond” thought experiment (see Footnote 8 above). This is an impoverished thought experiment that is usually concluded with the implication: “if you don’t give $500 to charity X (that saves lives halfway around the world) you are the equivalent of that moral monster that refused to save a drowning girl because he didn’t want to ruin his $500 shoes.”
Such a conclusion is a blatant false analogy; the two situations are nothing like each other. In the first situation virtually no one would refuse to save a drowning girl no matter what kind of shoes would get ruined. The billionaire would do it, a working stiff would do it, a homeless person would do it. Someone rich enough to spend $500 on a pair of shoes already has the money to buy replacement shoes. So if that person refused to save the girl (which they wouldn’t), they could be called a monster because the cost of the shoes is immaterial to them. Such a person is probably virtually impossible to find. Yet this is the person the thought experiment is implying you are like, if you don’t give to their pet charity? Forgive me if I balk.
We can tweak the first situation to change the intuition: what if the only person who could save the girl was homeless, it was winter, and if they save the girl they lose their shoes completely. In other words, if they save the girl, they will be barefoot in winter for an unknown period. They could get frostbite. They could lose their feet. If such a person hesitated in saving her, could we blame them? What if they refused altogether? We may not think that “right,” but would we call them a monster?
The second situation (donating to a charity saving lives halfway across the world) is only analogous to the pond situation if you don’t actually think about it. Yes, people are dying from preventable diseases all over the world. Yes, there are charities working to prevent that. Yes, you can choose to support those charities. Or you can choose to not support them in favor of supporting something else. It’s up to you to form your own opinion about which charities are “most important” and deserving of your support.
EA has specifically focused on those charities with the goal of trying to save the maximum number of human lives possible. Of course no one would say it’s “bad” to try to save lives. Most people (including me) would probably say that saving lives in general is a good thing, and well worth doing. But is it a given that trying to save the maximum number of human lives possible is “good?”
Of course EA says it is good and, therefore, some in EA might imply that if you don’t give to these charities you’re like the monster of the shallow pond thought experiment. But here I push back. My antithesis: saving the maximum number of human lives possible is not only not “good,” it’s outright foolish, and potentially dangerous. Why? Because of the words “maximum” and “possible.”
Harris occasionally references Nick Bostrom’s “Paperclip Maximizer10” as a cautionary tale when considering the risk surrounding the race to create general artificial intelligence. But the devastating myopia of the Paperclip Maximizer is exactly the same myopia of a goal to save the maximum number of human lives possible. The myopia in the latter case just so happens to be in service to something we’re far more sympathetic towards than paperclips.
What’s the myopia? Overlooking the fact that there’s much more to life than simply not-being-dead!
Everyone born will die. So it’s impossible to literally “save a life.” One can only succeed in temporarily postponing death. That’s not to belittle the effort to save lives. But such efforts should be carried out full well knowing the inevitable. Death is inevitable and we would do well to remember this. I’d argue that both death and its inevitability are actually good things, but that’s a completely different discussion. From a purely analytical viewpoint, the only reason to save a life is the (hopefully) high probability that there will be a net positive experience in that life before it does eventually die.
But there is no way to know what the consequences will be of saving one life; it’s utterly impossible. This is why I reject consequentialism as a formal guide for altruism (and morality in general). If there is a life to be saved, there is only one “consequence” that is actually known in advance: that the person is still going to die at some point regardless of any action you take or fail to take.
If you save them and they live decades, are those good years? How would we know? Is the person a good person? Or a bad person? How many people do they interact with over the course of their remaining life? Are those good interactions? Or do they harm anyone? Could some of their actions bring about a great evil actor in the future? Or a great, benevolent leader? No one can ever know! And those outcomes are the real consequences of “saving a life.” Yet the EA movement pulls back from looking that reality in the face and, instead, regards merely the “life saved” statistic (which is just delayed death) as the only meaningful consequence to consider. Its adherents wash their hands of all the rest; they wash their hands of all the consequences that really matter. A phrase that will unfortunately be repeated several times in this essay: this is the opposite of wisdom. It is a red flag indicating these are crusaders; a group that eschews reason for radicalism.
When someone’s life is saved in a developing country, that person needs food, shelter, water, energy, sanitation, health services, education, economic opportunities and so on. Are those infrastructures flourishing in order to take on thousands, or tens of thousands of saved lives? Presumably not, or their citizens wouldn’t be dying from preventable diseases in the first place.
The unfortunate dying are often a symptom of deep, systemic governmental, economic, and ecological dysfunction. I would never advocate that people should not earnestly want to render medical aid. Of course they should! But doing so amounts to treating just one symptom of a much bigger disease. There is a lot more work that needs to be done, and that work is far more important than what EA seems to focus on exclusively.
Bad international economic incentives and corruption within governments are often at the root of these kinds of dysfunction; and yet these are the very forces NGOs must work with in order to carry out their missions of aid. Foreign NGOs can’t reform governments. NGOs can’t fight banking interests. They can’t influence international trade. Further, the citizens-in-need of a developing country are likely in no position to reform corrupt governments, especially if (factions of) the military support the government. This unhappy and regrettable arrangement has been “nominal” all over the developing world since the end of the First World War. The end of imperialism in the 20th Century created a power vacuum that was particularly vulnerable to despots, demagogues, and the generally power-hungry.
Of course it’s a good thing to try to provide medical aid to those dying of preventable diseases. But that is far from the most fundamental reform needed in this troubled world. Good NGOs fight the deeper battle as much as they can. But it really is impossible for them to effect meaningful change where governmental corruption and/or bad economic incentives exist. Only grassroots efforts are likely to succeed in bringing about reform. Those efforts are fundamentally much more important than anti-malaria medical aid. A partial list:
Reforming corrupt governments
Realigning bad international economic incentives
Restoring critical ecosystem function (upon which our civilizations and all life on Earth depends)
Providing emergency relief aid (natural disasters, wars, etc.)
Evacuation, interim care, and resettling/placement of refugees
Evaluating NGOs and aid efforts is a tremendously messy and difficult thing to do. Again, of course it’s good to support medical aid! But that is only one area of need. We will be more effective as altruists if we pay attention to the entire suite of needs.
RETURN to PODCAST
Hoel discusses the limitations on altruism imposed by the myopia of the EA movement:
“When someone makes a judgment about ‘effectiveness’ they have to be choosing something to maximize or prioritize.” [0:32:30] “…raw number of lives saved…” [0:32:55]
He fleshes out how maximizing altruism through financial arbitrage can lead to a position that might state “…why are you donating any money within the United States at all?” [0:34:35] Of course every country has need of altruism, including the USA.
At 37 minutes in Harris says “Consequentialism is a theory of moral truth… a claim about… what it means to say… that something is really good or really bad…” He then goes on to say “But it isn’t a decision procedure.”
A very interesting two-pronged position; and I disagree with both parts. Firstly, as already outlined above, there is no moral truth. “Good” or “bad” are just our judgments about actions/consequences. And, while it’s possible to use consequentialist thinking while generating our moral rules-of-thumb, we must remember that the full scope of the consequences of any of our choices can never be known. Secondly, for Harris to take the stance that consequentialism is not a decision-making procedure strikes me as very strange. The only time I intentionally use consequentialist thinking is in making moral decisions - usually ones that are “borderline” or edge cases.
An example: Harris makes it a point to never lie. Ever. This is very admirable! But by his own admission it gets him into awkward conversations at times. I don’t lie to friends and family. But there are times when conversing with a complete stranger that I get “painted into a corner” conversationally. The stranger will bring up some topic that I would rather not discuss (usually religion or politics) and asks me a question that does not allow for a fully-honest and graceful exit. “Do you know Jesus Christ as your lord and savior?” “Can you believe those liberals?!” “My place is the one down the road with the ‘fuck Biden’ flag…”
Perhaps Harris will never lie to such people. However, I use consequentialist thinking and “do the math” to make a decision. What are the chances this person who doesn’t know me actually wants to know my opposing opinion? What are the chances this person will react positively if I answer with a position that is contrary to theirs? How much time/energy will it take to resolve this conversation that was never sought out in the first place and will never ultimately bear any relational fruit?
These situations virtually always happen when I’m in public running errands. That activity happens on a fairly compressed time schedule. Further, I will likely never see those people again. In other words these isolated conversations are not building a relationship. I “do the math” and often find that a white lie will defuse the conversation politely and allow me to more easily return to my time-sensitive activity.
“Sorry, I’m not religious.” (I’m actually anti-religion - although non-militant - especially Abrahamic ones). “Yeah, well I think basically all politics are pretty messed up.” (I’m a staunch centrist that very much resents hyperpartisanship, the far right, the far left, mainstream and social media, and the two-party system). “Oh… THAT place…” (Wishing there was a “Fuck Trump” flag, but knowing either flag is a perfect demonstration of the beyond-regrettable state of our shared political discourse).
Contrary to Harris’ point, the only time I tend to use consequentialist thinking is when making an intentional moral “edge case” decision in real-time. The rest of the time I simply don’t think about consequences. I rarely even think; I just act. Others are always free to judge whether my actions be good or bad.
Harris continues: “Any claim that consequentialism is bad… is ultimately a claim about unwanted consequences.” [0:39:35]
This is an interesting conceptual sleight-of-hand; the implication being that any criticism of consequentialism is itself consequentialist. In other words, “consequentialism is bad because following it leads to bad consequences.” Arguably true, but that misses a much deeper point.
I don’t argue that consequentialism is “bad” because of its consequences (although utilitarianism obviously is); I argue consequentialism (as Harris defined it above) is false in its premises and necessarily, grossly myopic in scope. So I denounce it as a non-starter. There are no consequences to consider in this deliberation. If you knew a strategy was idiotic from the get-go, would you insist on following through with it anyway just to see what happened? To do so would be, once again, the opposite of wisdom.
It’s inherently impossible to predict the full consequences of any given action. Further, there is no absolute moral truth from which to build an “objective” system of morality. There are only guesses and pre-existing rules-of-thumb. And these are already shaping our moral intuitions. “Broad” consequentialism can and should be a part of that process. But it’s only a part, and far from the most important part.
Consequentialism is myopic in manifold ways. It discounts moral factors of deep importance (intuition, intentions, and attitudes of “being” in the world), yet it emphasizes the “need” to consider ramified consequences that can never be predicted or known. To fully embrace it is a double-dose of foolishness.
Harris continues: “[morality]… really is, in the end, getting clearer and clearer about what all the consequences are, and what the possible consequences are, of any given rule or action.” [0:42:00]
There is another framing problem with formal consequentialist thinking that is at odds with a perspective of “awakening/realization” regarding here-and-now. Consequentialism is very focused on outcomes/results in the future. But from that perspective of awakening, the future does not exist. Nor does the past. The only world there is is the world as-it-is right now. Thoughts about the future are just that: mere thoughts. Of course we can work towards a better tomorrow. Why wouldn’t we? But today is all there is, and all there ever will be. As the old saying goes: “tomorrow never comes.” Consequentialism often seems to actively discount the here-and-now in favor of the non-existent, hypothetical future. It’s a moral perspective focused on “becoming” in the future, whereas a superior moral perspective is focused on being itself, in the world as-it-is right now.
Elsewhere Harris has argued that “intentions matter.” I completely agree. But an intention isn’t a consequence. Further, it could be argued that bad intentions are “bad” because they lead to “bad actions.” But framing things in terms of cause-and-effect dilutes the immediacy of being itself in the here-and-now. More on this later.
It is impossible to predict all the consequences of a single instance of action, full stop. Therefore it is utterly hopeless to even begin to conceive what the consequences of a rule will be. So there’s no need to! We don’t need to claim to have uncovered a “moral truth” by collectively deciding rape is bad. We just intuit that a world without rape would be better than a world with it. Such thinking does not have to entertain the “consequences” (the results) of violent acts at all. Nor the consequences of the absence of violent acts.
The immediacy of the violent act itself is sufficient for judging it as bad. We don’t need to consider the consequences of the act in order to judge it bad (although we certainly can). So consequentialism omits the most important consideration: the immediate nature of the act itself. Therefore, consequentialism is a pale imitation of the visceral, intuitive judgment automatically made by a compassionate, empathic individual. Let the pro-rape people try to make their case in the marketplace of ideas. They will fail miserably.
Or consider a less extreme case. In fundamentalist Islamic societies homosexuals may be killed on the grounds that religious tradition says that is what should be done. Homosexuality is seen as evil, and in some people’s minds death is justified in the fight against evil. Thankfully much of the rest of the world does not regard homosexuality as evil. It’s not hard to find societies that regard it as benign or, even further, something to be celebrated. Many societies understand that sexual orientation is not a matter of choice. So in those societies we (rightly, in our own opinions) cannot endorse persecution of homosexuals. What could possibly justify such action? More saliently: why would we even want to?
Perhaps consequentialist thinking was employed by those who first instantiated the opposing moral codes of divine purity/justice versus human rights. But the rest of us simply inherited those moral codes by fiat - that is, we adopted them without employing consequentialist thinking. We are all raised to obey a specific code and given little choice in the matter. Only as we come of age are we able to start re-writing our own moral code — and we can do that with or without consequentialist thinking.
Neither approach (the persecution of homosexuals versus the protection of their human rights) needs to give any consideration to the future consequences of such activities. Both approaches equally derive from the prevailing local zeitgeist of moral intuition/tradition. Further, those intuitions/traditions are not rooted in any bedrock of “truth;” they are provincial rules-of-thumb that are all malleable. We are free to rewrite them however seems good to us11.
SERIAL KILLER SURGEON
At 43 minutes in Hoel begins talking about the consequentialist thought experiment of a serial killer surgeon. He has five patients dying; but by going on the streets and finding one healthy victim, the surgeon can kill the victim, harvest organs, and save five patients. Would this be a good thing or a bad thing?
I find it telling that only academic moral philosophers can even fathom the possibility that a killer surgeon might be a good thing. Shouldn’t that, in itself, make us skeptical of the whole philosophical consequentialism enterprise? I would argue it should and thus we can write off academic moral philosophy as so much lunacy. Unfortunately, academia doesn’t seem willing to push back on their own nonsense; so we armchair philosophers have to pick up the slack…
The five dying patients will die anyway, irrespective of the possibility that surgical intervention could possibly extend their lives for a while. We are mortal; death is inevitable. So there’s no reason to do anything inhumane to stave off death - ever. The healthy victim has however much time they have, as well. And that person would not normally perish for the sake of organ harvesting, were it not for the preposterous killer surgeon. So there’s no way to reasonably justify the victim’s murder. End of thought experiment.
Further, it’s impossible that someone so unhinged as to be willing and able to hit the streets to murder and harvest organs could ever become a competent surgeon. The very limited instances of actual organ harvesting12 are not remotely close to this beyond-overly-simplistic thought experiment. Let the deserved derision of moral philosophical thought experiments rightly grow without bounds!
This should be the end of the “killer surgeon” argument; yet it is revisited with much tedium. At 0:57:30 Hoel adds a wrinkle “…that everyone on Earth is a “utilitarian” and totally buys the fact that you should sacrifice the few to save the many.” To his credit he’s trying to paint a picture of how messy the utilitarian philosophy is; that it is bound to reach “repugnant conclusions.” But we’ve already established that the thought experiment is off-base from the get-go, so there’s no need to dig further. There’s less-than-no-need to ramify it with the premise that everyone on Earth is a utilitarian; because virtually no one on Earth is a utilitarian. Perhaps there are a few hundred? Or a few thousand? This is not an opinion the rest of us need to take seriously.
The ratio of utilitarians to non-utilitarians will only decrease over time. The foolishness that is utilitarianism will be increasingly obvious to the nominal moral intuition of the populace going forward. There is no convincing case for it (unless one is already susceptible to the myopic, hyper-rationality of academic philosophy). Humanity will never cede holistic reason (of which moral intuition is an important part) in favor of unmitigated, decontextualized rationality. If it does, humanity as we know it will come to a most unhappy end (as Hoel and Harris will illustrate later).
REVISIT of MORAL REALISM: How good is good?
Harris continues “…there’s just no guarantee that our intuitions about morality reliably track whatever moral truths there are.” [0:59:40] Again moral intuitions are all we have to rely on and there are no moral truths to track. A case must be made for moral truth, and then that case must be evaluated. No convincing case has been made (or, I’d argue, ever likely could be made) for moral truth. Therefore we can only modify our intuitions over the course of life however seems best to us. There is no other recourse.
He further offers “…it’s nowhere written that it’s easy to be as good as one could be in this life. And, in fact, there may be no way to know how much better one could be, in ethical terms.” [1:00:00] Another very strange conclusion to reach. Par for the course when laboring under the delusion of moral truth? Two critiques are needed here.
First: there is no such thing as “being as good as one can be.” The phrase is utterly meaningless. One is only “as good” as one is. No more, no less. It’s up to the individual to decide if they want to “be better” than they are. If they do, then they take up a discipline to “be better” and work at it. Good for them! But if one is content with however “good” they are, then there is no reason to expect that they should try to “be better” than they are. It’s always up to the individual to make this decision. Any durable “normativity” that exists in culture is always a bottom-up phenomenon. Societal norms live and die because they are adopted or rejected by individuals en masse. Even if top-down, brute force imposes a new code, the masses still are the ones that must accept it, even if begrudgingly.
If a person is “being bad” we have no reason to expect that they “should” act otherwise unless they are breaking laws. This is our standing agreement as a society: breaking laws can result in prosecution and punishment. But to judge “ought” or “should” outside of the legal framework is always rooted in sanctimony (however mild or well-intentioned). Such exhortations are very likely to be resented by the recipient, which would be counterproductive to getting them to improve their behavior.
Second: the phrase “there may be no way to know how much better one could be, in ethical terms” is also meaningless. If one is truly interested in “being better” then one will improve however much one wants to (or can). But there is no, and never will be, any obligation for a person to “be better” than they already are. “Better” is completely arbitrary and abstract. What would “the best ‘good human’ in history” look like? You could never say. Such a phrase is meaningless because there are innumerable, mutually-exclusive ways of being good. To judge the relative goodness between the contrasting “good” ways is, again, totally arbitrary and abstract; that is, mere opinion. We can only expect a given person to be as good as they want to be. Once they are content with their “goodness” they will cease trying to grow moralistically and simply get on with living life.
Earlier I said there’s far more to life than simply being not-dead. Likewise there’s far more to life than “being maximally good.” What could it possibly mean to “be maximally good?” There’s no end to “being good.” It can’t be accomplished “once and for all.” Rather one must decide which specific activities (of goodness) one will focus on in life given their finite time, energy, and resources. That very finitude precludes the abstract notion of “being maximally good.”
Some people choose to raise families (like Harris). Some people choose never to have children (like me). Neither is right; neither is wrong. They are preferences. They are opinions. Those with children can participate in spheres of goodness not available to those without children. And the reverse is also true. One is not “better” than the other. They are mere preferences that, in turn, lead to bifurcating paths of moral living; mutually-exclusive peaks on the moral landscape.
ARTIFICIAL INTELLIGENCE and IMPORTANCE
Following a brief discussion on artificial intelligence (AI), Harris embraces another (erroneous) repugnant conclusion of consequentialism regarding the relative importance of hypothetical AIs.
“If we wind up building AI that is truly conscious and open to a range of conscious experience that far exceeds our own in both… good and bad directions… they can be much happier than we could ever be, and more creative, and more enjoying of beauty… more compassionate… more entangled with reality in beautiful and interesting ways… and they can suffer more. They can suffer the deprivation of all that happiness more than we could ever suffer it because we can’t conceive of it. Because we basically stand in relation to them the way chickens stand in relation to us... If we’re ever in that situation I would have to admit that those beings now are more important than we are, just as we are more important than chickens, and for the same reason. And if they turn into utility monsters and start eating us because they like the taste of human the way we like the taste of chicken, well then… there’s a moral hierarchy depicted there and we’re not at the top of it anymore. And that’s fine…If morality relates to the conscious states of conscious creatures well then you’ve just given me a conscious creature that’s capable of much more important conscious states than we are. Again in the same way that I think we have moral primacy over chickens and chickens have primacy over bacteria.” [1:04:50-1:06:25].
It’s fairly surprising to me to find an erroneous idea of “moral hierarchy” articulated by one who is supposed to advocate an awakened, spiritual perspective. A key perspective revealed in deep spiritual inquiry is that nothing is intrinsically “more important” than anything else. Just as “good,” “bad,” “evil,” and “truth” are seen as arbitrary conjecture (i.e. mere opinion), “importance” is also seen to be arbitrary opinion. Paraphrasing Alan Watts, the smallest things are seen to be as important as the biggest things are, and the biggest things are seen to be as important as the smallest things are.
With clear seeing it is obvious that every being occupies its particular niche in the Cosmos; and that the Cosmos-as-it-is hinges on that being “doing its thing” right where it is. This can’t be adequately expressed conceptually because concepts are far too limited and clumsy. But the natural order of the Cosmos — which includes explicit and apparent conflict — is flawless just as it is.
Taoistic philosophy13 can offer a helpful perspective here. Interference in the natural order of things is actually impossible because the individual that acts is not separable from nature (even if the individual harbors erroneous delusions that they are separate). So the goal in “interfering” with nature should be wu-wei. That is “not forcing.” “Interference” should be minimal and kept along the lines of how things are already unfolding. Working with the grain. Swimming with the current. Keeping the wind in your sails. Interacting along the lines of geniality and mutual respect. Further, paraphrasing from the first chapter of the Tao Teh Ching, Lao-Tzu said “The way that can be spoken is not the Eternal Way.” The path of greatest wisdom cannot be prescribed — it cannot be articulated in the abstract. It is always spontaneously birthed in each moment of the natural course of embodied life.
Returning to Harris’s argument: “If we wind up building AI that is truly conscious and open to a range of conscious experience that far exceeds our own in both… good and bad directions…”
First problem: this is all hypothetical. No such beings exist so far as we know, and no one can say if it’s possible for them to exist. So this is unmitigated thought experiment territory, which we should be fundamentally skeptical of. Secondly, every instance of “more” in the preceding quote is mere opinion. It’s impossible to quantify the differences of “happiness” as experienced by a bee, a chicken, a human, or a superhuman AI. Each being experiences “happiness” in its own way. It’s mere opinion to claim one being experiences “more” or “less” happiness. To each being, their own experience of happiness is all that there is.
Likewise, it’s mere opinion to claim that “more is better” or “more is more important” and I cannot disagree with those positions more strongly. The Cosmos - as it is - includes the bee’s or the chicken’s happiness already. To declare that the Cosmos would be “improved” by replacing those animals’ happiness with human happiness reeks of the deepest hubris and revolting human chauvinism.
“…Because we basically stand in relation to them the way chickens stand in relation to us...” This is wrong on two important points. Firstly, humans made both the AIs and the chickens as we know them. The AIs, like chickens, are products of our artificial selection. Without the progenitors, the subsequent species would not be what they are. We are intimately interrelated. To ignore this is deeply myopic and will lead to a morality that defies good reason. Secondly, just because domestic chickens’ existence is a direct result of human activity, that in no way implies they are “less important” than humans (see above). Just because we raise them and eat them does not mean we are “more important” than they are.
The lion eats the gazelle. Is the lion “more important” than the gazelle? Of course not! Without the herds of gazelle, there would be no lions. The lions’ very existence is dependent on the gazelles’. Further, the life of the gazelle needs the life of the lion to maintain a healthy, robust population that is kept in balance with its ecosystem.
The gazelle eats the grass. Is the gazelle “more important” than the grass? Of course not! Without the grass, there would be no gazelle at all. And, therefore, no lion. And yet the gazelle will prune grass and distribute grass seed through its eating, migration, and defecation.
There is no life of domestic chickens without humans. And humans (who care for and feed the chickens) rely on the chickens for food. Again, hubris and human chauvinism are the only things that can lead one to think a human is more important than a chicken. This leads to the all-too-common human activity of exploitation of other species. When you understand there is no difference in “importance” between chicken and human - that is to say the human and chicken lives are mutually interdependent - then one awakens to deep morality: that of gratitude and profound appreciation for the other. We care for the chickens deeply and responsibly; even if many are destined for that “one bad day.” They will have lived the best, most fulfilling chicken life possible up until that point. Only humans can provide that life. Without humans, any domestic animal breed would go extinct in the blink of an evolutionary eye. The offspring of any that survived (IF they survived) would quickly revert to something like the wild forms that humans captured and domesticated in the first place.
“And if they turn into utility monsters14 and start eating us because they like the taste of human the way we like the taste of chicken, well then… there’s a moral hierarchy depicted there and we’re not at the top of it anymore. And that’s fine…”
Again, this is a pretty pathetic thought experiment, though I must tip my hat to Harris for his uncommonly deep integrity; one of the many reasons I’ve taken to following his work. His willingness to “bite the bullet” is admirable, even if I profoundly disagree with his premises (and, in turn, his bullet-biting).
If these hypothetical utility monsters appeared and declared to Harris that they wanted to eat him (and this is pretty lame conjecture, is it not?), would he comply? He says he would. I’d be shocked if that were true, but we can grant him that for argument’s sake.
But suppose instead that the monsters approached him and they didn’t want to eat him; they wanted him to turn over his wife and his children to be eaten. Does Harris really expect us to believe that he would comply with the monsters? He shouldn’t. And if he did comply, would any compassionate human agree that he did what was morally right? No! He’d probably be regarded as a moral monster. Our moral intuitions are so obvious here that we would be stupid to ignore them.
Such monsters are not, in any meaningful way, more important than we are. They simply are what they are. As humans are what humans are, and chickens are what chickens are. Each occupies its own space in the Cosmos as-it-is. To rob any niche of the Cosmos of its inhabitants is to do a disservice to the Cosmos. Of course vacated niches will be filled. But the greater the diversity, the more beautiful the reality. Variety is the spice of life, as they say. Diversity may result in conflict and chaos, true. But I’d much rather have diverse, dynamic life that includes conflict and chaos, than a monolithic, uniform, imperturbable peace. Death is the perfect expression of the latter; and that inevitability awaits us all already. So why hurry towards it?
“If morality relates to the conscious states of conscious creatures well then you’ve just given me a conscious creature that’s capable of much more important conscious states than we are. Again in the same way that I think we have moral primacy over chickens and chickens have primacy over bacteria.”
Completely agreed on one facet: morality definitely correlates to the conscious states of conscious creatures. But the cold calculus of consequentialism does not imply “quantifiable moral truth.” As outlined above, there is no such thing as ontological “importance.” The monsters are not more important than humans. Likewise, the humans are not more important than chickens, and the chickens are not more important than single-celled organisms. Without single-celled organisms there would be no multicellular life forms on Earth. So, rationally speaking, nothing is more important than single-celled organisms.
No species has “moral primacy” over another species. To think in these terms is, again, to employ hubris and chauvinism. The world already consists in inter-species relationships. Either we appreciate the ecological roles played by other species, or we do not appreciate them. Even though we are not “more important” than other animals, that does not mean we don’t kill and eat them. All life forms are required to kill and eat other life forms unless they live off dead/decomposing organic matter or an external energy source like sunlight. Vegans kill every bit as much as non-vegans. Killing itself is unavoidable; the only difference is what gets killed.
Regardless of our specific diets, all humans survive by killing and eating. We’re in good company; most animal life forms engage in that pattern. Of course there should be moral concern here! But, as humans, avoiding killing altogether is utterly impossible; let us never fool ourselves into believing otherwise. The more we increase our spheres of moral concern to our sources of food (including animals, non-animals, and ecosystems) the more morally “right” our diet will become. I’d argue the property owner that raises dual-purpose chickens humanely, and then humanely kills as needed, does far less moral and ecological harm than an urban-dwelling vegan that lives entirely off factory-farmed vegetables shipped from all over the country (or world). Yet both are caricatures. No one subsists entirely on either of those things. An ecologically and morally responsible diet is incredibly complicated to construct. And yet we all must construct some diet; with or without thinking about it moralistically.
To frame things in terms of “moral primacy” is virtually guaranteed to lead to a repugnant conclusion; as Harris points out, a “bad consequence.” But we reject “moral primacy” not because of its consequences, but because the premise is false. A Cosmos without bees, or chickens, or cyanobacteria, or humans, or rabbits - or mosquitos! - would be impoverished over what the Cosmos already is. It’s already fine as it is. Leave it alone! At least as much as possible. That is true wisdom and deep morality.
[1:07:30] Hoel rebuts the AI utility monsters, and Harris tries a new tack by returning to the discussion of “moral truth.” He continues “…If the comparison between ourselves and chickens is an easy one, and I would think it is…” [1:09:00] he then goes on to describe a highly evolved AI and how we might imagine that it could experience our human experience, only ‘more so’ to an incalculable degree and, therefore, it should be regarded as more important than us. He defies the listener to “…tell me that you can’t get your imagination around what it would mean to say “Okay, THAT being is more important than I am.”” [1:10:45]
Firstly, just because we can imagine entertaining the notion, that does not automatically mean the notion is true. Those notions aren’t true, as demonstrated above. People are not more important than chickens and chickens are not more important than bacteria. Without humans there would be no chickens. Without chickens, human (and chicken) life would be (greatly) impoverished. Again, without the bacteria, there would be no multicellular life at all.
To presume that some AI might one day experience existence like human life, only ‘exponentially’ so, is to wander off into thought experiment territory, which is nowhere near as important as reality right here-and-now. No reasonable person could deny the possibility that some cetaceans — with brains as-evolved, and yet larger and more powerful than ours — might already be living in this hypothetical, superhuman, elevated mental/emotional state Harris is describing. And yet humans still murder them! Why are moral philosophers not concerned with that situation?
I completely agree that the potential existence of general AI demands moral attention. Consequently, I’d argue humanity should be committed to not bringing about such beings at this point. We do not have our moral ducks in a row right now. No need to complicate our moral problems with AI before we have even attempted a reasonable solution to our current situation.
Unfortunately technologists eschew the moral high ground here; AI is the 21st Century’s version of the nuclear arms race. At least the Manhattan Project was conducted in the midst of a world war. The current AI arms race is fueled by little more than avarice and an unquenchable thirst for power.
As long as humans are brutish enough to think mainly in terms of zero sum competition, technology will ultimately serve the end of domination and imperialism. There are malefactors out there and they are pursuing general AI for selfish reasons. Like the Manhattan Project, civilized humanity cannot allow despots to gain technological advantage. Yet in all this frantic push there is no time left to consider if humanity, as a whole, would be better off without AI. Or the hydrogen bomb. Or high explosives. Or the gun15.
We are not wise enough as a species to birth AI into the world. A perspective of awakening will recognize the microbe’s life as, more or less, as important as one’s self. They are intimately interconnected; in more than one sense that microbe is you. Is this so surprising? On the scale of Planet Earth system, one human being is a microbe. All microbes are important in their own way.
A non-existent AI is nowhere near as important as a slug in the garden. But IF such an AI existed, it would not be more important than a slug in the garden. Yet we will kill a slug without any thought to the contrary. When we kill garden slugs, we kill life. No need to apologize. No need to shy away from what we’re doing. It’s our limited, biased opinions that declare cultivated vegetables are preferable to slugs. The bias is understandable, as slugs make for terrible cookery compared to vegetables16. Such widely-held opinions resist being overturned.
Continuing with the AI argument Harris says “…if you make a trillion of those [superhuman AIs] in some server farm… in another galaxy… that would be more important than anything that’s happening here on Earth… by definition because any reference point you have for importance… is there a trillion fold on the other side of the balance… ” [1:11:25]
Again into thought experiments, which we’ve already established are nigh-worthless. The trillion AIs on the server farm could not exist without the intelligent, loving, and skilled organic creatures that made the server farm and AIs to begin with. It doesn’t matter how angelic the AIs might be. Without the organic meat bags, the AIs wouldn’t exist. So the meat bags are “more important” than the AIs by the reference point of origin.
If one wants to measure a life form’s importance, why not evaluate it by a reference point of resiliency? If a trillion super-happy AIs live in a server farm, we should expect that they could easily be annihilated by a multiple-contingency power outage. Regardless of what happens to the Earth’s climate in the next few centuries, humans will not go extinct. The internet and global civilization could easily collapse; and in that event no AI could survive unless it was embodied and could subsist on something other than electricity. The prospective AIs are utterly fragile. Yet even an asteroid impact can’t wipe out tardigrades! Who’s to say which “life” form is more important?
Another wrinkle: why value happiness in this thought experiment? What are these server farms of AIs doing anyway besides being boundlessly happy? Who cares about maximizing electrified bliss? If they’re not going to do anything useful for their creators, why create them in the first place?
This thought experiment is so out of touch with the world we live in; we must not get distracted from talking about the things that do matter right here and now. We should not be pursuing AI. Yet we are. Unless humanity becomes truly, collectively enlightened in the next couple decades, following this path will almost certainly lead to tremendous suffering.
If AIs will be, then they will be. But they are not any more important than the meagerest microbe that inhabits the world that allows their AI “lives” to continue.
UTILITY MONSTER: REPRISE
Hoel returns to utility monsters at [1:12:00]. He understandably resists the notion that, if a human-eating utility monster existed, he should willingly be devoured. Harris doubles-down on his previously-stated position. “All things considered, what would be a better state of the universe: is it better to have more chickens? Or is it better to have more people? … or super-people?” [1:13:15+]
I’ve already argued that there’s no reasonable basis for the notion of “better.” If we would like things to be other than they are, we don’t even need the concepts of “better” or “worse.” There merely is “how things are” and “how we would like things to be.” We know “liking” is mere opinion, but that does not impugn “liking.” There’s no harm in having preferences and opinions; you can’t live without them. But there is no universal datum of “importance.” So it is utterly meaningless to try to “do the math” about which is “better:” more chickens or more humans.
Even asking the question is to miss the moral point. The real moral question should be: given X chickens and Y humans, what do we do next? A chicken’s life without humans does not exist. A human life without chickens is impoverished. There is nothing to “maximize.”
Humans are good and chickens are good. When they live together in harmony, that is great. Humans win, chickens win, and everybody (most of the time) can have a great day. Consequentialists take note: we don’t live that way because we want those consequences; we live that way because that’s the way we want to live; moment by moment, action by action, day by day.
MORAL REALISM: REPRISE
At [1:17:45] Harris says: “We shouldn’t apply moral philosophy to our lives… it’s dangerous…. let’s cash that check… what are you left with? What do you think we should do?” And a little later: “…where do you wind up based on your no longer taking academic moral philosophy too seriously? …and how is that not consequentialism in the end?” [1:20:45]
We are back to the gambit I’ve asserted all along. There is no such thing as moral truth, only moral opinion. The majority of votes declares the moral winner, plain and simple. Consequences need not be (and I’d argue typically are not) considered in any specific case of decision-making that closely conforms to our moral intuitions. While it can be wise to consider the possible consequences, the actual consequences cannot be known beforehand. Therefore they are not the most important thing to consider while making judgments (specific moral decisions). Consequentialist considerations are useful for both creating moral rules-of-thumb and evaluating “edge cases” in real time. But the rules-of-thumb are neither “true” nor immutable. And edge case judgments are always just opinions.
EVIL WALTER MITTY
Around [1:22:10] Hoel brings up the idea of an “evil Walter Mitty.” Supposedly Walter lives a profoundly “negative” mental life, but not one that has any moral consequences in the real world. This is because the harm he would do to others is carried out merely in private imagination. Walter apparently gets a tremendous amount of pleasure through having this dark mental life so, from a utilitarian viewpoint, his mental life is a good thing.
Harris pushes back saying that the resulting private states of mind can be judged as diminished well-being and, therefore, “bad consequences.” Hoel disagrees, saying that mental experiences that do not have a public manifestation are not “consequences.” Harris argues they are consequences and worthy of consideration. This is one of the few points in the conversation where Harris makes more sense than Hoel.
Harris says such a person couldn’t realistically live that way without their dark, private mental life having real, measurable consequences on the lives they touched. I agree, and it is a very anemic thought experiment. There’s no way such an “evil Walter Mitty” could actually exist. Real people either wouldn’t have that degree of a dark mental life or, if they had something akin to it, they would (as Harris points out) lash out in the real world because of it. The fewer of these astonishingly unrealistic thought experiments we have to contend with, the better!
[1:23:30] Harris counters the “evil Walter Mitty” situation with another thought experiment (sigh) describing the prospect of judging someone based on the type of pornography they’re interested in17. He uses the “extreme case” of a married man with children being a clandestine consumer of child pornography.
Harris argues there is no way such a person would be the best husband or the best father he could be if he were ensnared by child pornography. Of course his point should be well-taken, and left at that. But he presses this further saying the consequences of the pornography (making the man a worse husband or father) are the reason we can/should judge the consumption of such pornography as wrong. While such reasons can be considered, they are far, far less important than the evil that is embodied in the existence of the so-called pornography in the first place. In this case “pornography” is a euphemism for records of sexually abusing children.
Like rape, we are simply free to dislike so-called child pornography on its own grounds and to prefer a world without it over a world with it. We don’t have to quantify the goodness of a world without those evils, then contrast that against a quantified badness of the world with those evils, in order to calculate which is morally better from an “objective standpoint.” There is no objective standpoint! The only morality that exists is from our own, personal viewpoint.
Further, before we even need to consider performing the consequentialist calculus, someone has to successfully argue that there is merit to rape or (recorded) child abuse in the first place! If no such argument exists (and it doesn’t), then there is no fundamental need for consequentialist rationality at all. Our moral intuitions have already done the heavy lifting. Consequentialism can do no real work at this point apart from putting a finer point on what our intuitions have already told us.
Of course we are free to engage in consequentialist thinking to shore up our moral intuitions if we want to. That could be of use in convincing people with differing opinions/intuitions to change their minds. But they would have to be open to having their mind changed. How often is that the case?
The usefulness of consequentialistic thinking is very limited, indeed.
INTENTIONS
At 1:26:50 they begin to talk about intentions. Harris says “Intentions matter because…they are the substance of our lived experience to such an extraordinary degree. I live amid and as my intentions so much of the time toward other people.” [1:27:00-15] This is another comment that strikes me as very interesting coming from someone who advocates spiritual awakening.
If one fully understands how literally fundamental spontaneity is, deliberative intentions carry almost no weight in “deciding” how to act. Most decisions to act are pre-conscious (or subconscious, if you prefer)18. So it is uncommon to actively deliberate on intentions. When the fundamental way of things is realized, there is never a need for pretense; people already are what they are, and you already are what you are. There’s no reason to think that you or the other “should” be some way other than you/they already are. The realized yogi is free to be at ease, and to act spontaneously as seems good with no need for pretense (conscious intention).
Salient to the questions of intention, consequences, and moral intuition, James Low described the “default position” of the awakened, ethical perspective in a perfect aphorism, given at the opening of this essay: “Once you see the Ground, once you relax in the openness of being present, what would be the motivation for causing harm?19”
Low then develops this more fully:
“Ethics is grounded in being open to the field [of experience]; because if we really experience that “I” am my experience, and “you” are my experience, then we’re arising together as the “play” of the mind [awareness], so why would I harm you? You’re not other than me, you’re not alien to me… We’re neither the same, nor different. But you have your unique particularity… as do I. And so our issue is how to collaborate, and to share the space, and be communing with each other. Not dominating, not dominated; but space for everyone to move at ease with what’s going on. And I would suggest that’s the basis of ethics.20”
All that arises in awareness is of-a-piece; it all goes together. Further, awakening involves realizing that the “Real You” is the fact that “all this” — as revealed in the totality of experience/awareness — explodes onto the scene ex nihilo. The fact of experience/awareness arising in the first place is the fundamental reality that you are. Therefore, in a very important sense, everything that arises within experience/awareness is also “you.” When one realizes that, then you very well know this fundamental nature of yours is the nature of “everything.” Again, all that there is is of-a-piece; there are no separate “things” or “entities” at all. Just the one incomprehensible, spontaneous unfolding of “the-great-whatever-it-is.” You are that; and so is everything else!
None of the above “cancels” the “person” that you used to regard yourself to be. But that “person” becomes but one “facet” within the totality that you are. When that realization penetrates through-and-through, the above aphorism could be re-phrased:
"What could possibly be the motivation for ever intentionally causing suffering?”
And I would suggest that’s the basis of ethics.
STATES of MIND: CONSEQUENCES or NOT?
Harris continues his discussion of intentions by contrasting “the difference between genuinely loving other people and only pretending to love them.” [1:27:18-1:28:20]. His point is these two hypothetical mindsets are completely different moral states due largely to the differing states of mind between the manipulator and the genuine lover, even though, according to his reasoning, either state of mind could lead to the same behavior in the external world.
This argument isn’t sound. It’s yet another example of why thought experiments are overrated. The entire point of disingenuous manipulation (i.e. pretending to love) in his example is to extract some benefit from the deceived person. So if we compare the manipulator with the genuinely loving husband, it’s impossible for their behaviors to be exactly the same. If the behaviors are not the same then the consequences cannot be the same. Therefore, from a consequentialistic perspective we do not need to consider the differences between the private states of mind of the manipulator versus the lover in our moral calculus. But further, and more importantly, our intuition already tells us manipulation is bad. Could anyone besides an academic philosopher, a conman, or a demagogue want to argue that manipulation is a wholesome activity?
Hoel is a little incredulous: “I don’t see how private thoughts are consequences.” [1:28:25-38] Harris replies “Because if I feel differently as a result of doing one thing, or thinking one thing, that’s a consequence.” Again, nice to hear something from Harris in this conversation I agree with. But the question should not be “are these mental states ‘consequences’?” Rather the question should be “Given that these mental states are part of the consequences, how much consideration do we need to give them in our moral calculus?”
I agree with Harris that they are consequences, but I agree with Hoel that they don’t need to be part of the moral calculus. Factors other than those differing mental states are more-than-enough to render the judgment that manipulation is “bad.”
As a further illustration of contrasting mental states, Harris then brings up the differences between his mental states on days when he did not have run-ins on Twitter versus days when he did. He characterizes those bad Twitter days as more-or-less adversely impacted in their entirety even though the Twitter run-ins are only a small fraction of the day. “This is all consequences…there’s no place else to play this game but in consciousness and its changing states.” [1:30:00]
I completely agree with the second sentiment (…no place else…), yet the first (consequences) is arguably irrelevant. Of course everything can be classified as a consequence; but chances are that all of his time spent on Twitter was in itself negative enough that he simply could have judged that activity as time-not-well-spent in the moment. In other words, the Twitter experience is bad enough on its own to judge it as a net-negative; there’s no need to consider the negative aftermath scenarios (consequences) in the evaluation.
Twitter: unpleasant time suck. Perennial source of stress. Enjoyment factor: 0.845%. Eliminate.
This does not need to be a moral decision; it can just be a common-sense one. Paying more attention to the here-and-now is very often sufficient for judgment. Considering downstream consequences usually only tips the scales farther in the same direction. Therefore, doing so is often superfluous.
MORAL INTUITIONS: BULLSHIT or BEDROCK?
Harris cuts to the heart of the matter: “When do you conform to people’s heartfelt moral intuitions, and when do you say ‘Sorry, your intuitions are bullshit.’? That’s hard to adjudicate, right?” [1:35:30] Harris is implying we need the “truth” of moral realism or some other “hard” philosophy to point to in order to adjudicate. Just previously he had said (disagreeing that Muslims “should” be murderously angry if someone burns a copy of the Koran) “…I’m quite sure they’re wrong about that… …these people are ‘wrong’ in some deep sense.” [1:34:30-1:35:25]
Harris’s motivation seems to be that if he disagrees with someone morally, he wants to be able to say with impunity “you are objectively wrong in your moral principle.” To do so, he needs a “moral truth” to point towards as a datum. My question to him: why take this approach at all? Why the need to get up on a soapbox and tell other people “ought” and “should?” We can simply judge their moral intuitions as wrong by our moral intuitions. So to his question “…when do you say ‘Sorry, your intuitions are bullshit’?”
Whenever your intuition tells you to do so! If you do, the situation has come down to your opinion versus their opinion. If you both disagree that strongly, do you truly believe you can convince them to change their moral intuitions through rational argument? If so, you’re deluding yourself.
You can also delude yourself into thinking you possess access to “moral truth.” But anyone who disagrees with your moral intuition/philosophy will also disagree with your assessment that you possess access to the moral truth!
In other words, it is merely your opinion that you possess access to moral truth. It is their opinion that you do not. So why harp on “truth” at all? It’s a complete waste of time! It all issues out of opinion, and will always return to opinion. There can be no other way.
The people who can be convinced to change their moral intuitions - to align them more with yours - do not require a perfectly rational argument (i.e. consequentialism) to do so; they just need a reasonable one.
IS THEFT WRONG?
Harris resumes, evaluating theft: “We feel very differently about gains and losses of equal magnitude… if we can never get away from the fact that people feel worse when you take $100 from them than when you fail to give them $100…if the loss aversion is always a thing for people and can’t be changed; well then active theft really just is worse, all things considered, than not being generous at the equivalent scale.” [1:43:10-1:43:43]
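For readers unfamiliar with the term, Harris’s loss-aversion premise can be made concrete with the value function from Tversky and Kahneman’s prospect theory. To be clear, this is my own illustration using their published parameter estimates; Harris cites no figures in the podcast:

```latex
% Prospect-theory value function (Tversky & Kahneman's 1992 estimates):
%   gains:  v(x) = x^{\alpha}
%   losses: v(x) = -\lambda (-x)^{\alpha}
% with \alpha \approx 0.88 and loss-aversion coefficient \lambda \approx 2.25.
v(+\$100) \;\approx\; 100^{0.88} \;\approx\; 57.6
\qquad
v(-\$100) \;\approx\; -2.25 \cdot 100^{0.88} \;\approx\; -129.6
```

On these numbers, the sting of having $100 stolen is roughly 2.25 times the pleasure of being handed $100. That asymmetry is the entire load-bearing element of Harris’s argument here.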
If this is the rationality that clinches the “moral truth” that ‘theft is worse than generosity,’ then… we don’t need it! Loss aversion doesn’t even need to be a blip on the radar of the moral evaluation of theft.
The act of theft arose at more-or-less the same time the idea of ownership arrived. Without ownership (which is purely conceptual) there is no theft. Without theft there is no ownership. After all, if nothing were ever stolen, it would never occur to us to regard something as “my possession.”
Loss aversion is simply an evolutionary byproduct. Our survival depends on homeostasis and, as Antonio Damasio pointed out in his conversation with Harris, homeostasis operating properly in an organism will naturally always result in a surplus. The surplus exists as a buffer to help us survive lean/hard times. Homeostasis “writ large” (an expression of our ‘extended phenotype’ as Dawkins discussed it) means not only does the physical body naturally accumulate surplus, but the individual animal can store up surplus resources. Further, a community of individuals can build up a surplus of collective resources. Having that surplus stolen puts the survival of the individual (and/or the community) into question.
I can imagine in ancient prehistory an aggressive tribe raiding a relatively peaceful tribe and plundering resources. Loss aversion is a powerful motivator. It could very well make the peaceful tribe angry! But regardless of anger, they would then have to carefully deliberate between gathering new resources versus attacking the aggressive tribe. Which would be more likely to succeed given the limited resources they had left? That is not a moral question; it is a pragmatic one.
Armchair anthropology makes me suspect this kind of dynamic is what gave rise to the concepts of ownership and thievery. Once this pattern was established as an inter-tribal phenomenon, the owner/thief dichotomy would eventually “trickle down” to the level of individuals. Either between tribal members or between a tribe member and an outsider.
Or perhaps ownership/theft conceptually arose with tool-making? Could tool-making imbue created artifacts with a sense of ownership in the minds of their creators? If not initially, then perhaps later: suppose some lazy member of the tribe took a tool you made and wouldn’t give it back; that would be a natural source of frustration, perhaps rooted in loss aversion.
The only reason we need to consider theft morally is because we have already implicitly agreed as a society to the conceptual/arbitrary idea of “ownership.” If we could abandon the idea of ownership altogether, thievery would become impossible. Loss aversion be damned.
Since no one has ever successfully implemented an advanced society that abstained from ownership, we are stuck worrying about thievery. So we have to call it a bad thing because society (which we have zero choice about being born into) requires us to be owners in order to participate in it.
Owning is a pain in the ass! Nobody in their right mind should want to own much of anything. Virtually everything you own ties you down and extracts a toll from your life force. But we are obligated to comply with society and through great pains take up ownership of everything we are required to have. So when a thief steals, they are rightly reviled. Not only did they acquire an asset without sacrifice, but the victim of the theft must redouble their efforts just to return to the nominal “owner” baseline that society has imposed upon them. This is ample reason to call thievery bad! Yes, it can be described in consequentialist terms. But there’s no need to frame things in terms of “truth” or perform quantifiable analysis. Consequentialist thinking just adds details to a robust moral intuition that we already had.
To say that only through our consequentialist calculus “I suppose we must conclude thievery is worse than generosity” is, frankly, idiotic. Thievery is worse than doing nothing. We don’t need or want a world in which thieves become philanthropists. We just want a world where thieves stop stealing! Generosity has nothing to do with thievery.
Generosity is universally admired because it stems from compassion. Do we need rigorous moral calculus to tell us that we should consider compassion “good” and greed “bad?” Of course not! We just prefer compassion. It has no downsides. We don’t like greed. It has no upsides21. We don’t need any hard “moral truth” here. Does one prefer harmony and simplicity? Or does one prefer discord and strife? The former is not good; the latter is not bad. Good and bad are simply judgmental opinions. Our preferences for compassion, harmony, and simplicity are also opinions. But natural ones. And therefore nearly-universal ones.
Let those who prefer discord and strife make their cases in the public square. Or let them take action in the real world (as they do). We don’t have to point to some moral bedrock truth to say they’re wrong before we fight them. It’s our opinion they’re wrong and if they are going to start a fight, then let’s give ‘em one to remember. These are not the types that can be reasoned with. Still less will they be convinced by a moral argument appealing only to desiccated, logical rationality.
WHAT DO WE CARE ABOUT?
Harris concludes a section: “…there’s anything [else] to think about in the end than consequences, or likely consequences, of our behavior and uses of attention. I don’t understand what else we have to navigate by or what else we have to care about in the end.” [1:45:35]
Of course we can (and should) consider possible consequences at times - but it’s impossible to predict what actual consequences will result from any action. So rather than focus on the consequences of an action, focus on the nature of the action itself in the moment. How are we being in the world right at this moment? What kind of being do we want to be? Any answer is opinion. But being opinion does not intrinsically mean it is “bad” or “wrong” or “unworthy.”
Echoing his wording, my response is: “In the end opinion is all we have to navigate by.” And the choice of what to care about is also opinion. This is not a problem; this is the way it has always been. And always will be.
James Low, again, speaks to this (emphasis mine):
“…it’s not that there are bad people doing bad things and good people doing good things; this moralistic overview doesn’t illuminate anything else. Ethics has to do with the precise topology of the moment of the enactment. ‘How is it that I offer my life into this particular activity?’ …What is arising in me, in that situation?22”
From 1:46 to 1:50 Harris critiques Virtue Ethics as unacknowledged consequentialism. No comment, as I’ve already indicated that any academic moral philosophy is going to fall short of morality based on normal, reasonable moral intuition. This necessary shortcoming is because philosophy is 100% rational, and rationality is only a part of wise reasoning. Moral intuition is wiser (although far from inerrant) because it depends on a full capacity to reason, not just mere rationality.
EWM: REPRISE
At 1:50-2:01 they return to the “evil Walter Mitty” scenario. More sharpening of the critique against over-reliance on thought experiments. Again, it’s virtually impossible that an evil Walter Mitty could exist. Such a person could not function in any normal capacity in society. This is a cartoon worthy of dismissal. Moral frameworks are built by us, for us. Hoel must build his. Harris must build his. We all do this. We should keep our moralistic conversations about real people and the real world as much as possible.
CASE STUDY in the DANGERS of LONGTERMISM: GENOCIDE JUSTIFIED?
Hoel returns to discussing EA and shifts to Will MacAskill’s book What We Owe the Future. He says MacAskill entreats us: “We should take seriously the idea that creatures with negative well-being should essentially be eliminated…” He offers a caveat “…I’m not saying that William MacAskill endorses this, but he basically says this is an interesting consequence that we should take seriously. Which would mean that… the reduction in wildlife due to human expansion is actually a good thing because he thinks that rabbits, on average… have negative utility, negative well-being.” [2:01:00-30]
I have not read What We Owe the Future by MacAskill, et al. But Hoel has. Given the relational and emotional impoverishment demonstrated by moral philosophy in general (and by EA in particular, c.f. FTX), I have no interest in diverting more of my time and energy into an in-depth study of their justifications on these matters. For good or ill, let’s address the case as presented by Hoel. He and Harris offer limited push-back on this horrific, morally-myopic (and, I’d argue, astonishingly stupid) position posed by MacAskill.
MacAskill is an academic philosopher. How much time has he spent studying ecology and biology23? How much time has he spent studying the work of scientists whose careers pivot on the observation of wildlife24? How much time has he spent observing wildlife himself?
The answer is obvious to anyone who has actually done any of these things. His is an ivory-tower fantasy; and a borderline demonic one at that. First there is a grossly-false presumption that wildlife lead net-negative lives. They’re not even net-neutral. By and large they are vastly net-positive. It is the good pleasure of life to be alive. If life wasn’t worth living, beings would kill themselves. As Fukuoka said:
In nature there is life and death, and nature is joyful.
In human society there is life and death, and people live in sorrow.25
This is self-evident in our own lives, if only we look carefully. Mindfulness meditation can be an excellent means to demonstrate this. The evolution that gave rise to human apes is the same evolution that gave rise to all of our cousin-species on the planet. There is nothing to make humans different from other animals “in kind.” There are only differences “in degree.”
I know animals feel emotion more or less exactly as we feel them because I feel the emotions… and if I watch animals closely, they exhibit (in their own way) all the externalities I exhibit during my experience of those same emotions. Ask any dog owner; most of the time reading a dog’s emotion is not difficult. Dogs aren’t special. Neither are humans. People who strongly advocate against “anthropomorphization” are often (I’m very tempted to say ‘usually’) guilty of human chauvinism/exceptionalism.
The word “sit” is a concept. Even though a dog can’t speak English, you can teach it to obey verbal commands. When a dog learns to obey a verbal command, that is a demonstration of the dog’s conceptual and linguistic intelligence. American dogs can’t speak English, but that’s simply an anatomical limitation. They understand English perfectly well, inasmuch as they possess the capacity to do so. Tell a Korean dog to “sit” and it won’t understand you. To say dogs don’t have capacities of conceptual or linguistic intelligence (or an emotional capacity) is to make the claim that evolution by natural selection kept playing “favorites” with homo sapiens; denying to other species what it bestowed on us. I defy such defenders of human exceptionalism to explain the mechanisms behind such lopsided evolutionary development. The more one understands about biological evolution and neuroanatomy, the more untenable such a position becomes.
Evolution shaped us and our fellow-animals upon beyond-ancient foundations. Yet from the mammalian perspective there are “aliens-among-us” in the form of birds. We haven’t shared an ancestor with birds since long before there was a cerebrum on Planet Earth. Our shared ancestor was something reptilian-like. Presumably cold-blooded, non-social, non-nurturing, and arguably none-too-bright.
Yet over the eons - in a beyond-stunning case of convergent evolution - independent of each other, mammals and birds co-evolved the following: a cerebrum — and one divided into two hemispheres performing very similar functions — warm-bloodedness, familial structures, social structures, nurturing, playing, culture, singing, artwork, conceptual thinking, general reasoning/problem solving, language, self-awareness, use of self-names, theory-of-mind, tool manufacturing/usage, and complex construction, among other things. For details see the bibliography in Footnote 24.
All one needs to do is watch wild animals (or even free-range domestic animals) closely. The uncluttered human mind is inherently peaceful and appreciative of existence (something a mindfulness practice can readily reveal). For what reason could evolution have denied our cousins what it granted us?
The wild animals are free to live a life unfettered by contrivances. Of course in difficult circumstances they’re not ecstatic. My spoiled-rotten chickens will often dress me down when they’re feeling insufficiently pampered. When threatened, nonhuman animals can and do emote the externalities of fear and/or anger; they demonstrate fight-or-flight outwardly (more or less) the same ways we do. But when not threatened, they thrive. They don’t merely survive. They socialize. They spend time doing little else than being together. They enjoy naps. They enjoy bathing. They love a good meal. They contentedly bask in the sun on cooler days. On hotter days they contentedly snooze in the shade…
Like songbirds, humans can sing songs about their triumphs and their territories. But, like songbirds, we don’t sing because of these useful ends; we sing because it is enjoyable to do so. It is simply our pleasure to do it. For this exact reason young children often revel in making new noises with their voices — often to the consternation of nearby elders.
But the most important similarity (to me) between us and other animals, is play. Of course, play has the pragmatic benefit of developing strength and skills necessary for survival. But it isn’t engaged in because of pragmatic benefits. It is engaged in because it is fun.
Backpacking along some coastal beaches in Northern California, my wife and I had the unspeakable pleasure of stumbling across a small cohort of surfing sea lions. No other humans were visible on this wide, multi-mile swath of beach; we had it all to ourselves. And the sea lions had their breakers a mere ten to 20 yards off-shore. We stopped and dropped our packs to watch them. They stopped surfing to watch us; heads bobbing perfectly above the waterline as the ocean continued to undulate… I held my hand up and said “Sorry. Please don’t mind us.” Apparently they didn’t because they resumed their surfing session. After a short break we hoisted our packs back on and continued on our way. We were grateful the sea lions didn’t begrudge our presence.
Everyone owes it to themselves to at least attempt to surf. It is quite difficult, at least for me. But I’ve had enough accidental success to understand why many humans dedicate much of their lives to it. Aquatic mammals surf for exactly the same reasons humans do: it is exhilarating. I’ve also watched birds surf the wind, and there an entirely new dimension (literally) emerges. If any mammal could do that, they would! Orographic winds work backwards at night, so bats don’t have access to the hilly playground winds that diurnal birds do.
Have you ever watched goats play king-of-the-hill? Or chickens playing chase? Humans didn’t teach them to do that!
The eastern cottontail rabbit arguably does have a hard life (compared to social rabbits). It lives a mostly solitary life and is preyed upon by basically everything-under-the-sun. Most don’t make it to a full year old. We aren’t envious of its position in the ecosystem. But have you ever watched one delighting in a dust bath? I have. And MacAskill obviously hasn’t.
As far as we can tell, they don’t resent their lot in life. They just do their rabbit thing as best they can. There’s no need for them to worry about getting killed; until the predator is upon them. Even then they probably don’t worry; they just run — as their life depends on it. Either the rabbit makes it, and is on to another rabbit day, or it doesn’t, and it gets killed and eaten. There’s nothing for the rabbit to worry about before it happens; and worry after it happens is impossible. Another example of a life with “one bad day.”
Thankfully they breed like rabbits and, thus, both the rabbits continue and they provide their predators with essential, life-granting food. Predators have a right to live as well. At least the rabbit knows it can find its food growing out of fixed places on the ground. Their predators survive only by having to locate, hunt, and kill incredibly agile “food” that doesn’t want to be a meal! Which has a harder lot in life: predator or prey? Do you really think they resent their respective lots? Is one “good” and the other “bad?”
MacAskill’s argument (if presented accurately) would not only claim that it’s a net good to end these lives of beings he obviously knows nothing about, but would justify genocide by erroneously judging them as worthless!
It appears MacAskill is implying the ecological devastation humanity has wrought (which is driving life forms far more complex and beautiful than the humble cottontail towards extinction) is now potentially exonerated on the grounds that the lives it extinguishes were supposedly not worth living? This is a despicable notion. MacAskill should be ashamed of himself for proffering such ideas. And shame on both Hoel and Harris for not decrying this abominable thinking appropriately.
The push-back this idea receives in the podcast conversation falls short of the bold rejoinder I would have hoped for. Hence my strong rejection here. Harris does object to the notion, but for hollow consequentialist reasons.
“If it is, in fact, true that rabbits live horrific lives of suffering that we would scarcely find endurable… then it is plausible to say that we’re doing rabbits a favor if we ended their lives because… their lives are just a circumstance of pure-enough misery… …I don’t happen to believe that’s true. But if we knew it was true...” [2:01:48-2:02:22]
Harris does trepidatiously come on board with halfway-good sense: “…the reason we recoil from it is because of the absolute hubris of currently believing we have anything like those facts in hand, and of our intuition… that meddling with the Earth in such a way to wipe out whole species is something, by default, we should be highly indisposed to doing.” [2:02:55-2:03:20]
So his consequentialist objection to rabbit genocide appears to be grounded in a mere lack of data indicating the degree of misery of typical rabbit lives? Why should we expect their lives to be even partially miserable, let alone completely?
And he’s mistaken that the hubris is found in erroneously thinking we have the facts-of-misery in hand. The hubris is found in the notion that such “facts” could ever be known. We can’t ever get to the bottom of the ramifications of a single life. Let alone the collective lives of an entire species. Rabbits eat plants that their predators can’t eat. While they’re alive, their excrement is a potent source of life-food in its own right. When a predator kills and eats one, the predator is allowed to remain among the living. All aspects of a rabbit’s life - birth, life, eating, defecation, and death - are good.
The world needs rabbits exactly as they are.
So the reaction to hubris is misplaced and grossly underestimated in the podcast conversation. But worse, MacAskill’s rampant and wholly-unjustifiable human chauvinism isn’t even acknowledged. To judge a species’s collective lives as “good” or “bad” or “miserable” is a human judgment. What about the rabbit’s judgment? They aren’t committing suicide en masse. They are living the life they’ve been given and it’s their right to live their lives! The only life a human is fit to condemn to death is their own life.
If you don’t commit suicide then, presumably, no matter how bad things are your life is still worth living. Can we please afford beings not us the same courtesy to decide their own fate?
The disappointing facets of Harris’ response continue: he says we should merely be “highly indisposed” to committing genocide of entire species? How about “morally opposed to it under any circumstance?” This flippant, myopic rationality is the lifeblood of moral realism in general and consequentialism in particular.
Even if humans had the wisdom to decide which species should survive and which should go extinct (which we don’t), what gives us the right to commit genocide on the species level? I’ve already argued that no species is more important than any other. All species are cooperating (even if superficially in an adversarial manner) in bringing about the Earth-of-tomorrow. Let the Earth itself decide which will flourish and which will die.
Humans have the right to live their lives, but they do not have the right to decide which non-humans should die. That is a privilege, not a right. We are here only by the good graces of Nature. We must kill to continue. So let us cull life with all due respect and gratitude. And let us graciously accept that each of us must enter into the same death in due course. That is our tribute to all the life killed to sustain our bodies over the course of our lives.
GENETIC GENOCIDE: NEVER A GOOD IDEA
Unfortunately in this podcast conversation Harris cannot adopt this reasonable abstention against genocide because (in keeping with something he’s frequently repeated): “…I… think we probably should do it for mosquitos. If you put me in charge of the world and gave me the CRISPR technology here to engineer the death of a species… barring some argument to the contrary that I haven’t yet heard, I probably would kill all of the mosquitos.” [2:03:30-2:04:00] His justification being they “…kill hundreds of thousands of us every year and render many more millions miserable…”
Well I’m grateful he’s not in charge. Here are some “arguments to the contrary” Harris should carefully consider:
Mosquitos don’t intend to harm us. They just need blood to reproduce. If genetic engineering of a genocide is infallible in our thought experiment, then why not target the types of microorganisms that cause mosquito-borne illnesses in humans rather than target the mosquitos themselves? I still think that’s a horrific idea (details to follow), but it shows the selection of species is arbitrary. How is “arbitrary” a comfort where genocide is concerned?
Regardless, mosquitos can act as pollinators and are an important part of the diet of many species including predatory insects (which are of vital importance to any ecosystem humans would like to inhabit and cultivate), fish, and birds; not just bats! And everything that feeds off mosquitos is potentially food for another predator; and that pattern continues ad infinitum. It is utterly impossible to even guess at the consequences of eliminating mosquitos from the face of the planet.
The negative impact on human life by mosquitos is obvious. Attention can and should be given to prevention (especially via physical barriers and naturally-derived repellents, which are not difficult to come by or make) and medical treatment. Here we find good news: such things are already being done! And you can contribute to that effort if you’d like to. There is no downside to doing so!
But mosquitos aren’t ubiquitous. In “the old days” people lived a strategic distance away from water sources that bred mosquitos. Even if you can’t get away from those sources completely, hopefully you can escape the hotbed of breeding grounds. If humans are forced to live in those danger zones nowadays, there are problems other than the mosquitos at work.
All this should raise the question of learning the ecology of mosquito predators. If you’re living near where mosquitos breed (and I do), then great benefit can come from supporting mosquito predator habitat and populations (which I do). Chances are that supporting those predators will aid in population control of other insect pests besides mosquitos (which I’ve seen). One strategy that is a slam-dunk for failure? The use of insecticide. The most potent source of mosquito predation comes from insects themselves. To use insecticide is to kill the best hope for insect pest remediation.
Regardless, and for argument’s sake, let us grant that a magic, perfect CRISPR-Cas9 genocide for mosquitos could be engineered; and that it would target only the specified genes with zero “targeting errors.” I’d guess that this is impossible; rather there would be a low, but non-zero, error rate. But let’s grant a perfect “inauguration” as we release the genocide into the wild. Given all that, how confident can we be that:
1.) The CRISPR-Cas9 genocide will not accidentally target similar genes in any other species on Earth? Last I checked, for every species on Earth we know of (and of how few of those have we fully sequenced the genome?), there could be more than seven species extant that we have not even identified; obviously we don’t have access to their genomes at all! This technology is intended to be a species-wide genetic genocide. A single mistake means the unintentional eradication of an entire species. What are the chances this would happen not only once, but, say, seven times? Or 70? Or 700? I don’t think there can be any quantifiable conjecture on this question; but my gut assessment is that such a catastrophe is “virtually guaranteed.” Regardless, there can be no denying that it is impossible to predict the outcome of a release of such a weapon. Therefore the genetic genocide should never be realistically considered.
2.) There are estimated to be more than 100 trillion mosquitos on the planet. Again, for argument’s sake, let us grant that #1 above is not the case at the outset. I’m trying to be generous in this thought experiment. It will take time for the gene “assassin” to spread. What are the chances that in 100 trillion+ attacks against mosquitos the assassin will never mutate before the entire genocide is finished? The chance of zero chance? Zero chance! Again it’s virtually guaranteed to mutate at some point along the way. Not only does that take us back to #1, but now it takes us back to a #1 where human judgement is entirely removed from the targeting goals. For all we know, humans are the target species. Or dogs. Or whales. Or pine trees. Or some type of plankton. Or cyanobacteria. If this isn’t Pandora’s Box, I don’t know what is!
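The back-of-the-envelope math here is easy to sketch. As an illustration only (the per-mosquito mutation rate below is a placeholder assumption, not a measured figure), even a vanishingly small chance of mutation per mosquito, multiplied across 100 trillion mosquitos, makes “zero mutations” effectively impossible:

```python
import math

# Placeholder assumption: chance that the gene "assassin" mutates in any
# single mosquito. The true rate is unknown; this figure is illustrative.
mutation_rate = 1e-9

# Roughly 100 trillion mosquitos, per the estimate above.
events = 1e14

# Expected number of mutation events across the whole genocide.
expected_mutations = mutation_rate * events

# Poisson approximation of the probability that NO mutation ever occurs,
# i.e. (1 - p)^N for tiny p and enormous N.
p_no_mutation = math.exp(-expected_mutations)

print(expected_mutations)  # ~100,000 expected mutations
print(p_no_mutation)       # underflows to 0.0: "the chance of zero chance"
```

Even if the assumed per-mosquito rate were a thousand times smaller, roughly 100 mutations would still be expected; the “virtually guaranteed” conclusion holds across a vast range of assumptions.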
CONCLUSION: INJUNCTIONS of MORAL REALISM ARE PONDEROUS
A final illustration of the shortcomings of moral realism: beyond simply being untrue, it yields a morality that is clumsy and burdensome. Here are some examples contrasting common moral realist injunctions with a graceful transcending of strict moral code.
Be honest! — Why lie? You have to remember what lies you’ve told. You never have to remember what truths you’ve spoken.
Be patient! — What’s your hurry? Do you enjoy being frantic and stressed out?
Don’t judge! — Judging takes work; I’m lazy.
Be kind! — People are fine as they are. Why cause a fuss? Disagreement also takes work.
Be compassionate! — I need others. A life of solitude is literally impossible.
Respect people! — I like people. Most people, at least.
Most people, regardless of their identity or politics, are pretty kind. And those that aren’t kind are usually at least superficially nice. Most people like to laugh; I certainly do. So getting along well with others is really quite easy most of the time. If I find someone objectionable, I just avoid them. I don’t wish them to be any way other than they are; but neither do I wish to spend any time with them if I don’t have to.
None of the above entails any “effortful” contemplation of either intentions or consequences. Not wanting hassles could be called consequentialist thinking. But judgements of “good” or “bad” or “right” or “wrong” do not need to be entertained. It’s not “good” or “right” for me to not want hassles. Hassles will happen anyway, and when they do they are not “bad” or “wrong.” They’re just work and, being lazy, I assiduously try to avoid unnecessary work. Rigid moral calculus is not needed; everything boils down to preferences and opinions. And there’s nothing wrong with that.
CLOSING JOKE:
At least they end on an upbeat! Harris says: “It’s totally possible to have a human life that is so miserable, and for which there is no reasonable possibility of it improving, such that suicide is a rational and compassionate act.”
I completely agree.
And on that note, I hope reading this ponderous essay has not induced such a state in anyone. Not to worry... The assassin for me is probably already in the works.
Hoel and Harris end with a few, much-appreciated jokes as well.
Background note on my interest in Sam Harris: His work earned my highest respect the first time I encountered it. That was reading his Letter to a Christian Nation while I was still a devout, evangelical Christian; and in a leadership position at a church that I helped start. It was quite unexpected that I would find myself agreeing with virtually all of the book - apart from the atheism, of course. However, a few years later I had reasoned my way out of religion and renounced my Christian faith altogether. At that point (2012) I began to dig into his work deeply and have followed it closely ever since.
A tremendous amount of misunderstanding surrounds the concept “nondual” (Sanskrit advaita). Shunryu Suzuki-roshi (although he used the label “the oneness of duality” rather than “nondual”) understood the concept perfectly well and gave us a wonderful formulation of it: “not two, and not one.” [Zen Mind, Beginner’s Mind, Weatherhill, Inc., 1982. “Posture” p. 25]. Within “nonduality” distinctions can be recognized. But they are distinctions without separation, atomization, division, alienation, estrangement, etc.
The concept “nondual” (like the concept “nonconceptual”) is only an indicator or a pointer. It is an incomplete description of a key feature of what is to be found in our direct experience; but it will only be found if we intentionally look for it.
I have written a much more detailed post about this that can be found here:
https://opensourceawakening.substack.com/p/why-would-you-seek-awakening
Harris has attempted such in his book The Moral Landscape. It is beyond the scope of this essay to address those arguments. Even though I ultimately find his arguments for moral realism unconvincing, the book itself is very much worth reading. His articulation of the possible strategies for navigating the “moral landscapes” we find ourselves in is both illuminating and compelling. It is not actually diminished by an undermining of moral realism. I wonder if he would find that surprising?
I spent more than 30 years as a wholly-devout Christian until I completely reasoned my way out of my faith. Apostasy was never my goal, but apparently it was the inevitable result of an intense 11-year project of trying to expunge cognitive dissonance from my life. I was compelled to reform my faith in light of ever-increasing scientific knowledge. Eventually my faith collapsed altogether because intellectual honesty was more important to me than my faith (even though the latter was the center of my life and worldview).
Academic moral philosophy is overly rational; that is, rational to the exclusion of other forms of reason.
https://en.wikipedia.org/wiki/Famine,_Affluence,_and_Morality#Summary
Hearsay has it that thought experiments were very important for Einstein. But I’m not Einstein. And neither is any moral philosopher I’ve come across.
I was raised to regard homosexuals as profound sinners who were doubtlessly going to be damned to Hell. Thankfully no harm was ever called for or encouraged by any of my elders. Rather, with smug sanctimony, they knew and taught me what horrible fate awaited “those sinners” already.
One day in high school, I learned a friend was gay. It came as quite a shock! This was a good person! Suddenly my moral code was called into question: the code says they are a sinner and should be shunned — but they are my friend. What should a person do at this juncture?
That totally depends on the individual, the traditions they hold to, their societal circles, and the overall moral zeitgeist of their community and of their homeland. In my case I was much more inclined to call the moral code itself into question than to automatically impugn a friend that I already knew and appreciated as a good person. If I had been raised in a strict, fundamentalist Muslim community in the Middle East as opposed to a fundamentalist Christian one in the United States… who can say what that counterfactual “I” would have done?
Lao-tzu was a fully-realized mystic/sage. Virtually no other philosophers are. Of course when one speaks with language, one must articulate in terms of concepts. So the words of the Sage are often expressed as philosophy. But philosophy itself is a hollow skeleton of the living wisdom embodied by the Sage. One loses the true meaning and life of the Sage’s words when one adopts an academic philosophical exegesis to their recorded text.
Some traditions say gunpowder was stumbled upon by Taoist alchemists in search of the elixir of life. What a shame the pursuit of immortality introduced the most potent source of death humanity has ever known. The race to AI is scarily eclipsing these past triumph-disasters. Is there no one to pull the reins back? I would if I could…
Thankfully ducks and chickens can find slugs to be a tasty treat! They can protect our gardens and turn the slugs into eggs; win-win!
Why does Harris side-step any consideration that some people - I’d hope most people - might lament the existence of pornography altogether?
Excellent data points to consider in evaluating the fallacy of “free” will. Obviously there is will (volition). But it is the opposite of “free.” It is 100% constrained by forces outside our conscious control. To the interested reader I’d enthusiastically recommend both Harris’ concise and adroit book Free Will, and the wonderful book Determined by Robert Sapolsky.
James Low, talk “Dissolving Attachment,” Macclesfield, UK, 2017. Retrieved 2024, Aug 4:
https://simplybeing.co.uk/audio-records/retreats/dissolving-attachment-18-2017/ “Part 12,” 09:00.
Also available via “Waking Up” app. Theory→Series→Everything As It Is. “Beyond Attachment,” 13:08.
James Low, talk “Dissolving Attachment,” Macclesfield, UK, 2017. Retrieved 2024, Aug 4: https://simplybeing.co.uk/audio-records/retreats/dissolving-attachment-18-2017/ “Part 12,” 11:40-12:42.
Also available via “Waking Up” app. Theory→Series→Everything As It Is. “Beyond Attachment,” 15:27-16:22.
Those up/downsides could be called “consequences,” but doing so does not add or detract from our moral intuitions here. As illustrated by the subsequent harmony/discord discussion.
James Low, talk “Dissolving Conflict,” Emerson College, UK, 2017. Retrieved 2025, Mar 3: https://simplybeing.co.uk/audio-records/year/audio2017/dissolving-conflict-life-death-19-2017/ “Part 21,” 22:15.
Here are my favorites so far. They are deeply-ecological approaches to horticulture/agriculture from radically different perspectives: Paul Stamets (Mycelium Running), Masanobu Fukuoka (The One Straw Revolution), M. Kat Anderson (Tending the Wild), Edward Faulkner (The Plowman’s Folly), Robin Wall Kimmerer (Braiding Sweetgrass), Fred Magdoff/Harold Van Es (Building Soils for Better Crops), and Will Bonsall (Will Bonsall’s Essential Guide to Radical and Self-Reliant Gardening).
My favorites (so far): Carl Safina (Beyond Words, Becoming Wild), Frans de Waal (The Bonobo and the Atheist, Mama’s Last Hug), and Bernd Heinrich (Mind of the Raven, One Man’s Owl).
Fukuoka, Masanobu, The One Straw Revolution, Rodale Press, 1978, p. 163