In Defense of Effective Altruism

The Worst Form of Ethics, Except For All The Others

This would’ve been a lot easier in August 2022.

  1. William MacAskill had just released What We Owe The Future, wowing hosts like Trevor Noah and Tim Ferriss with his passionate arguments for effective altruism. Elon Musk even recommended MacAskill’s book, saying “Worth reading. This is a close match for my philosophy.”
  2. Sam Bankman-Fried was still worth $24B and promising to donate it all to EA causes.
  3. And the biggest criticism was that EAs were too focused on AI safety.

Flash forward 15 months, and the EA movement is in trouble.

  1. MacAskill’s long-termism was criticized for ignoring the present.
  2. SBF’s EA-coded largesse ended up being cover for massive fraud.
  3. And now EA is even taking the blame for OpenAI firing Sam Altman.

RIP: Effective Altruism (2011-2023). Right?

“The measure of a man’s life is not the number of his breaths, but the action he takes.” – Aristotle

EA isn’t new. It’s just a modern rebranding of the ancient ethical principle of consequentialism.

We’ve been having this debate for centuries.

Fundamentally, there are three main schools of ethics:

  1. Consequentialism – judge actions by their outcomes.
  2. Deontology – judge actions by rules and duties.
  3. Virtue ethics – judge actions by the character they express.

In practice, most of the world, including our free markets, works on consequentialism (i.e. EA).

  • A company’s stock is not valued by God or virtue, but by a prediction of future value.
  • Techno-optimism, e/acc, etc. are also consequentialist movements.

Sure, it’s time to build. But what are we building?

Before you say “everything”, beware:

“Mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.” – Elon Musk

We can build nuclear power and nuclear weapons, engineer new vaccines and viruses, create AI with the power to advance and destroy humanity.

Safetyism is not the path forward. It’s weak virtue ethics.

With nuclear technology, nations got the weapons, but the world never fully got the power.

This can’t happen with biology and AI.

But we shouldn’t Leeroy Jenkins our way into the future either.

“I certainly don’t think all gas, no brakes toward the future. But I do think we should go to the future… And maybe relative to most people who work on A.I., that does make me an accelerationist. But compared to those accelerationist people, I’m clearly not them. So, I think you want the CEO of this company to be somewhere in the middle — which I think I am.” – Sam Altman

Decisions that affect the future of humanity deserve nuance.

How do we get more AI knowledge and less AI authoritarianism?

More biologics and fewer bioweapons?

There has to be a way to weigh these existential risks against their rewards.

  • What if we all focused on maximizing our net positive impact on the world?
    • Congrats, we just invented effective altruism again.

We need to return EA’s focus to maximizing the impact of our charity and work.

  • That starts with addressing our public and private failures.
  • It also means passionately fighting for what’s working in EA.

In EA’s defense, I offer four arguments, from negative to positive:


Why We Love to Hate EAs

Let’s be honest. It made you a little insecure last year when you saw people like SBF openly dedicating their life’s work and net worths to altruistic causes.

And then you felt a little better about yourself when you found out that SBF was actually guilty of fraud.

Guess we’re all just selfish, right?

It’s true. We’re all a little selfish.

But most of us spend most of our lives in service of others. Our kids, partners, parents, friends, customers, strangers.

The central question of effective altruism is how can we best spend this time to maximize good?

That’s a noble goal. EA is worth fighting for, even with our very famous recent failures.

The truth is that most startups are lying when they say they’re making the world a better place.

You can’t outsource your impact to your employer, even if you’re a startup founder.

And that goes double if you’re a VC. Don’t judge yourself by the successes of your top portfolio companies. Measure your impact by the replacement value of your money – what would have happened if another VC had written the same check.

There’s also a false narrative that EA is somehow anti-capitalist.

Peter Singer, whose writings gave rise to the EA movement, called Bill and Melinda Gates and Warren Buffett the “most effective altruists in history” for their charitable work focused on maximizing efficacy.

If anything, EA has gone too far in promoting the “earn” part of the earn to give model.

SBF was right about one thing – money is just a tool to get what we really want.

The Giving Pledge, Gates Foundation, GiveWell, etc. have truly saved millions of lives.

So what went wrong for EA?

Our niche subculture got infiltrated by mops and sociopaths.

Now you can’t discuss EA without first addressing SBF’s failures and fraud.

So let’s do that first. Yes, even assuming the best intentions, what SBF did was wrong and there is no way to justify his actions using effective altruism.

But are his mistakes an indictment of the EA movement? Absolutely not.

If capitalism can survive a global financial crisis, too big to fail, bank bailouts, Madoff, and Epstein, and still claim moral high ground, then we can expel our bad actors and get back to work.

What SBF actually did was expose one of three failure modes in EA.

By exploring and learning from these mistakes, we can restore EA’s good name.


The 3 Bad Types of EAs
1. SBF EAs:

“A common criticism of people in Silicon Valley, who I think have great futures in their past, are people who say some version of the following sentence: My life’s work is to build rockets. So what I’m going to do is I’m going to make $100M in the next 3, maybe 4, years trading cryptocurrency with my crypto hedge fund because I don’t want to think about the money problem anymore, and then I’m going to build rockets. And they never do either.” – Sam Altman, On Choosing Projects, Creating Value, and Finding Purpose, 2018

This is the classic deferred life plan:

  1. Make $X million.
  2. ??
  3. Change the world.

The problem is that step 2 always ends up being “Increase X by 10x.”

This is the path that ultimately doomed SBF.

This path is also exemplified by the Billions character Taylor, who joins a hedge fund, then (spoiler) decides they’ll leave once they make $100M, then raises that number to $1B, then never leaves.

‘Earn to give’ is a trap because it doesn’t solve the fundamental problem of money.

People don’t just lack money, they’re stuck in systems where they could never earn their own way out.

EAs are of best service when attacking the problem directly instead of trying to buy a solution.

Money doesn’t translate well into impact.

And this is especially true in politics, where most of the World’s Biggest Problems ultimately get solved.

2. Bostrom EAs:

Long-termism only matters if we make it to the future.

Sure, we should spend some time trying to avoid existential risks.

But even Nick Bostrom himself now believes we’ve gone too far in our X-risk panics.

Extinction would be bad. But what if we survive? Who’s working on making our world better now?

One of the biggest problems with the focus on X-risks vs. X-rewards is that there are many potential failure modes for humanity, but only one timeline for us to live in.

We need to focus more on improving our timeline and less on whack-a-moling extinction risks.

This is the failure mode that almost took down OpenAI.

Truly effective altruism requires us to devote most of our resources to the near present. It’s smart, not selfish, to focus our efforts mostly on our era.

Billions of people will be born in the next millennium who will solve our future problems.

First and foremost, we need to survive and advance.

3. Eliezer EAs:

We need to loudly disown any EAs who are credibly accused of inappropriate behavior.

Sociopaths will always attempt to infiltrate high-trust subcultures.

It is our job to constantly clean house of any dangerous people from our moral movement.

I take no pleasure in reporting EA’s failures, but it’s vital to fix these issues ASAP.

We’re not the first or last movement to be beset by charlatans.

But we can create a great example for how movements can regulate themselves.


“Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time…” – Winston Churchill, House of Commons, 11 November 1947

Democracy is messy. Capitalism is messy. EA is messy. But they get the best results.

We have to be willing to try many different potential solutions to solve all the world’s biggest problems.

EA’s core value lies in the open competition of these ideas.

This causes more debate and controversy. Which is good!

We should be spending much more time talking about the biggest existential risks and rewards.

The core of EA is about doing good better.

  • There’s nothing inherently wrong with this idea.
  • How can we fix our issues without losing our mission?

EA has already been successful on several missions.

Real effective altruism has been tried.

  • It’s how leaders like Bill Gates, Elon Musk and Sam Altman choose where to focus their work.
  • “In the case of Tesla, SpaceX, SolarCity, and PayPal … it really stemmed from when I was in college and trying to think: what would most affect the future in very likely a positive way? So the three areas I was quite sure would be positive were sustainable energy, the internet, and making life multi-planetary.” – Elon Musk

EA implores each of us to calculate our real impact and see how we can positively improve it.

When everything is reduced to cash flows, it’s easier to make money by moving money around vs. actually building things and improving the world.

The result is that Real GDP is up, but Real Real GDP is flat.

What if we all worked from first principles, instead of following established paths?

One of the best features of EA is that it forces people to check whether they’re really living their ideals.

It’s easy to spend years, even a whole lifetime, trying to win at an established career path.

But that’s not where the greatest impact is made.

The greatest impact comes from chasing the biggest missions with the best people for the longest time.

That’s effective altruism.

I’m still proud to be an effective altruist. Here’s why:

In 2009, before Effective Altruism was even named, when I was about to graduate from college, I wrote down the mission for my life, “to make the greatest positive impact on the greatest number of people.”

Most of the greatest founders, investors, activists, and politicians I’ve met are all driven by their own version of this mission, whether they call it EA or not.

We’re all tackling different problems and solutions, which is the beautiful part.

The sum of all this work is what makes effective altruism great.


How to Jump-Start EA Again

“Forget the money, because if you say that getting the money is the most important thing, you will spend your life completely wasting your time.” – Alan Watts, What If Money Was No Object

The whole world is stuck in a scary simulation of Goodhart’s Law – when a measure becomes a target, it ceases to be a good measure. Money was meant to measure value; now it’s the target.

Our society is increasingly disillusioned with the pursuit of money, but relatively few people have sought to replace that desire with something more productive.

Effective altruism has the answers that these people are seeking.

Key: EA isn’t prescriptive – it doesn’t tell you what problem to solve.

We build communities, systems, and tools that can accelerate any positive mission.

So what do you really want to do with your life?

If all you really want is just financial independence to retire early, this essay (and book) isn’t for you.

But if you agree with me that there’s more to life than money, that you want to be remembered for the dent you make in the universe and not just the one you make in your bank account, then EA is for you.

How much positive impact are you really making? How much more could you do?

Ask yourself some version of the Hamming Questions:

  1. “What do you think are the world’s biggest problems?”
  2. “Which ones are you solving?”

I’m writing this book to convince more people to take on these huge challenges.

Because if you’re not solving what you think is one of the world’s biggest problems, what are you really doing with your life?

We need to take the focus off the people in EA and return it to the problems.

Everything from politics to marriages to movements falls apart when things get personal, and succeeds when everyone focuses on the problems and solutions.

EA has saved millions of lives and can improve billions more. Let’s act like it.

  • So if you’re ethically driven to:
    • Maximize happiness and minimize suffering.
    • Maximize individual freedoms and minimize authoritarianism.
    • Maximize our species’ survival while minimizing existential risks.
  • Congrats! You’re an effective altruist.
    • Pick your #1 problem and let’s get to work!

This is Solution #1 on my list of The World’s Biggest Problems.

  • This idea is a work-in-progress. If you’d like to riff on it, hit me up @neilthanedar on Twitter!

Published by Neil Thanedar

Neil Thanedar is an entrepreneur, investor, scientist, activist, and author. He is currently the founder & chairman of Labdoor (YC W15), a consumer watchdog with $7M+ in funding and 20M+ users, and Air to All, a 501(c)3 nonprofit medical device startup. He previously co-founded Avomeen Analytical Services, a product development and testing lab acquired for $30M+ in 2016. Neil has also served as Executive Director of The Detroit Partnership and Senior Advisor to his father Shri Thanedar in his campaigns for Governor, State Representative, and US Congress in Michigan.