SiebeRozendal

3103 karma

Bio


Unable to work. I was community director of EA Netherlands but had to quit due to ME/CFS (presumably long covid). Everything I've written since 2021 was written with considerable brain fog, and I've been bad at maintaining discussions/replying to comments since.

I have a background in philosophy, risk analysis, and moral psychology. I also did some x-risk research. Currently most worried about AI and US democracy. Interested in biomedical R&D reform. 

Comments (463)

Worth noting that the $37.8B figure for the founders' pledges is based on Anthropic's $380B valuation from their February fundraise. Current secondary markets value Anthropic much more highly; e.g., Ventuals (a speculative valuation futures market) prices it at $850B[1], which would make those pledges alone worth $85B. Add in EA-aligned and -adjacent investors as well as employees, plus the potential for further increases in value, and we are looking at $100-200B worth of pledged/intended donations. This is an insane amount of philanthropic money.
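To spell out the scaling (same figures as above; the $850B number is speculative):

```python
# Back-of-the-envelope: how the pledge value scales with Anthropic's valuation.
# Figures from the comment above; the $850B number is speculative.

pledge_value_feb = 37.8e9          # pledge value at the February fundraise
valuation_feb = 380e9              # Anthropic's February valuation
pledged_fraction = pledge_value_feb / valuation_feb

valuation_ventuals = 850e9         # Ventuals' speculative secondary-market figure
pledge_value_now = pledged_fraction * valuation_ventuals

print(f"Pledged stake: {pledged_fraction:.1%}")                # ~9.9% of the company
print(f"Pledge value at $850B: ${pledge_value_now/1e9:.0f}B")  # ~$85B
```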

Besides your point about limited grantmaking and project capacity, I'd like to make two others: 
- As the Transformer piece notes, all this money will have a significant pro-Anthropic bias
- None of the founders' pledges are legally binding. I've previously proposed that this might be a worthwhile project to make it so, but it's obviously a sensitive subject.

There's also the OpenAI Foundation, which holds a 25.8% stake in OpenAI, currently worth ~$220 billion. Their recent hiring of Jacob Trefethen as Life Sciences Lead, formerly at Coefficient Giving, makes me hopeful that at least some of that money will be reasonably well spent, even if not on AI safety.

  1. ^

    There are some concerns about the reliability of this platform's price. But OpenAI's recent $850B valuation was ~28x their annual recurring revenue (ARR), and Anthropic's recent ARR was $30B, which at the same multiple would put them in the same ballpark. In any case, my general point remains that the actual pledged amount could become significantly larger than $37.8B.
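Spelling that ballpark out (rough figures, taken at face value):

```python
# The footnote's ballpark: apply OpenAI's valuation/ARR multiple to Anthropic.
# Both ARR figures are from the footnote above, taken at face value.

openai_multiple = 850e9 / 30e9     # $850B valuation / ~$30B ARR ≈ 28x
anthropic_arr = 30e9
implied_valuation = openai_multiple * anthropic_arr

print(f"ARR multiple: ~{openai_multiple:.0f}x")                       # ~28x
print(f"Implied Anthropic valuation: ${implied_valuation/1e9:.0f}B")  # ~$850B
```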

It seems like a worthwhile project to ask/pressure Anthropic's founders to make their pledges legally binding. 

Anthropic's founders have pledged to donate 80% of their wealth. Ozzie Gooen estimates that in a few years this could be worth >$40 billion.

As Ozzie writes, adherence to the Giving Pledge (the Gates one) is pretty low: only 36% of deceased original pledgers met the 50% commitment. It's hard to follow through on such commitments, even for (originally) highly morally motivated people.

Sounds like this would benefit from this method of analyzing online anecdotes to estimate effect sizes, which showed good results in early data and is being expanded: 

https://www.cs.toronto.edu/~nikita/natural/ 

There are advantages to play money, such as that players don't care as much about the time value of money. (It's also much easier to start and resolve markets, leading to many more markets.)
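A toy illustration of the time-value point (all numbers hypothetical):

```python
# Toy example of why time value of money deters real-money bets on
# long-dated prediction markets. All numbers are hypothetical.

price = 0.95        # price of a YES share the trader believes is certain to win
payout = 1.00       # payout at resolution
years = 2.0         # time until the market resolves
risk_free = 0.04    # assumed annual risk-free rate

annualized_return = (payout / price) ** (1 / years) - 1
print(f"Annualized return: {annualized_return:.1%}")  # ~2.6%
print(f"Risk-free rate:    {risk_free:.0%}")          # 4%

# Even a "sure thing" underperforms bonds here, so real-money traders won't
# bother correcting the price; play-money traders face no such opportunity cost.
```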

I sometimes think of this idea and haven't found anyone mentioning it with a quick AI search: a tax on suffering.

EDIT: there's a paper on this, though specific to animal welfare, that was shared on the forum earlier this year.

A suffering tax would function as a Pigouvian tax on negative externalities—specifically, the suffering imposed on sentient beings. The core logic: activities that cause suffering create costs not borne by the actor, so taxation internalizes these costs and incentivizes reduction.
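For concreteness, this is the textbook Pigouvian condition (standard economics, added here as background): with marginal private cost $\mathrm{MPC}$, marginal external cost $\mathrm{MEC}$ (the suffering), and marginal social cost $\mathrm{MSC}$, the tax is set to the external cost at the socially optimal activity level $q^{*}$:

$$\mathrm{MSC}(q) = \mathrm{MPC}(q) + \mathrm{MEC}(q), \qquad t^{*} = \mathrm{MEC}(q^{*})$$

The actor then faces $\mathrm{MPC}(q) + t^{*}$, which matches the social cost at $q^{*}$, so the suffering-causing activity is cut back to the efficient level.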

This differs from existing approaches (animal welfare regulations, meat taxes) by:

  • Making suffering itself the tax base rather than proxies like carbon emissions or product type
  • Creating a unified framework across different contexts (factory farming, research, entertainment, etc.)
  • Explicitly quantifying and pricing suffering

The main problems are measurement and administration. I imagine an institute would be tasked with producing guidelines/a calculation model, which could become pretty complex. Actually administering it would also be very hard, and there should be a threshold beneath which no tax is due, because it wouldn't be worth the overhead. I imagine an initial version wouldn't right away be "full EA", taking invertebrates into account; it should start with a narrow scope, but with the infrastructure in place for moral circle expansion.
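A minimal sketch of what such a calculation model with a threshold could look like; every category, weight, rate, and the threshold itself here is hypothetical, just to show the structure:

```python
# Hypothetical sketch of a suffering-tax calculation model.
# All categories, weights, rates, and the threshold are made up for illustration.

INTENSITY_WEIGHTS = {      # relative suffering intensity per hour (hypothetical)
    "mild": 0.1,
    "moderate": 1.0,
    "severe": 5.0,
}
RATE_PER_UNIT = 0.01       # tax in $ per weighted suffering-hour (hypothetical)
DE_MINIMIS = 1_000.0       # no tax below this, to avoid overhead (hypothetical)

def suffering_tax(activities: list[tuple[int, float, str]]) -> float:
    """activities: (number of beings, hours each, intensity category)."""
    units = sum(n * hours * INTENSITY_WEIGHTS[intensity]
                for n, hours, intensity in activities)
    tax = units * RATE_PER_UNIT
    return tax if tax >= DE_MINIMIS else 0.0

# e.g. a facility with 10,000 animals at moderate intensity for 1,000 hours each:
print(suffering_tax([(10_000, 1_000.0, "moderate")]))  # -> 100000.0
```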

It's obviously more a theoretical exercise than a practical near-term proposal, but here are a couple of considerations:

  • it's hard to oppose: it's easy to argue that carbon isn't important or that animals don't suffer, but it's harder to argue against taxing suffering directly
  • it's relatively robust in the long-term: it can incorporate new scientific and philosophical insights on wild animal welfare, non-vertebrate sentience, digital sentience, etc.
  • it's scale sensitive
  • it focuses the discussion on what matters: who suffers how much?
  • it incentivizes the private sector to find ways to reduce suffering

What are the latest growth metrics of the EA community? Or where can I find them? (I searched but couldn't find them)

A couple of thoughts, kinda long and rambly.

This gets discussed occasionally on the Manifold Discord and I wanted to share some skeptical points that one of the top forecasters (Semiotic Rivalry) made there:

  • "for me to go >5% on this [authoritarian takeover/coup] i'd have to see them openly disobeying orders from the supreme court or like, the very least should be killing the filibuster"
  • "I feel like the lightest first step of the fascist takeover would be to have the VP overrule the senate parliamentarian on what can be permitted to go into a reconciliation bill, which is totally legal and tons of Dems wanted Biden to do, and they failed to even do this"
  • The Supreme Court is still constraining him; e.g., Trump wasn't allowed to fire Federal Reserve governor Lisa Cook
  • Revealed preferences suggest people don't actually believe dictatorship/catastrophe to be very likely: they aren't moving abroad or stocking up on guns. (To which people replied that it's not easy to find a good place to live abroad, due to economics and language.)

 

This was largely in response to me saying that I find it hard to think through Trump/MAGA military (self-)coup possibilities. Although military self-coups appear to be rare in consolidated or backsliding democracies, they're not entirely unheard of, and it sure seems like Hegseth, Trump, etc. are working towards one. They are systematically dismantling military guardrails:

  • They keep pushing the envelope on deploying the military domestically (in conflict with the Posse Comitatus Act), which blurs the line around domestic deployment for both the population and the military
  • They've fired a lot of military leadership, as well as all the Judge Advocates General (JAGs), who generally serve as a constraint on executive overreach
  • They've pardoned war criminals (and, more broadly, J6'ers and Trump allies) and issued commands to commit war crimes, like the strikes on boats of non-combatants (alleged drug-traffickers)

The general pattern of purging appears to be that Trump/the administration gives an illegal or norm-breaking order that functions as a loyalty test: it forces everyone involved to comply, step down, or refuse to obey (which tends to get you fired; outside the Fed, the Supreme Court hasn't been adequately protecting against such firings).

The coup form I expect, if it happens, would not be a direct command to military generals, but an order to his most loyal militarized groups (e.g., red-state National Guards, ICE) to take control of the democratic/election process. Opposing military units would then have to coordinate on action, which would be very difficult. The general population could resist en masse (South Korea 2024-style), but so far protests have been small, and in the US there's a vocal and dangerous base supporting Trump. That said, base rates suggest a coup is still very unlikely, and coups are difficult to pull off. I don't know what probability I would give it; I'm mainly trying to understand the mechanisms here.

Other thoughts:

  • Trump attempted to overturn an election before
  • Orbán is often mentioned as a comparison (rightly so), but he was able to amend the Constitution in his first year due to Hungarian law, which is a major difference
  • An economic crisis would be a major cause of discontent, and in that light the AI boom is really unfortunate
  • Protests so far have been small; the "5 million" figure for the No Kings protests was [greatly exaggerated](https://bsky.app/profile/siebepersists.bsky.social/post/3lruu445wgk27) (posted on Bluesky, though I'm not at all active there otherwise). I think bigger protests will be necessary (but not sufficient)
  • I haven't even talked about AI, but it's a wild card that would probably favor Trump. Executives range from very appeasing (OpenAI, xAI) and appeasing (Google, Meta) to softly defiant (Anthropic)
  • People think Trump is too old and a unique figure, but I'm not confident that a successor wouldn't be as bad. At some point, they either put a successor on the ballot or Trump himself. A successor could pull power away from Trump and then lose; there's generally a lot of uncertainty here, which could dis-unite any coup-interested faction. However, I find the sentiment that "Trump is uniquely bad, and his successor won't have the same power, so it's not a concerning scenario" overconfident, and there are plenty of systemic reasons to expect a successor to be pretty bad

Pet peeve: stop calling short timelines "optimistic" and long timelines "pessimistic". These labels carry the unwarranted connotation that fast AI progress is desirable. Most people concerned about AI safety find short timelines dangerous! Instead, use "bullish" vs. "bearish", or just "short timelines" vs. "long timelines".
