Yarrow Bouchard

80 karma · Joined May 2023 · Seeking work · medium.com/@strangecosmos

Bio

pronouns: she/her or they/them

Comments (34)

The $30M for SRF was a one-time windfall; its annual income and expenditures haven't risen to anywhere near $20M.

Your question is hard to read and understand the way it's written. Try using a human proofreader or a software tool like Grammarly.

How can you estimate cost-effectiveness for scientific/medical research?

It's as if trillions of dollars per year were spent on firefighting but only millions of dollars per year were spent on fire prevention.

I was unaware of these resignations! Why did Will resign? Was it because of his association with SBF? Will doesn’t say why he resigned in the source you linked. He links to a post that’s extremely long and I couldn’t immediately find a statement.

The most prominent charity evaluator is GiveWell, which makes a list of top charities working in global health. If you're interested in other cause areas, like animal welfare, there are other evaluators. Does that help answer your question?

I think the EA community can, should, and will be judged by how we deal with bad behaviour like fraud, discrimination, abuse, and cultishness within the community. 

Who knew what about Sam Bankman-Fried’s crimes when? As I understand it, an investigation is still underway and, as far as I know, nobody who enabled or associated with SBF has yet stepped down from their leadership positions in EA organizations. Not necessarily saying anyone should, but I’m not sure I see enough of a reckoning or enough accountability with regard to the FTX/Alameda fraud.

Has the EA community done enough to rebuke Nick Bostrom’s racism? The reaction seems dishearteningly mixed.

What will the EA community ultimately do about the allegations of abuse surrounding Nonlinear, once the organization posts its much-awaited response to those allegations? This is something to watch.

There are disturbing accounts of Leverage Research and, to a lesser extent, CFAR functioning much like cults. That’s pretty weird. How many communities have two, or one and a half, cults pop up inside them? What are the structural reasons this might be happening? Has anything been done that might prevent another EA-adjacent cult from arising again?

I’m not trying to be negative. I’m just trying to give constructive suggestions about what would improve EA’s reputation. I think there are a lot of lovely people and organizations in the EA sphere. But we will — and should — be judged based on how we deal with the minority of bad actors. 

but seems so far away technologically it may as well be sci-fi.

Further away and more sci-fi than AGI?

There are a few things to consider.

  1. One of the best ways to prevent the creation of a misaligned, “unfriendly” AGI (or to limit its power if it is created) is to build an aligned, “friendly” AGI first. 
  2. Similarly, biological superintelligence could prevent or provide protection from a misaligned AGI.
  3. The alignment problem might turn out to be much easier than the biggest pessimists currently believe. It isn’t self-evident that alignment is super hard. A lot of the arguments that alignment is super hard are highly theoretical and not based on empirical evidence. GPT-4, for example, seems to be aligned and “friendly”. 
  4. “Friendly” AGI could mitigate all sorts of other global catastrophic risks like asteroids and pandemics. It could also do things like help end factory farming — which is quite arguably a global catastrophe — by accelerating the kind of research New Harvest funds. On top of that, it could help end global poverty — another global catastrophe — by accelerating global economic growth. 
  5. Pausing or stopping AI development globally might just be impossible or nearly impossible. It certainly seems extremely hard. 
  6. Even if it could be achieved and enforced, a global ban on AI development would create a situation where the least conscientious and most dangerous actors — those violating international law — would be the most likely to create AGI. This would perversely increase existential risk. 

I highly recommend the book Consciousness Explained by philosopher Daniel C. Dennett.
