toonalfrink

Comments

AGI Safety Fundamentals curriculum and application

I have added a note to my RAISE post-mortem, which I'm cross-posting here:

Edit November 2021: there is now the Cambridge AGI Safety Fundamentals course, which promises to be successful. It is enlightening to compare that project with RAISE. Why is that one succeeding where this one failed? I'm quite surprised to find that the answer isn't so much about more funding, more senior people to execute it, more time, etc. They're simply using existing materials instead of creating their own. This makes it orders of magnitude easier to produce the thing: you can just focus on delivery. Why didn't I, or anyone around me, think of this? I'm honestly perplexed. It's worth thinking about.

A Red-Team Against the Impact of Small Donations

You might feel that this whole section is overly deferential. The OpenPhil staff are not omniscient. They have limited research capacity. As Joy's Law states, "no matter who you are, most of the smartest people work for someone else."

But unlike in competitive business, I expect those very smart people to inform OpenPhil of their insights. If I personally had an insight into a new giving opportunity, I would not proceed to donate; I would proceed to write up my thoughts on the EA Forum and get feedback. Since there's an existing popular venue for crowdsourcing ideas, I'm even less willing to believe that large EA foundations have simply missed a good opportunity.

I would like to respond specifically to this reasoning.

Consider a scenario in which a random (i.e. probably not EA-affiliated) genius comes up with an idea that is, as a matter of fact, high-value.

Simplifying a lot, there are two possibilities here: (X) their idea falls within the window of what the EA community regards as effective, or (Y) it does not.

Probabilities for X and Y could be hotly debated, but I'm comfortable stating that the probability of X is less than 0.5. That is, we may have a high success rate within our scope of expertise, but the share of good ideas that EA can recognize as good is not that high.

The ideas that reach OpenPhil via the EA community might be good, but not all good ideas make it through the EA community.

How do EAs deal with having a "weird" appearance?

To me, reducing your weirdness is equivalent to defection in a prisoner's dilemma, where the least weird person gets the most reward but the total reward shrinks as the total weirdness shrinks.

Of course you can't just go all-out on weirdness, because the cost you'd incur would be too great. My recommendation is to be slightly more weird than average. Or: be as weird as you perceive you can afford, but not weirder. If everyone did that, we would gradually expand the range of acceptable things outward.
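
To make the payoff structure concrete, here is a minimal public-goods sketch in Python. Everything in it (the constants B and C, the linear payoff function, the group of four) is an illustrative assumption of mine, not something from the original question; it just shows why "act less weird" can be a dominant strategy even when everyone acting less weird leaves the group worse off.

```python
# Toy public-goods model of the weirdness dilemma described above.
# B and C are made-up constants, chosen so that C > B (toning your
# weirdness down is individually rational) while n * B > C (the group
# as a whole is better off when people stay weird).

B = 1.0  # shared benefit each person gets per unit of anyone's weirdness
C = 2.0  # private cost you pay per unit of your own weirdness

def payoff(my_weirdness, everyone):
    """Weirdness widens the window for all (B * total weirdness) but
    costs the weird person privately (C * their own level)."""
    return B * sum(everyone) - C * my_weirdness

all_weird   = [1.0, 1.0, 1.0, 1.0]  # everyone stays weird
solo_normal = [0.0, 1.0, 1.0, 1.0]  # I alone tone it down

print(payoff(1.0, all_weird))    # 2.0 -> my payoff if we all cooperate
print(payoff(0.0, solo_normal))  # 3.0 -> defecting beats cooperating...
print(payoff(0.0, [0.0] * 4))    # 0.0 -> ...but mutual defection is worst
```

Under these assumed numbers, each unit of weirdness costs its bearer C - B = 1 on net while adding n * B - C = 2 to the group, which is the "be slightly weirder than average" logic in miniature.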

How many people should get self-study grants and how can we find them?

Because if there is excess funding and fewer applicants, I'd assume such applicants would also get funding.

I have seen examples of this at EA Funds, but it's not clear to me whether this is being broadly deployed.

How many people should get self-study grants and how can we find them?

Let's interpret "study" as broadly as we can: is there not something that someone can do on their own initiative, and do better if they have time, that increases their leadership capacity?

How many people should get self-study grants and how can we find them?

I think the biggest constraint for having more people working on EA projects is management and leadership capacity. But those aren't things you can (solely) self-study; you need to practice management and leadership in order to get good at them.

What about those people who already have management and leadership skills, but lack things like:

  • Connections with important actors
  • Awareness of the incentives and the models of the important actors
  • Awareness of important bottlenecks in the movement
  • Background knowledge as a source of legitimacy
  • Skin in the game / a track record as a source of legitimacy

If I take my best self as a model for leadership (which feels like a status grab, but I hope you'll excuse me; it's the best data I have), then good leadership requires a lot of affinity/domain knowledge/vision/previous interactions with the thing being led. Can this not be cultivated?

How many people should get self-study grants and how can we find them?

There is also a significant loss caused by moving to a different town, namely the loss of important connections with friends and family at home, but we're tempted not to count that.

What high-level change would you make to EA strategy?

I would train more grantmakers. Not because they're necessarily overburdened but because, if they had more resources per applicant, they could double as mentors.  

I suspect there is a significant set of funding applicants that don't meet the bar but would if they received regular high-quality feedback from a grantmaker.

(like myself in 2019)

List of EA funding opportunities

I'd recommend putting the Airtable at the top of your post to make it the Schelling point.

Would an EA have directed their career on fixing the subprime mortgage crisis of '07-'08 before it happened?

What would it have taken to do something about this crisis in the first place? Back in 2008, central bankers were under the assumption that the theory of central banking was completely worked out. Academics were mostly talking about details (tweaking the Taylor rule, basically).

The theory of central banking is centuries old. What would it have taken for a random individual to overturn that establishment, including the culture and all the institutional interests of banks? Are we sure that no one was trying to do exactly that anyway?

It seems to me that it would have taken a major crisis to change anything, and that's exactly what happened. And now there are all kinds of regulations being implemented for posting collateral around swaps and stuff. It seems that regulators are fixing the issues as they come up (making the system antifragile), and I don't see how a marginal young naive EA would have the domain knowledge to make a meaningful difference here.

And that goes for most fields. Unless we basically invent the field (like AI Safety) or the strategy (like comparing charities), if the field is sufficiently saturated with smart and motivated people, I don't think EAs have enough domain knowledge to do anything. In most cases it takes decades of work to get anywhere.
