Co-founder of Nonlinear, Charity Entrepreneurship, Charity Science Health, and Charity Science.
If you like that documentary, you might like Up as well. It's a documentary series that follows 14 kids in the UK, starting at age 7, and checks back in on their lives every 7 years.
They tried to make the group representative, but representative of what seemed important in 1964 England, which mostly meant class.
It's really fascinating. One guy becomes homeless and ends up being a politician. Another is really successful but feels terrible because all of his friends are even more successful. There's a more or less happy family that seems content with a pretty average life. Etc.
Not even close to representative of the world's sentient beings, but nevertheless, way more representative than I ever get talking to my social circle. Also really cool to get a longitudinal sense of a person, as opposed to a snapshot.
You might also like:
Also, thanks for sharing this! I love these sorts of documentaries and am so going to watch it.
Good question! So, that's important, but I'm less worried about this because:
In most endeavors, you expect to receive many nos before receiving a yes (e.g. applying to schools, jobs, publishing papers/books, founding startups, etc.). In EA, it's common for people to receive one no and give up.
I think this would only make sense in a field where talent/value is easy to spot and evaluate and there are good feedback loops. But AI safety is far more like evaluating startup founders than evaluating bridge-builders.
Except even more difficult to evaluate, because at least with for-profit founders, you find out years later if they made money or not! With ethics, you can't even tell if you're going in the right direction!
If that's the case, we should have more evaluators, so that fewer people slip through the cracks.
I discuss something similar in another comment thread here.
Good question! Here are a few thoughts on that:
You can tell if somebody is a good bridge-builder. We have good feedback loops on bridges and we know why bridges work. For bridges, you can have a small number of experts making the decisions and it will work out great.
However, with startups, nobody really knows what works or why. Even with Y Combinator, potentially the best startup evaluator in the world, the vast majority of their bets don’t work out. We don’t know why startups work and the feedback loops are slow and ambiguous.
Charity startups and projects are more like for-profit startups, but they’re actually even harder to evaluate. At least with for-profits, you can eventually tell whether something is profitable. With impact, you can never know for sure. Like, we can still debate whether Eliezer has been net positive or not because of his potential influence on the launch of OpenAI. And we can even question whether AMF is net positive, because of its flow-through effects on factory-farmed animals. Heck, we can even question the whole framework of consequentialism, and maybe it’s better to be a deontologist, etc.
So, given that Y Combinator misses tons of opportunities in a field with better feedback loops and a better understanding of how things work, we should expect that to be even more the case for large EA funders.
With YC, at least everybody’s trying to maximize the same goal: money. With nonprofits, you might actually be pursuing different goals. Even if everybody’s a utilitarian, there are a bunch of different sorts of utilitarians you can be.
Different people can spot different types of talent or theories of change based on their background. For example, people who’ve spent their entire lives in academia might be better at spotting academic talent but less good at spotting entrepreneurial talent, and vice versa.
Right now it’s much harder to get funding if you’re not based in the Bay Area or London. This will help fix that.
Big funders usually don’t have the time to process smaller grants, leading to a lot of people missing out.
Due to time constraints, big EA funders often have only one person review an application before making a decision. This can lead to all sorts of noise in the assessments: worse decisions because the reviewer is hungry, tired, distracted, or feeling emotional, doesn’t know much about the field, misunderstood the application, is biased for or against the applicant, etc.
I remember reading an article here about grant applications being noisy but can't find it. Kat-points to anybody who finds it and links it in a reply!
Finally, I’ve definitely seen a lot of people rejected for funding who I think were doing good work or who went on to do it anyway. It’s really easy for people to be refused funding for all sorts of reasons.
In general, I really want to push back against the meme in our community that if you don’t get funding from one of the big EA funders, that must mean your project isn’t good.
For most things in this sort of category, even the absolute best have to try many times before they get accepted. Even the best scientists have to apply to a lot of different schools and grants. Even the best authors get rejected from publishing companies. Even the best founders have to ask dozens to hundreds of investors before they get funded. Many people who’ve been rejected by tons of EA orgs for jobs or grants have gone on to do great things.
There’s room for disagreement on how to do the most good, and that’s what I love about EA. And now, hopefully, with more diverse funders, we can turn that productive disagreement into action, and then impact.
Entertaining and useful? Kat approves.
Also, wanted to highlight the list of AI alignment communities I found on the Resource Rock. Found a lot of really cool things there that I didn't know about before.
Thank you so much for making this!
Good question! It's hard to say. I suspect that meditation experience won't be particularly relevant because most of the time when people "meditate", it's just concentration practice, which is not particularly relevant to these techniques.
I had probably a sum total of 1.5 hours of loving-kindness practice under my belt before this, so I don't think that'll be particularly relevant either.
Could be an interesting thing to measure though if I end up doing more studies on it.
It happened about a year ago and hasn't changed since :)
That's even though last year was pretty hard for me in a lot of ways, and usually I would have been crushed by everything.
I did do the occasional maintenance session. Maybe five 30-minute sessions over the year? Hard to say how much of a difference those made. Internally, it feels like very little, but they could also be critical.
The URL is only included for some of the sub-channels on some of the platforms, but the title, author, and source are always included.