Thanks. Hmm. The vibe I'm getting from these answers is P(extinction)>5% (which is higher than the XST you linked).
Ohh that's great. We're starting to do significant work in India and would be interested in knowing similar things there. Any idea of what it'd cost to run there?
Thanks for sharing. This is a very insightful piece. I'm surprised that folks were more concerned about larger-scale, abstract risks than about better-defined, smaller-scale risks (like bias). I'm also surprised that they are this pro-regulation (including a six-month pause). Given this, I feel a bit confused that they mostly support the development of AI, and I wonder what has most shaped their views.
Overall, I mildly worry that the survey led people to express more concern than they actually feel, because these results seem surprisingly close to my perception of the views of many existential risk "experts". What do you think?
Would love to see this for other countries too. How feasible do you think that would be?
Thanks for writing this up and sharing. I strongly appreciate the external research evaluation initiative and was generally impressed with the apparent counterfactual impact.
Thanks for your response Ben. All of these were on my radar but thanks for sharing.
Good luck with what you'll be working on too!
Congratulations! I'm looking forward to understanding the strategic direction you and CEA will pursue going forward. Do you have a sense of when you'd be able to share a version of these?
2.
- How do you define guided self-help? Do you mean facilitated group sessions?
- Do you have any specific papers/references that you've used for those estimates?
Thanks for writing this up. We're running an incubation pilot at Impact Academy and found this post very helpful as a reference class (for comparison in terms of success) as well as providing strategic clarity.
I'm curious, what were the best initiatives (inside and outside of EA) you came across in your search (e.g., y-combinator, charity entrepreneurship, etc.)?
Congratulations on running the pilot and getting these results!
They seem rather promising to me and above a threshold for testing more rigorously and scaling. However, a couple of questions:
Glad it was useful. Looking forward to seeing the course (fortunate that you had reached out to request feedback on it).
Great to see how concrete and serious the US is now. This basically means that models more powerful than GPT-4 have to be reported to the government.
Thanks for posting this!
I appreciate the overview and attempt to build conceptual clarity around this related cluster of ideas that you neatly refer to as Very Positive Futures.
To respond to some of your prompts:
Having thought more about it, I think the AI safety institute might be a continuation of the UK Frontier AI Taskforce. I don't know anything about the object-level output of the Taskforce but they've certainly managed to put together a great bunch of people as advisors and contributors (Yoshua Bengio, Paul Christiano, etc.). Very excited to see what comes out of this.
Thanks, Dave! Yes, we found that even the conservative estimate indicates that this can be a worthwhile investment and I feel more robustly good about folks investing in experimenting with high-quality services. That said, some initiatives have significantly higher ROI than 3.7 (our upper bound) so people will have to make individual judgment calls.
In terms of your considerations:
Some moderately strong and reasonable statements coming from a PM. I wonder what their vision for the AI safety institute is and what the expert panel might look like.
It seems a bit misplaced to say that the institute will be the first in the world, as there are already several institutes working on this (though he may mean the first within government).
Thanks for the comment Tee!
As mentioned in the post, this approach had many flaws. This is partly because we wanted to rely on published studies on the association between various conditions (e.g., stress) and productivity loss. Most of the studies we looked at relied on self-reported absenteeism and presenteeism (loss of productivity while at work due to lower performance). This means that these estimates don't include turnover which can indeed cause decreased organizational productivity, emotional challenges, and other costs. Overall, this might mean tha...
Thanks for sharing your thoughts and frustrations - I'm very curious to learn more about LMIC perspectives! I have the following reflections that agree with some aspects of your points while potentially disagreeing with others:
Thanks for sharing and congrats on being the first in your family to attend uni and having landed a promising job! :)
I'll keep this in mind when I encounter people who find that their career doesn't really fit with EA and feels like a mess.
On a personal note, when I first encountered EA, I recall thinking that many people also had very impressive CVs and felt some sense of inadequacy and imposter syndrome.
Ok, I feel a bit confused as to why a method wouldn't have a more substantial entry or description but also don't want to keep bothering you.
Maybe I misunderstood something, but my understanding was that there was an entry you awarded that related to significant overall impact, given its benefits across multiple cause areas. So I was wondering if this is something you could share.
Thanks for the post Inga.
When thinking about mental health as a cause area in EA, I think it's important to separate global mental health (e.g., anxiety and depression globally) from mental health amongst impact-oriented individuals (e.g., EAs). I refer to the latter as the development of impact-oriented individuals, as it wouldn't only consider mental health but also things such as skill building or character development, all with the intention of increasing their capacity for doing the most good.
Can I ask why you didn't mention Effective Peer Support?
Great reflections.
Agree that recognition and the associated feelings of gratitude should not be the main thing. But still, thanks to all of you who decide to pursue things that seem the best, perhaps even giving up on some version of your passion, and on the praise and status you'd get if you acted less altruistically and rationally.
Thanks for this. I notice that all of these reasons are points in favor of working on multiple causes and seem to neglect considerations that would go in the other direction. And clearly, you take those considerations seriously too (e.g., scale and urgency), as you recently decided to focus exclusively on AI within the longtermism team.
I agree with the overall claim of the post - i.e., that IQ is currently overrated within EA (although this will certainly depend on the context - i.e., it's much less the case in some contexts). That said, I do feel confused about several aspects - including which factors are the most important for impact.
For the record, I think that IQ is among the ~top 5 qualities that are appreciated de facto within EA, perhaps even the single most appreciated in some contexts, and that it is overrated.
I think it's overrated for two reasons.
1. Firstly, depending on the role...
Yeah, this could be the case. I'm just not sure that GPT-4 can be given enough context for it to be a highly user-friendly chatbot in the curriculum. But it might be the better of the two options.
Hi Peter, thanks for your work. I have several questions:
How do you decide on which research areas to focus on and, relatedly, how do you decide how to allocate money to them?
We do broadly aim to maximize the cost-effectiveness of our research work and so we focus on allocating money to opportunities that we think are most cost-effective on the margin.
Given that, it may be surprising that we work in multiple cause areas, but we face some interesting constraints and considerations:
There is significant uncertainty about which priority area is most impactful. The general approach to RP has been that we can sc
Most organizations within EA are relatively small (<20). Why do you think that's the case and why is RP different?
I’m not exactly sure and I think you’d have to ask some other smaller organizations. My best guess is that scaling organizations is genuinely hard and risky, and I can understand other organizations may feel that they work best and are more comfortable with being small. I think RP has been different by:
Working in multiple different cause areas lets us tap into multiple different funding sources, thus increasing the amount of money we w
Thanks for running with the idea! This is a major thing within education these days (e.g., Khan Academy). It seems reasonably successful, although Peter's example and the tendency to hallucinate make me a bit concerned.
I'd be keen on attempting to fine-tune available foundation models (e.g., GPT-3.5) on the relevant data and seeing how good a result one might get.
Hi Riley,
Thanks a lot for your comment. I'll mainly speak to our (Impact Academy) approach to impact evaluation but I'll also share my impressions with the general landscape.
Our primary metric (*counter-factual* expected career contributions) explicitly attempts to take this into account. To give an example of how we roughly evaluate the impact:
Take an imaginary fellow, Alice. Before the intervention, based on our surveys and initial interactions, we expected that she may have an impactful career, but that she is unlikely to pursue a priority path ba...
Thanks for this. I think it could've been more awesome by having a stronger statement on the importance of the EA ideas, values, and mindsets. I recognize that you somewhat mention this under reasons 2 and 5 but I would've liked to see it stated even more strongly.
Thanks so much for doing this. I'm very happy to see how the general public and university students seemed to be mostly unaware and unaffected by FTX. By being happy, I don't mean to imply that we should take the situation lightly and not learn from it. I'm curious about other groups such as young professionals. However, I am somewhat shocked to see the massive drop in trust in leadership (1/3 distrusting leaders). This is definitely a significant effect which might yield good consequences - e.g., people being more likely to develop their own views and be less inclined to defer to certain individuals.
Thanks for the model - I think it's useful.
I think it'd probably be more appropriate to say that wave 2 was x-risk (and not broad longtermism), and/or that longtermism became x-risk. Before reading your thoughts on the possibilities for the third wave, I spent a few moments forming my own. They were:
1. Target audience: More focus on Global South/LMIC.
2. Culture: Diversification and more ways of living (e.g., the proportion of Huel drinkers goes down).
3. Call-to-action: A higher level community/set of ideas (e.g., distilling and formalizi...
Thanks a lot for this. I eagerly read it last year and found several valuable takeaways. Looking forward to reading the foundation handbook!
Just inserting a high-level description for other readers:
I expected that their perspective would be too rigid (e.g., overly reliant on rigorous research on average effects and generalizing too strongly), cynical (as opposed to humanistic and altruistic), and overly focused on intelligence. Fortunately, my expectations were off. In fact, they were highly nuanced (emphasizing the importance of judgment and context), con...
Thanks for this! We'll soon be vetting talent - are there any resources you'd recommend for understanding and selecting talent?
Thanks so much for this reply - very informative.
This seems relatively aligned with my perspective, although the specifics of what the therapists said in relation to strong moral values matter, as moral perfectionism can be self-defeating. I'd also add that it seems as if older EAs are less concerned with EA-informedness than younger ones. For an example of why it might be beneficial to have an EA-informed coach or therapist, I quite liked this podcast episode on 80K.
Thanks for this!
Would it be possible for you to provide information on the quantity/frequency of the following findings?
1. "People find it difficult to find therapists who accept the values of Effective Altruism or whom they can trust"
2. "Thoughts on whether an EA-aligned therapist is necessary: some say yes, others no; some say it’s helpful to speak with therapists who are outside of the EA community."
Also, for 2, I'm curious whether they made claims about whether it was necessary or whether it was optimal. Seems quite different and my experience tells me...
Might be overly short due to the recent advancements and recency bias (e.g., would be interesting to see in a few weeks) but that's a massive sample size!
Seems good to me. I'm still making up my mind as to whether it should be seen as its own cause area as opposed to a framing and method that can be used to enhance other cause areas.
Thanks for your reply.
I agree that it's highly complex and can positively affect other cause areas, and I'm happy to jam more on this. However, I also think it's important not to assume that it's a panacea that's good for everything. E.g., I do worry that focusing too much on well-being could be bad for the world, as one starts to act in ways that optimize for that and neglects the significance of other cause areas. But I think it's plausibly a really big thing, which is why I'm exploring it. I've sent you a DM to set up a call.
Good luck with this. Happy to have more appropriately great coaches available for people ambitiously trying to do a lot of good.
That makes sense. We might do some more strategic outreach later this year, where a report like this would be relevant, but for now I don't have a clear use case in mind, so it's probably better to wait. Approximately how much time would you need to run this?