I find exurb1a a bit too nihilistic. The creator has also been accused of highly abusive behavior, so I feel iffy about the channel. (Sorry, no time to search for the link for you.)
" I think the EA Forum needs to up its game in terms of how it handles infohazards and provides guidance on their thinking in this area."
+1 to this
Maybe the correct way to promote EA is clickbait?
7 Things To Do To Make A Man Fall In Love With You
I'm not sure if you're implying this: 'the neutral point of welfare is close to the point at which someone commits suicide'
If so, I'd argue that these points are often very far apart: there's tremendous evolutionary and social pressure against suicide, as well as that people can suffer immensely but hope the future will be better.
Therefore, I don't expect suicide rates to be very predictive of quality of life.
Great work! :) Very happy to see the increase in rigour over earlier estimates. If your research is correct (and, in my casual reading of it, I can find no reason why it wouldn't be) this opens up a whole new area of funding opportunities in the global health & wellbeing space!
I'm also excited about the rest of your research agenda. It seems very ambitious ;)
Some things I find interesting:
"we found evidence that group psychotherapy is more effective than psychotherapy delivered to individuals which is in line with other meta-analyses (Barkowski et al.,... (read more)
Last question: what's HLI's current funding situation? (Current funding, room for funding in different growth scenarios)
Our funding situation is, um, "actively seeking new donors"! We haven't yet filled our budget for 2022.
Our gap up to the end of 2022 on our lean budget is £120k; that's the minimum we need to 'keep the lights on'.
Our growth budget's gap to the end of 2022 is probably £300k; I'm not sure we could efficiently scale up much faster than that. (But if someone insisted on giving me more than that, I would have a good go!)
I have a concept for a story, but not the time/energy to finish it before Friday. I'm posting it here, in case anyone wants to have a go at turning it into a story! If it wins anything, some kind of split of the prize money should be decided on.
The concept is inspired by Harsanyi's Veil of Ignorance: if you didn't know which person you'd be, what kind of world would you want to be in? Also inspired by Andy Weir's The Egg.
The story would start with an "empty soul" as MC. They have heard only a few (very positive) things about Life, and are really excited abo... (read more)
The ending paragraph seems strange though: Simon just argued that the universe is at stake and that the MC is wrong, and then hands over the decision?
I suppose that you want to put the reader in the shoes of the MC, but I don't think that this is a good way to do that.
Can you give some examples of work that you'd find exciting enough to accept, and your selection criteria/heuristics?
To some extent TBD: this is a bit experimental. That said, some examples of who I'd imagine this might be a good fit for:
But also just excited to see what applications we get!
I like the concept, but it was a little confusing to be honest. I interpreted the wonderful world as the future, and was very confused about the travel between the worlds (still am). Are they different planets? Is it time travel? Dimensional travel?
Due to this, the literal hearing of screams was also unclear, dulling the final twist (which I like!).
Lastly, I felt odd with the last two paragraphs. I find them quite moralizing, and I'd find the piece stronger without them. I think that's a big challenge with this whole contest: to teach a lesson and motivate,... (read more)
Oops! Sorry Peter, not my intention at all!
I think this is an excellent contribution to the forum: strong upvote! ;)
Retracting my comment because it's unclear what kind of event (game, ritual, experiment) this is.
Yeah, my comments should be read as [in-game] comments, not as [ritual] comments, and it's all meant in good nature!
Damn, seeing the social complexity of this event with the uncertainty about what it is quickly made it feel more like a social minefield than a game.
Er.. I'm reading Khorton's post now, and apparently people are viewing this game/event thing very differently, so I think with that meta-uncertainty I am unwilling to ruthlessly strategize.
Also, the reference class of launches doesn't fully represent the current situation: last launch was more of a self-destruct. This time, it's harming another website/community, which seems more prohibitive. So I think the prior is lower than 40%.
There is a chance to remove MAD by removing Peter's launch codes' validity, per my request.
I have also used my strong downvote capability to reduce the signal of Peter's message. I hereby apologize for any harm outside of this game (Peter's total karma), but I saw no other way.
I motion to
I would ask how users are chosen, but I imagine that making that knowledge... (read more)
Everyone cares about something, so maybe we should precommit to something more .. deterring? It should likely be something that's not really bad, but still somewhat uncomfortable for the person to experience. (I realize that going down this path of thinking might produce actual outside-game harm)
I'm curious: how much are you spending on this on a yearly basis, roughly? It seems a very effective thing to develop a real tight and collaborative community.
Linking to an EA Slack is definitely not advertising ;)
Interesting! Seems intuitively right.
I wonder though: how would this affect expected value calculations? Doesn't this have far-reaching consequences?
One thing I have always wondered about is how to aggregate predicted values that differ by orders of magnitude. E.g. person A's best guess is that the value of x will be 10, person B's guess is that it will be 10,000. Saying that the expected value of x is ~5,000 seems to lose a lot of information. For simple monetary betting, this seems fine. For complicated decision-making, I'm less sure.
Let's work this example through together! (but I will change the quantities to 10 and 20 for numerical stability reasons)
One thing we need to be careful with is not mixing the implied beliefs with the object level claims.
In this case, person A's claim that the value is mA=10 is more accurately a claim that the beliefs of person A can be summed up as some distribution over the positive numbers, e.g. a log-normal with parameters μA=log(mA) and σA. So the density of A's beliefs is fA(x) = (1 / (x σA √(2π))) · exp[−(ln x − μA)² / (2σA²)]... (read more)
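A minimal sketch of the point being made (my own illustration, not HLI's or the commenter's method; the shared σ is an assumption, since the comment leaves it unspecified): contrast the arithmetic mean of the two point estimates with averaging in log-space, and build a simple linear opinion pool over the implied log-normal densities.

```python
import math

def lognormal_pdf(x, mu, sigma):
    """Density of a log-normal distribution with parameters mu, sigma at x > 0."""
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
        x * sigma * math.sqrt(2 * math.pi)
    )

# Point estimates from the worked example (10 and 20, as in the comment above).
m_a, m_b = 10.0, 20.0
sigma = 1.0  # assumed shared uncertainty parameter (not specified in the comment)

mu_a, mu_b = math.log(m_a), math.log(m_b)

# Arithmetic mean of the point estimates: loses the multiplicative structure.
arith = (m_a + m_b) / 2  # 15.0

# Geometric mean, i.e. averaging in log-space: respects orders of magnitude.
geo = math.exp((mu_a + mu_b) / 2)  # sqrt(10 * 20) ~ 14.14

# Linear opinion pool: average the two belief densities point-wise,
# rather than collapsing each belief to a single number first.
def pooled_density(x):
    return 0.5 * (lognormal_pdf(x, mu_a, sigma) + lognormal_pdf(x, mu_b, sigma))

print(arith, round(geo, 2), round(pooled_density(geo), 4))
```

With far-apart estimates like 10 and 10,000, the gap between the two summaries becomes dramatic (arithmetic mean ~5,005 vs geometric mean ~316), which is exactly why summarising by a single expected value loses information.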
(highly speculative and I see a lot of flaws, but I can see it scaled)
EA training institute/alternative university. Kind of like creating navy seals: highly selective, high dropout rate, but produces the most effective people (with a certain goal) in the world.
My hunch is that that isn't a $100m-per-year project within reasonable time frames (the same is true of several other suggestions in this thread). Cf. Kirsten's post.
let's add a high school/prep school to it ;-)
Seriously though, I think having an institute more separate than GPI would not be great for disseminating research and gaining reputation. It would be nice though for training up EA students.
"2. Judgement calibration test
The Judgement Calibration test is supposed to do two things: first, make sure that students have really read the material and know its content; and second, test whether they can properly calibrate their confidence regarding the truth of their own answers."
This is really cool Simon, and awesome that you actually got permission to give actual grades by this mechanism. Curious how it works out in practice!
On 2: I know very little about the Chernobyl meltdown and meltdowns in general, but those numbers seem to be referring to the actual consequences of the meltdown. My understanding is that there was a substantial emergency response that limited the severity of the meltdown. I'm not sure, but I can imagine a completely unmanaged meltdown being substantially worse? Also, on 1: I have no idea how hard it is to turn a nuclear power plant off, but I doubt that it's very easy for outsiders with no knowledge (and who are worried about survival, so don't have time to research how to do it safely?).
Sure, but the delta you can achieve with anything is small, depending on how you delineate an action. True, x-risk reduction is on the more extreme end of this spectrum, but I think the question should be "can these small deltas/changes add up to a big delta/change? (vs. the cost of that choice)" and the answer to that seems to be "yes."
Is your issue more along the following?
That sounds like a better title to me :) Kudos on the adaptation.
Thanks for the highly detailed post! Seems like it was a cool event.
Nitpicking: this is the second time I see an evaluation described as "postmortem" and it puts me on the wrong foot. To me "postmortem" suggests the project was overall a failure, while it clearly wasn't! "Evaluation" seems like a better word?
I wrote some thoughts on risk analysis as a career path in my shortform here, which might be somewhat helpful. I echo people's concern that this program focuses overly much on non-anthropogenic risk.
I also know an EA that did this course - I'll send her details in a PM. :)
Giving Green was fortunate enough to receive a grant from the EA Infrastructure fund, with the express purpose of addressing this criticism, by bringing our methods closer in line to that of the EA community and implementing other suggestions.
This is really interesting! I am happy to see that the cooperative nature of that disagreement is being continued, and I look forward to the progress of the person who ends up taking this role. It sounds like it requires a very high level of qualifications (good researcher, good ops skills, communications, management...), so I hope you're able to find someone!
I think it stands for "depersonalisation" and "derealisation"
This is a small write-up of when I applied for a PhD in Risk Analysis 1.5 years ago. I can elaborate in the comments! I believed doing a PhD in risk analysis would teach me a lot of useful skills to apply to existential risks, and it might allow me to directly work on important topics. I worked as a Research Associate on the qualitative side of systemic risk for half a year. I ended up not doing the PhD because I could not find a suitable place, nor do I think pure research is the best fit for me. However, I still believe more EAs should study somethi... (read more)
Good points! I broadly agree with your assessment Michael! I'm not at all sure how to judge whether Sagan's alarmism was intentionally exaggerated or the result of unintentional poor methodology. And then, I think we need to admit that he was making the argument in a (supposedly) pretty impoverished research landscape on topics such as this. It's only expected that researchers in a new field make mistakes that seem naive once the field is further developed.
I stand by my original point to celebrate Sagan > Petrov though. I'd rather celebrate (and learn f... (read more)
Ah yes, that makes sense and I hadn't thought of that
Have you considered running different question sets to different people (randomly assigned)?
It could expand the range of questions you can ask.
I have a concept of paradigm error that I find helpful.
A paradigm error is the error of approaching a problem through the wrong, or an unhelpful, paradigm. For example, trying to quantify the cost-effectiveness of a longtermist intervention when there is deep uncertainty.
Paradigm errors are hard to recognise, because we evaluate solutions from our own paradigm. They are best uncovered by people outside of our direct network. However, it is more difficult to productively communicate with people from different paradigms as they use different language.
It is... (read more)
I agree with this: a lot of the argument (and related things in population ethics) depends on the zero level of well-being. I would be very interested to see more work on figuring out what/where this zero level is.
I have recently been toying with a metaphor for vetting EA-relevant projects: that of a mountain climbing expedition. I'm curious if people find it interesting to hear more about it, because then I might turn it into a post.
The goal is to find the highest mountains and climb them, and a project proposal consists of a plan + an expedition team. To evaluate a plan, we evaluate
Great report! I have two questions for you:
1. On the following:
There are already many ongoing and upcoming high-quality studies on psychedelic-assisted mental health treatments, and there are likely more of those to follow, given the new philanthropic funding that has recently come into the area. (p. 45-46)
Based on the report itself, my impression is that high-quality academic research into microdosing and into flow-through effects* of psychedelic use is much more funding-constrained. Have you considered those?
2. Did you consider more organisati... (read more)
I was confused about the usage of the term drug development as it sounds to me like it's about the discovery/creation of new drugs, which clearly does not seem to be the high-value aspect here. But from the report:
Drug development is a process that covers everything from the discovery of a brand new drug for treatment to this drug being approved for medical use.
I speculate that the particulars of the psychedelic experience may drive rescaling like this in an intense way.
I also think that the psychedelic experience, as well as things like meditation, affects well-being in ways that might not be captured easily. I'm not sure if it's rescaling per se. I feel that meditation has not made me happier in the hedonistic sense, but I strongly believe it's made me optimize less for hedonistic wellbeing, and in addition given me more stability, resilience, better judgment, etc.
I recently moved to a (nearby) EA hub to live temporarily with some other EA's (and some non-EA's), while figuring out my next steps in my life/career.
This has considerably increased my involvement. Being able to talk about EA over lunch and dinner, and to join meetups that are 5 minutes away, makes a big difference. As does finding nice people I connect with socially/emotionally.
I suppose COVID had somewhat of a positive influence here too: I am less likely to attend a wide range of events, because I don't know people's approaches to safety. This leaves more time for EA.
Although communicating the precise expected resilience conveys more information, in most situations I prefer to give people ranges. I find it a good compromise between precision and communicating uncertainty, while remaining concise and understandable for lay people and not losing all those weirdness credits that I prefer to spend on more important topics.
This also helps me epistemically: sometimes I cannot represent my belief state in a precise number because multiple numbers feel equally justified or no number feels justified. However, there are often bounds...
Just to clarify: Focusmate isn't meant for talking about your work, so most people don't try to find people with in-depth knowledge. I mostly don't explain things in detail and don't feel like I need to. It's more an accountability thing and a way to share general progress (e.g. "I wanted to get 3 tasks done: write an email, draft an outline for a blog post, and solve a technical issue for my software project. I got 2 of them done, and realized I need to ask a colleague about #3, so I did that instead.").
Thanks for the elaborate reply!
I think there's a lot of open space in between sending out surveys and giving people binding voting power. I'm not a fan of asking people to vote on things they don't know about. However, I have in mind something like "inviting people to contribute in a public conversation and decision-making process". Final decision power would still be with CEA, but input is more than one-off, the decision-making is more transparent, and a wider range of stakeholders is involved. Obviously, this does not work for all ... (read more)
Hi Max, good to read an update on CEA's plans.
Given CEA's central and influential role in the EA community, I would be interested to hear more on the approach on democratic/communal governance of CEA and the EA community. As I understand it, CEA consults plenty with a variety of stakeholders, but mostly anonymously and behind closed doors (correct me if I'm wrong). I see lack of democracy and lack of community support for CEA as substantial risks to the EA community's effectiveness and existence.
Are there plans to make CEA more democratic, including in its strategy-setting?
Global society will have a lot to learn from the current pandemic. Which lesson would be most useful to "push" from EA's side?
I assume this question is in between the "best lesson to learn" and "lesson most likely to be learned". We probably want to push a lesson that's useful to learn, and that our push actually helps to bring it into policy.
Given the high uncertainty of this question, would you (Toby) consider giving imprecise credences?