I found this post very interesting and useful for my own thinking about the subject.
Note that while the conclusions here are intended for the OP specifically, there's actually another striking conclusion that goes against the views of many in this community: we need more evidence! We need to build stronger AI (perhaps in more strictly regulated contexts) in order to have enough data to reason about the dangers from it. The "arms race" of DeepMind and OpenAI is not existentially dangerous to the world, but is rather contributing to its chance of survival...
Hi Xueyin!
While I'm not currently working on a career plan, I am also suffering from a disability which limits my ability to work, so I wanted to offer my sympathy. Sadly, disabilities are not very visible in EA, but rest assured you're not alone in dealing with one.
A small word of advice, perhaps: in my experience, most (though not all) career paths and programs are aimed at able-bodied people who can devote a full work week to their career. Finding ways to accommodate a disability will often require thinking outside the box, being assertive and direct, and asking organizers and contacts what can be done differently.
Good luck!
I think this is spot on. There have been many discussions on the forum proposing rules like "Never hit on women during the day at EAG, but it's ok at afterparties". And they're all too blunt on the one hand, while failing to prevent people from being a**holes on the other.
How many examples do you have of elites making the right decisions for a larger group? And out of how many elites trying to do that in general?
I've been vocal about my view that the community should have a voice here (maybe specifically in CEA; other stakeholders should be involved for other parts of EVF). But widening the boards is a minimal step in the right direction.
Hi, Kaleem and Guy!
This is Miranda Kaplan, communications associate at GiveWell. I'll answer both questions here, since they're closely related.
This adjustment updated GiveWell's overall impression of deworming by around 10%. But the bottom-line takeaway on deworming—which is that it's one of the most cost-effective programs we know of in some locations, but we have a higher degree of uncertainty about it than we do our top charities—hasn't changed much, and we think that should probably continue to be the takeaway for followers of our work.
You can s...
I think donors motivated by EA principles would be making a mistake, and leaving a lot of value on the table, by donating to GiveDirectly or StrongMinds over GiveWell's recommendations.
Not going into the wider discussion, I specifically disagree with this idea: there's a trade-off here between estimated impact and things like risk, paternalism, and scalability. If I'm risk-averse enough, or give some weight to being less paternalistic, I might prefer donating to GiveDirectly - which I indeed do, despite choosing to donate to AMF in the past.
(In practi...
What I mean is "these forecasts give no more information than flipping a coin to decide whether AGI would come in time period A vs. time period B".
I have my own, rough, inside views about if and when AGI will come and what it would be able to do, and I don't find it helpful to quantify them into a specific probability distribution. And there's no "default distribution" here that I can think of either.
aren't more reliable than chance
Curious what you mean by this. One version of chance is "uniform prediction of AGI over future years" which obviously seems worse than Metaculus, but perhaps you meant a more specific baseline?
Personally, I think forecasts like these are rough averages of what informed individuals would think about these questions. Yes, you shouldn't defer to them, but it's also useful to recognize how that community's predictions have changed over time.
I'm aware that by prioritising how to use limited resources, we're making decisions about people's lives. But there's a difference between saying "we want to save everyone, but can't" and saying "This group should actually not be saved, because their lives are so bad".
Curiously, I note that when it comes to factory farming, people are quite ready to accept that those animals would lead bad lives, so it is better that they never exist.
I actually agree! But I don't think it's the same thing. I don't want to kill existing animals; I want to not intention...
I have difficulty with this idea of a neutral point, below which it is preferable not to exist. At the very least, this is another baked-in assumption - that the worst wellbeing imaginable is worse than non-existence.
There are two reasons why I'm troubled by this assumption:
Hello Guy. This is an important, tricky, and often unpleasant issue to discuss. I'm speaking for myself here: HLI doesn't have an official view on this issue, except that it's complicated and needs more thought; I'm still not sure how to think about this.
I'll respond to your second comment first. You say we should not decide whether people live or die. Whilst I respect the sentiment, this choice is unfortunately unavoidable. Healthcare systems must, for instance, make choices between quality and quantity of lives - there are not infinite resources. The we...
Guy - thank you for this comment. I'm very sorry about your suffering.
I think EAs should take much more seriously the views of people like you who have first-hand experience with these issues. We should not be assuming that 'below neutral utility' implies 'it's better not to be alive'. We should be much more empirical about this, and not make strong a priori assumptions grounded in some over-simplified, over-abstracted view of utilitarianism.
We should listen to the people, like you, who have been living with chronic conditions -- whether pain, depression, PTSD, physical handicaps, cognitive impairments, or whatever -- and try to understand what keeps people going, and why they keep going.
I strongly agree with you: that kind of discourse takes responsibility away from the people who do the actual harm; and it seems to me like the suggested norms would do more harm than good.
Still, it seems that the community and/or leadership have a responsibility to take some collective actions to ensure the safety of women in EA spaces, given that the problem seems widespread. Do you agree? If yes, do you have any suggestions?
But I worry that instead of viewing it as a tradeoff, where discussion of rules is warranted, and instead of seeing relationships as a place where we need caution and norms, it's viewed through a lens of meddling versus personal freedom, so it feels unreasonable to have any rules about what consenting adults should do.
To me, at least, the current suggestions (in top level posts) do feel more like 'meddling' than like reasonable norms. This is because they are on the one hand very broad, ignoring many details and differences - and on the o...
I upvoted the post because I like that it tries to tackle power dynamics and the sources of the sex-related problems the community clearly has.
That said, I don't actually agree. I don't think policing people's relationship choices (including casual ones) is necessary - or productive - for preventing harassment etc.
Perhaps the most important point is that out of the sample of comments I've read so far, most were written by men - and I'm much more interested to hear what women in EA think here.
One could question what it even means to either 'not wish you'd never been born' or to 'not want to die' when your wellbeing is negative.
One could also claim on a hedonic view that, whatever it means to want not to die, having net-negative wellbeing is the salient point and in an ideal world you would painlessly stop existing.
Given that the lived experience of some (most?) of the people who live lives full of suffering is different from that model, this suggests that the model is just wrong.
The idea of modeling people as having a single utility...
I don't know if you meant it like that, but this comment reads to me as very sarcastic towards someone who obviously just misunderstood you :/
Edit: especially as your original comment was clear and I don't think anyone would read this thread and come out with the implied false beliefs about you.
Thanks, appreciate the feedback. I didn't mean my comment as sarcastic and have retracted the comment. I had an even less charitable comment prepared but realized that "non-native speaker misunderstood what I said" is also a pretty plausible explanation given the international nature of this forum.
I might've been overly sensitive here, because the degree of misunderstanding and the sensitive nature of the topic feel reminiscent of patterns I've observed before on other platforms. This is one of the reasons why I no longer have a public Twitter.
I think on the first report, how far this needs to go depends on the person who was harassed. It's ok not to require a public apology and it's ok not to want the accused to lose their job (although it's also ok to want the opposite!).
But after Wise became aware of more cases, Owen should have been removed from the board. Personally I think he should have also apologized publicly (like he now did), but I find this less important.
But after Wise became aware of more cases, Owen should have been removed from the board.
I agree this definitely has to happen if Julia became aware of more cases through further complaints or through an investigation unearthing other things that are at least 50% as bad as the incident described by Owen.
However, if these "other cases" were just Owen going through his memory of any similar interactions and applying what he learned from the staying-at-his-house incident and then scrupulously listing every interaction where, in retrospect, he cannot be 100% conf...
If you have technical understanding of current AIs, do you truly believe there are any major obstacles left? The kind of problems that AGI companies could reliably not tear down with their resources? If you do, state so in the comments.
I've just completed a master's degree in ML, though not in deep learning. I'm very sure there are still major obstacles to AGI that will not be overcome in the next 5 years, nor in the next 20. Chief among them is robust handling of out-of-distribution (OOD) situations.
Look at self-driving cars as an example. It was a test case for AI compani...
I will publicly predict now that there will be no AGI in the next 20 years. I expect significant achievements will be made, but only in areas where large amounts of relevant training data exist or can be easily generated. AI will also struggle to catch on in areas like healthcare, where misfiring results cause large damage and lawsuits.
I will also predict that there might be a "stall" in AI progress in a few years, once the low-hanging-fruit problems have been picked off and the remaining problems, like self-driving cars, aren't well suited to AI's current strengths.
Traditionally, thought leaders in EA have been careful not to define any "core principles" besides the basic idea of "we want to find out using evidence and reason how to do as much good as possible, and to apply that knowledge in practice". While it's true that various perceptions and beliefs have crept in over the years, none of them is sacred.
In any case, as far as I understand the "scout mindset" (which I admit isn't much), it doesn't rule out recognising areas which would be better left alone (for real, practical reasons - not because the church said so).
No, it was redacted by me after I wrote the post and before I posted it here - the risk is low, but I don't want to risk defamation - or derail the conversation about the overarching issue, in that people might start trying to guess/post names based on my descriptions (which happened with the original post about the Time article).
All the [redacted] parts of this post were written that way by the author; mods did not edit this post. If we edit or remove information, we will always either post a comment explaining what we did or get in touch with the poster directly (which we did not do in this case).
Ideas that you talk about don't stand on their own. They exist within a historical and social context. You can't look at the idea without also considering how it affects people. I imagine Matthew personally finds the idea toxic too, as do I - but that's not really the point.
Perhaps Rationalism really argues that fewer ideas should be taboo, or perhaps that's just Hanania's version of it. But EA isn't synonymous with Rationalism, and you don't need to adopt one (certainly not completely) to accept the other.
I'll only answer with a small point: I'm from a different country, and we don't have a "Democratic coalition", nor do we have racism against Chinese people, because there are barely any Chinese people here (hence, we didn't have this pressure against making a big deal of COVID). I don't see EA through an American perspective, and mostly ignore phrases like that.
Still, generally speaking, I would side with US Democrats on many things, and I'm sure the mild disagreements needed wouldn't be an actual problem. Progressivism is perceived by conservatives as something that creates extreme homogeneity of thought, but that doesn't really seem to be the case to me.
I'm really sorry for the experience you've been having, and I appreciate you stepping down to take care of yourself and, by sharing it all here, sending a message to all EAs that they should take care of themselves too.
If the Executive Director of CEA can decide to prioritise his own health, so can anyone else. EA is known to be very demanding - particularly in such high-responsibility positions, but also for most other EAs - and in doing this you're leading by example and hopefully preventing other EAs from harming their health.
dspeyer brought up an interesting example in another thread:
In early 2020, people were reluctant to warn about COVID-19 because it could be taken as justification for anti-Chinese racism.
Hanania writes:
...One path [EA] can take is to be folded into the Democratic coalition. It’ll have to temper its rougher edges, which means purging individuals for magic words, knowing when not to take an argument to its logical conclusion, compromising on free speech, more peer review and fewer disagreeable autodidacts, and being unwilling to engage with other individu
I've also been saying this to people claiming financial interests. On the other hand, the tweet Haydn replied to actually makes another good point that does apply to professors: diverting attention from societal risks that they're contributing to but could solve, to x-risk where they can mostly sign such statements and then go "🤷🏼♂️", shields them from having to change anything in practice.