I really appreciate you writing this. Getting clear on one's own reasoning about AI seems really valuable, but for many people, myself included, it's too daunting to actually do.
If you think it's relevant to your overall point, I would suggest moving the first two footnotes (clarifying what you mean by short timelines and high risk) into the main text. 'Short timelines' sometimes means <10 years, and 'high risk' sometimes means >95%.
I think you're expressing your attitude to the general cluster of EA/rationalist views around AI risk typified b...
This seems like a good place to look for studies:
...The research I’ve reviewed broadly supports this impression. For example:
- Rieber (2004) lists “training for calibration feedback” as his first recommendation for improving calibration, and summarizes a number of studies indicating both short- and long-term improvements on calibration.4 In particular, decades ago, Royal Dutch Shell began to provide calibration for their geologists, who are now (reportedly) quite well-calibrated when forecasting which sites will produce oil.5
- Since 2001, Hubbard Decisi
Are these roles visa eligible, or do candidates need a right to work in the US already? (Or can you pay contractors outside of the US?)
[A quick babble based on your premise]
What are the best bets to take to fill the galaxies with meaningful value?
How can I personally contribute to the project of filling the universe with value, given other actors’ expected work and funding on the project?
What are the best expected-value strategies for influencing highly pivotal (eg galaxy-affecting) lock-in events?
What are the tractable ways of affecting the longterm trajectory of civilisation? Of those, which are the most labour-efficient?
How can we use our life’s work to guide the galaxies to better tra...
We think most of them could reduce catastrophic biorisk by more than 1% or so on the current margin (in relative[1] terms).
Imagine all six of these projects were implemented to a high standard. How robust do you think the world would be to catastrophic biorisk? Ie. how sufficient do you think this list of projects is?
The job application for the Campus Specialist programme has been published. Apologies for the delay.
Hi Elliot, thanks for your questions.
Is this indicative of your wider plans?/ Is CEA planning on keeping a narrow focus re: universities?
I’m on the Campus Specialist Manager team at CEA, which is a sub-team of the CEA Groups team, so this post does give a good overview of my plans, but it’s not necessarily indicative of CEA’s wider plans.
As well as the Campus Specialist programme, the Groups team runs a Broad University Group programme staffed by Jessica McCurdy with support from Jesse Rothman. This team provides support for all university groups reg...
Thanks for this comment and the discussion it’s generated! I’m afraid I don’t have time to give as detailed a response as I would like, but here are some key considerations:
Thanks Vaidehi!
One set of caveats is that you might not be a good fit for this type of work (see what might make you a good fit above). For instance:
Some other things people considering this path might w...
What factors do you think would have to be in place for other people to set up a similar but different organisation in 5 years' time?
I imagine this is mainly about the skills and experience of the team, but I'm also interested in other factors if you think they're relevant
This looks brilliant, and I want to strong-strong upvote!
What do you foresee as your biggest bottlenecks or obstacles in the next 5 years? Eg. finding people with a certain skillset, or just not being able to hire quickly while preserving good culture.
Thanks for the kind words!
Our biggest bottlenecks are probably going to be some combination of:
What if LessWrong is taken down for another reason? Eg. the organisers of this game/exercise want to imitate the situation Petrov was in, so they create some kind of false alarm
An obvious question which I'm keen to hear people's thoughts on - does MAD work here? Specifically, does it make sense for the EA forum users with launch codes to commit to a retaliatory attack? The obvious case for it is deterrence. The obvious counterarguments are that the Forum could go down for a reason other than a strike from LessWrong, and that once the Forum is down, it doesn't help us to take down LW (though this type of situation might be regular enough that future credibility makes it worth it)
Though of course it would be really bad for us to have to take down LW, and we really don't want to. And I imagine most of us trust the 100 LW users with codes not to use them :)
I know we're trying to remember when the US and USSR had their weapons pointed at each other, but it feels more like the North and South Islands of New Zealand are trying to decide whether to nuke each other!
Edit: Not even something so violent - just temporarily inconvenience each other
The question is whether precommitment would actually change behavior. In this case, anyone shutting down either site is effectively playing nihilist and doesn't care, so precommitment shouldn't change anything.
In fact, if it does anything, it would be destabilizing - if "they" commit to pushing the button if "we" do, they are saying they aren't committed to minimizing damage overall, which should make us question whether we're actually on the same side. (And this is a large part of why MAD only works if you are both selfish, and scared of losing.)
This is great! I'm tentatively interested in groups trying outreach slightly before the start of term. It seems like there's a discontinuous increase in people's opportunity cost when they arrive at university - suddenly there are loads more cool clubs and people vying for their attention. Currently, EA groups are mixed in with this crowd of stuff.
One way this could look is running a 1-2 week residential course for offer holders the summer before they start at university (a bit like SPARC or Uncommon Sense).
To see if this is something a ...
I tentatively believe (ii), depending on some definitions. I'm somewhat surprised to see Ben and Darius implying it's a really weird view, which makes me wonder what I'm missing.
I don't want the EA community to stop working on all non-longtermist things. But that's because I think many of those things have positive indirect effects on the EA community. (I just mean indirect effects on the EA community, and maybe on the broader philanthropic community; I don't mean indirect effects more broadly in the sense of 'better health in poor countries' --> '...
I'm not sure what counts as 'astronomically' more cost effective, but if it means ~1000x more important/cost-effective I might agree with (ii).
This may be the crux - I would not count a ~1000x multiplier as anywhere near "astronomical", and I should probably have made this clearer in my original comment.
Claim (i), that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, refers to differences in value of something like 10^30x.
All my comment was meant to say is that it seems hi...
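(For intuition on where a number like 10^30x could come from, here's a rough sketch using round figures of my own - not sourced estimates - comparing the present generation against a Bostrom-style guess at potential future lives:

\[ \frac{V_{\text{long-term}}}{V_{\text{near-term}}} \sim \frac{10^{40}\ \text{potential future lives}}{10^{10}\ \text{lives today}} = 10^{30} \]

Different assumptions about the size and duration of the future shift the exponent a lot, but it's hard to get down to ~1000x without abandoning the astronomical-stakes picture entirely.)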
Nice, thanks for these thoughts.
But there's no way to save up labor to be used later, except in the sense that you can convert labor into capital and then back into labor (although these conversions might not be efficient, e.g., if you can't find enough talented people to do the work you want). So the tradeoff with labor is that you have to choose what to prioritize. This question is more about traditional cause prioritization than about giving now vs. later.
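(To make that tradeoff concrete, a minimal sketch - all symbols are my own assumptions, not anything from the post: let \(w_{\text{now}}\) and \(w_{\text{later}}\) be wages now and later, \(r\) the investment return, \(t\) the number of years, and \(\eta \le 1\) a conversion-efficiency factor capturing e.g. not being able to find enough talented people to hire. Then labor 'saved' via capital comes out to roughly

\[ L_{\text{later}} \approx \eta \,(1+r)^{t}\,\frac{w_{\text{now}}}{w_{\text{later}}}\,L_{\text{now}}. \]

If wages in the relevant fields grow faster than invested capital, or \(\eta\) is small, the round trip loses labor.)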
Ah sorry I think I was unclear. I meant 'capacity-building' in the narrow sense of 'getting m...
This is cool, thanks for posting :) How do you think this generalises to a situation where labor is the key resource rather than money?
I'm a bit more interested in the question 'how much longtermist labor should be directed towards capacity-building vs. 'direct' work (eg. technical AIS research)?' than the question 'how much longtermist money should be directed towards spending now vs. investing to save later?'
I think this is mainly because longtermism, x-risk, and AIS seem to be bumping up against the labor constraint much more than the money constraint. ...
I haven't read it, but the name of this paper from Andreas at GPI at least fits what you're asking - "Staking our future: deontic long-termism and the non-identity problem"
Is The YouTube Algorithm Radicalizing You? It’s Complicated.
Recently, there's been significant interest among the EA community in investigating short-term social and political risks of AI systems. I'd like to recommend this video (and Jordan Harrod's channel as a whole) as a starting point for understanding the empirical evidence on these issues.
I agree with this answer. Also, lots of people do think that temporal position (or something similar, like already being born) should affect ethics.
But yes OP, accepting time neutrality and being completely indifferent about creating happy lives does seem to me to imply the counterintuitive conclusion you state. You might be interested in this excellent emotive piece or section 4.2.1 of this philosophy thesis. They both argue that creating happy lives is a good thing.
I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion?
Yep that is what I'm saying. I think I don't agree but thanks for explaining :)
Can you say a bit more about why the quote is objectionable? I can see why the conclusion 'saving a life in a rich country is substantially more important than saving a life in a poor country' would be objectionable. But it seems Beckstead is saying something more like 'here is an argument for saving lives in rich countries being relatively more important than saving lives in poor countries' (because he says 'other things being equal').
There are also more applied AI/tech focused economics questions that seem important for longtermists (eg if GPI stuff seems too abstract for you)
Also not CS and you may already know it: this EAG talk is about wild animal welfare research using economics techniques. Both authors of the paper discussed are economists, not biologists.
Thanks for your comment, it makes a good point. My comment was hastily written and I think the argument of mine that you're referring to is weak, but not as weak as you suggest.
At some points the author is specifically critiquing longtermism the philosophy (not what actual longtermists think and do) eg. when talking about genocide. It seems fine to switch between critiquing the movement and critiquing the philosophy, but I think it'd be better if the switch was made clear.
There are many longtermists that don't hold these views (eg. Will MacAskill is literally...
from 'Things CEA is not doing' forum post https://forum.effectivealtruism.org/posts/72Ba7FfGju5PdbP8d/things-cea-is-not-doing
We are not actively focusing on:
...
- Cause-specific work (such as community building specifically for effective animal advocacy, AI safety, biosecurity, etc.)
I don’t have time to write a detailed and well-argued response, sorry. Here are some very rough quick thoughts on why I downvoted. Happy to expand on any points and have a discussion.
In general, I think criticisms of longtermism from people who 'get' longtermism are incredibly valuable to longtermists.
One reason is that if the criticisms carry entirely, you'll save them from basically wasting their careers. Another reason is that you can point out weaknesses in longtermism or in their application of longtermism that they wouldn't have spotted themsel...
I had left this for a day and had just come back to write a response to this post but fortunately you've made a number of the points I was planning on making.
I think it's really good to see criticism of core EA principles on here, but I did feel that a number of the criticisms might have benefited from being fleshed out more fully.
OP made it clear that he doesn't agree with a number of Nick Bostrom's opinions but I wasn't entirely clear (I only read it the once and quite quickly, so it may be the case that I missed this) where precisely the main di...
I’d be keen to hear your thoughts about the (small) field of AI forecasting and its trajectory. Feel free to say whatever’s easiest or most interesting. Here are some optional prompts:
Joey, are there unusual empirical beliefs you have in mind other than the two mentioned? Hits based giving seems clearly related to Charity Entrepreneurship's work - what other important but unusual empirical beliefs do you/CE/neartermist EAs hold? (I'm guessing hinge of history hypothesis is irrelevant to your thinking?)
My guess is that few EAs care emotionally about cost effectiveness and that they care emotionally about helping others a lot. Given limited resources, that means they have to be cost effective. Imagine a mother with a limited supply of food to share between her children. She doesn’t care emotionally about rationing food, but she’ll pay a lot of attention to how best to ration it.
I do think there are things in the vicinity of careful reasoning/thinking clearly/having accurate beliefs that are core to many EAs identities. I think those can be developed naturally to some extent, and don’t seem like complete prerequisites to being an EA
Thanks for writing this and contributing to the conversation :)
Relatedly, an “efficient market for ideas” hypothesis would suggest that if MB really was important, neglected, and tractable, then other more experienced and influential EAs would have already raised its salience.
I do think the salience of movement building has been raised elsewhere eg:
You really don't seem like a troll! I think the discussion in the comments on this post is a very valuable conversation and I've been following it closely. I think it would be helpful for quite a few people for you to keep responding to comments
Of course, it's probably a lot of effort to keep replying carefully to things, so understandable if you don't have time :)
Thanks! I appreciate it :)
It makes me feel anxious to get a lot of downvotes with no explanation so I really appreciate your comment.
Just to clarify: when you say "if that is a real tradeoff that a founder faces in practice, it is nearly always an indication the founder just hasn't bothered to put much time or effort into cultivating a diverse professional network", I think I agree, but I'd add that this isn't always something the founder could have predicted ahead of time, and the founder isn't necessarily to blame. I think it can be very easy to 'accidentally' end...
Was this meant as a reply to my comment or a reply to Ben's comment?
I was just asking what the position was and made explicit I wasn't suggesting Marcus change the website.
Yep! I assumed this kind of thing was the case (and obviously was just flagging it as something to be aware of, not trying to finger-wag)
I don't find anything wrong at all with 'saintly' personally, and took it as a joke. But I could imagine someone taking it the wrong way. Maybe I'd see what others on the forum think
It looks like all the founders, advisory team, and athletes are white or white-passing. I guess you're already aware of this as something to consider, but it seems worth flagging (particularly given the use of 'Saintly' for those donating 10% :/).
Some discussion of why this might matter here: https://forum.effectivealtruism.org/posts/YCPc4qTSoyuj54ZZK/why-and-how-to-make-progress-on-diversity-and-inclusion-in
Edit: In fact, while I think appearing all-white and implicitly describing some of your athletes as 'Saintly' are both acceptable PR risks, having the...
Also, I would love to have a wide variety of athletes represented by HIA. As it's still very new I'm focusing outreach on those I have personal relationships with, which means tennis, which is predominantly white in the professional space at this point in time. I'm hoping that over time I can get in touch with a more diverse range of athletes from many different sports.
Some of the wording on the 'Take the Pledge' section seems a little bit off (to me at least!). Eg. saying a 1-10% pledge will 'likely have zero noticeable impact on your standard of living' seems misleading, and could give off the impression that the pledge is only for the very wealthy (for whom the statement is more likely to be true). I'm also not sure about the 'Saintly' categorisation of the highest giving level (10%). It could come across as a bit smug or saviour-ish. I'm not sure about the tradeoffs here though and obviously you have much more context than me.
Maybe you've done this already, but it could be good to ask Luke from GWWC for advice on tone here.
I see you mention that HIA's recommendations are based on a suffering-focused perspective. It's great that you're clear about where you're coming from/what you're optimising for. To explore the ethical perspective of HIA further - what is HIA's position on longtermism?
(I'm not saying you should mention your take on longtermism on the website.)
This is really cool! Thanks for doing this :)
Is there a particular reason the charity areas are 'Global Health and Poverty' and 'Environmental Impact' rather than including any more explicit mention of animal welfare? (For people reading this - the environmental charities include the Good Food Institute and the Humane League along with four climate-focussed charities.)
Welcome to the forum!
Have you read Bostrom's Astronomical Waste? He does a very similar estimate there. https://www.nickbostrom.com/astronomical/waste.html
I'd be keen to hear more about why you think it's not possible to meaningfully reduce existential risk.
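(For a flavour of the kind of estimate in that paper, here's a back-of-the-envelope in the same style - the figures are illustrative round numbers of mine, not Bostrom's exact ones:

\[ \underbrace{10^{13}}_{\text{stars in the Virgo Supercluster}} \times \underbrace{10^{10}}_{\text{people supportable per star (assumed)}} \times \underbrace{10^{2}}_{\text{years per century}} \sim 10^{25}\ \text{life-years forgone per century of delayed settlement.} \]

The point is less the exact exponent than that any such estimate dwarfs present-day stakes.)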
"Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea.
If we are the only rational beings in the Universe, as some recent evidence suggests, it matters even more whether we shall have descendants or successors during the billions of years in which that ...
Thanks for writing this! I and an EA community builder I know found it interesting and helpful.
I'm pleased you have a 'counterarguments' section, though I think there are some counterarguments missing:
OFTW groups may crowd out GWWC groups. You mention the anchoring effect on 1%, but there's also the danger of anchoring on a particular cause area. OFTW is about ending extreme poverty, whereas GWWC is about improving the lives of others (much broader)
OFTW groups may crowd out EA groups. If there's a OFTW group at a university, the EA group may have to
Thanks, that's helpful for thinking about my career (and thanks for asking that question Michael!)
Edit: helpful for thinking about my career because I'm thinking about getting economics training, which seems useful for answering specific sub-questions in detail ('Existential Risk and Economic Growth' being the perfect example of this), but one economic model alone is very unlikely to resolve a big question.
How do you think people should do this?