All of Ardenlk's Comments + Replies

All Possible Views About Humanity's Future Are Wild

Am I right in thinking, Paul, that your argument here is very similar to Buck's in this post?

Basically you're saying that if we already know things are pretty wild (in Buck's version: that we're early humans), it's a much less fishy step from there to very wild ('we're at HoH') than it would be if we didn't know things were pretty wild already.

All Possible Views About Humanity's Future Are Wild

This is fantastic.

This doesn't take away from your main point, but it would be some definite amount less wild if we won't start exploring space for 100k years, right? Depending on how much less wild that would be, I could imagine it being enough to convince someone of a conservative view.

Ben Garfinkel (2mo): Some possible futures do feel relatively more "wild" to me, too, even if all of them are wild to a significant degree. If we suppose that wildness is actually pretty epistemically relevant (I'm not sure it is), then it could still matter a lot if some future is 10x wilder than another. For example, take a prediction like this: A prediction like "none of the above happens; humanity hangs around and then dies out sometime in the next million years" definitely also feels wild in its own way. So does the prediction "all of the above happens, starting a few hundred years from now." But both of these predictions still feel much less wild than the first one. I suppose whether they actually are much less "wild" depends on one's metric of wildness. I'm not sure how to think about that metric, though. If wildness is epistemically relevant, then presumably some forms of wildness are more epistemically relevant than others.
[3-hour podcast] Michael Huemer on epistemology, metaethics, EA, utilitarianism and infinite ethics

Thanks for posting this - I actually haven't listened to this episode, but I just listened to the science of pleasure episode and thought it was fantastic, and wouldn't have found it without this post. My only wish was that you'd asked him to say specifically what he meant by 'conscious'. I'll def listen to other episodes now.

Gus Docker (6mo): Glad you liked the episode :) I had limited time with Kent, so I didn't get to ask him everything I wanted to ask. I hope to have more pleasure/pain scientists on in the future, maybe from the same lab.
Some quick notes on "effective altruism"

I agree there are a lot of things that are nonideal about the term, especially the connotations of arrogance and superiority.

However, I want to defend it a little:

  • It seems like it's been pretty successful? EA has grown a lot under the term, including attracting some great people, and despite having some very controversial ideas hasn't faced that big of a backlash yet. Hard to know what the counterfactual would be, but it seems non-obvious it would be better.
  • It actually sounds non-'ideological' to me if what that means is being committed to certain ideas o
... (read more)
MichaelA (6mo): I think these are good points. Readers of these comments may also be interested in the post Effective Altruism is a Question (not an ideology). (I assume you've already read the post and had it somewhat in mind, but also that some readers wouldn't know the post.)
Clarifying the core of Effective Altruism

I really like this post! I'm sympathetic to the point about normativity. I particularly think the point that movements may suffer from not being demanding enough is a potentially really good one and not something I've thought about before. I wonder if there are examples?

For what it's worth, since the antecedent "if you want to contribute to the common good" is so minimal, Ben's definition feels kind of near-normative to me -- like it gets someone on the normative hook with "mistake" unless they say "well I just don't care about the common good", and ... (read more)

richard_ngo (8mo): Thanks for the kind words and feedback! Some responses: The sort of examples which come to mind are things like new religions, or startups, or cults - all of which make heavy demands on early participants, and thereby foster a strong group bond and sense of shared identity which allows them greater long-term success. Consider someone who only cares about the lives of people in their own town. Do they want to contribute to the common good? In one sense yes, because the good of the town is a part of the common good. But in another sense no; they care about something different from the common good, which just happens to partially overlap with it. Using the first definition, "if you want to contribute to the common good" is too minimal to imply that not pursuing effective altruism is a mistake. Using the second definition, "if you want to contribute to the common good" is too demanding - because many people care about individual components of the common good (e.g. human flourishing) without being totally on board with "welfare from an impartial perspective". Yeah, I agree that it's tricky to dodge maximalism. I give some more intuitions for what I'm trying to do in this post. On the 2nd worry: I think we're much more radically uncertain about the (ex ante) best option available to us out of the space of all possible actions, than we are radically uncertain about a direct comparison between current options vs a new proposed option which might do "much more" good. On the 3rd worry: we should still encourage people not to let their personal preferences stand in the way of doing much more good. But this is consistent with (for example) people spending 20% of their charity budget in less effective ways. (I'm implicitly thinking of "much more" in relative terms, not absolute - so a 25% increase is not "much more" good.)
My Career Decision-Making Process

Thanks for this quick and detailed feedback shaybenmoshe, and also for your kind words!

I think that two important aspects of the old career guide are much less emphasized in the key ideas page: the first is general advice on how to have a successful career, and the second is how to make a plan and get a job. Generally speaking, I felt like the old career guide gave more tools to the reader, rather than only information.

Yes. We decided to go "ideas/information-first" for various reasons, which has upsides but also downsides. We are hoping to mitigate th... (read more)

ShayBenMoshe (8mo): Thanks for detailing your thoughts on these issues! I'm glad to hear that you are aware of the different problems and tensions, and made informed decisions about them, and I look forward to seeing the changes you mentioned being implemented. I want to add one comment about the How to plan your career article, even if it's already mentioned. I think it's really great, but it might be a little bit too long for many readers' first exposure. I just realized that you have a summary on the Career planning page, which is good, but I think it might be too short. I found the (older) How to make tough career decisions article very helpful and I think it offers a great balance of information and length, and I personally still refer people to it for their first exposure. I think it will be very useful to have a version of this page (i.e. of similar length), reflecting the process described in the new article. With regards to longtermism (and expected values), I think that indeed I disagree with the views taken by most of 80,000 Hours' team, and that's ok. I do wish you offered a more balanced take on these matters, and maybe even separate the parts which are pretty much a consensus in EA from more specific views you take so that people can make their own informed decisions, but I know that it might be too much to ask and the lines are very blurred in any case.
My Career Decision-Making Process

Hey shaybenmoshe, thanks for this post! I work at 80,000 Hours, so I'm especially interested in it from a feedback perspective. Michelle has already asked for your expanded thoughts on cybersecurity and formal verification, so I'll skip those -- would you also be up for expanding on why the Key Ideas page seems less helpful to you vs. the older career guide?

Hey Arden, thanks for asking about that. Let me start by also thanking you for all the good work you do at 80,000 Hours, and in particular for the various pieces you wrote that I linked to at 8. General Helpful Resources.

Regarding the key ideas vs old career guide, I have several thoughts which I have written below. Because 80,000 Hours' content is so central to EA, I think that this discussion is extremely important. I would love to hear your thoughts about this Arden, and I will be glad if others could share their views as well, or even have a separate d... (read more)

What is going on in the world?

Maybe: the smartest species the planet and maybe the universe has produced is in the early stages of realising it's responsible for making things go well for everyone.

Ramiro (8mo): Worse: most of the members of that species don't realize this responsibility, and indeed consistently act against it, either to satisfy self-regarding or parochial preferences.
Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

This is helpful.

For what it's worth I find the upshot of (ii) hard to square with my (likely internally inconsistent) moral intuitions generally, but easy to square with the person-affecting corners of them, which is I guess to say that insofar as I'm a person-affector I'm a non-identity-embracer.

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

Well hello thanks for commenting, and for the paper!

Seems right that you'll get the same objection if you adopt cross-world identity. Is that a popular alternative for person-affecting views? I don't actually know a lot about the literature. I figured the most salient alternative was to not match the people up across worlds at all, which was why people say that e.g. it's not good for a(3) that W1 was brought about.

Chris Meacham (8mo): I guess the two alternatives that seem salient to me are (i) something like HMV combined with pairing individuals via cross-world identity, or (ii) something like HMV combined with pairing individuals who currently exist (at the time of the act) via cross-world identity, and not pairing individuals who don’t currently exist. (I take it (ii) is the kind of view you had in mind.) If we adopt (ii), then we can say that all of W1-W3 are permissible in the above case (since all of the individuals in question don’t currently exist, and so don’t get paired with anyone). But this kind of person-affecting view has some other consequences that might make one squeamish. For example, suppose you have a choice between three options: Option 1: Don’t have a child. Option 2: Have a child, and give them a great life. Option 3: Have a child, and give them a life barely worth living. (Suppose, somewhat unrealistically, that our choice won’t bear on anyone else’s well-being.) According to (ii), all three options are permissible. That entails that option 3 is permissible — it’s permissible to have a child and give them a life barely worth living, even though you could have (at no cost to yourself or anyone else) given that very same person a great life. YMMV, but I find that hard to square with person-affecting intuitions!
What does it mean to become an expert in AI Hardware?

So cool to see such a thoughtful and clear writeup of your investigation! Also nice for me, since I was involved in creating them, to see that 80k's post and podcast seemed to be helpful.

I think [advising on hardware] would involve working at one of the industries like those listed above and maintaining involvement in the EA community.

What I know about this topic is mostly exhausted by the resources you've seen, but for what it's worth I think this could also be directed at making sure that AI companies that are really heavily prioritising safety are a... (read more)

Literature Review: Why Do People Give Money To Charity?

Hey Aaron, I know this is from a while ago and your head probably isn't in it, but I'm curious if you have any intuitions on whether analogues of the successful techniques you list do/don't apply to making career changes or other actions besides giving to charity.

Also really appreciating the forum tags lately -- really nice to be able to search by topic!

Aaron Gertler (8mo): My head definitely isn't in it, and the quality of many of the papers I reviewed was quite bad, but I think that the results here generally back up a few commonsense ideas:
  • People care more about something when they can easily empathize with or visualize it
  • People are more likely to trust/listen to you when they see you as attractive, well-groomed, well-dressed, etc.
  • People like to help "winners" -- making someone feel like they can take a bit of credit for something that's very likely to succeed can be very persuasive
  • In general, follow up as soon as you can when someone agrees to do something, lest they forget or change their mind
Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

Yeah, I mean you're probably right, though I have a bit more hope in the 'does this thing spit out the conclusions I independently think are right' methodology than you do. Partly that's because I think some of the intuitions that are, jointly, impossible to satisfy a la impossibility theorems are more important than others -- so I'm ok trying to hang on to a few of them at the expense of others. Partly it's because I feel unsure of how else to proceed -- that's part of why I got out of the game!

I also think there's something attractive in the idea that ... (read more)

jackmalde (8mo): I think it's important to ask why you think it's horrible to bomb the planet into non-existence. Whatever reason you have, I suspect it probably just simplifies down to you disagreeing with the core rationale of person-affecting views. For example, perhaps you're concerned that bombing the planet will prevent a future that you expect to be good. In this case you're just disagreeing with the very core of person-affecting views: that adding happy people can't be good. Or perhaps you're concerned by the suffering caused by the bombing. Note that Meacham's person-affecting view thinks that the suffering is 'harmful' too, it just thinks that the bombing will avoid a greater quantity of harm in the future. Also note that many people, including totalists, also hold intuitions that it is OK to cause some harm to prevent greater harm. So really what you're probably disagreeing with in this case is the claim you would actually be avoiding a greater harm by bombing. This is probably because you disagree that adding some happy future people can't ever outweigh the harm of adding some unhappy future people. In other words, once again, you're simply disagreeing with the very core of person-affecting views: that adding happy people can't be good. Or perhaps you don't like the bombing for deontological reasons i.e. you just can't countenance that such an act could be OK. In this case you don't want a moral view that is purely consequentialist without any deontological constraints. So you're disagreeing with another core of person-affecting views: pure consequentialism. I could probably go on, but my point is this: I do believe you find the implication horrible, but my guess is that this is because you fundamentally don't accept the underlying rationale.
Some promising career ideas beyond 80,000 Hours' priority paths

Hey, thanks for this comment -- I think you're right there's a plausibly more high-impact thing that could be described as 'research management' which is more about setting strategic directions for research. I'll clarify that in the writeup!

anon_ea (7mo): Thanks for the reply and clarification! The write up looks the same as before, I think.
Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

You're right that radical implications are par for the course in population ethics, and that this isn't that surprising. However, I guess this is even more radical than was obvious to me from the spirit of the theory, since the premature deaths of the presently existing people can be so easily outweighed. I also agree, although a bit begrudgingly in this case, that "I strongly dislike the implications!" isn't a valid argument against something.

I did also think the counterpart relations were fishy, and I like your explanation as to why! The de dicto/de re distinction isn't something I'd thought about in this context.

Can I have impact if I’m average?

Thanks for posting this -- I think this might be a pretty big issue and I'm glad you've had success helping reduce this misconception by talking to people!

As for explanations as to why it is happening, I wonder if, in addition to what you said, it could be that because EA emphasises comparing impact between different interventions/careers etc. so heavily, people just get into a really compare-y mindset, and end up accidentally thinking that comparing well to other interventions is itself what matters, instead of just having more impact. I think improved messaging could help.

Denis Drescher (9mo): Yeah, I’m super noncompetitive, and yet, when I fear that I might replace someone who would’ve done a better job – be it because there is no interview process or I don’t trust the process to be good enough – I get into this compare-y mindset and shy away from it completely.
Kelsey Piper on "The Life You Can Save"

Thanks Aaron, I wouldn't read this if you hadn't posted it, and I think it contains good lessons on messaging.

Careers Questions Open Thread

Hi Anafromthesouth,

This is just an idea, but I wonder if you could use your data science and statistics skills to help nonprofits or foundations working on important issues (including outside the EA community) better evaluate their impact or otherwise make more informed choices. (If those skills need sharpening, taking courses seems sensible.) From the name it sounds like this could dovetail with your work in your master's, but I don't actually know anything about that kind of programme.

I guess it sounds to me like going back to academic stuff isn't what y... (read more)

Anafromthesouth (9mo): Thank you, Ardenlk! Yes, that is exactly what I think I could be helping with. And I will surely keep on training my programming and stats skills. However, I do feel I need work experience and right now it is getting hard to find a job. I believe I am not applying to the right positions or that my profile is too confusing due to the different topics I have put energy into. I may just focus on the data science skills for a while and start applying again when I get more practice. Thanks for your feedback, again :)
Careers Questions Open Thread

I agree with what the others below have written, but wanted to just add:

If you aim for entrepreneurship, which it sounds like you might want to, I think it makes sense to stay open to the possibility that in addition to building companies that could also mean things like running big projects within existing companies, starting a nonprofit, running a big project in a nonprofit, or even running a project in a government agency if you can find one with enough flexibility.

Where are you donating in 2020 and why?

Yes, I do think they had room for more funding, but could be wrong. My view was based on (1) a recommendation from someone whose judgement on these things I think is informed and probably better than most people's including mine, who thought the Biden Victory Fund was the highest impact thing to donate to this year, (2) an intuition that the DNC/etc. wouldn't put so much work into fundraising if more money didn't benefit their chances of success, and (3) the way the Biden Victory Fund in particular structured the funds it received, which was to distribute... (read more)

What are you grateful for?

I'm grateful for all the people in the EA community who write digests, newsletters, updates, highlights, research summaries, abstracts, and other vehicles that help me keep abreast of all the various developments.

I'm also grateful for there being so much buzzing activity in EA that such vehicles are so useful/essential!

Where are you donating in 2020 and why?

I am not that confident this was the right decision (and will be curious about people's views, though I can't do anything about it now), but I already gave most of 10% of my income this year (as per my GWWC pledge) to the 'Biden Victory Fund.' (The rest went to the Meta Fund earlier in the year). I know Biden's campaign was the opposite of neglected, but I thought the importance and urgency of replacing Trump as the US president swamped that consideration in the end (I think having Republicans in the White House, and especially Trump, is very bad for the pr... (read more)

Taymon (10mo): Do you think the Biden campaign had room for more funding, i.e., that your donation made a Biden victory more likely on the margin (by enough to be worth it)? I am pretty skeptical of this; I suspect they already had more money than they were able to spend effectively. (I don't have a source for this other than Maciej Cegłowski, who has relevant experience but whom I don't agree with on everything; on the other hand, I can't recall ever hearing anyone make the case that U.S. presidential general-election campaigns do have room for more funding, and I'd be pretty surprised if there were such a case and it was strong.) "Neglectedness" is a good heuristic for cause areas but I think that when donating to specific orgs it can wind up just confusing things and RFMF is the better thing to ask about. I'm less certain about the Georgia campaign but still skeptical there, partly because it's a really high-profile race (since it determines control of the Senate and isn't competing for airtime with any other races) and partly because I think substantive electoral reform is likely to remain intractable even if the Democrats win. But I'd be interested to see a more thorough analysis of this.
What actually is the argument for effective altruism?

I think adding a maximizing premise like the one you mention could work to assuage these worries.

Aaron__Maiwald (10mo): I actually think there is more needed. If "it's a mistake not to do X" means "it's in alignment with the person's goal to do X", then I think there are a few ways in which the claim could be false. I see two cases where you want to maximize your contribution to the common good, but it would still be a mistake (in the above sense) to pursue EA: 1. you are already close to optimal effectiveness and the increase in effectiveness from some additional research into EA is so small that you would be maximizing by just using that time to earn money and donate it or have a direct impact; 2. pursuing EA causes you to not achieve another goal which you value at least equally, or a set of goals which you, in total, value at least equally. If that's true, then we need to reduce the scope of the conclusion VERY much. I estimate that the fraction of people caring about the common good for whom Ben's claim holds is in [1/100000, 1/10000]. So in the end the claim can be made for hardly anyone, right?
How have you become more (or less) engaged with EA in the last year?

Thanks, this is super helpful -- context is I wanted to get a rough sense of how doable this level of "getting up to speed" is for people.

How have you become more (or less) engaged with EA in the last year?

Hey Michael, thanks for detailing this. Do you have a sense of how long this process took you approximately?

MichaelA (1y): (Btw, I've just updated my original answer, as it overlooked the time spent on audiobooks, podcasts, and video.)

tl;dr: Duration: Maybe ~12 months. Hours of EA-related video per week during that time: Maybe 4? Hours of EA-related audiobooks and podcasts per week: Maybe 10-15. Hours of all other EA-related learning per week: Maybe ~5-15? 

So maybe ~1400 hours total. (What!? That sounds like a lot!) Or 520 hours if we don't count video and audio, since those didn't actually take time out of my day (see below).


I learned about EA around September 2018, and started actively trying to "get up to speed" around October 2018. It's less clear what "end points" to u... (read more)

80,000 Hours user survey closes this Sunday

Thanks for filling out the survey and for the kind words!

Asking for advice

I wonder whether other people also like to have deadlines attached to requests for their feedback, or specific dates suggested for meeting? Sometimes I prefer to have someone ask for feedback within a week rather than within 6 months (or as soon as is convenient), because it forces me to get it off my to-do list. Though it's best of both worlds if they also indicate that it's ok if I can't do it in that time.

jared_m (1y): Yes, I agree clear deadlines are helpful! The two categories of deadlines I'm most responsive to are:
  • "I'm sorry to ask on short notice, but I'd love your feedback this week..." The acknowledgment that this might result in some reshuffling of plans in order to get to giving feedback/landing a call makes it feel like you're truly helping someone out, which can lead to some warm glow effects. It certainly goes over better than a terse "I'd like to get on the phone tomorrow or as soon as you can this week" - which feels a bit more like a burden.
  • "Sometime in the next 3-5 months, as time allows." The considerateness and flexibility in this sort of phrasing means I probably schedule these calls at least as quickly as the requests that are in the mode of a terse "sometime this month, please."
All the other processes mentioned above seem very sensible to me and I don't have much to add. Perhaps teeing up the key tradeoff you're weighing. For example "I'd like to establish X as a process, but that will cause a lot of hassle in terms of setup time, etc. Are there other pros/cons that come to mind for you about this particular approach or phrasing?" Sometimes that will prime the person to start populating benefits or risks on the +/- ledger that you had missed, and you'll get more-valuable feedback from a busy person than, say, "that sounds good to me!" or "hmm, that process does sound annoying. To be honest I can't think of a better one at the moment, though."
EA reading list: Scott Alexander

Thanks! This post caused me to read 'beware systemic change', which I hadn't read before and am glad I did.

I know this post isn't about that piece specifically, but I had a reaction and I figured 'why not comment here? It's mostly to record my own thoughts anyway.'

It seems like Scott is associating a few other distinctions with the titular distinction, (1) 'systemic vs. non-systemic'.

These are: (2) not necessarily easy to measure vs. easy to measure, and (3) controversial ('man vs. man') vs. universally thought of as good or neutral.


... (read more)
Some promising career ideas beyond 80,000 Hours' priority paths

Just to let you know I've revised the blurb in light of this. Thanks again!

Some history topics it might be very valuable to investigate

We also had this choice with our other problems and other paths posts, and decided against the listicle style, basically for the reasons you say. I think there is a nascent/weak norm, and think it makes sense to uphold it. The main argument against is that it is actually kind of helpful to know if something is a long list or a short list -- especially if I have a small bit of time and won't want to start something long.

Linch (4mo): Yeah, so 1) I think announcing the size of a list ahead of time is a net good, and 2) I prefer relevant numbers to vague words. On balance I think a listicle-style numbering system is better than ambiguous counting words like "some", "several", "many", etc. 3) I don't find it very plausible that a straightforward declaration of the size of a list tricks people into reading things they otherwise ought not to have (while I agree for phrases like "Number 5 will SHOCK you," or outrage bait). One reason against listicle-style posts for 80k is that they're likely seen as lower status with your target audience, and 80k has significant image/PR considerations for your public output, an issue that I think is relatively much less important for the EA Forum.
Some promising career ideas beyond 80,000 Hours' priority paths

Hey Michael,

Thanks (as often) for this list! I'm wondering, might you be up for putting it into a slightly more formal standalone post or google doc that we could potentially link to from the blurb?

Really love how you're collecting resources on so many different important topics!

MichaelA (1y): Happy to hear this list seems helpful, and thanks for the suggestion! I've now polished & slightly expanded my comment into a top level post: Some history topics it might be very valuable to investigate. (I also encourage people in that post to suggest additional topics they think it'd be good to explore, so hopefully the post can become something of a "hub" for that.)
Some promising career ideas beyond 80,000 Hours' priority paths

Thanks for these points! Very encouraging that you can do this work from such a variety of disciplines. I'll revise the blurb in light of this.

Some promising career ideas beyond 80,000 Hours' priority paths

Interesting! I think this might fall under global priorities research, which we have as a 'priority path' -- but it's not really talked about in our profile on that, and I agree it seems like it could be a good strategy. I'll take a look at the priority path and consider adding something about it. Thanks!

Some promising career ideas beyond 80,000 Hours' priority paths

Thanks so much Rohin for this explanation. It sounds somewhat persuasive to me, but I don't feel in a position to have a good judgement on the matter. I'll pass this on to our AI specialists to see what they think!

abergal (9mo): Chiming in on this very late. (I worked on formal verification research using proof assistants for a sizable part of undergrad.)
  • Given the stakes, it seems like it could be important to verify 1. formally after the math proofs step. Math proofs are erroneous a non-trivial fraction of the time.
  • While I agree that proof assistants right now are much slower than doing math proofs yourself, verification is a pretty immature field. I can imagine them becoming a lot better such that they do actually become better to use than doing math proofs yourself, and I don't think this would be the worst thing to invest in.
  • I'm somewhat unsure about the extent to which we'll be able to cleanly decompose 1. and 2. in the systems we design, though I haven't thought about it much.
  • A lot of the formal verification work on proof assistants seems to me like it's also work that could apply to verifying learned specifications? E.g. I'm imagining that this process would be automated, and the automation used could look a lot like the parts of proof assistants that automate proofs.
Problem areas beyond 80,000 Hours' current priorities

Hi Brian,

In general, we have a heuristic according to which issues that primarily affect people in countries like the US are less likely to be high impact for more people to focus on at the margin than issues that primarily affect others or affect all people equally. While criminal justice does affect people in other countries as well, it seems like most of the most promising interventions are country-, and especially US-, specific -- including the interventions Open Phil recommends, like those discussed here and here. The main reason for this heuristic is

... (read more)
Tobias_Baumann (1y): Great stuff, thanks!
Some promising career ideas beyond 80,000 Hours' priority paths

Hi Rohin,

Thanks for this comment. I don't know a lot about this area, so I'm not confident here. But I would have thought that it would sometimes be important for making safe and beneficial AI to be able to prove that systems actually exhibit certain properties when implemented.

I guess I think this first because bugs seem capable of being big deals in this context (maybe I'm wrong there?), and because it seems like there could be some instances where it's more feasible to use proof assistants than math to prove that a system has a property.

Curious to hear if/why you disagree!

I would have thought that it would sometimes be important for making safe and beneficial AI to be able to prove that systems actually exhibit certain properties when implemented.

We can decompose this into two parts:

1. Proving that the system that we design has certain properties

2. Proving that the system that we implement matches the design (and so has the same properties)

1 is usually done by math-style proofs, which are several orders of magnitude easier to do than direct formal verification of the system in a proof assistant without having first done the... (read more)
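The two-part decomposition above can be illustrated with a toy proof-assistant example. This is a hypothetical sketch in Lean 4, not anything from the discussion itself; the names `clamp` and `clamp_le` are invented for illustration. It shows what "part 1" looks like in a proof assistant: machine-checking that a design-level property holds.

```lean
-- Hypothetical sketch: a controller output clamped to a limit,
-- with a machine-checked proof that the output never exceeds it.
-- This corresponds to part 1: verifying a property of the design.
def clamp (limit x : Nat) : Nat := min x limit

-- `Nat.min_le_right` states `min a b ≤ b`, which gives the bound directly.
theorem clamp_le (limit x : Nat) : clamp limit x ≤ limit :=
  Nat.min_le_right x limit
```

Part 2 — showing that an actual implementation (say, compiled machine code) refines this definition — is the step that math-style proofs typically skip and that full formal verification would also have to cover.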

Some promising career ideas beyond 80,000 Hours' priority paths

Hm - interesting suggestion! The basic case here seems pretty compelling to me. One question I don't know the answer to is how predictable countries' trajectories are -- like how much would a naive extrapolation have predicted the current balance of power 50 years ago? If very unpredictable, it might not be worth it in terms of EV to bet on the extrapolation. But

I feel more intuitively excited about trying to foster homegrown EA communities in a range of such countries, since many of the people working on it would probably have reasons to be in and focus on those countries anyway because they're from there.

Problem areas beyond 80,000 Hours' current priorities

Thanks! I'm seeing that I sometimes only used links that worked on the 80k site. Fixing the issue now.

Problem areas beyond 80,000 Hours' current priorities

Hi Will,

To be honest, I'm not that confident in wild animal welfare being on the 'other longtermist' list rather than the 'other global' list -- we had some internal discussion on the matter and opinions differed.

Basically it's on 'other longtermist' because the case for it contributing to spreading positive values seems stronger to me than in the case of the other global problems. In some sense working on any issue spreads positive values, but wild animal welfare is sufficiently 'weird' that its success as a cause area seems more likely to disrupt people'

... (read more)
6willbradshaw1yThanks Arden. I agree this is probably the best case for why WAW is a longtermist cause.
Can I archive the EA forum on the wayback machine (internet archive)?

Thank you for pointing this out! I've had the problem of not being able to wayback forum posts before.

Problem areas beyond 80,000 Hours' current priorities

Hey jackmalde, interesting idea -- though I think I'd lean against writing it. I guess the main reason is something like: There are quite a few issues to explore on the above list so if someone is searching around for something (rather than if they have something in mind already), they might be able to find an idea there. I guess despite what I said to Michael above, I do want people to see it as some positive signal if something's on the list. Having a list of things not on the list would probably not add a lot, because the reasons would just be pretty we

... (read more)
1jackmalde1yHi Arden, yeah that makes sense. You've definitely given the EA community a lot to work on with this post so probably not worth overcomplicating things.
Problem areas beyond 80,000 Hours' current priorities

Hey atlasunshrugged,

I'm afraid I don't know the answers to your specific questions. I agree that there are things worse than great power conflict, and perhaps China becoming the dominant world power could be one of those things. FWIW, although war between the US and China does seem like one of the more worrying scenarios at the moment, I meant the problem description to be broader than that and to include any great power war.

1atlasunshrugged1yNo worries, I was just curious - I've tried to find data on things like projections of lives lost in combat between the US and China and can't find anything good (best I found was a Rand study from a few years ago but it didn't really give projections of actual deaths) so was curious if you had gotten your hands on that data to make your projections. Sorry for the misunderstanding, I had assumed China/US conflict but makes sense - probably anyone with nuclear capabilities who gets into a serious foreign entanglement will create an extremely dangerous situation for the world.