Ardenlk

Disentangling "Improving Institutional Decision-Making"

Nice post : )

I mostly agree with your points, though I'm a bit more optimistic than you seem to be about untargeted, value-neutral IIDM having a positive impact.

Your skepticism about this seems to be expressed here:

And yet, it seems possible that there are some institutions that cause an overwhelming amount of harm (e.g. the farming industry or some x-risk-increasing endeavors like gain-of-function research), and that the value-neutral version of IIDM fails to take that into account.

I think this is true, but it still seems like the aims of institutions are pro-social as a general matter -- the x-risk and animal suffering in your examples are side effects, not means to the institutions' ends, which are 'increase biosecurity' and 'make money'. So if improving decision-making helps orgs pursue their ends more efficiently, we should expect them to produce fewer bad side effects. Also, orgs' aims (e.g. "make money") will generally presuppose the firm's, and therefore humanity's, survival, so it still seems good to me as a general matter for orgs to be able to pursue their aims more effectively.

All Possible Views About Humanity's Future Are Wild

Am I right in thinking, Paul, that your argument here is very similar to Buck's in this post? https://forum.effectivealtruism.org/posts/j8afBEAa7Xb2R9AZN/thoughts-on-whether-we-re-living-at-the-most-influential.

Basically you're saying that if we already know things are pretty wild (in Buck's version: that we're early humans), it's a much less fishy step from there to 'very wild' ('we're at HoH') than it would be if we didn't already know things were pretty wild.

All Possible Views About Humanity's Future Are Wild

This is fantastic.

This doesn't take away from your main point, but it would be some definite amount less wild if we won't start exploring space for 100k years, right? Depending on how much less wild that would be, I could imagine it being enough to convince someone of a conservative view.

[3-hour podcast] Michael Huemer on epistemology, metaethics, EA, utilitarianism and infinite ethics

Thanks for posting this -- I actually haven't listened to this episode yet, but I just listened to the science of pleasure episode and thought it was fantastic, and I wouldn't have found it without this post. My only wish is that you'd asked him to say specifically what he meant by 'conscious'. I'll definitely listen to other episodes now.

Some quick notes on "effective altruism"

I agree there are a lot of things that are nonideal about the term, especially the connotations of arrogance and superiority.

However, I want to defend it a little:

  • It seems like it's been pretty successful? EA has grown a lot under the term, including attracting some great people, and despite having some very controversial ideas it hasn't faced that big a backlash yet. It's hard to know what the counterfactual would be, but it seems non-obvious it would be better.
  • It actually sounds non-'ideological' to me, if being ideological means being committed to certain ideas of what we should do and how we should think -- it sounds like it's saying 'hey, we want to do the effective and altruistic thing. We're not saying what that is.' It sounds more open, more like 'a question', than many -isms.

Many people want to keep their identity small, but EA sounds like a particularly strong identity: It's usually perceived as both a moral commitment, a set of ideas, and a community.

I feel less sure this is true more of EA than of other terms, at least with respect to the community aspect. I think the reason some terms don't seem to imply a community is that there isn't [much of] one. But insofar as we want to keep the EA community -- and I think it's very valuable and that we should -- changing the term won't shrink the identity associated with it along that dimension. I guess what I'm saying is: I'd guess the largeness of the identity associated with EA is not that related to the term.

Clarifying the core of Effective Altruism

I really like this post! I'm sympathetic to the point about normativity. I particularly think the point that movements may suffer from not being demanding enough is a potentially really good one, and not something I've thought about before. I wonder if there are examples?

For what it's worth, since the antecedent "if you want to contribute to the common good" is so minimal, Ben's definition feels kind of near-normative to me -- it gets someone on the normative hook with "mistake" unless they say "well I just don't care about the common good", and then common sense morality tells them they're doing something wrong... so it's kind of like we don't have to make the normativity explicit?

Also, I think I disagree about the maximising point. Basically I read your proposed definition as near-maximising, because when you iterate on 'contributing much more' over and over again you get a maximum or a near-maximum. And then it's like... does that really get you out of the cited worries about maximising? It still means that "doing a lot of good" won't be good enough a lot of the time (as long as there's something else you could do that would do much more good), which I think could still run into at least the second and third worries you cite about having maximising in there.

My Career Decision-Making Process

Thanks for this quick and detailed feedback shaybenmoshe, and also for your kind words!

I think that two important aspects of the old career guide are much less emphasized in the key ideas page: the first is general advice on how to have a successful career, and the second is how to make a plan and get a job. Generally speaking, I felt like the old career guide gave more tools to the reader, rather than only information.

Yes. We decided to go "ideas/information-first" for various reasons; that has upsides but also downsides. We're hoping to mitigate the downsides by giving practical career-planning resources more emphasis alongside Key Ideas. So in the future the plan is to have better resources on both kinds of things, but they'll likely be separated somewhat -- like, here are the ideas [set of articles], and here are the ways to use them in your career [set of articles]. We do plan to introduce the ideas first, though, since we think they're important for helping people make the most of their careers. That said, none of this is set in stone.

Another important point is that I don't like, and disagree with the choice of, the emphasis on longtermism and AI safety. Personally, I am not completely persuaded by the arguments for choosing a career by a longtermist view, and even less by the arguments for AI safety. More importantly, I had several conversations with people in the Israeli EA community and with people I gave career consultation to, who were alienated by this emphasis. A minority of them felt like me, and the majority understood it as "all you can meaningfully do in EA is AI safety", which was very discouraging for them. I understand that this is not your only focus, but people whose first exposure to your website is the key ideas page might get that feeling, if they are not explicitly told otherwise.

We became aware of this problem last year -- we've since tried to deemphasise AI safety relative to other work, to make it clearer that, although it's our top choice for most pressing problem and therefore what we'd recommend people work on if they could work on anything equally successfully, that doesn't mean it's the only or best choice for everyone (by a long shot!). I'm hoping Key Ideas no longer gives this impression, and that our lists of other problems and paths help show that we're excited about people working on a variety of things.

Re: longtermism, I think our focus on that is just a product of most people at 80k being more convinced of longtermism's truth/importance, so that's a longer conversation!

Another point is that the "Global priorities" section takes a completely top-to-bottom approach. I do agree that it is sometimes a good approach, but I think that many times it is not. One reason is the tension between opportunities and cause areas which I already wrote about. The other is that some people might already have their career going, or are particularly interested in a specific path. In these situations, while it is true that they can change their careers or realize that they can enjoy a broader collection of careers, it is somewhat irrelevant and discouraging to read about rethinking all of your basic choices. Instead, in these situations it would be much better to help people to optimize their current path towards more important goals.

I totally agree with this and think it's a problem with Key Ideas. We are hoping the new career planning process we've released can help with this, but we also know it's not the most accessible right now. Other things we might do: improve our 'advice by expertise' article, and try to make clear in the problems section (similar to the point about AI safety above) that we're talking about what is most pressing, and therefore best to work on, for the person who could do anything equally successfully. Career capital and personal fit mean that won't be true of the reader, so while we think the problems are important to be aware of and an important input to their personal prioritisation, they're not the end of it.

I disagree with the principle of maximizing expected value, and definitely don't think that this is the way it should be phrased as part of the "the big picture".

Similar to longtermism (and likely related) -- it's just our honest best guess at what is at least a good decision rule, if not the decision rule.

I really liked the structure of the previous career guide. It was very straightforward to know what you are about to read and where you can find something, since it was so clearly separated into different pages with clear titles and summaries. Furthermore, its modularity made it very easy to read the parts you are interested in. The key ideas page is much more convoluted, it is very hard to navigate and all of the expandable boxes are not making it easier.

Mostly agree with this. We're planning to split Key Ideas into several articles that are much easier to navigate, but we're having trouble making that happen as quickly as we would like. One thing to note is that lots of people skipped around the old career guide, so we think many readers prefer a more 'shopping'-like experience (like a newspaper) to the linear structure the career guide had anyway. We're hoping to go for a hybrid in the future.

My Career Decision-Making Process

Hey shaybenmoshe, thanks for this post! I work at 80,000 Hours, so I'm especially interested in it from a feedback perspective. Michelle has already asked for your expanded thoughts on cybersecurity and formal verification, so I'll skip those -- would you also be up for expanding on why the Key Ideas page seems less helpful to you than the older career guide?

What is going on in the world?

Maybe: the smartest species the planet (and maybe the universe) has produced is in the early stages of realising it's responsible for making things go well for everyone.

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

This is helpful.

For what it's worth I find the upshot of (ii) hard to square with my (likely internally inconsistent) moral intuitions generally, but easy to square with the person-affecting corners of them, which is I guess to say that insofar as I'm a person-affector I'm a non-identity-embracer.
