Lukas_Gloor

2899 karma · Joined Jan 2015 · Sequences: Moral Anti-Realism · Comments: 299

There was a vague tone of "the goal is to get accepted to EAG" instead of "the goal is to make the world better," which I felt a bit uneasy about when reading the post. EAGs are only useful insofar as they let community members do better work in the real world.

Hm, I understand why you say that, and you might be right (e.g., I see some signs in the OP that are compatible with this interpretation). Still, I want to point out that there's a risk of being a bit uncharitable. It seems worth saying that anyone who cares a lot about having an impact should naturally try hard to get accepted to EAG (assuming they see concrete ways to benefit from it). Therefore, the fact that someone seems to be trying hard can also be evidence that EA is very important to them. Especially when you're working on a cause area that is under-represented among EAG-attending EAs, like animal welfare, it might matter more (based on your personal moral and empirical views) to get invited.[1]
 

  1. ^

    Compare the following two scenarios. If you're the 130th applicant focused on trying out AI safety research and the conference committee decides that the AI safety conversations at the conference will, in expectation, be more productive without you because they think other candidates are better suited, you might react to this news in a saint-like way. You might think: "Okay, at least this means others get to reduce AI risk effectively, which matches my understanding of doing the most good." By contrast, imagine you get rejected as an advocate for animal welfare. In that situation, you might legitimately worry that your cause area – which you could naturally think is especially important, at least according to your moral and empirical views – ends up neglected. Accordingly, the saint-like reaction of "at least the conference will be impactful without me" doesn't feel as appropriate (it might be more impactful based on other people's moral and empirical views, but not necessarily yours). (That doesn't mean that people from under-represented cause areas should be included just for the sake of better representation, nor that everyone with an empirical view that differs from what's common in EA is entitled to have their perspective validated. I'm just pointing out that we can't fault people from under-represented cause areas for thinking that it's altruistically important for them to get invited – that's what's rational when you worry that the conference wouldn't represent your cause area all that well otherwise. [Even so, I also think it's important for everyone to be understanding of others' perspectives on this. E.g., if lots of people don't share your views, you simply can't be too entitled about getting representation, because a norm that gave all rare views a lot of representation would lead to a chaotic, scattered, and low-quality conference. Besides, if your views or cause area are too uncommon, you may not benefit from the conference as much anyway.])

Hi Amy, I think it's hard to justify a policy of never discussing someone's application publicly even when they agree to it and it's in the public interest. This is completely different from protecting people's privacy.

If you read Amy's reply carefully, it sounds like she told Constance some of the reasons for rejection in private, and then Constance didn't summarize those reasons (accurately, or at all?) in her post. If so, it's understandable why Amy isn't sure whether Constance would be okay with having them shared (because if she were, she'd presumably have shared them herself?). See this part of Amy's reply:

I did explain to Constance why she was initially rejected as one of the things we discussed on an hour-long call. 
[...]
I don’t think this post reflects what I told Constance, perhaps because she disagrees with us. So, I want to stick to the policy for now.

FWIW, based on everything Constance writes, she seems like a good fit for EAG to me and, more importantly, can be extremely proud of her altruism and accomplishments (and doesn't need validation from other EAs for that).

I'm just saying that on the particular topic of sharing the reasons for the initial rejection, it seems like Amy gave an argument that's more specific than "we never discuss reasons, ever, not even when the person herself is okay with public discussion." You seem to have missed that in your reply or assumed an uncharitable interpretation.

Okay, I think you have a good point. The post "PR" is corrosive, "reputation" is not (which I really like and agree with) argues that "reputation" is the thing that actually matters. A good way to describe reputation is indeed "how you come across to people who interact with you in good faith." Based on this definition, I agree with your point!

That said, I interpreted the OP charitably in that I assumed they're talking about what Anna Salamon (author of the linked post) would call "PR risks." Anna's recommendation there is to basically not care about PR risks at all. By contrast, I think it's sometimes okay (but kind of a necessary evil) to care about PR risks. For instance, you have more to lose if you're running for a seat in politics than if you're a niche organization that doesn't do a ton of public-facing communication anyway. (But it's annoying, and I would often recommend that orgs not worry about PR risks much and instead focus on the things that uphold their reputation, more narrowly construed, i.e., "among people whose opinions are worth caring about.")

Anyway, I reversed my downvote of your comment because I like a definition of "reputational risk" where it's generally bad not to care about it. I didn't change it into an upvote because you seem to disagree with the secrecy/censorship elements of the post in general (you gave "reputational risks" as an example, but worded your comment in a way that implies you also have qualms with a bunch of other aspects – so far, I don't share this aversion; I think secrecy/censorship are sometimes appropriate).

and most importantly, people who interact with the organisation in good faith would think is bad

Those are your words, not the words in the OP. 

If I were on the evaluation committee, one of my evaluation criteria would be that people interacting with the organization in good faith would think it was a good deed / good involvement on the part of the prize contender (and it would be strange to do it differently, so I don't expect the evaluation committee to think differently).

Thanks, those are good examples and I think you're changing my mind a bit! If the board just lists all kinds of jobs at a particular org, and that org also hires developers (or some other role that requires comparatively little involvement with organizational strategy – perhaps operations in some cases, though note that operations people often take on various responsibilities that shape the direction of an organization), that could be quite misleading. This would be a problem even if we don't expect 80k to directly recommend that developers take developer jobs at an org that 80k doesn't think has positive impact.

"does this AI company do more safety or more capabilities?"

That's yet another challenge, yeah. Especially because there may not even always be a consensus among thoughtful EAs on how much safety work (and what sort of org structure) is enough. 

I'd worry that this leads to a false sense of security. Just as jobs that people take purely for career capital require some active thinking about when one has gained enough and when to pivot, one could make the case that most highly impactful jobs wouldn't be exceptionally impactful without "active thinking" of a similar kind.

For instance, any sort of policy work has more or less impact depending on what specific policies you advocate for, not just how well you do it.

Unfortunately, I think it's somewhat rare that for-profit organizations (especially outside of EA) or governments have streamlined missions and the type of culture that encourages "having impact" as a natural part of one's job description. Hospitals are the main counter-example I could think of, since your job description as a doctor, nurse, or almost any other hospital staff member is literally about saving lives and may include instructions for working under triage conditions. By contrast, the way I envision work in policy (you obviously know more about this than I do) or things like biosecurity research, I'd imagine it depends a lot on the specific program / group and that people can make a big difference if they show personal initiative – which requires paying close attention to one's path to impact (on top of excelling at one's immediate job description).

What IMO could be quite useful is if 80k said how much of a given job's impact comes from "following the job description and doing well in a conventional sense" vs. "introducing particular ideas or policies to the organization based on EA principles."
 

If somebody can't evaluate jobs on the job board for themselves, I'm not that confident that they'll take a good path regardless.


That was also my instinctive reaction to this post, at least in the sense of "if someone can't distinguish roles that are mostly for career capital from roles where the work itself ends up saving lives or improving the world, that's a bit strange."

That said, I agree with the post that the communication around the job board can probably be improved!

Do you think this disqualifies the project?


Probably not, especially not in the sense that anyone wanting to implement a low-effort version of this project should feel discouraged. ("Low-effort versions" of this would mostly help make life for people in post-apocalyptic scenarios less scary and more easily survivable, which seems obviously valuable. Beyond that, insofar as you manage to preserve information, that seems likely positive despite the caveats I mentioned!)

Still, before people start high-effort versions of the idea that go more in the direction of "civilization re-starter kits" (like vast stores of items for building self-sufficient communities) or super bunkers, I'd personally like to see a more in-depth evaluation of the concerns.

For what it's worth, improving the quality of a newly rebuilt civilization seems more important than making sure rebuilding happens at all, even according to the total view on population ethics (that's my guess at least – though it depends on how totalists would view futures controlled by non-aligned AI). So investigating whether there are ways to focus especially on the wisdom and coordination abilities of a new civilization seems important also from that perspective.

It's worth noting that ensuring recovery after a near-extinction event is less robust under moral uncertainty, and less cooperative given disagreements on population-ethical views, than simply preventing our still-functioning civilization from going extinct. In particular, the latter (preventing extinction for a still-functioning civilization) is really good not just on a totalist view of aggregative consequentialism, but also for all existing people who don't want to die, don't want their relatives, friends, or loved ones to die, and want civilization to go on so that their personal contributions continue to matter. All of that gets disrupted in a near-extinction collapse.

(There's also an effect from considerations of "Which worlds get saved?": in a post-collapse scenario, you've updated that humans just aren't very good at getting their shit together. All else equal, you should be less optimistic about our ability to pull off good things in the long-run future than in a world where we didn't bring about a self-imposed civilizational collapse / near-extinction event.)

Therefore, one thing that would make the type of intervention you're proposing more robust is to also focus on improving the quality of the future conditional on successful rebuilding. That is, if you have information or resources that would help a second-stage civilization do better than it otherwise would (at preventing particularly bad future outcomes), that would make the intervention more robustly positive.

There's an argument to be made that extinction is rather unlikely in general even with the massive population decreases you're describing, and that rebuilding from a "higher base" is likely to lead to a wiser or otherwise morally better civilization than rebuilding from a lower base (for instance, because more structures from the previous civilization are preserved, which makes it easier to "learn lessons" and have an inspiring narrative about what mistakes to avoid). That said, these things are hard to predict.[1]

  1. ^

    Firstly, we can tell probable-sounding just-so stories where slower rebuilding leads to better outcomes. Secondly, there isn't necessarily even a straightforward relationship between things like "civilizational wisdom" or "civilization's ability to coordinate" and averting some of the worst possible outcomes of earth-originating space colonization ("s-risks"). In particular, sometimes it's better to fail at some high-risk endeavor in a very stupid way rather than in a way that is "almost right." It's not obvious where on that spectrum a civilization would end up if you just make it a bit wiser and better-coordinated. You could argue that "being wiser is always better" because wisdom means people will want to pause, reflect, and make use of option value when they're faced with an invention that has some chance of turning out to be a Pandora's box. However, the ability to pause and reflect itself requires being above a certain threshold on things like wisdom and the ability to coordinate – otherwise there may be no "option value" in practice. (When it comes to evaluating whether a given intervention is robust, it concerns me that EAs have historically applied the "option value argument" without caveats to our present civilization, which seems quite distinctly below that threshold the way things are going – though one may hope that we'll somehow be able to change that trajectory, which would give the basis for a more nuanced option-value argument.)

Update: Zoe and I had a call, and the private info she shared with me convinced me that some people with credentials or a track record in EA/longtermist research did indeed discourage publication of the paper based on funding concerns. I realized that I originally wasn't imaginative enough to think of situations where those sorts of concerns could apply (in the sense that people would be motivated to voice them for common psychological reasons and not as cartoon villains). When I thought about how EA funding generates pressure to conform, I was much too focused on the parts of EA I was most familiar with. That said, the situation in question arose because of specific features coming together – it wouldn't be accurate to say that all areas of the EA ecosystem face the same pressures to conform. (I think Zoe agrees with this last bit.) Nonetheless, looking forward, I can see similar dynamics happening again, so I think it's important to have identified this as a source of bias.
