
Miranda_Zhang's Shortform

I think David Nash does something similar with his EA Updates (here is the most recent one). While most of the links are focused on EA Forum and posts by EA/EA-adj orgs, he features occasional links from other venues.

Ben_Snodin's Shortform

You might be interested in checking out Ingredients for creating disruptive research teams e.g. on vision, autonomy, spaces for interaction.

Shelly Kagan - readings for Ethics and the Future seminar (spring 2021)

I would be happy to hear stories of people becoming significantly less longtermist. What changed their minds?

Misha_Yagudin's Shortform

Mission drift in the Gates Foundation makes me somewhat more skeptical of patient longtermism. I mean, maybe a patient philanthropist's discounting/expropriation rate shouldn't be too low.

Shallow evaluations of longtermist organizations

I guess (p=.75) Nuño would say that the following interpretation is mostly reasonable: "inside view" here means that Nuño presents his impressions which rely a lot on stories he tells himself about various research directions being valuable or not, which others might reasonably disagree with him about.

I am thinking that because Nuño uses a simple model to estimate a fraction of researchers doing "valuable" work, the subjectivity is rooted in his takes on how valuable their individual research directions are.

[Phrasing this kinda weirdly as I want to get a visceral update on my belief that "when thinking is clearly described, I can guess what the author means by inside/outside view." I also think that (p=.33) Nuño was just not very careful and will say something like "I have no idea what I really meant at the time of writing it."]

Jade Leung: Why companies should be leading on AI governance

Thank you for a speedy reply, Markus! Jade makes three major points (see the attached slide). I would appreciate your high-level impressions of these (if you lack time, even one-liners like "mostly agree" or "too much nuance to settle on a one-liner" would still be valuable).

If you'd take time to elaborate on any of these, I would prefer the last one. Specifically on:

What are the reasons why them preemptively engaging is likely to lead to prosocial regulation? [emphasis mine] Two reasons why. One: the rationale for a firm would be something like, "We should be doing the thing that governance will want us to do, so that they don't then go in and put in regulation that is not good for us." And if you assume that governance has that incentive structure to deliver on public goods, then firms, at the very least, will converge on the idea that they should be mitigating their externalities and delivering on prosocial outcomes in the same way that the state regulation probably would. The more salient one in the case of AI is that public opinion actually plays a fairly large role in dictating what firms think are prosocial. [...]

EA Infrastructure Fund: May 2021 grant recommendations

(Hey Max, consider reposting this to goodreads if you are on the platform.)

Jade Leung: Why companies should be leading on AI governance

Do people at Gov AI generally agree with the message/messaging of the talk 2–3 years later?

The answer would be a nice data point for the "are we too clueless to give advice on AI policy" debate/sentiments. And I am curious about how beneficial corporations/financiers can be for ~selfish reasons (cf. BlackRock on environmental sustainability and coronavirus cures/vaccines).

Ben_Snodin's Shortform

Thinking along these lines, joining the Effective Altruism movement can be seen as a way to “get in at the ground floor”: if the movement is eventually successful in changing the status quo, you will get brownie points for having been right all along, and the Effective Altruist area you’ve built a career in will get a large prestige boost when everyone agrees that it is indeed effectively altruistic.

Joining EA seems like a very suboptimal way to get brownie points from society at large, and even from the groups EA best represents (students/graduates of elite colleges). Isn't getting into social justice a better investment? Which subgroups do you think EAs try hard to impress?

What are things everyone here should (maybe) read?

Yes, I think the wording of the forum question is reasonable. The problem is that I expect your nuance will get lost across two layers of communication: commenters recommending intros to X or even specific books, and readers adding titles to their Goodreads.

I think this is kinda fine for wellbeing/adulting bits of your advice, which I liked.
