Shortform Content

Given the TIME article, I thought I should give you all an update. Even though I have major issues with the piece, I don’t plan to respond to it right now.

Since my last shortform post, I’ve done a bunch of thinking, updating and planning in light of the FTX collapse. I had hoped to be able to publish a first post with some thoughts and clarifications by now; I really want to get it out as soon as I can, but I won’t comment publicly on FTX at least until the independent investigation commissioned by EV is over. Unfortunately, I think that’s a minimum of 2 m... (read more)

Showing 3 of 5 replies
0 · Milan_Griffes · 14d
When is the independent investigation expected to complete? 

In the post Will said:

Unfortunately, I think that’s a minimum of 2 months, and I’m still sufficiently unsure on timing that I don’t want to make any promises on that front. I’m sorry about that: I’m aware that this will be very frustrating for you; it’s frustrating for me, too.

15 · David Thorstad · 16d
Going to be honest and say that I think this is a perfectly sensible response and I would do the same in Will's position.

Some comments Duncan made in a private social media conversation:

(Resharing because I think it's useful for EAs to be tracking why rad people are bouncing off EA as a community, not because I share Duncan's feeling—though I think I see where he's coming from!)

I have found that the EA forum is more like a "search for traitors" place, now, than like an "allies in awesomeness" place.

Went there to see what's up just now, and the first thing in recent comments is:

[Screenshot of a recent EA Forum comment]

Which, like. If I had different preconceptions, might land as somebody being like "Oh! Huh! What coo

... (read more)

Gordon Worley adds:

yeah been complaining about this for a while. I'm not sure exactly when things started to fall apart, but it's been in about the last year. the quality of discussion there has fallen off a cliff because it now seems to be full of folks unfamiliar with the basics of rationality or even ea thought. ea has always not been exactly rationality, but historically there was enough overlap to make eaf a cool place. now it's full of people who don't share a common desire to understand the world.

(obviously still good folks on the forum, just enough others to make it less fun and productive to post there)

Could EA benefit from allowing more space for contemplating a response after a post goes up?
 

This is a post from Jason Fried, who writes a lot about the modern work practices implemented at his company, 37signals: https://www.linkedin.com/posts/jason-fried_dont-be-a-knee-jerk-at-most-companies-activity-7043983774434414593-Y0jG

He describes discouraging instant, first-impression reactions to idea pitches by flipping the communication process: they put out long-form content about the idea before the presentation so there can be more developed re... (read more)

Cross-posting a meme here:

21 · RobBensinger · 3mo
I made this because I think it reflects an interesting psychological reality: a lot of EAs IMO are trying to reach a sort of balance between "too weird/uncertain/indirect/abstract" and "obviously lower-EV than some alternative", but there isn't a clear Schelling threshold for where on that spectrum to land, outside of the two extremes.
7 · Evan_Gaensbauer · 3mo
I haven't checked if you've posted this in the Dank EA Memes Facebook group yet, though you should if you haven't. This meme would be incredibly popular in that group. It would get hundreds of likes. It would be the discourse premise that would launch one thousand threads. This is one of the rare occasions when posting in Dank EA Memes might net you the kind of serious feedback you want better than posting on this forum or LessWrong or, really, anywhere else on the internet. 

I posted it there and on Twitter. :) Honestly, it plausibly deserves a top-level EA Forum post as well; I don't usually think memes are the best way to carry out discourse, but in this case I feel like it would be healthy for EA to be more self-aware and explicit about the fact that this dynamic is going on, and have a serious larger conversation about it.

(And if people nit-pick some of the specific factual claims implied by my meme, all the better!)

A few months ago I felt like some people I knew within community building were doing a thing where they believed (or believed they believed) that AI existential risk was a really big problem, but instead of just saying that to people (e.g. new group members), they said it was too weird to say outright, and so you had to make people go through less "weird" things like content about global health and development and animal welfare before telling them you were really concerned about this AI thing.

And even when you got to the AI topic, had to make... (read more)

bruce · 2d · 83

Some very quick thoughts on EY's TIME piece from the perspective of someone ~outside of AI safety work. I have no technical background and don't follow the field closely, so I'm likely missing some context and nuance; happy to hear pushback!

Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreement

... (read more)

Who else thinks we should be aiming for a global moratorium on AGI research at this point? I'm considering ending every comment I make with "AGI research cessandum est", or "Furthermore, AGI research must be stopped".

Showing 3 of 13 replies

Really great to see the FLI open letter with some big names attached, so soon after posting the above. Great to see some sense prevailing on this issue at a high level. This is a big step in pushing the global conversation on AGI forward toward a much needed moratorium. I'm much more hopeful than I was yesterday! But there is still a lot of work to be done in getting it to actually happen.

4 · Greg_Colbourn · 6d
[Edit: I tweeted [https://twitter.com/gcolbourn/status/1640385909433548802] this]
2 · Greg_Colbourn · 7d
Loudly and publicly calling for a global moratorium should have the effect of slowing down race-like behaviour, even if it is ultimately unsuccessful. We can at least buy some more time, it's not all or nothing in that sense. And more time can be used to buy yet more time, etc. Factory farming is an interesting analogy, but the trade-off is different. You can think about whether abolitionism or welfarism has higher EV over the long term, but the stakes aren't literally the end of the world if factory farming continues to gain power for 5-15 more years (i.e. humanity won't end up in them). The linked post is great, thanks for the reminder of it (and good to see it so high up the All Time top LW posts now). Who wants to start the institution lc talks about at the end? Who wants to devote significant resources to working on convincing AGI capabilities researchers to stop?

Reasons why EU laws/policies might be important for AI outcomes

Based on some reading and conversations, I think there are two main categories of reasons why EU laws/policies (including regulations)[1] might be important for AI risk outcomes, with each category containing several more specific reasons.[2] This post attempts to summarise those reasons. 

But note that:

  • I wrote this quite quickly, and this isn't my area of expertise.
  • My aim is not to make people focus more on the EU, just to make it clearer what some possible reasons for that focus
... (read more)

I’ve heard it argued that Singapore could be surprisingly important for reducing AI risk in part because China often copies Singaporean laws/policies.

Interesting!

(And for others who might be interested and who are based in Singapore, there's this Singapore AI Policy Career Guide.)

There is often a clash between "alignment" and "capabilities", with some saying AI labs are pretending to do alignment while doing capabilities, and others saying the two are so closely tied that it's impossible to do good alignment research without producing capability gains.

I'm not sure this discussion will be resolved anytime soon. But I think it's often misdirected.

I think often what people are wondering is roughly "is x a good person for doing this research?" Should it count as beneficial EA-flavored research, or is it just you being an employee at a corporate AI ... (read more)

Risk-Discounting in EA

I am interested in determining how risk-neutral EAs are when trying to do the most good. I have looked through the forums and have seen several posts arguing for or discussing investing in riskier/more-variable opportunities with higher EV than the alternatives. GiveWell seems to promote very well-proven charities, while funds such as the Longtermism Fund and the Long-Term Future Fund seem aimed at higher-variability, high-impact investments, and Open Philanthropy also seems willing to invest in possible dead ends with sufficiently positive EV. ... (read more)
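For concreteness, here is a toy sketch (hypothetical numbers and a log weighting chosen purely for illustration, not anyone's actual model) of how the same two grants compare under risk neutrality versus a risk-averse weighting of outcomes:

```python
import math

# Hypothetical options: a "safe" grant that reliably helps 10 people,
# vs a "risky" grant with a 1% chance of helping 2,000 people.
safe = [(1.0, 10)]
risky = [(0.99, 0), (0.01, 2000)]

def expected_value(lottery):
    """Risk-neutral evaluation: probability-weighted impact."""
    return sum(p * x for p, x in lottery)

def expected_log_utility(lottery):
    """Risk-averse evaluation: diminishing marginal value of impact, via log(1 + x)."""
    return sum(p * math.log1p(x) for p, x in lottery)

for name, lottery in [("safe", safe), ("risky", risky)]:
    print(name, expected_value(lottery), round(expected_log_utility(lottery), 3))

# A risk-neutral funder prefers the risky grant (EV 20 vs 10);
# a funder who is risk-averse over total impact prefers the safe one.
```

Whether any risk aversion should be applied to altruistic impact at all is, of course, part of what is in question.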

Utility of money is not always logarithmic

EA discussions often assume that the utility of money is logarithmic, but while this is a convenient simplification, it's not always the case. Logarithmic utility is a special case of isoelastic utility, a.k.a. power utility, where the elasticity of marginal utility is $\eta = 1$. But $\eta$ can be higher or lower. The most general form of isoelastic utility is the following:

$$u(c) = \begin{cases} \dfrac{c^{1-\eta} - 1}{1-\eta} & \eta \neq 1 \\ \ln(c) & \eta = 1 \end{cases}$$

Some special cases:

  • When $\eta = 0$, we get linear utility, or $u(c) = c - 1$.
  • When $\eta = 1/2$, we get the square root
... (read more)
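A minimal sketch of the computation (my own illustration, assuming the standard isoelastic form above): the higher $\eta$ is, the less extra utility a doubling of consumption buys.

```python
import math

def isoelastic_utility(c: float, eta: float) -> float:
    """Isoelastic (power) utility of consumption c, with eta the elasticity
    of marginal utility. Reduces to log utility when eta == 1."""
    if c <= 0:
        raise ValueError("consumption must be positive")
    if eta == 1:
        return math.log(c)
    return (c ** (1 - eta) - 1) / (1 - eta)

# The utility gain from doubling consumption shrinks as eta grows.
for eta in (0.0, 0.5, 1.0, 1.5):
    gain = isoelastic_utility(2.0, eta) - isoelastic_utility(1.0, eta)
    print(f"eta={eta}: gain from doubling consumption = {gain:.3f}")
```

The printed gains fall from 1.0 (linear, $\eta = 0$) to about 0.59 ($\eta = 1.5$), which is the sense in which higher $\eta$ means stronger diminishing returns to money.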
Showing 3 of 8 replies
1 · Harrison Durland · 1y
I was hoping for a more dramatic and artistic interpretation of this thread, but I'll accept what's been given. In the end, I think there are three main audiences to this short form:
1. People like me who read the first sentence, think "I agree," and then are baffled by the rest of the post.
2. People who read the first sentence, are confused (or think they disagree), then are baffled by the rest of the post.
3. People who read the first sentence, think "I agree," are not baffled by the rest of the post, and say "Yep, that's a valid way of framing it."
In contrast, I don't think there is a large group of people in category 4: read the first sentence, think "I disagree," then understand the rest of the post. But do correct me if I'm wrong!
2 · Charles He · 1y
Well, I don't agree with this perspective and its premise. I guess my view is that it doesn't seem compatible with what I perceive as the informal, personal character of shortform (like, "live and let live"), which is specifically designed to have different norms than posts. I won't continue this thread because it feels like I'm supplanting or speaking for the OP.

Tentative thoughts on "problem stickiness"

When it comes to comparing non-longtermist problems from a longtermist perspective, I find it useful to evaluate them based on their "stickiness": the rate at which they will grow or shrink over time.

A problem's stickiness is its annual growth rate. So a problem has positive stickiness if it is growing, and negative stickiness if it is shrinking. For long-term planning, we care about a problem's expected stickiness: the annual rate at which we think it will grow or shrink. Over the long term - i.e. time frames of 5

... (read more)
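A minimal sketch of the arithmetic (hypothetical growth rates and horizon, chosen only for illustration): a problem's expected future scale is roughly its current scale compounded annually by its expected stickiness.

```python
def projected_scale(current_scale: float, stickiness: float, years: int) -> float:
    """Project a problem's scale forward, treating expected stickiness as an
    annual growth rate (negative if the problem is expected to shrink)."""
    return current_scale * (1 + stickiness) ** years

# Two hypothetical problems, both normalised to scale 100 today.
print(round(projected_scale(100, 0.05, 50)))   # +5%/yr stickiness -> ~1147 in 50 years
print(round(projected_scale(100, -0.03, 50)))  # -3%/yr stickiness -> ~22 in 50 years
```

Even modest differences in stickiness compound into large differences in how much a problem matters on longtermist timescales.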

Do you know if anyone else has written more about this? 

On the EA forum redesign: new EAs versus seasoned EAs

In the recent Design changes announcement, many commenters reacted negatively to the design changes. 

In response, somebody on the Forum team said (bolded emphasis mine):

One of our goals on the Forum team is to make the Forum accessible to people who are getting more engaged with the ideas of EA, but haven't yet been part of the community for a long time. Without getting into a full theory of change here, I think we've neglected designing for this user group a bit over the last sever

... (read more)

Not all "EA" things are good - just saying what everyone knows out loud (copied over with some edits from a twitter thread)

Maybe it's worth saying aloud the thing people probably know but that isn't always salient: orgs (and people) who describe themselves as "EA" vary a lot in effectiveness, competence, and values, and using the branding alone will probably lead you astray.


Especially for newer or less connected people, I think it's important to make salient that there are a lot of takes (pos and neg) on the quality of thought and output of d... (read more)

6 · Nathan Young · 24d
I would like a norm of writing some criticisms on wiki entries.

I think the wiki entry is a pretty good place for this. It's "the canonical place", as it were. I would think it's important to do this rather fairly. I wouldn't want someone to edit a short CEA article with a "list of criticisms" that (believe you me) could go on for days. And then maybe, just because nobody has a personal motivation to, nobody ends up doing this for Giving What We Can. Or whatever. Seems like the whole thing could quickly prove to be a mess that I would personally judge to be not worth it (unsure). I'd rather see someone own editing a class of orgs and adding in substantial content, including a criticism section that seeks to focus on the highest-impact concerns.

Selecting RLHF human raters for desirable traits?

Epistemic status: I wrote this quickly (for my standards) and I have ~zero expertise in this domain.

Introduction

It seems plausible that language models such as GPT-3 inherit (however haphazardly) some of the traits, beliefs and value judgments of the human raters doing RLHF. For example, Perez et al. (2022) find that models trained via RLHF are more prone to making statements corresponding to Big Five agreeableness than models not trained via RLHF. This is presumably (in part) because human raters gave positive rat... (read more)

giving the alignment research community an edge

epistemic status: shower thought

On whether advancements in humanity's understanding of AI alignment will be fast enough compared to advancements in its understanding of how to create AGI, many factors stack in favor of AGI: more organizations are working on it, there's a direct financial incentive to do so, people tend to be more excited about the prospect of AGI than cautious about misalignment, et cetera. But one factor that gives me a bit of hope (besides the idea that alignment might turn out to be easier ... (read more)

The EA community aims to make a positive difference using two very different approaches.  One of them is much harder than the other.

As I see it, there are two main ways people in the EA community today aim to make a positive difference in the world: (1) identifying existing, high-performing altruistic programs and providing additional resources to support them; and (2) designing and executing new altruistic programs.  I think people use both approaches—though in varying proportions—in each of the major cause areas that people inspired by EA ideas... (read more)

Jason · 5d · 20

I think it has potential!

Finally, I think the two approaches require very different sets of skills.  My guess is that there are many more people in the EA community today (which skews young and quantitatively-inclined) with skills that are a good fit for evaluation-and-support than have skills that are an equally good fit for design-and-execution. I worry that this skills gap might increase the risk that people in the EA community might accidentally cause harm while attempting the design-and-execution approach.

This paragraph is a critical component of... (read more)

6 · Matt_Sharp · 5d
I liked this and would encourage you to publish it as a top-level post.

Would an AI governance book that covered the present landscape of gov-related topics (maybe like a book version of the FHI's AI Governance Research Agenda?) be useful?

We're currently at a weird point where there's a lot of interest in AI - news coverage, investment, etc. It feels weird to not be trying to shape the conversation on AI risk more than we are now. I'm well aware that this sort of thing can backfire, and I'm aware that most people are highly sceptical of trying not to "politicise" issues like these, but it might be a good idea.

If it was written... (read more)

I'm trying out iteratively updating some 80,000 Hours pages that we don't have time to do big research projects on right now. To this end, I've just released an update to https://80000hours.org/problem-profiles/improving-institutional-decision-making/ — our problem profile on improving epistemics and institutional decision making.

This is sort of a tricky page because there is a lot of reasonable-seeming disagreement about what the most important interventions are to highlight in this area.

I think the previous version had some issues: It was confusing, a... (read more)

This is an add-on to this comment I wrote and sort of to all the SBF-EA-related stuff I've written recently. I write this add-on mostly for personal reasons. 

I've argued that we should have patience when assigning blame to EA leadership, and not assume leaders deserve blame or were necessarily incompetent in a way that counterfactual leaders would not have been. But this point is distinct from thinking there was nothing we could do or no signs to pay attention to. I don't want to be seen as arguing there was nothing that EAs in general could do, ... (read more)
