All of JP Addison's Comments + Replies

JP's Shortform

Maybe you’re suspicious of this claim, but I think if you convinced me that JP working more hours was good on the margin, I could do some things to make it happen. Like have one Saturday a month be a workday, say. That wouldn’t involve doing broadly useful life-improvements.

On “fresh perspective”, I’m not actually that confident in the claim and don’t really want to defend it. I agree I usually take a while after a long vacation to get context back, which especially matters in programming. But I think (?) some of my best product ideas come after being a... (read more)

4Ben_West9dI see. My model is something like: working uses up some mental resource, and that resource being diminished presents as "it's hard for you to work more hours without some sort of lifestyle change." If you can work more hours without a lifestyle change, that seems to me like evidence your mental resources aren't diminished, and therefore I would predict you to be more productive if you worked more hours. As you say, the most productive form of work might not be programming, but instead talking to random users etc.
JP's Shortform

This is a good response.

JP's Shortform

A few notes on organizational culture — My feeling is some organizations should work really hard, and have an all-consuming, startup-y culture. Other organizations should try a more relaxed approach, where high quality work is definitely valued, but the workspace is more like Google’s, and more tolerant of 35 hour weeks. That doesn’t mean that these other organizations aren’t going to have people working hard, just that the atmosphere doesn’t demand it, in the way the startup-y org would. The culture of these organizations can be gentler, and be a pla... (read more)

JP's Shortform

How hard should one work?

Some thoughts on optimal allocation for people who are selfless but nevertheless human.

Baseline: 40 hours a week.

Tiny brain: Work more get more done.

Normal brain: Working more doesn’t really make you more productive, focus on working less to avoid burnout.

Bigger brain: Burnout’s not really caused by overwork; furthermore, when you work more you spend more time thinking about your work. You crowd out other distractions that take away your limited attention.

Galaxy brain: Most EA work is creative work that benefits from:

  • Real obsession,
... (read more)
7Ben_West11dThanks for writing this up – I'm really interested in answers to this and have signed up for notifications to comments on this post because I want to see what others say. I find it hard to talk about "working harder" in the abstract, but if I think of interventions that would make the average EA work more hours I think of things like: surrounding themselves by people who work hard, customizing light sources to keep their energy going throughout the day, removing distractions from their environment, exercising and regulating sleep well, etc. I would guess that these interventions would make the average EA more productive, not less. (nb: there are also "hard work" interventions that seem more dubious to me, e.g. "feel bad about yourself for not having worked enough" or "abuse stimulants".) One specific point: I'm not sure I agree regarding the benefits of "fresh perspective". It can sometimes happen that I come back from vacation and realize a clever solution that I missed, but usually me having lost context on a project makes my performance worse, not better.
7nonn12dFor the sake of argument, I'm suspicious of some of the galaxy takes. I think relatively few people advocate working to the point of sacrificing sleep; prominent hard-work advocate (& kinda jerk) Rabois strongly pushes for sleeping enough & getting enough exercise. Beyond that, it's not obvious working less hard results in better prioritization or execution. A naive look at the intellectual world might suggest the opposite afaict, but selection effects make this hard. I think having spent more time trying hard to prioritize, or trying to learn about how to do prioritization/execution well, is more likely to work. I'd count "reading/training up on how to do good prioritization" as work. Agree re: the value of fresh perspective, but idk if the evidence actually supports that working less hard results in fresh perspective. It's entirely plausible to me that what is actually needed is explicit time to take a step back - e.g. Richard Hamming Fridays - to reorient your perspective. (Also, imo good sleep + exercise functions as a better "fresh perspective" than most daily versions of "working less hard", like chilling at home.) TBH, I wonder if working on very different projects to reset your assumptions about the previous one, or reading books/histories of other important projects, etc., is a better way of gaining fresh perspective, because it's actually forcing you into a different frame of mind. I'd also distinguish vacations from "only working 9-5", which is routine enough that idk if it'd produce particularly fresh perspective. Real obsession definitely seems great, but absent that I still think the above points apply. For most prominent people, I think they aren't obsessed with ~most of the work they're doing (it's too widely varied), but they are obsessed with making the project happen. E.g. Elon says he'd prefer to be an engineer, but has to do all this business stuff to make the project happen.
Also idk how real obsession develops, but it seems more likely t
4JP Addison12dA few notes on organizational culture — My feeling is some organizations should work really hard, and have an all-consuming, startup-y culture. Other organizations should try a more relaxed approach, where high quality work is definitely valued, but the workspace is more like Google’s, and more tolerant of 35 hour weeks. That doesn’t mean that these other organizations aren’t going to have people working hard, just that the atmosphere doesn’t demand it, in the way the startup-y org would. The culture of these organizations can be gentler, and be a place where people can show off hobbies they’d be embarrassed about in other organizations. These organizations (call them Type B) can attract and retain staff who for whatever reason would be worse fits at the startup-y orgs. Perhaps they’re the primary caregiver to their child or have physical or mental health issues. I know many incredibly talented people like that and I’m glad there are some organizations for them.
What are the top priorities in a slow-takeoff, multipolar world?

The Andrew Critch interview is so far exactly what I’m looking for.

What are the top priorities in a slow-takeoff, multipolar world?

I was assuming that designing safe AI systems is more expensive than not, say 10% more expensive. In a world with only a few top AI labs which are not yet ruthlessly optimized, they could probably be persuaded to sacrifice that 10%. But trying to convince a trillion-dollar company to sacrifice 10% of its budget requires a whole lot of public pressure. The bosses of those companies didn't get there without being very protective of 10% of their budgets.

You could challenge that though. You could say that alignment was instrumentally useful for creating market value. I'm not sure what my position is on that actually.

3Mauricio19dThanks! Is the following a good summary of what you have in mind? It would be helpful for reducing AI risk if the CEOs of top AI labs were willing to cut profits to invest in safety. That's more likely to happen if top AI labs are relatively small at a crucial time, because [??]. And top AI labs are more likely to be small at this crucial time if takeoff is fast, because fast takeoff leaves them with less time to create and sell applications of near-AGI-level AI. So it would be helpful for reducing AI risk if takeoff were fast. What fills in the "[??]" in the above? I could imagine a couple of possibilities: * Slow takeoff gives shareholders more clear evidence that they should be carefully attending to their big AI companies, which motivates them to hire CEOs who will ruthlessly profit-maximize (or pressure existing CEOs to do that). * Slow takeoff somehow leads to more intense AI competition, in which companies that ruthlessly profit-maximize get ahead, and this selects for ruthlessly profit-maximizing CEOs. Additional ways of challenging those might be: * Maybe slow takeoff makes shareholders much more wealthy (both by raising their incomes and by making ~everything cheaper) --> makes them value marginal money gains less --> makes them more willing to invest in safety. * Maybe slow takeoff gives shareholders (and CEOs) more clear evidence of risks --> makes them more willing to invest in safety. * Maybe slow takeoff involves the economies of scale + time for one AI developer to build a large lead well in advance of AGI, weakening the effects of competition.
What are the top priorities in a slow-takeoff, multipolar world?

Thanks for your answer. (Just to check, I think you are a different Steve Byrnes than the one I met at Stanford EA in 2016 or so?)

What I do want to emphasize is that I don't doubt that technical AI safety work is one of the top priorities. It does seem like within technical AI safety research the best work seems to shift away from Agent Foundations-type work and toward neural-net-specific work. It also seems like the technical problem does get easier in expectation if you have more than one shot. By contrast, I claim, many of the Moloch-style problems get harder.

1steve215221dNo I don't think we've met! In 2016 I was a professional physicist living in Boston. I'm not sure if I would have even known what "EA" stood for in 2016. :-) I agree. But maybe I would have said "less hard" rather than "easier" to better convey a certain mood :-P I'm not sure what your model is here. Maybe a useful framing is "alignment tax": if it's possible to make an AI that can do some task X unsafely with a certain amount of time/money/testing/research/compute/whatever, then how much extra time/money/etc. would it take to make an AI that can do task X safely? That's the alignment tax. The goal is for the alignment tax to be as close as possible to 0%. (It's never going to be exactly 0%.) In the fast-takeoff unipolar case, we want a low alignment tax because some organizations will be paying the alignment tax and others won't, and we want one of the former to win the race, not one of the latter. In the slow-takeoff multipolar case, we want a low alignment tax because we're asking organizations to make tradeoffs for safety, and if that's a very big ask, we're less likely to succeed. If the alignment tax is 1%, we might actually succeed. Remember that there are many reasons organizations are incentivized to make safe AIs, not least because they want the AIs to stay under their control and do the things they want them to do, not to mention legal risks, reputation risks, employees who care about their children, etc. etc. So if all we're asking is for them to spend 1% more training time, maybe they all will. If instead we're asking them all to spend 100× more compute plus an extra 3 years of pre-deployment test protocols, well, that's much less promising. So either way, we want a low alignment tax. OK, now let's get back to what you wrote. I think maybe your model is: "If Agent Foundations research pans out at all, it would pan out by discovering a high-alignme
4kokotajlod21dI'm pretty confident that if loads more money and talent had been thrown at space exploration, going to the moon would be substantially cheaper and more common today. SpaceX is good evidence of this, for example. As for fusion power, I guess I've got a lot less evidence for that. Perhaps I am wrong. But it seems similar to me. We could also talk about fusion power on the metric of "actually producing more energy than it takes in, sustainably" in which case my understanding is that we haven't got there at all yet.
Buck's Shortform

I like this chain of reasoning. I’m trying to think of concrete examples, and it seems a bit hard to come up with clear ones, but I think this might just be a function of the bespoke-ness.

Examples of Successful Selective Disclosure in the Life Sciences

First off I want to say thanks for your Forum contributions, Tessa. I'm consistently upvoting your comments, and appreciate the Wiki contributions as well.

I'm pretty confident that information hazards are a plausibly important concern, now and in the future, but in these cases and others I tend to be at least strongly tempted by openness, which does seem to make it harder to advocate for responsible disclosure. "You should strongly consider selectively disclosing dangerous information; it's just that I think all of these contentious examples should be open."

5tessa1moAw, it's always really nice to hear that people are enjoying the words I fling out onto the internet! Often both the benefits and risks of a given bit of research are pretty speculative, so evaluation of specific cases depends on one's underlying beliefs about potential gains from openness and potential harms from new life sciences insights. My hope is that there are opportunities to limit the risks of disclosure while still getting the benefits of openness, which is why I want to sketch out some of the selective-disclosure landscape between "full secrecy by default" (paranoid?) and "full openness by default" (reckless?). If you'd like to read a strong argument against openness in one particular contentious case, I recommend Gregory Koblentz's 2018 paper A Critical Analysis of the Scientific and Commercial Rationales for the De Novo Synthesis of Horsepox Virus. From the paper:
[PR FAQ] Adding profile pictures to the Forum

I'm guessing you haven't seen, so let me show off the new signup flow!

[PR FAQ] Adding profile pictures to the Forum

Hi Larks, thanks for taking the time to comment. I think your continuum comment is a good contribution to the considerations. I’m going to run with that metaphor, and talk about where I think we should fall. I take this seriously and want to get this right.

I’ve drawn three possible lines for what utility the Forum will get from its position on the continuum. Maybe it’s not actually useful, maybe I just like drawing things. I guess my main point is that we don’t have to figure out the entire space, just the local one:

Anyway, the story for the (locally) impe... (read more)

What 2026 looks like (Daniel's median future)

I think this is great. I especially like the discussion of propaganda, which feels like an important model.

EA Forum feature suggestion thread

Seems right. I doubt it was deliberate.

EA Forum feature suggestion thread

What happens if you log in in incognito?  Do you have any of these settings set?

2Pablo1moAh, I had the first of those options ticked, and the issue disappeared after I unticked it. So this is the cause. Is this behavior deliberate? I think the option should not affect how shortform posts are displayed in the "all posts" view.
EA Forum feature suggestion thread

I can't reproduce this, can you tell me what browser you were using, what settings you have for the allposts page, and whether you can still see the issue?

2Pablo1moYes. Chrome version 92.0.4515.107 (Official Build) (x86_64). However, (1) the issue persists if I change the view settings (selecting "magic", unticking "show low karma" etc makes no difference) and (2) the issue disappears if I open the page in incognito, or in another browser. From this I conclude it is likely caused by one of the many Chrome extensions I have installed. I will keep an eye on this and will let you know if I manage to identify the cause.
JP's Shortform

Temporary site update: I've taken down the allPosts page. It appears we have a bot hitting the page, and it's causing the site to be slow. While I investigate, I've simply taken the page down. My apologies for the inconvenience.

4JP Addison1moThis is over.
EA Forum feature suggestion thread

That’s a bug, thanks for reporting.

Database dumps of the EA Forum

You don't need to use the allPosts page to get a list of all the posts. You can just ask the GraphQL API for the ids of all of them.
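For anyone who wants to script this, here's a minimal sketch of what such a request might look like. The endpoint URL, the `posts` query shape, and the `_id`/`title` field names are assumptions based on typical GraphQL forum APIs; check the Forum's actual schema (e.g. via a GraphiQL explorer) before relying on them.

```python
import json
import urllib.request

# Hypothetical endpoint -- verify against the Forum's real GraphQL URL.
GRAPHQL_URL = "https://forum.effectivealtruism.org/graphql"


def build_post_ids_query(limit=5000):
    """Build a GraphQL request body asking only for post ids and titles.

    The query/field names here are illustrative, not confirmed schema.
    """
    return {
        "query": """
            query AllPostIds($limit: Int) {
              posts(input: {terms: {limit: $limit}}) {
                results { _id title }
              }
            }
        """,
        "variables": {"limit": limit},
    }


def fetch_post_ids(limit=5000):
    """POST the query to the endpoint and return the list of post ids."""
    payload = json.dumps(build_post_ids_query(limit)).encode("utf-8")
    req = urllib.request.Request(
        GRAPHQL_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [p["_id"] for p in data["data"]["posts"]["results"]]
```

The point is just that a single POST to the API replaces scraping the allPosts page entirely.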

Anki deck for "Some key numbers that (almost) every EA should know"

How many chicken years are affected per dollar spent on broiler and cage-free campaigns.

I estimate how many chickens will be affected by corporate cage-free and broiler welfare commitments won by all charities, in all countries, during all the years between 2005 and the end of 2018. According to my estimate, for every dollar spent, 9 to 120 years of chicken life will be affected.

My impression is that cage-free campaigns have been very successful and there's much less low-hanging fruit remaining, such that I don't think it's reasonable to extrapolate those results on an ongoing basis.

2Pablo2moI agree that's one way in which the estimate may be misleading. The author lists this and other ways in a dedicated section. I revised the note to add a link to that section.
You are allowed to edit Wikipedia

I turned this into a non-question post for you. (Aaron didn't know I could do that, because it's not a normal admin option.)

What is life like at the median global income?

Thanks! That's very much the sort of thing that's helpful.

EA needs consultancies

Those are some pretty compelling numbers, but I'd be a lot more optimistic if they were engaged enough to show up in the comments here. (Maybe — I could imagine they're engaged with EA ideas in other ways, but now we're into territory where I'd feel like I'd need to do more vetting.)

Posting as an individual who is a consultant, not on behalf of my employer

Hi, I’m one of the co-organizers of EACN, running the McKinsey EA community and currently co-authoring a forum post about having an impact as a management consultant (to add some nuance and insider perspectives to what 80k is writing on the topic).

First let me voice a +1 to everything Jeremy has said here already - with the possible exception that I know several McKinsey partners are interfacing with the EA movement on part... (read more)

Posting as an individual who is a consultant, not on behalf of my employer

Hi, one such consultant checking in! I had this post open from the moment I saw it in this week's EA Forum digest, but... I (like many other consultants) work a silly number of hours during the work week, so I'm just reading the post in detail now.

I'm a member of, but don't run, the EACN network and my take is it's a group of consultants interested in EA with highly varied degrees of familiarity / interest: from "oh, I think I've heard of GiveWell?" to "I'm only working here because... (read more)

6Peterslattery3moI have posted about this in the Facebook group to let them know. IMO they have done a great job setting that group up and probably have just been focusing on more practical work than keeping up with the EA forum, which is a full time job!
Anki deck for "Some key numbers that (almost) every EA should know"

Thanks Pablo and Joseph!

If you're a person who wants to learn this material but doesn't have an Anki habit, I'd recommend taking this as an opportunity to try something new and give it a go. Turn remembering things into a deliberate choice.

You can get started here.

6Pablo3moI second JP's recommendation. A couple of additional good resources are Michael Nielsen's augmenting long-term memory and Gwern's spaced repetition for efficient learning.
Humanities Research Ideas for Longtermists

Any mistakes are the fault of Linch Zhang

:D   Good line. I hope you snuck this in and Linch didn’t notice.

6Peter Wildeford3mo:D
My current impressions on career choice for longtermists

Thanks for writing this! I like the aptitudes framing.

With respect to software engineering, I would add that EA orgs hiring web developers have historically had a hard time getting the same level of engineering talent as can be found at EA-adjacent AI orgs.* I have a thesis that as the EA community scales, the demand for web developers building custom tools and collaboration platforms will grow as a percentage of direct work roles. With the existing difficulty in hiring and with most EAs not viewing web development as a direct work path, I expect the short... (read more)

I mostly agree, though I would add: spending a couple years at Google is not necessarily going to be super helpful for starting a project independently. There's a pretty big difference between being good at using Google tooling and making incremental improvements on existing software versus building something end-to-end and from scratch. That's not to say it's useless, but if someone's medium-term goal is doing web development for EA orgs, I would push working at a small high-quality startup. Of course, the difficulty is that those are harder to identify.

Help me find the crux between EA/XR and Progress Studies

Thanks for writing this post! I'm a fan of your work and am excited for this discussion.

Here's how I think about costs vs benefits:

I think XR is at least 1000x as bad as a GCR that was guaranteed not to turn into an x-risk. The future is very long, and humanity seems able to achieve a very good one, but currently looks very vulnerable to me.

I think I can have a tractable impact on reducing that vulnerability. It doesn't seem to me that my impact on human progress would equal my chance of saving it. Obviously that needs some fleshing out — wh... (read more)

6jasoncrawford3moThanks JP! Minor note: the “Pascal's Mugging” isn't about the chance of x-risk itself, but rather the delta you can achieve through any particular program/action (vs. the cost of that choice).
richard_ngo's Shortform

See also: Effective Altruism is an Ideology not (just) a Question.

Not endorsed by me, personally. I wouldn't call someone "not EA-aligned" if they disagreed about all of the worldview claims you made, but really care about understanding if someone is genuinely trying to answer the Question.

EA Survey 2020: Demographics

Sorry about the delay. I've fixed the issue and have reset the date of posting to now.

EA Survey 2020: How People Get Involved in EA

I fixed a bug that was causing this post to get underweighted by the frontpage algorithm, and have reset the date of posting to now, to correct for the period where it wouldn't have showed up on the frontpage.

The EA Forum Editing Festival has begun!

Voting on edits is in the pipeline. In the meantime you can comment on the tag, which gives the author public recognition.

1PeterMcIntyre4moAwesome, glad to hear that! Thanks, JP!
Ben Garfinkel's Shortform

At least in software, there's a problem I see where young engineers are often overly bought in to hype trains, but older engineers (on average) stick too long with technologies they already know.

I would imagine something similar in academia, where hot new theories are over-valued by the young, but older academics have the problem you describe.

1Ben Garfinkel5moGood point! That consideration -- and the more basic consideration that more junior people often just know less -- definitely pushes in the opposite direction. If you wanted to try some version of seniority-weighted epistemic deference, my guess is that the most reliable cohort would have studied a given topic for at least a few years but less than a couple decades.
Altruistic motivation

I think of a difference between posts-that-are-motivating, and posts-about-motivation. I'd be sad if there wasn't a place to go for posts-that-are-motivating, that was mostly that thing.

2Cullen_OKeefe5moAh, I see. If that's the intended distinction (which I agree makes sense!), I suggesting renaming the 'posts-that-are-motivating' tag to something more distinctive, like "Motivating posts."
What posts do you want someone to write?

This post probably qualifies, but I didn't love it. I'd pay out if you wrote a good one. But see my note about my bar being high; I definitely don't want to make promises.

"Insider giving" - An unfortunate donation strategy used by corporate insiders to avoid losses

I think sometimes they can write into the donation various stipulations around how fast they sell it. If you were looking to avoid scrutiny, you might take advantage of that.

I'd be happy to keep this tag and the others, so that someone interested in the topic as a whole can subscribe to any new posts tagged Sentience & Consciousness.

4Aaron Gertler5moI feel as though someone with broad interests in this area should be able to subscribe to multiple tags, and we should encourage that sort of thing more if it isn't what users are naturally doing. Keeping a bigger tag around isn't too harmful, but I think it might lead to lots of people using just that tag rather than looking for more narrow/specific tags. (That's what happened to me when I first made the tag — once it existed, it became an easy catch-all for posts that weren't very similar.)
The EA Forum Editing Festival has begun!
  1. Sorry about the issues. On S-Risks, it is a wiki-only tag, though probably we should change that.
  2. I really like the idea of tagging everything that's been officially produced by an organization with the organization's tag. So you might go to the Rethink Priorities tag, sort by top, and see a "best of" list.
  3. [Edit reply] Not to my knowledge, sorry.
3MichaelA5moI think Chi's point 3 suggestion would sometimes be helpful, and even more so if we could somehow sort-of pre-select a batch of posts for giving tag X to, but then scan through the list and un-select some before the tags are applied. This could be like how many sites (e.g., gmail) let you click one box at the top of the list to select all items in that list, then individually unselect some. And ideally the pre-selection could be for all posts with a given other tag, all posts by a given author, or something else or combos. (E.g., I'd have used this for tagging most Aaron Hamlin posts with the Center for Election Science tag.)
3MichaelA5moOn 2, I share that view, and I'd also add that I think "organisation tags" should also be applied to things about but not by the org. E.g., I think donation writeups that discuss why the person donated to orgs X and Y and considered but ultimately decided against donating to Z should be given the tags for orgs X, Y, and Z. And I think someone's attempt to summarise and critique an org's theory of change and recent outputs should be given that org's tag, even if the person doesn't work there. My thinking is that the same people interested in posts by an org will often also be interested in posts about the org but by other people. But I think we shouldn't do this when a post only includes a very small bit about a given org (e.g. the posts Aaron Gertler and David Nash make which give updates about many orgs at once). I think it might be good to have a clearly visible policy about how organisation tags are to be used. This goes especially if the norm I suggest is indeed adopted, since in that case we wouldn't want people automatically assuming that all post tagged with org X were by someone from org X writing in relation to their work for org X.
Would an EA have directed their career on fixing the subprime mortgage crisis of '07-'08 before it happened?

Note: that tag is currently a wiki-only tag/wiki page, but could be turned into a proper tag if desired.

Propose and vote on potential EA Wiki entries

Reasonable because of the generality, though I think the cryptography ship has long, long since sailed.

Propose and vote on potential EA Wiki entries

Seems good. Maybe we should crosspost one of the recent articles on Sam Bankman-Fried.

2MichaelA6moI've now created the tag. Feel free to make those crossposts and give them the tag, of course :) (I won't do it myself, as I have little knowledge about or personal interest in blockchain stuff.)
Long-term future

Made worse by the fact that, at Pablo's request, I deduplicated the Longtermism (Philosophy) tag with the Longtermism wiki entry.

4Pablo1moI will now take a look at the posts currently associated with this tag and will make sure they have the correct tag (long-term future or longtermism).