All of JP Addison's Comments + Replies

Comments for shorter Cold Takes pieces

I believe Scott Alexander has cited this book’s “ballistically false” claim, and I definitely remember ~believing it and finding it strongly compelling.

danieltehrani's Shortform

Welcome to the Forum! That's a good question.

There's some conversation about the likelihood of recovery in the event of a total collapse of civilization. Some are optimistic; I think I'm somewhat less so. I could not quickly find a link, unfortunately. I'm guessing there could be reasonable things to do to improve our post-collapse chances.

I'm not sure if this qualifies, but ALLFED is interested in post-disaster food security. I recommend this podcast if you haven't heard it.

danieltehrani (4d): Hi! That seems to be relevant to the things I was looking into. Thank you!
Effective Altruism: The First Decade (Forum Review)

Last call for reviews! You can still submit reviews after the end of this phase, but if you want them to count for inclusion in the next phase, you'll need to submit them before midnight UTC on the night of the 15th. That's midnight GMT, or 7pm EST.

Effective Altruism: The First Decade (Forum Review)

We have extended the end of the review phase to the 15th, to give a full month for reviews.

Robin Hanson's Grabby Aliens model explained - part 1

It was the first one I watched; I selected it because I remembered this post, or maybe because someone mentioned it a few months ago?

I liked the cute dog aliens, and it hit the sweet spot of being novel and specific enough to be interesting, but not too complex for relaxing after work.

Robin Hanson's Grabby Aliens model explained - part 1

I heard people say that this was surprisingly good, and I watched it and am surprised at how good it is. Nice work!

Writer (16d): Thanks! I'm curious if there's a particular aspect of the video that you found particularly good, and if you found it significantly better than the other videos on Rational Animations (if you have watched them). I'm trying to understand what made this particular video more appreciated than the other ones.
Effective Altruism: The First Decade (Forum Review)

They were not nominated during the nominations phase. I'll treat Tessa's posting as a nomination though, and nominate them manually. You should now be able to vote on and review them.

Stefan_Schubert's Shortform

I expect we will once they settle on a more stable UI.

JP's Shortform

I should write up an update about the Decade Review. Do you have questions about it? I can answer them here, and they'll also help inform the update.

Peter Singer – Famine, Affluence, and Morality

This essay had a large effect on me when I read it early on in my EA journey. It's hard to assign credit, but around the time I read it, I significantly raised my "altruistic ambition", and went from "I should give 10%" to "doing good should be the central organizing principle of my life."

I know many smart people who disagree with me, but I think this argument is basically sound. And it has, for me anyway, formed a healthy voice in my head pushing me towards strong conviction.

Effective Altruism: The First Decade (Forum Review)

Today's the last day of the Nominations Phase!

IanDavidMoss (1mo): Suggestion/request: all past Forum Prize winners [https://forum.effectivealtruism.org/tag/forum-prize] should be automatically nominated.
What the EA community can learn from the rise of the neoliberals

This had a large influence on how I view the strategy of community building for EA.

Scott Alexander — Meditations on Moloch

This post did something really good for how I see the world's problems. So much of what's wrong with the world is the fault of no one. Encapsulating the dynamics at play into "Moloch" helped me change the way I viewed, and view, the world at a pretty fundamental level.

Effective Altruism is a Question (not an ideology)

Seconded. This describes its effect on me as well.

Effective Altruism: The First Decade (Forum Review)

Thank you for your crossposting contributions Tessa!

Announcing my retirement

I hope others will join me in saying: thank you for your years serving as the friendly voice of the Forum, and best of luck at Open Philanthropy!

Opportunity Costs of Technical Talent: Intuition and (Simple) Implications

I just want to say I love this metaphor and have already referenced it twice in conversation.

Ozzie Gooen (2mo): Thanks so much, that's really useful to know (it's really hard to tell if these metaphors are useful at all), and also makes me feel much better about this. :)
Where should I donate?

I donate to, and generally advise other small donors to donate to, a donor lottery, for roughly the reasons outlined here.

FTX EA Fellowships

It's a more America-friendly time zone though.

Linch (2mo): Sure, but so are other Caribbean countries, and for that matter, Florida.
What are your favourite ways to buy time?

Have you hired a digital assistant? Several of my coworkers have, though I think reviews are mixed.

willbradshaw (3mo): I'd definitely be interested in talking 1:1 with someone who's had success finding a good digital assistant. This (and other "hire a person to do stuff" solutions) seem to me like they require a decent amount of tacit knowledge to pull off successfully.
What are your favourite ways to buy time?

Use Flightfox to buy flights: opt for a human to book your flights, and trust them to make decisions about money.

Annual donation rituals?

Hot take: This is one of the largest benefits of the Giving Tuesday shenanigans.

FTX EA Fellowships

I think the “Already working on EA jobs / projects that can be done from the Bahamas” is the answer here. On my reading, this isn’t trying to fully fund someone’s work, but rather to incentivize someone to do the work from the Bahamas. If you were self-funding a project from savings, this doesn’t suddenly provide you a full salary, but it still probably looks very good, as it could potentially eliminate your cash burn.

Halffull (3mo): Sure, but "already working on an EA project" doesn't mean you have an employer.
Forum Update: New Features (October 2021)

I made some updates that should address a lot of this. Let me know what you think!

BrianTan (3mo): Thanks! I think these updates are good. Some thoughts/suggestions:
  1. Maybe instead of saying "unique clients" you can say "unique devices" in the note about the data collection issue.
  2. I'm unsure how valuable or apt "Views by unique devices > 5 minutes" is, because some Forum posts take less than 5 minutes to read, so that data point will be irrelevant for those posts.
  3. I think some people will not know what "bounce rate" is, so maybe you still need an icon that people can click or hover on to explain what that means and/or how it's calculated. Maybe you can also say in that tooltip that "the lower the bounce rate, the better".
Truthful AI

I'm pretty excited about this. It seems to be an approach that my gut actually believes could help with AI-powered propaganda, as written out here.

kokotajlod (3mo): FWIW, my gut says this is unlikely to work but better than doing nothing and hoping for the best.
Forum Update: New Features (October 2021)
  1. Yep.
  2. Oh, interesting. I think this is a bug related to me viewing the data as an admin. Thanks for the catch.
  3. 👀, still interested in others' views.
  4. Yeah, you can think of what we're measuring as "bounce rate". I was thinking of giving it a relatively "uninterpreted" treatment (i.e., leaving the data raw, rather than calculating bounce rate), but I think more interpretation combined with tooltips seems better.

4.5. Re "average time", this turned out to be harder than I expected, so I decided to wait to see if anyone asked for it, but now I have my excuse to spend time figuring it out, mwahaha.

BrianTan (3mo): On #3, yeah, I'd be interested to hear others' views too. On #4 and 4.5, ah, I see. Personally I think # of reads (i.e. # of views where the user spent at least 50% of the time it takes to read the article) or average time spent would be more interesting to me than the bounce rate, although I'm unsure.
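For readers following the metrics thread, here is a minimal sketch of the "read" definition BrianTan proposes above, in TypeScript. Everything here (the ViewEvent shape, secondsOnPage, the 250-words-per-minute reading speed) is an illustrative assumption, not the Forum's actual analytics code.

```typescript
// Hypothetical sketch of counting "reads": views where the reader spent at
// least 50% of a post's estimated reading time. Names and constants are
// illustrative, not taken from the Forum's codebase.

interface ViewEvent {
  postId: string;
  secondsOnPage: number;
}

const WORDS_PER_MINUTE = 250; // assumed average reading speed

function estimatedReadingSeconds(wordCount: number): number {
  return (wordCount / WORDS_PER_MINUTE) * 60;
}

// Fraction of views that qualify as "reads" under the 50%-of-reading-time rule.
function readRatio(views: ViewEvent[], wordCount: number): number {
  if (views.length === 0) return 0;
  const threshold = 0.5 * estimatedReadingSeconds(wordCount);
  const reads = views.filter((v) => v.secondsOnPage >= threshold).length;
  return reads / views.length;
}

// Example: a 1000-word post (~4 minutes, so the threshold is 120 seconds).
const views: ViewEvent[] = [
  { postId: "abc", secondsOnPage: 15 },  // bounced
  { postId: "abc", secondsOnPage: 150 }, // read
  { postId: "abc", secondsOnPage: 300 }, // read
];
console.log(readRatio(views, 1000)); // ~0.67
```

One design question this surfaces: the threshold depends on an estimated reading speed, so very short posts would likely need a floor (say, a minimum of 30 seconds) to avoid counting drive-by views as reads.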
Forum Update: New Features (October 2021)

Thanks for your feedback, this is super valuable!

Re 1&2, we should definitely add a note about how far back the data goes (it goes all the way back to March 2020). Unfortunately, for the data I felt was most valuable to plot (views by unique devices), we suffered from a data collection issue in the first half of 2021. Fortunately, we do have a note that appears on posts older than June 2021; unfortunately, it apparently wasn't noticeable.

Re 3, I had not thought of a dashboard like that, but I like the idea a lot, thanks for making it. (I'd be curious if other authors reading this also like it, let us know!)

BrianTan (3mo): Glad to hear! Numbering my responses:
  1. To clarify, the data collection issue was in getting the daily # of views by unique devices in the first half of 2021, right? That's unfortunate, but anyway, hopefully it doesn't happen again.
  2. I don't see anything at all about that note on posts older than June 2021. So yes, it would be good to make that noticeable.
  3. Yeah, I think this dashboard would be more useful than the current implementation. It currently takes authors 3 clicks to see the analytics of their post, and it's much more valuable and easier to see in one table which of my posts got more views, reads, and karma.
  4. In the table or in the individual view, you might even want to include a stat for "read ratio" like Medium does. I wonder, though, if what the EA Forum should count as a "read" is not just views >10 seconds, but more like a "view where the user spent at least 50% of the estimated time it takes to read that article." An average time people spent on the post could be useful too.
Linch (3mo): A mere order of magnitude of an order of magnitude!
Redwood Research

It launched in early August 2021 (Shlegeris 2021).

I think that was referring to the research project, not the org itself.

Pablo (4mo): Oh, I was assuming this was their first project, but on reflection the assumption was unwarranted. This other post [https://www.alignmentforum.org/posts/cQwT8asti3kyA62zc/automating-auditing-an-ambitious-concrete-technical-research], from August 2021, describes Redwood Research as a "new... organization", but I wasn't able to find their launch date. I've edited the article to address the issue.
GiveWell Donation Matching

(I’d love it if you crossposted that post, but commenting here until then.) I think there’s another category before 9, which is “Donate to a charity not commonly supported by EAs, such as the World Wildlife Fund or Habitat for Humanity.” So this allows for Giving Tuesday to count as counterfactual. I would hope GiveWell’s was of this type (though I sympathize with Luke’s points).

Then we have another question, which is who are these people that are ~indifferent between any EA charity? They’re probably not the first time donors that GiveWell’s targeting.

Jeff_Kaufman (4mo): Done! https://forum.effectivealtruism.org/posts/nz2scND85oFyTXTGo/what-should-counterfactual-donation-mean
Yes, I think that's fine as long as we all agree that the impact of donating to an AA charity is very much higher than donating to one of those charities.
UK's new 10-year "National AI Strategy," released today

[Meta commentary] Damn they have options to view in html, pdf, and mobile-optimized pdf. Holy crap. Why is the UK government so good at technology?

Flodorner (4mo): Huh? I did not like the double-page style for the non-mobile PDF, as it required some manual rescaling on my PC. And the mobile version has the main table cut between two pages in a pretty horrible way. I think I would have much preferred a single PDF in the mobile/single-page style that is actually optimized for that style, rather than this. Maybe I should have used the HTML version instead?
JP's Shortform

Maybe you’re suspicious of this claim, but I think if you convinced me that JP working more hours was good on the margin, I could do some things to make it happen. Like have one Saturday a month be a workday, say. That wouldn’t involve doing broadly useful life-improvements.

On “fresh perspective”, I’m not actually that confident in the claim and don’t really want to defend it. I agree I usually take a while after a long vacation to get context back, which especially matters in programming. But I think (?) some of my best product ideas come after being a[...]

Ben_West (4mo): I see. My model is something like: working uses up some mental resource, and that resource being diminished presents as "it's hard for you to work more hours without some sort of lifestyle change." If you can work more hours without a lifestyle change, that seems to me like evidence your mental resources aren't diminished, and therefore I would predict you to be more productive if you worked more hours. As you say, the most productive form of work might not be programming, but instead talking to random users etc.
JP's Shortform

This is a good response.

JP's Shortform

A few notes on organizational culture — My feeling is some organizations should work really hard, and have an all-consuming, startup-y culture. Other organizations should try a more relaxed approach, where high quality work is definitely valued, but the workspace is more like Google’s, and more tolerant of 35-hour weeks. That doesn’t mean that these other organizations aren’t going to have people working hard, just that the atmosphere doesn’t demand it, in the way the startup-y org would. The culture of these organizations can be gentler, and be a place where people can show off hobbies they’d be embarrassed about in other organizations. These organizations (call them Type B) can attract and retain staff who for whatever reason would be worse fits at the startup-y orgs. Perhaps they’re the primary caregiver to their child or have physical or mental health issues. I know many incredibly talented people like that and I’m glad there are some organizations for them.

JP's Shortform

How hard should one work?

Some thoughts on optimal allocation for people who are selfless but nevertheless human.

Baseline: 40 hours a week.

Tiny brain: Work more, get more done.

Normal brain: Working more doesn’t really make you more productive; focus on working less to avoid burnout.

Bigger brain: Burnout’s not really caused by overwork; furthermore, when you work more you spend more time thinking about your work, and you crowd out other distractions that take away your limited attention.

Galaxy brain: Most EA work is creative work that benefits from:

  • Real obsession,
  • [...]
Ben_West (5mo): Thanks for writing this up – I'm really interested in answers to this and have signed up for notifications to comments on this post because I want to see what others say.

I find it hard to talk about "working harder" in the abstract, but if I think of interventions that would make the average EA work more hours, I think of things like: surrounding themselves by people who work hard, customizing light sources to keep their energy going throughout the day, removing distractions from their environment, exercising and regulating sleep well, etc. I would guess that these interventions would make the average EA more productive, not less. (nb: there are also "hard work" interventions that seem more dubious to me, e.g. "feel bad about yourself for not having worked enough" or "abuse stimulants".)

One specific point: I'm not sure I agree regarding the benefits of "fresh perspective". It can sometimes happen that I come back from vacation and realize a clever solution that I missed, but usually me having lost context on a project makes my performance worse, not better.
nonn (5mo): For the sake of argument, I'm suspicious of some of the galaxy takes. I think relatively few people advocate working to the point of sacrificing sleep; prominent hard-work-advocate (& kinda jerk) rabois strongly pushes for sleeping enough & getting enough exercise. Beyond that, it's not obvious working less hard results in better prioritization or execution. A naive look at the intellectual world might suggest the opposite afaict, but selection effects make this hard. I think having spent more time trying hard to prioritize, or trying to learn about how to do prioritization/execution well, is more likely to work. I'd count "reading/training up on how to do good prioritization" as work.

Agree re: the value of fresh perspective, but idk if the evidence actually supports that working less hard results in fresh perspective. It's entirely plausible to me that what is actually needed is explicit time to take a step back - e.g. Richard Hamming Fridays - to reorient your perspective. (Also, imo good sleep + exercise functions as a better "fresh perspective" than most daily versions of "working less hard", like chilling at home.) TBH, I wonder if working on very different projects to reset your assumptions about the previous one, or reading books/histories of other important projects, is a better way of gaining fresh perspective, because it's actually forcing you into a different frame of mind. I'd also distinguish vacations from "only working 9-5", which is routine enough that idk if it'd produce particularly fresh perspective.

Real obsession definitely seems great, but absent that I still think the above points apply. For most prominent people, I think they aren't obsessed with ~most of the work they're doing (it's too widely varied), but they are obsessed with making the project happen. E.g. Elon says he'd prefer to be an engineer, but has to do all this business stuff to make the project happen. Also idk how real obsession develops, but it seems more likely t[...]
JP Addison (5mo): A few notes on organizational culture — My feeling is some organizations should work really hard, and have an all-consuming, startup-y culture. Other organizations should try a more relaxed approach, where high quality work is definitely valued, but the workspace is more like Google’s, and more tolerant of 35-hour weeks. That doesn’t mean that these other organizations aren’t going to have people working hard, just that the atmosphere doesn’t demand it, in the way the startup-y org would. The culture of these organizations can be gentler, and be a place where people can show off hobbies they’d be embarrassed about in other organizations. These organizations (call them Type B) can attract and retain staff who for whatever reason would be worse fits at the startup-y orgs. Perhaps they’re the primary caregiver to their child or have physical or mental health issues. I know many incredibly talented people like that and I’m glad there are some organizations for them.
What are the top priorities in a slow-takeoff, multipolar world?

The Andrew Critch interview is so far exactly what I’m looking for.

What are the top priorities in a slow-takeoff, multipolar world?

I was assuming that designing safe AI systems is more expensive than otherwise, suppose 10% more expensive. In a world with only a few top AI labs which are not yet ruthlessly optimized, they could probably be persuaded to sacrifice that 10%. But trying to convince a trillion-dollar company to sacrifice 10% of its budget requires a whole lot of public pressure. The bosses of those companies didn't get where they are without being very protective of 10% of their budgets.

You could challenge that, though. You could say that alignment is instrumentally useful for creating market value. I'm not sure what my position on that is, actually.

Mauricio (5mo): Thanks! Is the following a good summary of what you have in mind? It would be helpful for reducing AI risk if the CEOs of top AI labs were willing to cut profits to invest in safety. That's more likely to happen if top AI labs are relatively small at a crucial time, because [??]. And top AI labs are more likely to be small at this crucial time if takeoff is fast, because fast takeoff leaves them with less time to create and sell applications of near-AGI-level AI. So it would be helpful for reducing AI risk if takeoff were fast.

What fills in the "[??]" in the above? I could imagine a couple of possibilities:
  • Slow takeoff gives shareholders more clear evidence that they should be carefully attending to their big AI companies, which motivates them to hire CEOs who will ruthlessly profit-maximize (or pressure existing CEOs to do that).
  • Slow takeoff somehow leads to more intense AI competition, in which companies that ruthlessly profit-maximize get ahead, and this selects for ruthlessly profit-maximizing CEOs.

Additional ways of challenging those might be:
  • Maybe slow takeoff makes shareholders much more wealthy (both by raising their incomes and by making ~everything cheaper) --> makes them value marginal money gains less --> makes them more willing to invest in safety.
  • Maybe slow takeoff gives shareholders (and CEOs) more clear evidence of risks --> makes them more willing to invest in safety.
  • Maybe slow takeoff involves the economies of scale + time for one AI developer to build a large lead well in advance of AGI, weakening the effects of competition.
What are the top priorities in a slow-takeoff, multipolar world?

Thanks for your answer. (Just to check, I think you are a different Steve Byrnes than the one I met at Stanford EA in 2016 or so?)

What I do want to emphasize is that I don't doubt that technical AI safety work is one of the top priorities. It does seem like, within technical AI safety research, the best work shifts away from Agent Foundations-type work and towards neural-net-specific work. It also seems like the technical problem gets easier in expectation if you have more than one shot. By contrast, I claim, many of the Moloch-style problems get harder.

steve2152 (5mo): No, I don't think we've met! In 2016 I was a professional physicist living in Boston. I'm not sure if I would have even known what "EA" stood for in 2016. :-)

I agree. But maybe I would have said "less hard" rather than "easier" to better convey a certain mood :-P

I'm not sure what your model is here. Maybe a useful framing is "alignment tax" [https://www.effectivealtruism.org/articles/paul-christiano-current-work-in-ai-alignment/]: if it's possible to make an AI that can do some task X unsafely with a certain amount of time/money/testing/research/compute/whatever, then how much extra time/money/etc. would it take to make an AI that can do task X safely? That's the alignment tax. The goal is for the alignment tax to be as close as possible to 0%. (It's never going to be exactly 0%.)

In the fast-takeoff unipolar case, we want a low alignment tax because some organizations will be paying the alignment tax and others won't, and we want one of the former to win the race, not one of the latter. In the slow-takeoff multipolar case, we want a low alignment tax because we're asking organizations to make tradeoffs for safety, and if that's a very big ask, we're less likely to succeed. If the alignment tax is 1%, we might actually succeed. Remember that there are many reasons organizations are incentivized to make safe AIs, not least because they want the AIs to stay under their control and do the things they want them to do, not to mention legal risks, reputation risks, employees who care about their children, etc. etc. So if all we're asking is for them to spend 1% more training time, maybe they all will. If instead we're asking them all to spend 100× more compute plus an extra 3 years of pre-deployment test protocols, well, that's much less promising. So either way, we want a low alignment tax.

OK, now let's get back to what you wrote. I think maybe your model is: "If Agent Foundations research pans out at all, it would pan out by discovering a high-alignme[...]
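For readers who want the framing above as a formula, here is a minimal sketch in LaTeX. The cost functions are my notation for the "time/money/etc." quantities steve2152 describes, not anything taken from the linked talk.

```latex
% A minimal formalization of the alignment-tax framing above (a gloss, not
% a definition from the linked talk). C_safe(X) and C_unsafe(X) denote the
% hypothetical total costs (time, money, compute, ...) of building an AI
% that does task X safely vs. unsafely.
\[
  \text{alignment tax}(X) =
    \frac{C_{\text{safe}}(X) - C_{\text{unsafe}}(X)}{C_{\text{unsafe}}(X)}
\]
% On this reading, "1% more training time" is a tax of about 0.01, while
% "100x more compute plus 3 extra years of testing" is a tax of at least 99.
```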
kokotajlod (5mo): I'm pretty confident that if loads more money and talent had been thrown at space exploration, going to the moon would be substantially cheaper and more common today. SpaceX is good evidence of this, for example. As for fusion power, I guess I've got a lot less evidence for that. Perhaps I am wrong. But it seems similar to me. We could also talk about fusion power on the metric of "actually producing more energy than it takes in, sustainably", in which case my understanding is that we haven't got there at all yet.
Buck's Shortform

I like this chain of reasoning. I’m trying to think of concrete examples, and it seems a bit hard to come up with clear ones, but I think this might just be a function of the bespoke-ness.

Examples of Successful Selective Disclosure in the Life Sciences

First off I want to say thanks for your Forum contributions, Tessa. I'm consistently upvoting your comments, and appreciate the Wiki contributions as well.

I'm pretty confident that information hazards are (or plausibly will be) an important concern, but in these cases and others I tend to be at least strongly tempted by openness, which does seem to make it harder to advocate for responsible disclosure: "You should strongly consider selectively disclosing dangerous information; it's just that all of these contentious examples, I think, should be open."

tessa (5mo): Aw, it's always really nice to hear that people are enjoying the words I fling out onto the internet!

Often both the benefits and risks of a given bit of research are pretty speculative, so evaluation of specific cases depends on one's underlying beliefs about potential gains from openness and potential harms from new life-sciences insights. My hope is that there are opportunities to limit the risks of disclosure while still getting the benefits of openness, which is why I want to sketch out some of the selective-disclosure landscape between "full secrecy by default" (paranoid?) and "full openness by default" (reckless?).

If you'd like to read a strong argument against openness in one particular contentious case, I recommend Gregory Koblentz's 2018 paper A Critical Analysis of the Scientific and Commercial Rationales for the De Novo Synthesis of Horsepox Virus [https://journals.asm.org/doi/10.1128/msphere.00040-18]. From the paper:
[PR FAQ] Adding profile pictures to the Forum

I'm guessing you haven't seen, so let me show off the new signup flow!

[PR FAQ] Adding profile pictures to the Forum

Hi Larks, thanks for taking the time to comment. I think your continuum comment is a good contribution to the considerations. I’m going to run with that metaphor, and talk about where I think we should fall. I take this seriously and want to get this right.

I’ve drawn three possible lines for what utility the Forum will get from its position on the continuum. Maybe it’s not actually useful; maybe I just like drawing things. I guess my main point is that we don’t have to figure out the entire space, just the local one:

Anyway, the story for the (locally) impe[...]

What 2026 looks like (Daniel's median future)

I think this is great. I especially like the discussion of propaganda, which feels like an important model.

EA Forum feature suggestion thread

Seems right. I doubt it was deliberate.

EA Forum feature suggestion thread

What happens if you log in while incognito? Do you have any of these settings set?

Pablo (6mo): Ah, I had the first of those options ticked, and the issue disappeared after I unticked it. So this is the cause. Is this behavior deliberate? I think the option should not affect how shortform posts are displayed in the "all posts" view.