All of JP Addison's Comments + Replies

I agree with you, and so does our issue tracker. Sadly, it does seem a bit hard. Tagging @peterhartree as a person who might be able to tell me that it's less hard than I think.

1
Yanni Kyriacos
12h
As someone who works with software engineers, I have respect for how simple-appearing things can actually be technically challenging.

I worked with Sam for 4 years and would recommend the experience. He's an absolute blast to talk tech with, and a great human.

Answer by JP Addison · Feb 27, 2024

Maybe a report from someone with a strong network in the silicon valley scene about how AI safety's reputation is evolving post-OAI-board-stuff. (I'm sure there are lots of takes that exist, and I guess I'd be curious for either a data driven approach or a post which tries to take a levelheaded survey of different archetypes.)

2
Vasco Grilo
23d
Interesting suggestion, JP. Somewhat relatedly, I think it would be interesting to know the extinction risk per training run that employees at Anthropic, OpenAI and Google DeepMind would be willing to endure (e.g. per order of magnitude increase in the effective compute used to train the newest model).

I'm not sure if this qualifies, but the Creative Writing Contest featured some really moving stories.

I have a spotify playlist of songs that seemed to rhyme with EA to me.

Some good kabbalistic significance to our issue tracker, but I'm not sure how.

First, a note: I have heard recommendations to try to lower the number of open issues. I've never understood them except as a way to pretend you don't have bugs. For sure some of those issues are stale and out of date, but quite a few are probably live but ultimately very edge-case and unimportant bugs, or feature requests we probably won't get to but which could be good. I don't think it's a good use of time to prune it, and most of the approaches I've seen companies take are to auto-clo... (read more)

Thanks for the report. We currently do the second, which isn't ideal to be sure. If someone redrafts and republishes after a post has been up for a while, an admin has to adjust the published date manually. This happens less often than I would've expected, so we haven't prioritized improving it.

Definitely. I agree, and so do a few other users. We have an open ticket on it.

No, sorry. I appreciate the question though, and I'll record a ticket about it.

My guess is that cause-neutral activities are 30-90% as effective as cause-specific ones (in terms of generating labor for that specific cause), which is remarkably high, but still less than 100%.

This isn't obvious to me. If you want to generate generic workers for your animal welfare org, sure, you might prefer to fund a vegan group. But if you want people who are good at making explicit tradeoffs, focusing on scope sensitivity, and being exceptionally truth-seeking, I would bet that an EA group is more likely to get you those people. And so it seems plaus... (read more)

2
Ben_West
2mo
Yep, I think this is a good point, thanks! It's possibly correct but unfortunately not at a level to be cruxy - animal welfare groups just don't get that much funding to begin with, so even if animal welfare advocates valued EA groups a bit above animal welfare groups, it's still pretty low in absolute terms.

Relatedly: I expect that the margins change with differing levels of investment. Even if you only cared about AI safety, I suspect that the correct amount of investment in cause-general stuff is significantly non-zero, because you first get the low-hanging fruit of the people who were especially receptive to cause-general material, and so forth.

So it actually feels weird to talk about estimating these relative effectiveness numbers without talking about which margins we're considering them at. (However, I might be overestimating the extent to which these different buckets are best modelled as having distinct diminishing returns curves.)

7
emre kaplan
3mo
One additional cost of cause-specific groups is that once you brand yourself inside a movement, you get drawn into the politics of that movement. Other existing groups perceive you as a competitor for influence and activists, and hence become much less tolerant of differences in your approach. For example, an animal advocacy group advocating for cultivated meat in my country would frequently be bad-mouthed by other activists for not being a vegan group (because cultivated meat production uses some animal cells taken without consent). My observation is that animal activists are much more lenient when an organisation doesn't brand itself as an "animal" organisation.

We've discussed something like this, I'm generally in favor, subject to opportunity cost.

I agree with both of these points.

I'm curating this post. I appreciate the careful reasoning, and your taxonomies make sense. I recommend that readers who don't have time for the whole sequence read up to the start of the preliminaries section.

I really like the ambitious aims of this model, and I like the way you present it. I'm curating this post.

I would like to take the chance to remind readers about the walkthrough and Q&A on Giving Tuesday a ~week from now.

I agree with JWS. There isn't enough of this. If we're supposed to be a cause-neutral community, then sometimes we need to actually attempt to scale this mountain. Thanks for doing so!

Babble:

  • 1 year pledge (+ 5 year pledge + 10 year pledge)
  • Something to riff off of: Pledgette. Compartmitment.
  • Handfasting Pledge
  • Present Pledge
  • Tour Pledge

You can think of the GWWC pledge as analogous to marriage, and that would make the trial pledge something like moving in together. In the romance analogy, some friends of mine who are reasonably averse to lifelong commitments do "handfasting", or intentionally not-lifelong partnerships. A thought I've had for a while is that the Trial Pledge, by virtue of its name if nothing else, fits poorly into the handfasting role, where often the intention is never to get married (/ take the pledge).

(Anyway, all academic for me as I'm crazy enough to have done the lifelong pledge.)

2
Jason
4mo
100% agree that the "Trial Pledge" branding doesn't mesh well with those who are more serious than trialing but do not feel called to the 10%/life pledge. If the GWWC pledge is analogized to marriage, the Trial Pledge covers everything from the analogy to entering into a committed relationship (pledging 1% for a year) to the analogy to temporary marriages (pledging 10% for a time period) and the analogy to a registered domestic partnership (pledging, say, 5% for life).[1]

  1. ^ By "domestic partnerships," I mean a legally recognized relationship status that can be seen as less than marriage. In the US, these statuses were often initially created to give some recognition to same-sex relationships, but even after marriage became available to all, these statuses remain on the books as an option for all relationships in some jurisdictions.
4
calebp
4mo
I also like these analogies! Does the marriage analogy as you perceive it include that breaking the pledge further down the line is pretty common and socially okay, but also that it's a serious thing and breaking it is not to be taken lightly?
2
Luke Freeman
4mo
I like the analogies! I've used the former one before but I like the addition of "moving in together" analogy for the trial pledge. Also regarding the name, it was "Try Giving Pledge" before and I think the "Trial Pledge" adjustment is a slight improvement, but really don't think it's been nailed. Would be super interested in alternative ideas and possible consequences of those names.

I'm curating this post. I love the way you have done this link-post, pulling out sections of interest to the EA community. Always helpful to see order of magnitude updates to EA BOTECs.

I'm curating this post. It's very personal and well-written, and I'm excited, during the Effective Giving Spotlight week, to highlight this post from someone who's earned to give for so long.

I've updated it. One oversight of the page was that it didn't mention that Intercom is desktop-only. If you're on desktop and you don't see it, can you try the new debugging steps? But also feel free to email us.

2
weeatquince
5mo
Hi, Debugging worked. It was a Chrome extension I had installed to hide cookie messages that was killing it. Thank you so much!!

I'm curating this post. I really like the honesty in this post. Evidence I have that makes me think it's doing unusually well here:

  • There's not a lot of fluff, or wishy-washy statements
  • It acknowledges places where you disagreed with other cofounders.

One of Alvea’s biggest indirect achievements [...] is the growth and development that our projects catalyzed for the people who worked there.

I agree with this, from my outside vantage point. It seems pretty wild how some environments tend to level people up while others do so much less.

I also like the way you divide up the claims. I think this paper is a really neat demonstration of point 1, and I'm kinda disappointed with the discourse for getting distracted arguing about point 2.

2
Jeff Kaufman
5mo
That's fair, though since a lot of people already knew about #1 and are very interested in whether #2 is true (or might soon become true), it's not that surprising that this is where the interest is.

I'm curating this post. There have been several recent posts on the theme of RSPs. I'm featuring this one, but I recommend the other two posts to readers.

I particularly like that these posts mention that they view these policies as good for eventual regulation, and are willing to be clear about this.

There’s some reason to believe shrimp paste could be easier to create plant based substitutes for, compared to other shrimp products, and that the alternative proteins market might not naturally have the right incentives to create excellent substitutes very quickly.

This analysis seems right to me (intuitively).

Idea: an EAA funder pre-commits to purchase (subsidize?) the first $UNITs of plant-based shrimp paste.

4
Jeff Kaufman
5mo
(setting aside whether it's good to reduce demand for shrimp) If someone is setting up this kind of precommitment I think it would be important to tie it to factors that would influence the success in the market. I would guess those are primarily cost and taste. Cost is hard to predict, and many things get far cheaper when scaled up, but taste seems pretty tractable. Perhaps tie the commitment to having a group of shrimp paste consumers ranking the substitute equal or better in a blind comparison (probably in cooked dishes, ideally where the cooks were also blinded to the condition)?

I'm curating this post. I often find myself agreeing with the discomfort of applying EV maximization in the case of numerous plausibly sentient creatures. And I like the proposed ways to avoid this — it helps me think about the situations in which I think it's reasonable vs unreasonable to apply them.

I'm curating this post. It's a really neat idea, and I love the thoroughness and the tables.

I'm curating this post. This is a well-written summary of the AI Pause Debate, and I'm excited for our community to build on that conversation, through distillation and more back-and-forth.

Thanks for this, I do think we should have something along this direction.

For posts, I feel like the solution is to add the organization as a coauthor. It's what Rethink does, for example. I agree we could probably go further in the direction of discouraging org accounts from being the only author.

2
calebp
5mo
Agree that adding the org as a coauthor works well and doesn't require implementing any new features. Maybe someone should write a frontpage post discussing this at some point, and that would be enough?

I'm curating this post. I think for many EA donors, knowing about Open Philanthropy's plans will be an important part of their models. I appreciate the transparency in general, and the detailed writeup aimed at critical donors in particular.

I'm curating this post. I know you don't intend for it to be exhaustive, but it is nevertheless very thorough. I agree with @PeterSlattery that people considering founding/running orgs in these spaces would benefit from seeing this information, and I think you do a good job of presenting it.

3
Vilhelm Skoglund
5mo
Thank you for the encouraging words! Will consider doing this again in the future.

I highly recommend Asana. I have used a few different options in my personal life, including Todoist, and like Asana better than them.

If FutureTech is going to be a Notion shop, then I would use Notion for task tracking as well. But if not, then I think your users will find Asana easier to grok.

I'm curating this post — I really like how it was short and focused on very concrete actions that could be done in one weekend.

  • Having the EV board as your board ≠ becoming a coworker of someone at 80k
  • There is an important sense in which EA Funds is "merely" a project of EV, and another important sense in which they are a ~2 person team
  • Fiscal sponsorship is pretty common
  • EV's board has never (to my knowledge) fired an employee of an organization, but they have fired a CEO

I know a guy[1] who's done the same Manager ↔ IC transition. Google as well. I do really respect this part of Google's culture.

  1. ^ My dear old Dad

We investigated how much karma from Community posts was distorting users' karma totals, relative both to what the totals would have been without the Community section and to our personal "overrated-vs-underrated commenter" ratings. Somewhat surprisingly, changing the weighting didn't improve things much, so we decided to stop working on the project.
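The comparison described in that investigation could be sketched roughly like this; the record shape, the 0.5 down-weight, and all names here are illustrative assumptions on my part, not the Forum's actual data model or implementation:

```typescript
// Sketch: recompute each user's karma with Community-section posts
// down-weighted, then compare the totals against the unweighted ones.
// PostKarma and the weight value are illustrative assumptions.

interface PostKarma {
  userId: string;
  karma: number;
  isCommunity: boolean;
}

function totalKarma(posts: PostKarma[], communityWeight = 1): Map<string, number> {
  const totals = new Map<string, number>();
  for (const p of posts) {
    // Community posts contribute karma * communityWeight; others count fully.
    const weight = p.isCommunity ? communityWeight : 1;
    totals.set(p.userId, (totals.get(p.userId) ?? 0) + p.karma * weight);
  }
  return totals;
}
```

Comparing the two resulting user rankings (e.g. via rank correlation) would show how much the weighting actually changes things.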

Ultimately, you shouldn't take a user's Forum karma as having much correlation with their impact. It's quite easy to have a lot of impact with low karma, or to be mostly a terminally online person who doesn't get much object-level work done.

On the first paragraph: this is definitely something that bothers me a bunch, and that I hear about often. Sadly it is quite hard to fix. We'd need a bespoke Google Docs importer to do so, and that's probably too large a project.

On the second: Noted.

I'm excited to curate this Career Conversations Week post. It's an easy read and seems helpful in evaluating a very high upside career path.

PSA: Apropos of nothing, did you know you can hide the community section?

(You can get rid of it entirely in your settings as well.)

8
Lorenzo Buonanno
7mo
Is there a way to do this for community quick takes? Most of the quick takes at this moment seem to be about the community.

Is there a way to snooze the community tab or snooze / hide certain posts? I would use this feature.

6
MathiasKB
7mo
Thanks, you just bought me days of productivity

I'm curating this post. I also highly recommend reading the post that I interpret this post as being in conversation with, by @bean .

These posts, along with the conversation in the comments, are what it looks like (at least from my non-expert vantage point) to be actually trying on one of the key cruxes of nuclear security cause prioritization.

1
Vasco Grilo
5mo
Agreed, JP! For reference, following up on Bean's and Mike's posts, I have also done my own in-depth analysis of nuclear winter (with more conversation in the comments too!).

I'm curating this. Along with other commenters, I really like the focus on the marginal grant. If I were to write a post that would help donors understand the impact of their donations to the Long Term Future Fund, it would look a lot like this. 

While I'm sympathetic to the reasoning, I was sad to hear that EA Funds would stop publicly sharing all of its grants. To my mind this post goes a long way towards remedying that, and makes me much more likely to recommend the Long Term Future Fund to others. (That strikes me as a surprisingly large update, but I stand by it.)

Thanks a bunch for writing this!

1
calebp
7mo
Thanks for curating it :)

That's what I mean by something automatic. I'm not sure, without trying it, whether it'd be a terrible and disorienting experience that was wrong most of the time, or whether it'd be genuinely useful.

I expect this to be hard to get right, but I think it would in fact remove a major bottleneck to returning to a post. Claim: the hard part is getting people to set their bookmarks. Maybe we could do something automatic?

1
Chi
7mo
Not sure about the claim but possible! I certainly wouldn't say no to something automatic. But I think if setting it yourself is easy enough, it would still get a bunch of the value! I think if the feature was implemented in a similar way to in-line commenting on LessWrong, where you just hover over the correct line and it offers you a bookmark-button that you just need to click, that would be low-friction enough for people like me to use it. (I think anything that's two-click might be too much friction)
2
Larks
7mo
Would it be possible to track how far down the page someone had scrolled, and by default return them to that place the next time they visited?
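A minimal sketch of that idea, with the storage interface injected so the logic runs outside a browser; all names here are hypothetical, and a real version would wire this to window.localStorage, window.scrollY, and window.scrollTo:

```typescript
// Sketch: remember how far down a post someone scrolled, and restore
// that position on their next visit. Store is a localStorage-like
// interface, passed in so this can be exercised without a browser.

type Store = {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
};

const keyFor = (postId: string): string => `scrollPos:${postId}`;

// Call from a (throttled) scroll listener with the current window.scrollY.
function saveScrollPosition(store: Store, postId: string, scrollY: number): void {
  store.setItem(keyFor(postId), String(Math.round(scrollY)));
}

// Call on page load; feed the result into window.scrollTo if non-null.
function restoreScrollPosition(store: Store, postId: string): number | null {
  const raw = store.getItem(keyFor(postId));
  if (raw === null) return null;
  const y = Number(raw);
  return Number.isFinite(y) ? y : null;
}
```

One design caveat: you'd want to throttle the scroll listener and expire old entries, so storage doesn't fill with stale positions.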

I've recorded the feedback, thank you! Anticipating that some of this might be distracting was the motivation for the feedback button, which makes me concerned to hear that it's not working for you. Could I ask you to check your cookies to see if you've enabled functional cookies? (See the link in the second paragraph.)

4
Jaime Sevilla
7mo
As far as I can tell they are enabled - I see there is a cookie in storage for the intercom for example

This is a really inspiring list, thanks for posting! I'm curating.

Thanks a bunch for this very helpful overview — I'm curating it.

This covers well the things I've learned from my casual observation of the field, and introduced me to new considerations and more detail. I'm very glad to have read it and I recommend it.

I assume this is for the sake of your own psychology? My recommendation here is to use your ad-blocker to block the specific element.

I've just submitted a change that will make this uBlock Origin rule work:

###karma-info

(Note the three #s)
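(If you'd rather hide karma only on the Forum rather than on every site the selector happens to match, uBlock Origin cosmetic filters can be scoped with a hostname prefix. Assuming the Forum's public hostname, the scoped form of the same rule would be:)

```
forum.effectivealtruism.org###karma-info
```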

1
Ren Ryba
8mo
Thanks, this is cool and I'll use it. I think more broadly, my comment is roughly equally motivated by three main things: my own psychology; concerns about an author's karma influencing readers' subconscious evaluations of that author's posts and opinions; and, specifically for people who work full-time in the EA community, a vague sense that it feels a bit strange to have a numeric score attached to what is in many ways a professional, and often philosophical, body of work. (The third point of course has an analogy with academic research, but I think that's also a problem with academia.) But since you gave me a solution, I'm personally happy. Thanks again.

I just added this to a recent related improvement. Should be fixed when that Pull Request gets merged.
