Davidmanheim

Head of Research and Policy @ ALTER - Association for Long Term Existence and Resilience
6963 karma · Joined · Working (6-15 years)

Participation
4

  • Received career coaching from 80,000 Hours
  • Attended more than three meetings with a local EA group
  • Completed the AGI Safety Fundamentals Virtual Program
  • Completed the In-Depth EA Virtual Program

Sequences
2

Deconfusion and Disentangling EA
Policy and International Relations Primer

Comments
807

Topic contributions
1

This makes a number of non-trivial assumptions and unsourced claims about several different issues, from the relative moral value of animals to the carrying capacity of different biomes; I know that many of these are seen as common wisdom in EA, but I think failing to lay them out greatly weakens the conclusions.

Also, some questions to think about: Why are insects ignored? How does the transition happen, legally or economically? What are the impacts of land use changes, and do farmers sell the land? (To whom?) Do social norms around meat undermine the viability of a transition?

{making humanity more safe VS shortening AGI timelines} is itself a false dichotomy or false spectrum.

Why? Because in some situations, shortening AGI timelines could make humanity more safe, such as by avoiding an overhang of over-abundant computing resources that AGI could abruptly take advantage of if it’s invented too far in the future (the “compute overhang” argument).


I think this also ignores the counterfactual world with less safety research, where the equivalent advances, funded by commercial incentives, come from less generalizable safety research, and we end up with similarly capable but less well prosaically aligned systems. (I haven't really laid out this argument before, but I think it generalizes to the counterfactual world where OpenAI, or even DeepMind, was never inspired by AI safety concerns.)

I think it's fine, if orgs are set up for getting the donations. If they have a "donate" button or page, they are set up to get the money, less credit card fees, etc. The problem is that setting up that sort of thing is anywhere from easy to legally very complex.

As someone who runs an organization that does a lot of biorisk work, I can say it's incredibly expensive in staff time and logistics to receive small donations - but if you're giving more than, say, $5,000, you could just email the organizations to ask, and I'm sure they could figure it out.

But as I answered, CHS does have a donation page. (And NTI does allow donations, with a box to indicate where you'd like the money to go, but it's unclear to me if that actually lets you direct it only to bio.)

LLMs are not AGIs in the sense being discussed, they are at best proto-AGI. That means the logic fails at exactly the point where it matters.

When I ask a friend to give me a dollar when I'm short, they often do so. Is this evidence that I can borrow a billion dollars? Should I go on a spending spree on the basis that I'll be able to get the money to pay for it from those friends?

When I lift, catch, or throw a 10 pound weight, I usually manage it without hurting myself. Is this evidence that weight isn't an issue? Should I try to catch a 1,000 pound boulder?

No one is really suggesting that a unilateral "pause" is effective, but there is growing support for some non-unilateral version as an important approach to be negotiated.

There was a quite serious discussion of the question, and of different views, on the forum late last year (which I participated in), summarized by Scott Alexander here: https://forum.effectivealtruism.org/posts/7WfMYzLfcTyDtD6Gn/pause-for-thought-the-ai-pause-debate

Confirmed; he does work in this area, there's independent reporting about his work on these topics, and he has a Substack about his very relevant legal work: https://www.nlrbedge.com/

Do you have any comment on the idea that nondisparagement clauses like this could be found invalid for being contrary to public policy? (How would that be established?)
