Davidmanheim

Guarding Against Pandemics

I'm not responding on behalf of GAP, but since I've been working a bit with them, I'll try to answer.
 

  1. The efforts to find and work with Republican champions are ongoing, and there are at least some (non-public, in-the-works) efforts which are definitely on the Republican side. I don't know all the details, but I'm assuming the issue for now is that 1) they haven't set up infrastructure for donations independent of ActBlue, and 2) the Democrats are in power and lots of things are happening immediately, so they are the primary target for much current lobbying.
  2. This is a definite topic of discussion, and I'm not sure there's a way to answer briefly, but I think that well-run and careful lobbying, done by a group which aligns itself with the EA movement but in no way claims to represent it, has limited risks. That said, it's of course very difficult to predict how political lobbying plays out, but companies and other movements certainly navigate this with a decent ability to avoid trouble. More than that, the alternative embraced so far has been to have no outlet for engaging in lobbying directly, and since it seems like an important tool, continuing not to use it seems ill-advised - but I'd be happy to have a more in-depth discussion of this with you.
  3. I can't name who has been involved in discussions, but I'll vouch for the fact that several of the people I would want in the loop on this are, in fact, in the loop. I can't promise that they will have sufficient veto-power, but I think Gabe is sufficiently aware of the issues and the risks of unilateralism that it's fine.

If anyone has a contrary impression on any of these points, feel free to say so, and/or reach out to me privately.

Guarding Against Pandemics

As I said in another comment, I'm working with GAP, but am not speaking on their behalf. And feel free to wait until the presentation before deciding about donating, but yes, there is already an effort to push on both sides of the aisle. That said, it's a waste of time and money for a narrowly focused lobbying group to aim to support equal numbers of people on both sides of the aisle, rather than opportunistically finding champions for individual issues on both sides and building relationships that allow us to get specific items passed.

That means that when there is a bill being written by the party currently in power in the House, GAP is going to focus on key members of the relevant committees - which is largely, but certainly not exclusively, the party in power. And given US political dynamics, it is likely that GAP will be talking even more to Republicans during the next year, to ensure they have champions for their work during the next Congress.

What we learned from a year incubating longtermist entrepreneurship

Are the Slack or other community resources still being used, and are they still available for additional people to join?

"Epistemaps" for AI Debates? (or for other issues)

(I'm also working on the project.)

We definitely like the idea of doing semantically richer representation, but there are several components of the debate that seem much less related to arguments, and more related to prediction - but they are interrelated.

For example:
Argument 1: Analogies to the brain predict that we have sufficient computation to run an AI already
Argument 2: Training AI systems (or at least hyperparameter search) is more akin to evolving the brain than to running it. (contra 1)
Argument 2a: The compute needed to do this is 30 years away.
Argument 2b (contra 2a): Optimizing directly for our goal will be more efficient.
Argument 2c (contra 2b): We don't know what we are optimizing for, exactly.
Argument 2d (supporting 2b): We still manage to do things like computer vision.

Each of these has implications for timelines until AI - we don't just want to look at the strength of the arguments, we also want to track the actual implications for timelines.
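As a purely illustrative sketch (not the project's actual data model), here is one way the argument tree above could be encoded so that the supports/contra structure and a rough quantitative timeline implication stay attached to each node - the field names and numbers here are made up:

```python
# Illustrative only: a hypothetical encoding of the argument tree above,
# keeping both the argumentative relations and a quantitative implication.
from dataclasses import dataclass, field

@dataclass
class Argument:
    name: str
    claim: str
    supports: list[str] = field(default_factory=list)  # names of arguments this supports
    contra: list[str] = field(default_factory=list)    # names of arguments this pushes against
    timeline_shift_years: float | None = None          # made-up quantitative implication

arguments = {
    "1":  Argument("1", "Brain analogies imply we already have enough compute", timeline_shift_years=-10),
    "2":  Argument("2", "Training is more like evolving a brain than running one", contra=["1"]),
    "2a": Argument("2a", "The compute needed for that is ~30 years away", supports=["2"], timeline_shift_years=30),
    "2b": Argument("2b", "Optimizing directly for our goal will be more efficient", contra=["2a"]),
    "2c": Argument("2c", "We don't know exactly what we are optimizing for", contra=["2b"]),
    "2d": Argument("2d", "We still manage to do things like computer vision", supports=["2b"]),
}
```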

Semantica Pro doesn't do quantitative relationships that allow for simulation of outcomes and uncertainty, like "argument X predicts progress will be normal(50%, 5%) faster." On the other hand, Analytica doesn't really handle the other half, representing conflicting models - but we're not wedded to it as the only way to do things, and something like what you suggest is definitely valuable. (But if we didn't pick something, we could spend the entire time until ASI debating preliminaries or building something perfect for what we want.)
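To make the "normal(50%, 5%)" example concrete, here's a toy Monte Carlo sketch of the kind of quantitative relationship we'd want the tooling to support; the baseline timeline, credence, and distribution parameters are all placeholders, not real estimates:

```python
# Toy example: propagate "argument X predicts progress will be normal(50%, 5%) faster"
# into a timeline distribution. All numbers are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

baseline_years = 40.0                                # assumed baseline timeline
speedup = rng.normal(loc=0.50, scale=0.05, size=n)   # argument X: ~50% +/- 5% faster progress
credence_in_x = 0.3                                  # weight given to argument X

# If argument X holds, the timeline shrinks by the sampled speedup; otherwise it stays at baseline.
x_holds = rng.random(n) < credence_in_x
timelines = np.where(x_holds, baseline_years * (1 - speedup), baseline_years)

print(f"median: {np.median(timelines):.1f} years, "
      f"10th-90th percentile: {np.percentile(timelines, 10):.1f}-{np.percentile(timelines, 90):.1f}")
```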

It seems like what we should do is have different parts of the issue represented in different or multiple ways, and given that we've been working on cataloging the questions, we'd potentially be interested in collaborating.

Is it really that a good idea to increase tax deductibility ?

Per my answer, I think it's likely that eliminating tax deductibility would be net negative without other simplifications of the tax code to eliminate the alternative tax shelters.

Is it really that a good idea to increase tax deductibility ?

Agreeing with others that this is a good question - and it's not simple. (Because, of course, policy debates should not appear one-sided!)

Two of the key reasons I'm a fan of tax deductibility are that it's a clear signal about whether something is a charity, and that it's a behavioral incentive to donate - people feel like they are getting something from donating. (Never mind the fact that they are spending - it's the same cognitive effect as when people feel like they "saved money" by buying something they don't need on sale.)

On the other hand, I think Rob Reich is right about this, and we'd be better off switching to a system that doesn't undermine our taxation system generally - though tax deductibility is far from the only culprit, and if this were a standalone change, the remaining loopholes have less publicly beneficial side effects, so I would guess it's a net negative unless coupled with broader reform. Note that I haven't read Rob's latest book (he is an incredibly fast writer!), and maybe he talks about this. If not, I'd be interested in asking him for his take.

Given all of this, I don't have a strong take - but short of general reform, I'd at least be in favor of expanding tax credits for EA charities, so that they aren't relatively disadvantaged as places to give.

Towards a Weaker Longtermism

Thanks Will - I apologize for mischaracterizing your views, and am very happy to see that I was misunderstanding your actual position. I have edited the post to clarify.

I'm especially happy about the clarification because I think there was at least a perception in the community that you and/or others do, in fact, endorse this position, and therefore that it is the "mainstream EA view," albeit one which almost everyone I have spoken to about the issue in detail seems to disagree with.

What EA projects could grow to become megaprojects, eventually spending $100m per year?

Who should buy them?

I'm concerned that it would look really shady for OpenPhil to do so, but maybe Sam Bankman-Fried or another very big EA donor could do it. Even then, the purchaser would need to figure out whom to pick to actually manage things, since they aren't experts themselves. (And they'd need to ensure that their control doesn't undermine the publication's credibility - which seems quite tricky!)

What EA projects could grow to become megaprojects, eventually spending $100m per year?

Yeah, cloning humans is effectively illegal almost everywhere. (I specifically know it's banned in the US and Israel; I assume the EU's rules are similar.)

What EA projects could grow to become megaprojects, eventually spending $100m per year?

I think the budget to do this is easily tens of millions a year for perhaps a decade, plus the ability to hire top talent, and it likely only works as a usefully secure system if you open-source it. Are there large firms that are willing to invest $25m/year for 4-5 years on a long-term cybersecurity effort like this, even if it seems somewhat likely to pay off? I suspect not - especially if they worry (plausibly) that governments will actively attempt to interfere in some parts of this.
