I'm not responding on behalf of GAP, but since I've been working a bit with them, I'll try to answer.
If anyone has a contrary impression on any of these points, feel free to say so, and/or reach out to me privately.
As I said in another comment, I'm working with GAP, but am not speaking on their behalf. And feel free to wait until the presentation before deciding about donating, but yes, there is already effort to push on both sides of the aisle. That said, it's a waste of time and money for a narrowly focused lobbying group to aim to support equal numbers of people on both sides of the aisle, rather than opportunistically finding champions for individual issues on both sides, and building relationships that allow us to get specific items passed.
That means that when there is a bill which is getting written by the party currently in power in the house, GAP is going to focus on key members of the relevant committees - which is largely, but certainly not exclusively, the party in power. And given US political dynamics, it is likely that GAP will be talking even more to Republicans during the next year, to ensure they have champions for their work during the next Congress.
Are the Slack or other community resources still being used / are they still available for additional people to join?
(I'm also working on the project.) We definitely like the idea of a semantically richer representation, but there are several components of the debate that seem much less related to arguments, and more related to prediction - though they are interrelated. For example:

Argument 1: Analogies to the brain predict that we already have sufficient computation to run an AI.
Argument 2 (contra 1): Training AI systems (or at least hyperparameter search) is more akin to evolving the brain than to running it.
Argument 2a: The compute needed to do this is 30 years away.
Argument 2b (contra 2a): Optimizing directly for our goal will be more efficient.
Argument 2c (contra 2b): We don't know exactly what we are optimizing for.
Argument 2d (supporting 2b): We still manage to do things like computer vision.
Each of these has implications for timelines until AI - we don't just want to look at the strength of the arguments, we also want to look at their actual implications for timelines.
Semantica Pro doesn't do quantitative relationships that allow for simulation of outcomes and uncertainty, like "argument X predicts progress will be normal(50%, 5%) faster." On the other hand, Analytica doesn't really do the other half, representing conflicting models - but we're not wedded to it as the only way to do things, and something like what you suggest is definitely valuable. (But if we didn't pick something, we could spend the entire time until ASI debating preliminaries or building something perfect for what we want.)

It seems like what we should do is have different parts of the issue represented in different/multiple ways, and given that we've been working on cataloging the questions, we'd potentially be interested in collaborating.
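To make the kind of quantitative relationship above concrete, here is a minimal Monte Carlo sketch of what "argument X predicts progress will be normal(50%, 5%) faster" could mean for a timeline estimate. All numbers (the 30-year baseline, the speedup distribution) are made-up placeholders for illustration, not real estimates from the project.

```python
import random

def simulate_timeline(baseline_years=30.0, speedup_mean=0.50,
                      speedup_sd=0.05, n=100_000, seed=0):
    """Monte Carlo over an uncertain speedup factor applied to a baseline timeline.

    Models the claim "progress will be normal(speedup_mean, speedup_sd) faster"
    by dividing the baseline timeline by (1 + sampled speedup).
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        speedup = rng.gauss(speedup_mean, speedup_sd)  # e.g. 50% +/- 5% faster
        samples.append(baseline_years / (1.0 + speedup))
    samples.sort()
    return {
        "p10": samples[int(n * 0.10)],
        "median": samples[n // 2],
        "p90": samples[int(n * 0.90)],
    }

result = simulate_timeline()
# With a hypothetical 50% +/- 5% speedup on a 30-year baseline,
# the median timeline comes out around 20 years.
```

The point of even a toy model like this is that it forces each argument node to carry an explicit, composable quantitative implication, rather than only a strength-of-argument judgment.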
Per my answer, I think it's likely that eliminating tax deductibility would be net negative without other simplifications of the tax code to eliminate the alternative tax shelters.
Agreeing with others that this is a good question - and it's not simple. (Because, of course, policy debates should not appear one-sided!)

Two of the key reasons I'm a fan of tax deductibility are that it's a clear signal about whether something is a charity, and that it's a behavioral incentive to donate - people feel like they are getting something from donating. (Never mind the fact that they are spending - it's the same cognitive effect as when people feel like they "saved money" by buying something they don't need on sale.)

On the other hand, I think Rob Reich is right about this, and we'd be better off switching to a system that doesn't undermine our taxation system generally - though tax deductibility is far from the only culprit, and if this were a single change, the alternative loopholes are less publicly beneficial in their side effects, so I would guess it's a net negative unless coupled with broader reform. Note that I haven't read Rob's latest book (he is an incredibly fast writer!), and maybe he talks about this there. If not, I'd be interested in asking him for his take.

Given all of this, I don't have a strong take on this - but short of general reform, I'd at least be in favor of expanding tax credits for EA charities, so that they aren't relatively disadvantaged as places to give.
Thanks Will - I apologize for mischaracterizing your views, and am very happy to see that I was misunderstanding your actual position. I have edited the post to clarify.

I'm especially happy about the clarification because I think there was at least a perception in the community that you and/or others do, in fact, endorse this position, and therefore that it is the "mainstream EA view," albeit one which almost everyone I have spoken to about the issue in detail seems to disagree with.
Who should buy them?

I'm concerned that it would look really shady for OpenPhil to do so, but maybe Sam Bankman-Fried or another very big EA donor could do it - but then the purchaser needs to figure out who to pick to actually manage things, since they aren't experts themselves. (And they need to ensure that their control doesn't undermine the publication's credibility - which seems quite tricky!)
Yeah, cloning humans is effectively illegal almost everywhere. (I specifically know it's banned in the US and Israel, I assume the EU's rules would be similar.)
I think the budget to do this is easily tens of millions a year, for perhaps a decade, plus the ability to hire the top talent, and it likely only works as a usefully secure system if you open-source it. Are there large firms who are willing to invest $25m/year for 4-5 years on a long-term cybersecurity effort like this, even if it seems somewhat likely to pay off? I suspect not - especially if they worry (plausibly) that governments will actively attempt to interfere in some parts of this.