All of Simon Skade's Comments + Replies

I think it is important to keep in mind that we are not very funding constrained. It may be OK to have some false positives; false negatives may often be worse, so I wouldn't be too cautious.

I think grantmaking is probably still too reluctant to fund things that have a small chance of very high impact, especially when grantmakers are uncertain because the people aren't EAs.
For example, I told a very exceptional student (who has something like 1-in-a-million problem-solving ability) to apply for the Atlas Fellowship, although I don't know him well, because from my limited knowledg... (read more)

Of the alternative important skills you mentioned, I think many are highly correlated, and the relevant ones roughly boil down to rationality (and perhaps also ambition).

Being rational itself is also correlated with being an EA and with being intelligent, and overall I think intelligence and rationality (and ambition) are traits that are really strong predictors of impact.

The impact curve is very heavy-tailed, and smarter people can have OOMs more impact than people with 15 IQ points less. So no, I don't think EA is focusing too much on... (read more)

(Not sure if this has been suggested before, but) you should be able to sort comments by magic (the way posts are sorted on the frontpage) or some other, better way of combining the top and new properties for comments. Otherwise, good new contributions are read far too rarely, so only very few people read and upvote them, while the earliest comments immediately receive many upvotes and therefore attract even more upvotes later. Still, upvotes do tell you something about which comments are good, and not everyone wants to read everything.

I would definitely use it myself, but I would strongly suggest also making it the default way comments are sorted.

(That wouldn't totally remove bad dynamics, but it would be a start.)
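(The forum's actual "magic" formula isn't spelled out here; as a rough sketch of the kind of ranking I have in mind, a karma score discounted by age, in the style of Hacker News, would already help. The function name, constants, and example data below are just illustrative assumptions, not the forum's real algorithm.)

```python
from datetime import datetime, timedelta, timezone

def magic_score(karma: int, posted_at: datetime, gravity: float = 1.8) -> float:
    """Karma discounted by age (Hacker-News-style time decay).

    Newer comments need less karma to rank highly, so good late contributions
    get a chance to be seen before the earliest comments run away with the votes.
    """
    age_hours = (datetime.now(timezone.utc) - posted_at).total_seconds() / 3600
    return karma / (age_hours + 2) ** gravity

# Example: a fresh comment with little karma can outrank an older, higher-karma one.
now = datetime.now(timezone.utc)
comments = [
    {"id": "early", "karma": 40, "posted_at": now - timedelta(hours=48)},
    {"id": "late", "karma": 6, "posted_at": now - timedelta(hours=3)},
]
ranked = sorted(comments, key=lambda c: magic_score(c["karma"], c["posted_at"]), reverse=True)
print([c["id"] for c in ranked])  # ['late', 'early'] with these numbers
```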

Related to the post, and very related to this comment, is this post: https://www.lesswrong.com/posts/M8cEyKmpcbYzC2Lv5/exercise-taboo-should

That post claims that using words like "should", "good", or "bad" (or other words that carry moral judgement) can often lead to bad reasoning, because you fail to anticipate the actual consequences. (I recommend reading the post, or at least its last two sections; this sentence isn't really a good summary.)

Actually, some replacements suggested in this post may not help in some cases:

Someone in the EA community should do a

... (read more)

AGI will (likely) be quite different from current ML systems.

I'm afraid I disagree with this. For example, if this were true, interpretability from Chris Olah or the Anthropic team would be automatically doomed; Value Learning from CHAI would also be useless, our predictions about forecasting that we use to convince people of the importance of AI Safety equally so.

Wow, the "quite" wasn't meant that strongly, though I agree that I should have expressed myself a bit clearer/differently. And the work of Chris Olah, etc. isn't useless anyway, but yeah AGI won'... (read more)

I must say I strongly agree with Steven.

  1. If you are saying academia has a good track record, then I must say (1) that's wrong for fields like ML, where in recent years much (arguably most) relevant progress has been made outside of academia, and (2) it may have a good track record over the long history of science, and when you say it's good at solving problems, sure, I think it might solve alignment in 100 years, but we need it in 10, and academia is slow. (E.g., read Yudkowsky's sequence on science if you don't think academia is slow.)
  2. Do you have some reason why you
... (read more)

There is an EA Forum feature suggestion thread for such things. An app may be a special case because it is a rather big feature, but I still think the request fits there.

We won't solve AI safety by just throwing a bunch of (ML) researchers at it.

AGI will (likely) be quite different from current ML systems. Also, work on aligning current ML systems won't be that useful, and generally what we need is not small advances but breakthroughs. (This is a great post for getting started on understanding why this is the case.)

We need a few Paul Christiano-level researchers, who build a very deep understanding of the alignment problem and can then make huge advances, much more than we need many still-great-but-no... (read more)

8
PabloAMC
2y
Hey Simon, thanks for answering! Perhaps we don't need to buy ML researchers (although I think we should try at least), but I think it is more likely that we won't solve AI Safety if we don't get more concrete problems in the first place.

I'm afraid I disagree with this. For example, if this were true, interpretability from Chris Olah or the Anthropic team would be automatically doomed; Value Learning from CHAI would also be useless, our predictions about forecasting that we use to convince people of the importance of AI Safety equally so. Of course, this does not prove anything; but I think there is a case to be made that Deep Learning currently seems like the only viable path we have found to perhaps get to AGI. And while I think the agnostic approach of MIRI is very valuable, I think it would be foolish to bet all our work on the truth of this statement. It could still be the case if we were much more bottlenecked in people than in research lines, but I don't think that's the case; I think we are more bottlenecked by concrete ideas of how to push forward our understanding. Needless to say, I believe Value Learning and interpretability are things that are very suitable for academia.

Breakthroughs only happen when one understands the problem in detail, not when people float around vague ideas. Agreed. But I think there are great researchers in academia, and perhaps we could profit from that. I don't think we have any method to spot good researchers in our community anyway. Academia can sometimes help with that.

I think this is a bit exaggerated. What academia does is ask for well-defined problems and concrete solutions. And that's what we want if we want to make progress. It is true that some goodharting will happen, but I think we would be closer to the optimum if we were goodharting a bit than where we are right now, unable to measure much progress. Notice also that Shannon and many other people who came up with breakthroughs did so in academic settings.

Another advantage of an app may be that you could download posts, in case you go somewhere where you don't have Internet access, but I think this is rare and not a sufficient reason to create an app either.

Why should there be one? The EA Forum website works great on mobile. So my guess is that there is no EA Forum app because it's not needed / wouldn't be that useful, except perhaps for app notifications, but those don't seem that important.

1
Simon Skade
2y
Another advantage of an app may be that you could download posts, in case you go somewhere where you don't have Internet access, but I think this is rare and not a sufficient reason to create an app either.
3
Chris Leong
2y
Web apps can do notifications these days
5
Guy Raveh
2y
Which suggests a recommendation for people who want an app: add a home screen shortcut to the website.

that is likely to contain all the high quality ideas that weren't funded yet. 

No, not at all. I agree that this list is valuable, but I expect there to be many more high quality ideas / important projects that are not mentioned in this list. Those are just a few obvious ideas of what we could do next.

(Btw, you apparently just received a strong downvote while I was writing this. That wasn't me; my other comment was strong-downvoted too.)

Yep, it would have been even funnier if the post content had been just ".", but perhaps that wouldn't have helped as much with convincing people that short posts are OK. xD

I think another class of really important projects is research that tries to evaluate what needs to be done. (Like priorities research, though a bit more applied: generating and evaluating ideas and forecasting to see what seems best.)

The projects that are now on your project list are good options given what currently seem like good things to do. But in the game against x-risk, we want to be able to look more moves ahead, consider how our opponent may strike us down, and probably invest a lot of effort into improving our long-te... (read more)

Nice, we now have some good project ideas; next we need people to execute them.

I wouldn't expect that to happen automatically in many cases. Therefore, I am particularly excited about projects that act as accelerators for getting other projects started, like actively finding the right people (and convincing them to start or work on a specific project) or making promising people more capable.

In particular, I'd be excited about a great headhunting organization to get the right people (EAs and non-EAs) to work on the right projects. (Like you considered in the ... (read more)

Oops, I missed that, thanks!

What is the minimum amount of money a project should require?

From reading your website, I get the impression that you are mainly interested in relatively big projects, say requiring $30k+, or rather more.

In particular, it does not seem to me like you are looking for applications along the lines of "Hey, could you give me $5,000 to fund the research project I plan to do over the next months?". But I may be mistaken, and I haven't read anything explicit about it not being possible (maybe I just overlooked it).

(And yes, I know there's the LTFF for such things, I'm just curious regardless.)

2
Steen Hoyer
2y
Under the first FAQ question on the Apply page: “In order to distribute labor and keep ourselves focused, we don’t accept applications seeking less than $100,000.”

I think most of the variance in estimates may come from the high variance in estimates of how big x-risk is. (OK, a lot of the variance here comes from different people using different methods to estimate the answer to the question, but even if everyone used the same method, I would expect a lot of variance from this.)
Some people may say there is a 50% probability of existential catastrophe this century, and some may say 2%, which makes the amount of money they would be willing to spend quite different.
But because in both cases x-risk reduction is still (by fa... (read more)

I agree that it makes much more sense to estimate x-risk on a timescale of 100 years (as I said in the side note of my answer), but I think you should specify that in the question, because "How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?", together with your definition of x-risk, implies taking the whole future of humanity into account.
I think it may make sense to explicitly talk only about the risk of existential catastrophe in this or the next couple of centuries.

2
Linch
2y
Lots of people have different disagreements about how to word this question. I feel like I should pass on editing the question even further, especially given that I don't think it's likely to change people's answers too much. 

I think reducing x-risk is by far the most cost-effective thing we can do, and in an adequate world all our efforts would be flowing into preventing x-risk.
The utility of a 0.01% x-risk reduction is many orders of magnitude greater than global GDP, and even if you don't care at all about future people, you should still be willing to pay a lot more than is currently paid for a 0.01% x-risk reduction, as Korthon's answer suggests.
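To make the "even ignoring future people" part concrete, here is a rough back-of-the-envelope sketch; the population figure, the dollar value per life, and the variable names are illustrative assumptions of mine, not anyone's actual estimates:

```python
# Back-of-the-envelope: value of a 0.01% absolute reduction in extinction risk,
# counting only people alive today. All inputs below are illustrative assumptions.
population = 8e9          # roughly the number of people alive today
value_per_life = 5e6      # assumed dollar value placed on saving one life
risk_reduction = 1e-4     # 0.01% absolute reduction in extinction probability

expected_lives_saved = population * risk_reduction            # 800,000
willingness_to_pay = expected_lives_saved * value_per_life    # ~$4 trillion
print(f"{expected_lives_saved:,.0f} expected lives saved, worth ~${willingness_to_pay:,.0f}")
```

Even under these deliberately modest assumptions the number comes out in the trillions, far above what is currently spent on x-risk reduction; counting future people pushes it up by many further orders of magnitude.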

But of course, we should not be willing to trade so much money for that x-risk reduction, because we can invest the money more effici... (read more)