All of robertskmiles's Comments + Replies

The AI Messiah

To avoid lowering the quality of discussion by just posting snarky memes, I'll explain my actual position:
"People may have bad reasons to believe X" is a good counter against the argument "People believe X, therefore X".  So for anyone whose thought process is "These EAs are very worried about AI so I am too", I agree that there's a worthwhile discussion to be had about why those EAs believe what they do, what their thought process is, and the track record both of similar claims and of claims made by people using similar thought processes. This is bec... (read more)

EA Librarian: CEA wants to help answer your EA questions!

This sounds like a really useful thing to make!

Do you think there would be value in using the latest language models to do semantic search over this set of (F)AQs, so people can easily see if a question similar to theirs has already been answered? I ask because I'm thinking of doing this for AI Safety questions, in which case it probably wouldn't be far out of my way to do it for librarian questions as well.
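
Concretely, here's a minimal sketch of the kind of thing I have in mind, using sentence embeddings and cosine similarity. The `sentence-transformers` library and the `all-MiniLM-L6-v2` model are just illustrative choices, and the toy question list is made up:

```python
# Hypothetical sketch: embed the answered questions once, then compare
# each incoming question against them by cosine similarity.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

answered_questions = [
    "What do people mean by the word 'utility'?",
    "What are some intro resources on biosecurity?",
    "How should I choose between earning to give and direct work?",
]
# normalize_embeddings=True makes dot products equal cosine similarities.
corpus_embeddings = model.encode(answered_questions, normalize_embeddings=True)

def find_similar(query: str, top_k: int = 3):
    """Return the top_k previously answered questions most similar to query."""
    query_embedding = model.encode([query], normalize_embeddings=True)[0]
    scores = corpus_embeddings @ query_embedding
    best = np.argsort(-scores)[:top_k]
    return [(answered_questions[i], float(scores[i])) for i in best]

print(find_similar("Can someone explain what 'utility' means?"))
```

In practice you'd embed the real (F)AQ corpus once, cache the embeddings, and only encode new incoming questions as they arrive.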

1 · calebp · 4mo
That sounds super cool! I expect this will work best for broader/more general questions, e.g. "What do people mean by the word 'utility'?" or "I'm interested in biosecurity; what are some intro resources that would be suitable for someone with little background in biology?", as opposed to "I am a 3rd-year undergraduate with a double major in CS and Music; I am worried that majoring in music might put off employers in the AI safety space. How should I test my assumption?". I could of course be wrong about the types of questions this would be better for. Questions so far have been more of the latter form than the former. I'm not entirely sure why that is, and we have some ideas for generating more questions like the former, so I don't know what the distribution will look like in a few weeks. I'll make a note to get back to you on this further down the line if I think it would be useful.
Why fun writing can save lives: the case for it being high impact to make EA writing entertaining

I see several comments here expressing an idea like "Perhaps engaging writing is better, but is it worth the extra effort?", and I just don't think that that trade-off is actually real for most people. I think a more conversational and engaging style is quicker and easier to write than the slightly more formal and serious tone which is now the norm. Really good, polished, highly engaging writing may be more work, but on the margin I think there's a direction we can move that is downhill from here on both effort and boringness.

The Survival and Flourishing Fund grant applications open until August 23rd ($8m-$12m planned for dispersal)

The S-Process is fascinating to me! Do you know of any proper write-ups of how it works? I'm especially interested in code or pseudocode, as I might want to try applying something similar to one of my projects.

2 · Larks · 7mo
Unfortunately I don't think so. Here is a rough summary, based on my recollections, but I was only involved in one part of it so my memory or understanding might be awry:

* Charities etc. submit applications.
* Funders choose evaluators to deputise (can be paid or unpaid).
* Evaluators read applications, do calls, read background, do other due diligence, etc.
* Evaluators write up their notes and assign the following parameters for each grant they looked at:
  * Marginal utility of the first dollar to this application
    * The process is invariant under a linear transformation, so this is less onerous than it sounds.
  * Dollar amount at which marginal utility = 0
  * (Optional) convexity/concavity
* Evaluators read each other's notes and discuss, then make any final adjustments to their own inputs.
* Funders read these notes and review recordings of the discussions.
* Funders assign the following parameters to the evaluators:
  * Marginal utility of the first dollar to this evaluator
  * Dollar amount at which marginal utility = 0
  * (Optional) convexity/concavity
* The simulation then basically waterfalls the dollars down (see the sketch below): each funder gives $1,000 to the evaluator they think has the highest marginal utility, who then gives it to the charity they think has the highest marginal utility. Then all the marginal utilities are updated, and the next $1,000 is allocated to an evaluator, who again allocates it to a charity.

There were also some other 'social' elements, like disclosure and conflict-of-interest policies.

This has a number of properties:

* If an application is really liked by any one evaluator it will get funded, even if the others dislike it (unless they can persuade the one otherwise).
* Not every evaluator has to look at every grant.
* There is less incentive for evaluators to be dishonest than in other systems.
* It can be counter-intuitive what indivi…
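
As a rough illustration of that waterfall step, here is a minimal sketch under simplifying assumptions: a single funder, linear marginal-utility curves (utility starts at `mu0` for the first dollar and falls to zero at `d_zero` dollars), and no convexity/concavity adjustments. All names are hypothetical; this is not the actual SFF implementation.

```python
# Hypothetical sketch of the S-Process waterfall: a single funder routes
# money in $1,000 steps through the evaluator with the highest current
# marginal utility, who passes it on to their highest-marginal-utility charity.
STEP = 1_000

def marginal_utility(mu0, d_zero, allocated):
    """Linear curve: mu0 at the first dollar, zero once d_zero is reached."""
    return mu0 * max(0.0, 1 - allocated / d_zero)

def run_waterfall(budget, evaluators, charity_views):
    """
    evaluators:    {name: (mu0, d_zero)}, the funder's view of each evaluator.
    charity_views: {evaluator: {charity: (mu0, d_zero)}}, each evaluator's
                   view of the charities they assessed.
    Returns {charity: total dollars granted}.
    """
    to_evaluator = {e: 0 for e in evaluators}
    via = {e: {c: 0 for c in charity_views[e]} for e in charity_views}
    grants = {}

    while budget >= STEP:
        # Pick the evaluator with the highest marginal utility right now.
        e = max(evaluators, key=lambda ev: marginal_utility(*evaluators[ev], to_evaluator[ev]))
        if marginal_utility(*evaluators[e], to_evaluator[e]) <= 0:
            break  # every evaluator is saturated
        # That evaluator picks their highest-marginal-utility charity.
        c = max(via[e], key=lambda ch: marginal_utility(*charity_views[e][ch], via[e][ch]))
        if marginal_utility(*charity_views[e][c], via[e][c]) <= 0:
            break  # this sketch ignores the edge case of one saturated portfolio
        to_evaluator[e] += STEP
        via[e][c] += STEP
        grants[c] = grants.get(c, 0) + STEP
        budget -= STEP

    return grants

# Toy run: one funder with $10k, two evaluators, two charities.
print(run_waterfall(
    10_000,
    {"alice": (1.0, 8_000), "bob": (0.6, 20_000)},
    {"alice": {"x": (1.0, 5_000), "y": (0.4, 50_000)},
     "bob":   {"x": (0.2, 5_000), "y": (0.9, 50_000)}},
))
```

Even this toy version exhibits the first property above: money reaches a charity as long as a single evaluator still rates it above zero at the margin, whatever the other evaluators think.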
Listen to more EA content with The Nonlinear Library

Exciting! But where's the podcast's RSS feed URL? All I can find is a Spotify link.

Edit: I was able to track it down; here it is: https://spkt.io/f/8692/7888/read_8617d3aee53f3ab844a309d37895c143

4 · Kat Woods · 7mo
Here it is! Spotify: https://open.spotify.com/show/3EcTioycPRcxwHv00IQEoF?si=0cZvtPUYRC-Yq-lcw6NZ4w, Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9zcGt0LmlvL2YvODY5Mi83ODg4L3JlYWRfODYxN2QzYWVlNTNmM2FiODQ0YTMwOWQzNzg5NWMxNDM, Pocket Casts: https://pca.st/520yn9xh. Or just search for it in your preferred podcasting app. We put it on all the biggest ones. Just let us know if it's not on one and it'll be easy enough for us to add it. Added clarification at the top of the post to make it easier to find. :)
The case against “EA cause areas”

A minor point, but I think this overestimates the extent to which a small number of people with an EA mindset can help in crowded cause areas that lack such people. Like, I don't think PETA's problem is that there's nobody there talking about impact and effectiveness. Or rather, that is their problem, but adding a few people to do that wouldn't help much, because they wouldn't be listened to. The existing internal political structures and discourse norms of these spaces aren't going to let these ideas gain traction, so while EAs in these areas might be abl…

2 · nadavb · 10mo
I totally agree. In order for an impact-oriented individual to contribute significantly in an area, there has to be some degree of openness to good ideas in that area, and if it is likely that no one will listen to evidence and reason then I'd tend to advise EAs to stay away from there. I think there are such areas where EAs could contribute and be heard. And I think the more mainstream the EA mindset becomes, the more such places will exist. That's one of the reasons why we really should want EA to become more mainstream, and why we shouldn't hide ourselves from the rest of the world by operating in such a narrow set of domains.