All of So-Low Growth's Comments + Replies

You can talk to EA Funds before applying

Trying my luck here but would I also be able to get funds for academic projects (my research interests are in Metascience/Innovation/Growth)?

7 · evhub · 18d — Academic projects are definitely the sort of thing we fund all the time. I don't know if the sort of research you're doing is longtermist-related, but if you have an explanation of why you think your research would be valuable from a longtermist perspective, we'd love to hear it.
A Twitter Bot that regularly tweets current top posts from the EA Forum

That's the one! OP, I think some version of this is definitely worth implementing/reviving. I often share EA articles on my personal Twitter feed, and I know people (for example Stefan Schubert) who share EA articles too, which reaches an audience that finds the content interesting and engaging but does not always read the EA Forum regularly.

2 · nikos · 3mo — Ah yes, I had seen the @ealtruist account, but as far as I can tell someone ran it manually (it doesn't look like a bot) and then stopped. We could also merge the two: use the old account with this code, or something similar. In principle I'm open to that.
A Twitter Bot that regularly tweets current top posts from the EA Forum

I thought something like this already existed but I could be mistaken.

3 · RyanCarey · 3mo — Yes, there is [https://twitter.com/ealtruist]! It could make more sense to revive it.
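A minimal sketch of the core logic such a bot would need, assuming the post data (title, URL, karma) has already been fetched from the Forum; the fetching and posting steps (e.g. via an RSS feed and the Twitter API) are stubbed out, the example post and URL are hypothetical, and the 280-character limit is the only Twitter constraint modeled here:

```python
# Sketch only: assumes post data is already fetched. Posting via the
# Twitter API is left out; the 280-character cap is Twitter's limit.
TWEET_LIMIT = 280

def format_tweet(title: str, url: str, limit: int = TWEET_LIMIT) -> str:
    """Build a tweet for a top Forum post, truncating the title if needed."""
    suffix = " " + url
    room = limit - len(suffix)
    if len(title) > room:
        title = title[: room - 1] + "\u2026"  # trailing ellipsis
    return title + suffix

def pick_top_post(posts):
    """Pick the highest-karma post from a list of (title, url, karma) tuples."""
    return max(posts, key=lambda p: p[2])

# Hypothetical example data, not a real Forum post:
posts = [
    ("A Twitter Bot that regularly tweets current top posts from the EA Forum",
     "https://forum.effectivealtruism.org/posts/example", 42),
]
title, url, _ = pick_top_post(posts)
tweet = format_tweet(title, url)
assert len(tweet) <= TWEET_LIMIT
```

A real implementation would also need to track which posts were already tweeted (e.g. in a small local database) so the bot doesn't repeat itself.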
A ranked list of all EA-relevant documentaries, movies, and TV series I've watched

Will check some of these out. Not sure if this fits your criteria but a personal favourite is the documentary about Aaron Swartz.

Writing about my job: Internet Blogger

Thanks for this - one of my favourite blogs!

A few questions (not all directly related to the job, so feel free to skip any or all of them):

  1. How do you think blogging compares to other careers available to you in terms of impact?
  2. Why not set up a Patreon (I'm aware you've got some grants)?
  3. Why remain pseudonymous?
  4. Why the name ADS?
9 · AppliedDivinityStudies · 3mo —
  1. It depends on your skillset. My impression is that EA is not really talent constrained, with regards to the talents I currently have. So I would have a bit to offer on the margins, but that's all. I also just don't think I'm nearly as productive when working on a specific set of goals, so there's some tradeoff there. I'm interested in doing RSP one day, and might apply in the future. In theory I think the Vox Future Perfect role could be super high impact.
  2. I probably should.
  3. The short answer is that it's an irreversible decision, so I'm being overly cautious. But mostly it's aesthetic: I like Ender's Game, Death Note, etc.
  4. X-risk = Applied Eschatology. Progress Studies = Applied Theodicy.
Making impact researchful

"For some projects, a small adjustment could unlock huge academic value." Would you be able to provide examples please?

2 · Michael_Wulfsohn · 3mo — I should clarify: I don't mean a small amount of work, but a small conceptual adjustment. The example I give in the post is to adjust from fully addressing a specific application to partially addressing a more general question. And to do so in a way that is hopefully intellectually stimulating to other researchers. In my own work, using a consumer intertemporal optimisation model, I've tried to calculate the optimal amount for humanity to spend now on mitigating existential risk. That is the sort of problem-solving question I'm talking about. A couple of possible ways forward for me: include multiple countries and explore the interactions between x-risk mitigation and global public good provision; or use the setting of existential risk to learn more about a particular type of utility function which someone pointed me to for that purpose.
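For readers unfamiliar with this class of model, one minimal way such a problem is often written down is sketched below; the specific functional forms are illustrative assumptions, not the author's actual model:

```latex
% A representative agent splits output y between consumption c_t and
% x-risk mitigation spending x_t; mitigation lowers the hazard rate h(x_t),
% and survival to time t scales the discounted utility flow.
\max_{\{c_t,\, x_t\}} \int_0^\infty e^{-\rho t}\, S(t)\, u(c_t)\, dt
\quad \text{s.t.} \quad c_t + x_t \le y, \qquad
S(t) = \exp\!\Big(-\int_0^t h(x_s)\, ds\Big), \quad h'(x) < 0.
```

The tension the model captures is that spending on mitigation raises the survival probability $S(t)$ but lowers consumption $c_t$, and the optimum balances the two.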
Economics PhD application support - become a mentee!

Good idea! It may be worth reaching out to the LSE Econ PhD programme (I see you're attending!), which trialled something similar last year for applicants from underrepresented backgrounds, to get some feedback on what applicants want.

I think a good addition to this would be providing help to people applying for pre-docs as well, given how important they have become in the profession.

Inbreeding and global health & development

I should have added the following statement. If anyone would like a quick chat about researching cousin marriage, feel free to message me.

Caveat: I'm still fairly new to the topic (there's a lot of non-econ literature) but can try to help wherever possible. 

Inbreeding and global health & development

I'm currently actively working on this in my PhD (I'm an Economist), which developed from one of my pre-PhD courses. I have a few different ideas and am currently applying for funding for them. Truthfully, this is not one of my core research interests but I think it's relatively fertile ground for research/publication and I have some nice co-authors that I'm working with, so I don't have to devote too much time to the topic. 

A few points:

  1. The negative biological effects seem to be severe where there is persistent cousin marriage. Otherwise it seems tha
... (read more)
6 · So-Low Growth · 3mo — I should have added the following statement. If anyone would like a quick chat about researching cousin marriage, feel free to message me. Caveat: I'm still fairly new to the topic (there's a lot of non-econ literature) but can try to help wherever possible.
What should CEEALAR be called?

I strongly dislike this and think it gives off the wrong impression about the purpose of the Hotel.

A proposal for a small inducement prize platform

Briefly:

  1. I like the idea
  2. I think it will work
  3. I also like the idea of using Metaculus to forecast this
EA Twitter Job Bots and more

In Economics, there's an account that does this quite well, with a slightly different approach but a somewhat similar aim. It tweets economics pre-doc and RA positions. However, I think people tag the account, and then it gets re-tweeted. Here's the handle: https://twitter.com/econ_ra

Why you should give to a donor lottery this Giving Season

Does this help (from the FAQs)? "The lottery is administered by the Centre for Effective Altruism (CEA). The Centre for Effective Altruism is a registered charity in England and Wales (Charity Number 1149828) and a registered 501(c)(3) Exempt Organization in the USA (EIN 47-1988398). An entry to the lottery is a donation to CEA; CEA will regrant the lottery money, based on the recommendation of the lottery winner.

All grants made are at CEA’s sole discretion. This is a condition of CEA’s status as a tax-deductible non-profit (both in the UK and the US).... (read more)

AMA: Jason Crawford, The Roots of Progress

Quick thought here, Jack and Jason (caveat: I haven't thought about this much at all!).

Yes, the creation of new fields is important. However, even if there are diminishing returns to new fields (side note: I've been thinking about ways to measure this empirically), what matters more is how applicable a new field is to existing fields.

For example, even if we only create one new field, that field could be incredibly powerful. If it is something like APM (atomically precise manufacturing) or an AGI of some sort, then it will have major ra... (read more)

Will MacAskill has appeared on JRE before and probably talked about GiveWell. But yes, good news :).

1 · Nathan Young · 10mo — Sorry, you're right. For anyone interested, the video is here
AMA: Jason Crawford, The Roots of Progress

Aaron, I'm really ignorant about this issue, but didn't Peter Singer have a course on EA a while back that, if I recall correctly, was fairly accessible and could be marketed towards high school students?

AMA: Jason Crawford, The Roots of Progress

Alexey, I'm also skeptical of the findings but haven't had time to dig deeper yet, so these are just hunches at the moment. I have already asked you for the draft :). Honestly, I've been eager to read it ever since you announced it last week!

AMA: Jason Crawford, The Roots of Progress

What a great question Benjamin! "Why should a longtermist EA work on boosting economic growth?" is something I have been thinking about myself (my username gives it away...).

One quick comment on this: "I agree Progress Studies itself is far more neglected than general work to boost economic growth."

This spurs a question for me. How is Progress Studies different from people working on Economic Growth? 

5 · Benjamin_Todd · 10mo — One quick addition is that I see Progress Studies as innovation into how to do innovation, so it's a double market failure :)
AMA: Jason Crawford, The Roots of Progress

What do you think EA could learn from the 'Progress Studies' movement ?

My perception of EA is that a lot of it is focused on saving lives and relieving suffering. I don't see as much focus on general economic growth and scientific and technological progress.

There are two things to consider here. First, there is value in positives above and beyond merely living without suffering. Entertainment, travel, personal fitness and beauty, luxury—all of these are worth pursuing. Second, over the long run, more lives have been saved and suffering relieved by efforts to pursue general growth and progress than direct charitable efforts. S... (read more)

AMA: Jason Crawford, The Roots of Progress

Thanks for doing this, Jason. I agree with your response here. It seems natural to think that there are diminishing marginal returns to ideas within a sector.

You mention APM, which would spur progress in other sectors. Are there ways to identify which sectors open up progress in other domains, i.e. to identify the ideas that could remove the constraining factors of progress (small and big)?

7 · jasoncrawford · 10mo — I think basically you have to look at where an innovation sits in the tech tree. Energy technologies tend to be fundamental enablers of other sectors. J. Storrs Hall makes a good case for the need to increase per-capita energy usage, which he calls the Henry Adams Curve: https://rootsofprogress.org/where-is-my-flying-car But also, a fundamentally new way to do manufacturing, transportation, communication, or information processing would enable a lot of downstream progress.
1 · Rowan_Stanley · 1y — Thanks for the rec; I've added that one to my EA playlist
4 · SamiM · 1y — I came across this playlist [https://open.spotify.com/playlist/18rSj25EJRWgTPxasm40iJ?si=dPSb1YJORaWJI6pEesys9w] about the end of the world; it might be of interest.
So-Low Growth's Shortform

Thank you Aaron. That's exactly what I was looking for, and additionally I can dig deeper!

So-Low Growth's Shortform

Question: Imagine we could quantify the amount of suffering the average person causes by eating meat, and the amount of environmental damage that comes from it. How much would they need to donate to the most effective charities (climate change and animal suffering) in order to offset their meat-eating habit?

3 · Aaron Gertler · 1y — People have tried to estimate similar figures before. See Jeff Kaufman on dairy offsets [https://www.jefftk.com/p/how-bad-is-dairy] or Gregory Lewis on meat-eating [https://forum.effectivealtruism.org/posts/eeBwfLfB3iQkpDhz6/at-what-cost-carnivory] (searching the term "moral offset" will help you find other examples I haven't linked). Some people also think this idea is conceptually bad or antithetical to EA [https://forum.effectivealtruism.org/posts/Yix7BzSQLJ9TYaodG/ethical-offsetting-is-antithetical-to-ea].
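The arithmetic behind such an offset estimate is simple once the hard empirical inputs are pinned down; here is a sketch with purely hypothetical placeholder numbers (none of these figures come from the linked posts):

```python
# All numbers below are hypothetical placeholders, for illustration only.
def annual_offset_cost(tonnes_co2e, usd_per_tonne, welfare_offset_usd):
    """Donation needed to 'offset' a year of meat eating, given:
    - tonnes_co2e: emissions attributed to the diet (tonnes CO2e per year)
    - usd_per_tonne: cost to avert one tonne via an effective climate charity
    - welfare_offset_usd: donation judged to offset the animal suffering
    """
    return tonnes_co2e * usd_per_tonne + welfare_offset_usd

# e.g. 1.5 t CO2e/year at $10/t averted, plus $50/year to animal charities:
cost = annual_offset_cost(1.5, 10.0, 50.0)  # 1.5 * 10 + 50 = 65.0
```

The hard part, as the linked posts make clear, is estimating the inputs (and whether suffering is offsettable at all), not the multiplication.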
Long-Term Future Fund: September 2020 grants

This makes a lot of sense to me, Pablo. You highlighted what I was trying to explain when making the comment: 1) I was uncertain, and 2) I didn't want to attack anyone. I must admit my choice of words was rather poor and could come across as "bravery talk", although that was not what I intended.

4 · Linch · 1y — To be clear, I think your overall comment added more to the discussion than it detracted, and I really appreciate you making it. I definitely did not interpret your claims as an attack, nor did I think it was a particularly egregious example of a bravery framing. One reason I chose to comment here is that I interpreted you (correctly, it appears!) as someone who'd be receptive to such feedback, whereas if somebody started a bravery debate with a clearer "me against the immoral idiots in EA" framing, I'd probably be much more inclined to just ignore it and move on. It's possible my bar for criticism is too low. In particular, I don't think I've fully modeled meta-level considerations like: 1) that by choosing to criticize only mild rather than egregious cases, I'm creating bad incentives; 2) that you appear to be a new commenter, and by criticizing newcomers to the EA Forum I risk making the Forum less appealing; 3) that my comment may spawn a long discussion. Nonetheless, I think I mostly stand by my original comment.
Long-Term Future Fund: September 2020 grants

All good points Jonas, Ben W, Ben P, and Stefan. I was uncertain at the beginning but am pretty convinced now. As a side note, I'm very happy about the nature of all the comments: they understood my point of view and engaged with it politely.

By the way, I was also surprised that Rob made only 4 videos in the last year. But I now think Rob is producing a fairly standard number of high-quality videos annually.

The first reason is that (as Jonas points out upthread) he also did three for Computerphile, which brings his total to 7.

The second reason is that I looked into a bunch of top individual YouTube explainers and found that they produce a similar number of highly-produced videos annually. Here are a few:

... (read more)
7 · Ben Pace · 1y — :) Appreciated the conversation! It also gave me an opportunity to clarify my own thoughts about success on YouTube and related things.
Long-Term Future Fund: September 2020 grants

Thanks for the understanding responses Jonas and Linch. Again, I should clarify, I don't know where I stand here but I'm still not entirely convinced.

So, we have four videos in the last year on his channel, plus three on Computerphile, giving seven videos. If I remember correctly, The Alignment Newsletter podcast is just a reading of Shah's newsletter, which may be useful but I don't think requires a lot of effort.

I should reiterate that I think what Miles does is not easy. I may also be severely underestimating the time it takes to make a YouTube video!

Long-Term Future Fund: September 2020 grants

Thanks for pointing that out. Will refrain from doing so in the future. What I was trying to make clear was that I didn't want my comment to be seen as a personal attack on an individual. I was uneasy about making the comment on a public platform when I don't know all the details nor know much about the subject matter.

2 · Linch · 1y — Yeah, that makes a lot of sense. I think the rest of your comment is fine without that initial disclaimer, especially with your caveat in the last sentence! :)

FWIW, I think that the qualification was very appropriate and I didn't see the author as intending to start a "bravery debate". Instead, the purpose appears to have been to emphasize that the concerns were raised in good faith and with limited information. Clarifications of this sort seem very relevant and useful, and quite unlike the phenomenon described in Scott's post.

Long-Term Future Fund: September 2020 grants

This is going to sound controversial here (people will probably dislike this, but I'm genuinely raising it as a concern): is the Robert Miles $60,000 grant attached to any requirements? I like his content, but it seems to me you could find someone with a similar talent level (explaining fairly basic concepts) who could produce many more videos. I'm not well versed in YouTube, but four or five videos in the last year doesn't seem substantial. If the $60,000 was instead offered as a one-year job, I think you could find many talented in... (read more)

4 · Pongo · 1y — To state a point in the neighborhood of what Stefan [https://forum.effectivealtruism.org/posts/dgy6m8TGhv4FCn4rx/long-term-future-fund-september-2020-grants?commentId=kraetoYibbqsagsto], Ben P [https://forum.effectivealtruism.org/posts/dgy6m8TGhv4FCn4rx/long-term-future-fund-september-2020-grants?commentId=F4q9rfds4dM27taY2], and Ben W [https://forum.effectivealtruism.org/posts/dgy6m8TGhv4FCn4rx/long-term-future-fund-september-2020-grants?commentId=wQgxyLKZXHJaPfa55] have said, I think it's important for the LTFF to evaluate the counterfactual where they don't fund something, rather than the counterfactual where the project has more reasonable characteristics. That is, we might prefer a project be more productive, more legible, or more organized, but unless that makes it worse than the marginal funding opportunity, it should be funded (where one way a project could be bad is by displacing more reasonable projects that would otherwise fill a gap).

I think one of the things Rob has that is very hard to replace is his audience. Overall I continue to be shocked by the level of engagement Rob Miles' YouTube videos get, averaging over 100k views per video! I mostly disbelieve that it would be plausible to hire someone who can (a) understand technical AI alignment well, and (b) reliably create YouTube videos that get over 100k views, for less than something like an order of magnitude higher cost.

I am mostly confused about how Rob gets 100k+ views on each video. My mainline hypothesis is that Rob has succ... (read more)

It might be more relevant to consider the output: 500,000 views (or ~80,000 hours of watch time). Given that the median video gets 89 views, it might be hard for other creators to match that output, even if they could produce more videos per se.
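As a sanity check on those figures, the implied average watch time per view works out as follows (taking the 500,000 views and ~80,000 hours from the comment above as given):

```python
# Implied average watch time per view, from the figures quoted above.
views = 500_000
watch_hours = 80_000
avg_minutes_per_view = watch_hours * 60 / views
# 80,000 h * 60 min/h / 500,000 views = 9.6 minutes per view
```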

9 · Linch · 1y — Meta: small nitpick, but I would prefer if we reduced framings like this; see Scott Alexander's Against Bravery Debates [https://slatestarcodex.com/2013/05/18/against-bravery-debates/].

Thanks for the critique!

In addition to four videos on his own channel, Robert Miles also published three videos on Computerphile during the last 12 months. He also publishes the Alignment Newsletter podcast. So there's at least some additional output, and probably more I don't know of.

you could find someone with a similar talent level (explaining fairly basic concepts)

I personally actually think this would be very difficult. Robert Miles' content seems to have been received positively by the AI safety community, but science communications in gene... (read more)

6 · Linch · 1y — I also notice myself being confused about the output here. I suspect that being good at YouTube outreach while fully understanding technical AI safety concepts is a higher bar than you're claiming, but I would also intuitively be surprised if it takes an average of 2+ months to produce a video (though perhaps he spends a lot of time on other activities? The quote above alludes to this).
So-Low Growth's Shortform

I'd like feedback on an idea, if possible. I have a longer, more detailed document in progress, but here's a short summary that sketches the core idea and motivation:

Potential idea: hosting a competition/experiment to find the most convincing argument for donating to long-termist organisations

Brief summary

Recently, Professor Eric Schwitzgebel and Dr Fiery Cushman conducted a study to find the most convincing philosophical/logical argument for short-term causes. By ‘philosophical/logical argument’ I mean an argument that ... (read more)

4 · Larks · 1y — This is an interesting idea. You might need to change the design a bit; my impression is that the experiment focused on getting people to donate vs not donating, whereas the concern with longtermism is more about prioritisation between different donation targets. Someone's decision to keep the money wouldn't necessarily mean they were being short-termist: they might be going to invest that money, or they might simply think that the (necessarily somewhat speculative) longtermist charities being offered were unlikely to improve long-term outcomes.
Study results: The most convincing argument for effective donations

I wonder if something similar could be done with donations to long-term causes instead, i.e. the same set-up but searching for the most convincing longtermist arguments. Would this be of interest? (I've been thinking about setting up something along these lines.)

2 · Jackson Wagner · 4mo — Yes, I was just going to ask if anyone had looked at longtermist arguments in a similar way, or even just compiled a similar list of the short, punchy longtermist pitches that are out there. I've been thinking of printing out some pamphlets to distribute around town when I go for walks, and it might be nice to be able to represent multiple EA pillars on one pamphlet. I also think it would be interesting to see results on longtermism because it's a much stranger, less familiar idea (more different from other charity messaging people have heard before), so it might be harder to explain in a short format, but there might be correspondingly big wins from introducing people to such a totally new concept.