All of Tetraspace Grouping's Comments + Replies

Honoring Petrov Day on the EA Forum: 2021

As of this comment: 40%, 38%, 37%, 5%. I haven't taken into account time passing since the button appeared.

With 395 total codebearer-days, a launch has occurred once. This means that, with 200 codebearers this year, the Laplace prior for any launch happening is 40%. The number of participants is about midway between 2019 (125 codebearers) and 2020 (270 codebearers), so averaging over the two years like this is probably fine.
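
A minimal sketch of the arithmetic, reconstructing the lost parenthetical (the exact formula didn't survive the scrape; the 1/395 per-codebearer-day rate is an assumption that reproduces the stated 40%):

```python
# Reconstruction of the launch-probability estimate. The 1 launch in
# 395 codebearer-days and the 200 codebearers are from the comment;
# the exact formula is an assumption.
codebearer_days = 395     # total historical codebearer-days
launches = 1              # historical launches
codebearers_2021 = 200    # codebearers this year

rate = launches / codebearer_days              # ~0.25% per codebearer-day
p_launch = 1 - (1 - rate) ** codebearers_2021  # P(at least one launch)
print(f"{p_launch:.0%}")                       # -> 40%
```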

I think there's a 5% chance that there's a launch but no MAD, because Peter Wildeford, who has publicly committed to MAD, says 5%, an…

SiebeRozendal (+3, 2mo): Also, the reference class of launches doesn't fully represent the current situation: last launch was more of a self-destruct. This time, it's harming another website/community, which seems more prohibitive. So I think the prior is lower than 40%.
SiebeRozendal (+2, 2mo): There is a chance to remove MAD by invalidating Peter's launch codes, per my request [https://forum.effectivealtruism.org/posts/hyWgdmHTNGSHM5ZaE/honoring-petrov-day-on-the-ea-forum-2021?commentId=GpzDsT7ytAQMaNdYX].
What EA projects could grow to become megaprojects, eventually spending $100m per year?

I looked up the financials of GiveDirectly (a charity that makes direct cash transfers) to check how easily it could be scaled up to megaproject size, and it turns out that in 2020 it made $211 million in cash transfers, so it is definitely capable of handling that amount! This comprises $64m in cash transfers to recipients in Sub-Saharan Africa (their GiveWell-recommended program) and $146m in cash transfers to recipients in the US.

Tetraspace Grouping's Shortform

Another principle, conservation of total expected credit:

Say a donor lottery has you, who donates a fraction p of the total with an impact judged by you if you win of X; the other participants, who collectively donate a fraction q of the total with an average impact as judged by you if they win of Y; and the benefactor, who donates a fraction 1−p−q of the total with an average impact if they win of 0. Then total expected credit assigned by you should be pX + qY (followed by A, B and C), and total credit assigned by you should be X if you win, Y if they win, and 0 otherwise (violated by C)…

Tetraspace Grouping's Shortform

I've been thinking of how to assign credit for a donor lottery.

Some ways that seem compelling:

  • A: You get X% credit for the actual impact of the winner
  • B: You get 100% credit for the impact if you win, and 0% credit otherwise
  • C: You get X% credit for what your impact would have been, if you won

Some principles about assigning credit:

  • Credit is predictable and proportional to the amount you pay to fund an outcome (violated by B)
  • Credit depends on what actually happens in real life (violated by C)
  • Your credit depends on what you do, not what uncorrelated other people…
Tetraspace Grouping (+1, 3mo): Another principle, conservation of total expected credit: Say a donor lottery has you, who donates a fraction p of the total with an impact judged by you if you win of X; the other participants, who collectively donate a fraction q of the total with an average impact as judged by you if they win of Y; and the benefactor, who donates a fraction 1−p−q of the total with an average impact if they win of 0. Then total expected credit assigned by you should be pX + qY (followed by A, B and C), and total credit assigned by you should be X if you win, Y if they win, and 0 otherwise (violated by C).

  • Under A, if you win, your credit is pX, their credit is qX, and the benefactor's credit is (1−p−q)X, for a total credit of X. If they win, your credit is pY, their credit is qY, and the benefactor's credit is (1−p−q)Y, for a total credit of Y.
    • Your expected credit is p(pX+qY), their expected credit is q(pX+qY), and the benefactor's expected credit is (1−p−q)(pX+qY), for a total expected credit of pX + qY.
  • Under B, if you win, your credit is X and everyone else's credit is 0, for a total credit of X. If they win, their credit is Y and everyone else's credit is 0, for a total credit of Y. If the benefactor wins, everyone gets no credit.
    • Your expected credit is pX and their expected credit is qY, for a total expected credit of pX + qY.
  • Under C, under all circumstances your credit is pX and their credit is qY, for a total credit of pX + qY.
    • Your expected credit is pX and their expected credit is qY, for a total expected credit of pX + qY.
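
A quick numerical check of this conservation claim, as a sketch with illustrative shares and impacts (not from the original comment):

```python
# Check "conservation of total expected credit" under schemes A, B, C.
p, q = 0.1, 0.6          # your share, other participants' share
X, Y = 100.0, 80.0       # impact if you win, average impact if they win
# The benefactor holds the remaining 1 - p - q share, with impact 0.

# Outcomes: (probability of this party winning, impact they realise)
outcomes = [(p, X), (q, Y), (1 - p - q, 0.0)]

# Scheme A: credit is split in proportion to shares, so in each outcome
# the total credit equals the realised impact.
total_expected_A = sum(prob * impact for prob, impact in outcomes)

# Scheme B: the winner gets 100% credit for their own impact.
total_expected_B = p * X + q * Y + (1 - p - q) * 0.0

# Scheme C: you always get pX and they always get qY, whoever wins.
total_expected_C = p * X + q * Y

target = p * X + q * Y
assert abs(total_expected_A - target) < 1e-9
assert abs(total_expected_B - target) < 1e-9
assert abs(total_expected_C - target) < 1e-9
print("Total expected credit under A, B, C:", target)  # 58.0 in each case
```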
How are resources in EA allocated across issues?

What were your impressions of the amount of non-Open Philanthropy funding allocated across each longtermist cause area?

My upcoming CEEALAR stay

I also completed Software Foundations Volume 1 last year, and have been kind of meaning to do the rest of the volumes but other things keep coming up. I'm working full-time so it might be beyond my time/energy constraints to keep a reasonable pace, but would you be interested in any kind of accountability buddy / sharing notes / etc. kind of thing?

quinn (+5, 1y): Maybe! I'm only going after a steady stream of 2-3 chapters per week. Be in touch if you're interested: I'm re-reading the first quarter of PLF since they published a new version in the time since I knocked out the first quarter of it.
Prize: Interesting Examples of Evaluations

Simple linear models, including improper ones(!!). In Chapter 21 of Thinking, Fast and Slow, Kahneman writes about Meehl's book Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence, which finds that simple algorithms built by taking a few factors relevant to the final judgement and weighting them give surprisingly good results.

The number of studies reporting comparisons of clinical and statistical predictions has increased to roughly two hundred, but the score in the contest between humans and algorithms has not changed. About 60% of…
AMA: Rob Mather, founder and CEO of the Against Malaria Foundation

How has the landscape of malaria prevention changed since you started? Especially since AMF alone has bought on the order of 100 million nets, which seems like a significant fraction of the scale of the entire problem.

RobM (+9, 2y): There is more malaria prevention happening now. When AMF started in 2004/05, 5 million LLINs were distributed globally by all contributors. It is now around 200 million nets per year.

There is a greater focus on data, I am pleased to say, with funders ever more focused on ensuring nationwide campaigns are well targeted and not wasteful. More money has come into malaria prevention through a combination of greater awareness of the disease, its impact and what can be done about it, as well as, in our experience, donors having greater confidence that funds given to a charity focused on a problem in Africa will be well directed and used with significant impact. There is still a very significant gap in funding each year for basic malaria control (covering people with nets), so there is still much work to do and support to gain.

There has been some progress on developing a vaccine, but we do not yet have a highly effective vaccine that could make the sort of impact on reducing malaria that we would all like to see. My understanding is that we are at least 5 and probably 10 years away, at the earliest, from having a vaccine that is 'really interesting' (but others will have a more informed and up-to-date opinion here than mine).

There has been significant progress with gene drive technology and there is growing hope that it may make a significant contribution to malaria control in the coming years. But we are not there yet. My understanding is that we are at least five, and maybe more, years away from developments that could be, similarly, 'really interesting' (similar disclaimer as above).
Long-Term Future Fund: November 2019 short grant writeups

In the list at the top, Sam Hilton's grant summary is "Writing EA-themed fiction that addresses X-risk topics", rather than being about the APPG for Future Generations.

Miranda Dixon-Luinenburg's grant is listed as being $23,000, when lower down it's listed as $20,000 (the former is the amount consistent with the total being $471k).


Aaron Gertler (+6, 2y): Thanks for this note! I've fixed the grant amount in this Forum post, and Sam's description in this post and on the Funds site.
Conversation on AI risk with Adam Gleave

Christiano operationalises a slow takeoff as

There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles.

in Takeoff speeds, and a fast takeoff as one where there isn't a complete 4 year interval before the first 1 year interval.

Tetraspace Grouping's Shortform

The Double Up Drive, an EA donation matching campaign (highly recommended) has, in one group of charities that it's matching donations to:

  • StrongMinds
  • International Refugee Assistance Project
  • Massachusetts Bail Fund

StrongMinds is quite prominent in EA as the mental health charity; most recently, Founders Pledge recommended it in their report on mental health.

The International Refugee Assistance Project (IRAP) works on immigration reform, and is a recipient of grants from Open Philanthropy as well as being recommended for individual donors by an Open Phil member o…

Aaron Gertler (+5, 2y): Open Phil has made multiple grants to the Brooklyn Community Bail Fund [https://www.openphilanthropy.org/focus/us-policy/criminal-justice-reform/brooklyn-community-bail-fund-general-support], which seems to do similar work to the MA Bail Fund (and was included in Dan Smith's 2017 match). I don't know why MA is still here and Brooklyn isn't, but it may have something to do with room for more funding or a switch in one of the orgs' priorities. You've probably seen this, but Michael Plant included StrongMinds [https://forum.effectivealtruism.org/posts/XWSTBBH8gSjiaNiy7/cause-profile-mental-health] in his mental health writeup on the Forum.
Tetraspace Grouping's Shortform

The sum of the grants made by the Long Term Future fund in August 2019 is $415,697. Listed below these grants is the "total distributed" figure $439,197, and listed above these grants is the "payout amount" figure $445,697. Huh?

JP Addison (+8, 2y): Hi, I saw this and asked on our Slack about it. These were leftover figures from when the post was in draft and the grants weren't finalized; someone's now fixed it. If you see anything else wrong, feel free to reach out to funds@effectivealtruism.org.
[Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form.

Two people mentioned the CEA not being very effective as an unpopular opinion they hold; has any good recent criticism of the CEA been published?

Logarithmic Scales of Pleasure and Pain: Rating, Ranking, and Comparing Peak Experiences Suggest the Existence of Long Tails for Bliss and Suffering

You mention the Jhanas and metta meditation as both being immensely pleasurable experiences. Since these come from meditation, they seem like they might be possible for people to do "at home" at very little risk (save for the opportunity costs from the time investment). Do you have any thoughts on encouraging meditation aimed towards achieving these highly pleasurable states specifically as a cause area and/or something we should be doing personally?

algekalipso (+5, 2y): According to "Right Concentration: A Practical Guide to the Jhanas" by L. Brasington and "The Mind Illuminated" by Culadasa, it is feasible to achieve Jhana states within two years of dedicated practice. This entails a few hours of meditation a day and attending at least one 9-day retreat over the course of this time period. The books explain in detail how to get there in a very practical and no-nonsense way.

I personally have yet to invest that time into this task, but I know that one of the other core members of the Qualia Research Institute, Romeo Stevens, is now able to achieve Jhanas thanks to his meditation practice. I do intend to do this in the near future. Also, we are looking into doing EEG and fMRI studies on people who can enter those states as a means to test the CDNS [https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/] approach to valence quantification, which is a core part of our research plan.
Tetraspace Grouping's Shortform

In a building somewhere, tucked away in a forgotten corner, there are four clocks. Each is marked with a symbol: the first with a paperclip, the second with a double helix, the third with a trefoil, and the fourth with a stormcloud.

As you might expect from genre convention, these are not ordinary clocks. In fact, they started ticking when the first human was born, and when they strike midnight, a catastrophe occurs. The type depends on the clock, but what is always true is that the disaster kills at least one in ten.

The times currently remaining on the clocks a…

Khorton (+4, 2y): I really like seeing problems presented like this. It makes them easier to understand.

The division-by-zero type error is that EV(preventing holocaust | universe is infinite) would be calculated as ∞ − ∞, which in the extended reals is undefined rather than zero. If it were zero, then you could prove 0 = ∞ − ∞ = (∞ + 1) − ∞ = (∞ − ∞) + 1 = 0 + 1 = 1.
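
As an illustration (not in the original comment), IEEE-754 floating-point arithmetic adopts the same convention, returning NaN ("undefined") for ∞ − ∞, which blocks exactly this bogus cancellation:

```python
# inf - inf is NaN, not 0, so the infinities can't be cancelled.
inf = float("inf")
print(inf - inf)         # nan
print((inf + 1) - inf)   # also nan: adding 1 doesn't change anything
```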

bhalperin (+4, 2y): When you write it like that, it seems obvious :) Thanks.
Ask Me Anything!

This reminds me of the most important AMA question of all:

MacAskill, would you rather fight 1 horse-sized chicken, or 100 chicken-sized horses?

I'm pretty terrified of chickens, so I'd go for the horses.

Tetraspace Grouping's Shortform

One way that x-risk outreach is done outside of EA is by evoking the image of some sort of countdown to doom. There are 12 years until climate catastrophe. There are two minutes on the Doomsday clock, etc.

However, in reality, instead of doomsday being some fixed point in time on the horizon that we know about, all the best-calibrated experts have is a probability distribution smeared over a wide range of times, mostly sitting on "never", which means that simply taking the median time doesn't work.

And yet! The doomsday clock, so evocative! And I would l…

Ask Me Anything!

Will there be anything in the book new for people already on board with longtermism?

Tetraspace Grouping's Shortform

In 2017, 80k estimated that $10M of extra funding could solve 1% of AI x-risk (todo: see if I can find a better stock estimate for the back of my envelope than this). Taking these numbers literally, solving all of it would cost $1G, so anyone who wants to buy AI offsets should, today, pay $1G × (their share of the responsibility).

There are 20,000 AI researchers in the world, so if they're taken as being solely responsible for the totality of AI x-risk, the appropriate Pigouvian AI offset tax fine is $45,000 per researcher hired per year. This is large but not overwhelmingly so…
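
As a sketch, the round numbers above divide out as follows (the comment's $45,000 figure presumably reflects further adjustments in the truncated text):

```python
# Back-of-the-envelope AI-offset arithmetic from the comment.
cost_per_percent = 10e6                       # $10M per 1% of AI x-risk (80k, 2017)
total_offset_cost = 100 * cost_per_percent    # ~$1G to offset all of it
n_researchers = 20_000                        # AI researchers worldwide

print(f"${total_offset_cost / n_researchers:,.0f} per researcher per year")  # $50,000
```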

What posts you are planning on writing?

"How targeted should donation recommendations be" (sorta)

I've noticed that GiveWell targets specific programs (e.g. their recommendations), ACE targets whole organisations, and among far-future charities you just kinda get promising-sounding cause areas.

I'm interested in what kind of differences between cause areas lead to this, and also whether anything can be done to make more fine-grained evaluations more desirable in practice.

Sperm sorting in cattle

The total number of cows probably stays about the same, because if they had space to raise more cows they would have just done that; I don't think that availability of semen is the main limiting factor. So the amount of suffering averted by this intervention can be found by comparing the suffering per cow per year in either case.

Model a cow as having two kinds of experiences: normal farm life, where it experiences some amount of suffering x in a year, and slaughter, where it experiences some amount of suffering y all at once.

In equilibrium, the population o…
Koushik Raghavan (+1, 2y): Yes, the first-order effect makes sense. I am worried about the second-order effects. Assuming that a cow is usually kept alive for 6 calvings, the cow would have produced 3 male and 3 female calves. If sex-sorted semen is used, the cow will now produce 6 female calves, i.e. (10x + y quantum of suffering units) × 6 per cow per 10 years that is inseminated with sex-sorted semen. The ripple effects of that would only produce more and more suffering (at an exponential scale), assuming that all of the female calves born via sex-sorted semen will again be inseminated with sex-sorted semen. Also, can you please clarify the calculation by which you arrive at y/15?
If physics is many-worlds, does ethics matter?

If you want to make a decision, you will probably agree with me that it's more likely that you'll end up making that decision, or at least that it's possible to alter the likelihood that you'll make a certain decision by thinking (otherwise your question would be better stated as "if physics is deterministic, does ethics matter"). And, under many-worlds, if something is more likely to happen, then there will be more worlds where it happens, and more observers that see it happen (I think this is usually how it's posed, anyway). So while there'll always be some worlds where you're not altruistic, no matter what you do, you can change how many worlds are like that.

Milan_Griffes (+6, 2y): Thanks, I haven't thought about this enough to say with confidence, but it seems plausible that many-worlds implies determinism, such that this is really a question about determinism / living in a deterministic system.
Is there an analysis that estimates possible timelines for arrival of easy-to-create pathogens?

When I have a question about the future, I like to ask it on Metaculus. Do you have any operationalisations of synthetic biology milestones that would be useful to ask there?

Get-Out-Of-Hell-Free Necklace

What is agmatine, and how would it help someone who suspects they've been brainwashed?

Agmatine is an amino acid you can buy over the counter at supplement stores and online. It is used as a workout supplement, to make weed feel stronger, and as a hangover-prevention remedy. Agmatine has a high affinity for a number of receptor sites, and it is currently being debated whether it satisfies the criteria for being called a neurotransmitter.

Of particular note is agmatine's high affinity for the imidazoline receptor, which, according to Thomas Ray (who analyzed the receptor affinity of 30+ psychedelics), might be one of the keys to the "ma…

How much do current cultured animal products cost?

This 2019 article has some costs listed:

  • Fish: "it costs Finless slightly less than $4,000 to make a pound of tuna"
  • Beef: "Aleph said it had gotten the cost down to $100 per lb."
  • Beef(?): "industry insiders say American companies are getting the cost to $50 per lb."
Should we talk about altruism or talk about justice?

GiveWell did an intervention report on maternal mortality 10 years ago, and at the time concluded that the evidence was less compelling than for their top charities (though they say that the report is now probably out of date).

Jemma (+1, 2y): Thanks, looks interesting. It seems from this report like what reduces maternal mortality rates is likely to be a combination of factors, or a factor that hasn't been discovered yet. Though maybe now that GiveWell has incubation grants, they're in a position to support more investigation into the final option presented (clean birthing kits and/or associated education), which seemed promising?
New study in Science implies that tree planting is the cheapest climate change solution

The amount of carbon that they say could be captured by restoring these trees is 205 GtC, which for $300bn to restore comes to ~40¢/ton of CO2 (corrected from an initial ~70¢). Founders Pledge estimates that, on the margin, the Coalition for Rainforest Nations averts a ton of CO2e for 12¢ (range: a factor of 6) and the Clean Air Task Force averts a ton of CO2e for 100¢ (range: an order of magnitude). So those numbers do check out.

I did not look at the details, but it appears that neither of these estimates takes into account opportunity costs. Typical farming profit is around $200 per hectare per year, so if you instead sequester 5 tCO2e per hectare per year, that would cost ~$40 per tCO2e, about 2 orders of magnitude more expensive. By the way, I believe $300 billion divided by 205 billion tons of carbon (= 750 billion tons of CO2) would be $0.40 per ton of CO2.
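
A quick check of both cost-per-ton calculations, using the figures given above:

```python
# Verify the two cost-per-ton figures from this thread.
GTC_TO_GTCO2 = 44 / 12                 # molar-mass ratio of CO2 to C

gtco2 = 205 * GTC_TO_GTCO2             # 205 GtC -> ~752 GtCO2
print(f"${300e9 / (gtco2 * 1e9):.2f} per ton CO2")  # ~$0.40 restoration cost

# Opportunity cost of forgone farming, per ton sequestered:
print(f"${200 / 5:.0f} per ton CO2e")  # $200/ha/yr over 5 tCO2e/ha/yr = $40
```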

The Case for Superintelligence Safety As A Cause: A Non-Technical Summary

You can't just ask the AI to "be good", because the whole problem is getting the AI to do what you mean instead of what you ask. But what if you asked the AI to "make itself smart"? On the one hand, instrumental convergence implies that the AI should make itself smart. On the other hand, the AI will misunderstand what you mean, and hence not make itself smart. Can you point the way out of this seeming contradiction?

(Under the background assumptions already being made in the scenario where you can "ask things" to "the A…

Two AI Safety events at EA Hotel in August

The signup form for the Learning-by-doing AI Safety workshop currently links to the edit page for the form on Google Docs, rather than the page where one actually fills out the form; the link should be this one (and the form should probably not be publicly editable).

beth (+7, 3y): Same for the unconference; it should be this link [https://forms.gle/bMCfxkVaRV5r5RwU7].
New Top EA Cause: Flying Cars

The Terra Ignota series takes place in a world where global poverty has been solved by flying cars, so this is definitely well-supported by fictional evidence (from which we should generalise).

quant model for ai safety donations?

In MIRI's fundraiser they released their 2019 budget estimate, which spends about half on research personnel. I'm not sure how this compares to similar organizations.

quant model for ai safety donations?

The cost per researcher is typically larger than what they get paid, since it also includes overhead (administration costs, office space, etc).

rafa_fanboy (0, 3y): What's the charity overhead for something like MIRI or FHI?
G Gordon Worley III (+3, 3y): Right. For comparison, software engineers (of all kinds, including ML engineers) at early-stage startups generally add between $500k and $1mm to the company's valuation, i.e. investors believe these employees make the company worth buying/selling for that much additional money. There's a lot that goes into where that number comes from, but it does at least suggest that O($1mm) is reasonable.
quant model for ai safety donations?

One can convert the utility-per-researcher into utility-per-dollar by dividing everything by a cost per researcher. So if before you would have 1e-6 x-risk reduction per researcher, and you also decide to value researchers at $1M/researcher, then your evaluation in terms of cost is 1e-12 x-risk per dollar.

For some values (i.e. fake numbers, but still acceptable for comparing orders of magnitude of cause areas) that I've seen used: the Oxford Prioritisation Project uses $1.8 million (lognormal distribution between $1M and $3M) for a MIRI researcher over t…
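
The unit conversion, as a one-line sketch using the comment's numbers:

```python
# Convert x-risk reduction per researcher into x-risk reduction per dollar.
xrisk_per_researcher = 1e-6     # x-risk reduction per marginal researcher
dollars_per_researcher = 1e6    # valuing a researcher at $1M

print(xrisk_per_researcher / dollars_per_researcher)  # 1e-12 x-risk per dollar
```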

rafa_fanboy (-1, 3y): OK, I'm not sure AI researchers get paid that much, though.
Higher and more equal: a case for optimism

I love that “one person out of extreme poverty per second” statistic! It’s much easier to picture in my head than a group of 1,000 million people, since a second is something I’m familiar with seeing every day.

Jemma (+2, 3y): Yes, it's great. I was talking to some people about this topic on New Year's Eve; wish I'd had this stat and the link to this article then!
Long-Term Future Fund AMA

Are there any organisations you investigated and found promising, but concluded that they didn't have much room for extra funding?

Habryka (+4, 3y): In the last grant round, AI Impacts was an organization whose work I was excited about, but that currently seemed not to have significant room for extra funding. (If anyone from AI Impacts disagrees with this, please comment and let me know otherwise!)