All of rileyharris's Comments + Replies

I actually think this is a pretty reasonable division now, removed the automatic upvote on my comment.

More EA success stories:

Pandemics. We have now had the first truly global pandemic in decades, perhaps ever.

Nuclear war. Thanks to recent events, the world is closer than ever to a nuclear catastrophe.

It's not all good news though. Unfortunately, poverty seems to be trending down, there's less lead in the paint, and some say AI could solve most problems despite the risks.

Summaries of papers on the nature of consciousness (focusing on artificial consciousness in particular).

A post on how EA research differs from academic research, why people who like one distrust the other, and how in the long term academic research may be more impactful.

A post explaining what I take to be the best reply to Thorstad's skeptical paper "Against the Singularity Hypothesis".

Very personal and unconventional research advice that no one told me, which I would have found helpful in my first two years of academic research. What I would change about this advice after taking a break and then starting a PhD.

2
Cameron_Meyer_Shorb
2mo
This seems interesting and helpful!

I feel like these actions and attitudes embody many of the virtues of effective altruism. You really genuinely wanted to help somebody, and you took personally costly actions to do so. I feel great about having people like you in the EA Community. My advice is to keep the feeling of how important you were to Tlalok's life as you do good effectively with other parts of your time and effort, knowing you are perhaps making a profound difference in many lives.

What is the timeline for announcing the result of this competition?

4
Daystar Eld
6mo
I'm finishing writing the post now :)

Was the result of this competition ever announced? I can't seem to locate it.

2
Writer
6mo
There is a single winner so far, and it will be announced with the corresponding video release. The contest is still open, though! Edit: another person claimed a bonus prize, too.

Are these fellowships open to applicants outside of computer science/engineering etc. doing relevant work?

I really like time shifter but honestly the following has worked better for me:

Fast for ~16 hours prior to 7am in my new time-zone.

Take melatonin, usually ~10pm in my new timezone and again if I wake up and stop feeling sleepy before around 5am in my new timezone. (I have no idea if this second dosing is optimal but it seems to work).

I highly recommend getting a good neck pillow, earplugs, and eye mask if you travel often or on long trips (e.g. if you are Australian and go overseas almost anywhere).

Thanks to Chris Watkins for suggesting the fasting routine.

2
Alexander Saeri
7mo
Glad that fasting works for you! I have tried it a couple of times and have found myself too hungry or uncomfortable to sleep at the times I need to (e.g., a nap in the middle of the flight). Great points on equipment; I think they are necessary, and I think the bulk of a good neck pillow in carry-on luggage is justified because I can't sleep without it. I also have some comically ugly and oversized sunglasses that fit over my regular glasses and block light from all sides.

The schedule looks like it's all dated for August; is that the right link?

2
Siao Si
7mo
Fixed now, thanks for flagging!

I'd also potentially include the latest version of Carlsmith's chapter on power-seeking AI.

I think Thorstad's "Against the singularity hypothesis" might complement the week 10 readings.

4
rileyharris
7mo
I'd also potentially include the latest version of Carlsmith's chapter on power-seeking AI.

A quick clarification: I mean that "maximize expected utility" is what both CDT and EDT do, so saying "In other words, this would be the kind of decision theory that recommends decisions that maximize expected utility" is perhaps misleading.

I quite like this post. I think, though, that your conclusion, to use CDT when probabilities aren't affected by your choice and use EDT when they are affected, is slightly strange. As you note, CDT gives the same recommendations as EDT in cases where your decision affects the probabilities, so it sounds to me like you would actually follow CDT in all situations (and only trivially follow EDT in the special cases where EDT and CDT make the same recommendations).

I think there's something to pointing out that CDT in fact recommends one-boxing wherever your action ... (read more)

1
rileyharris
7mo
A quick clarification: I mean that "maximize expected utility" is what both CDT and EDT do, so saying "In other words, this would be the kind of decision theory that recommends decisions that maximize expected utility" is perhaps misleading.

David Thorstad (Reflective Altruism/GPI/Vanderbilt)

Tyler John (Longview)

Rory Stewart (GiveDirectly)

5
Arepo
3mo
+1 David Thorstad

+1 on Rory Stewart. As well as being the President of GD, he was the Secretary of State for International Development in the UK, has started and run his own charity (I believe with his wife) in the developing world, has mentioned EA previously, is known to be an enjoyable person to listen to (judging by the success of his podcast), and has just released a book, and therefore might be more likely than usual to engage with popular media.

7
NickLaing
7mo
Rory Stewart is always a good time, surprised he hasn't been interviewed already!

Thanks for posting, I have a few quick comments I want to make:

  1. I recently got into a top program in philosophy despite having a clear association with EA (I didn't cite "EA sources" in my writing sample, though, only published papers and OUP books). I agree that you should be careful, especially about relying on "EA sources" which are not widely viewed as credible.

  2. Totally agree that prospects are very bad outside of the top 10, and I lean towards "even outside of the top 5, seriously consider other options".

  3. On the other hand, if you really would be okay with fail

... (read more)

My understanding is that, at a high level, this effect is counterbalanced by the fact that a high rate of extinction risk means the expected value of the future is lower. In this example, we only reduce the risk this century to 10%, but next century it will be 20%, and the one after that it will be 20% and so on. So the risk is 10x higher than in the 2% to 1% scenario. And in general, higher risk lowers the expected value of the future. 

In this simple model, these two effects perfectly counterbalance each other for proportional reductions of existenti... (read more)

2
Jonas Hallgren
7mo
Alright, that makes sense; thank you!
3
titotal
7mo
Yes, essentially preventing extinction "pays off" more in the low-risk situation because the effects ripple on for longer.

Mathematically, if the value of one century is v, the "standard" chance of extinction is r, and the rate of extinction just for this century is d, then the expected value of the remaining world will be

v(1−d) + v(1−d)(1−r) + v(1−d)(1−r)² + ... = v(1−d)/r

(using geometric sums).

In the world where background risk is 20%, but we reduce this century's risk from 20% to 10%, the total value goes from 4v to 4.5v. In the world where background risk is 2%, but we reduce this century's risk from 2% to 1%, the total value goes from 49v to 49.5v. In both cases, our intervention has added 0.5v to the total value.
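As a minimal sketch of the arithmetic above (the function and variable names are illustrative, not from the comment):

```python
def expected_future_value(v: float, d: float, r: float) -> float:
    """Expected value of the remaining future, per the geometric-sum formula above:
    v = value of one century, d = extinction risk this century,
    r = background extinction risk in every later century."""
    return v * (1 - d) / r

v = 1.0
# High-risk world: background risk 20%, this century's risk cut from 20% to 10%.
print(expected_future_value(v, 0.20, 0.20), expected_future_value(v, 0.10, 0.20))  # 4.0 -> 4.5
# Low-risk world: background risk 2%, this century's risk cut from 2% to 1%.
print(expected_future_value(v, 0.02, 0.02), expected_future_value(v, 0.01, 0.02))  # 49.0 -> 49.5
# In both cases the intervention adds 0.5*v, which is the counterbalancing described above.
```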

"There are three main branches of decision theory: descriptive decision theory (how real agents make decisions), prescriptive decision theory (how real agents should make decisions), and normative decision theory (how ideal agents should make outcomes)."

This doesn't seem right to me. I would say: an interesting way you can divide up decision theory is between descriptive decision theory (how people make decisions) and normative decision theory (how we should make decisions).

The last line of your description, "how ideal agents should make outcomes" seems es... (read more)

1
rileyharris
22d
I actually think this is a pretty reasonable division now, removed the automatic upvote on my comment.

This is a fantastic initiative! I'm not personally vegan, but I believe the "default" for catering should be vegan (or at least meat- and egg-free), with the option for participants to declare special dietary requirements. This would lower consumption of animal products, as most people just go with the default option, and push the burden of responsibility onto the people going out of their way to eat meat.

How should applicants think about grant proposals that are rejected? I find that newer members of the community, especially, can be heavily discouraged by rejections; is there anything you would want to communicate to them?

7
Linch
8mo
I don't know how many points I can really cleanly communicate to such a heterogeneous group, and I'm really worried about anything I say in this context being misunderstood or reified in unhelpful ways. But here goes nothing:

* First of all, I don't know man, should you really listen to my opinion? I'm just one guy, who happened to have some resources/power/attention vested in me; I worry that people (especially the younger EAs) vastly overestimate how much my judgment is worth, relative to their own opinions and local context.

* Thank you for applying, and for wanting to do the right thing. I genuinely appreciate everybody who applies, whether for a small project or large, in the hopes that their work can make the world a better place. It's emotionally hard and risky, and I have a lot of appreciation for the very small number of people who tried to take a step in making the world better.

* These decisions are really hard, and we're likely to screw up. Morality is hard and longtermism by its very nature means worse feedback loops than normal. I'm sure you're familiar with how selection/rejections can often be extremely noisy in other domains (colleges, jobs, etc). There aren't many reasons to think we'll do better, and some key reasons to think we'd do worse. We tried our best to make the best funding decisions we could, given limited resources, limited grantmaker time, and limited attention and cognitive capabilities. It's very likely that we have and will continue to consistently fuck up.

* This probably means that if you continue to be excited about your project in the absence of LTFF funding, it makes sense to continue to pursue it either under your own time or while seeking other funding.

* Funding is a constraint again, at least for now. So earning-to-give might make sense. The wonderful thing about earning-to-give is that money is fungible; anybody can contribute, and probabilistically our grantees and would-be grantees are likely to be people with amon

If a project is partially funded by e.g. Open Philanthropy, would you take that as a strong signal of the project's value (e.g. not worth funding at higher levels)?

5
Linch
8mo
Nah, at least in my own evaluation I don't think Open Phil evaluations take a large role in my evaluation qua evaluation. That said, LTFF has historically[1] been pretty constrained on grantmaker time, so if we think OP evaluation can save us time, obviously that's good. A few exceptions I can think of:

* I think OP is reasonably good at avoiding types-of-downside-risks-that-I-model-OP-as-caring-about (eg reputational harm), so I tend to spend less time vetting grants for that downside risk vector when OP has already funded them.

* For grants into technical areas I think OP has experience in (eg biosecurity), if a project has already been funded by OP (or sometimes rejected) I might ask OP for a quick explanation of their evaluation. Often they know key object-level facts that I don't.

* In the past, OP has given grants to us. I think OP didn't want to both fund orgs and to fund us to then fund those orgs, so we reduced evaluation of orgs (not individuals) that OP has already funded. I think switching over from an "OP gives grants to LTFF" model to an "OP matches external donations to us" model hopefully means this is no longer an issue.

Another factor going forwards is that we'll be trying to increase epistemic independence and decrease our reliance on OP even further, so I expect to try to actively reduce how much OP judgments influence my thinking.

[1] And probably currently as well, though at this very moment funding is a larger concern/constraint. We did make some guest fund manager hires recently so hopefully we're less time-bottlenecked now. But I won't be too surprised if grantmaker time becomes a constraint again after this current round of fundraising is over.

My entry is called Project Apep. It's set in a world where alignment is difficult, but a series of high-profile incidents leads to extremely secure and cautious development of AI. It tugs at the tension between how AI can make the future wonderful or terrible.

I'm working on a related distillation project; I'd love to have a chat so we can coordinate our efforts! (riley@wor.land)

I agree that regulation is enormously important, but I'm not sure about the following claim:

"That means that aligning an AGI, while creating lots of value, would not reduce existential risk"

It seems, naively, that an aligned AGI could help us detect and prevent other power-seeking AGIs. It doesn't completely eliminate the risk, but I feel that even a single aligned AGI makes the world a lot safer against misaligned AGI.

1
Otto
8mo
Thanks for the comment. I think the ways an aligned AGI could make the world safer against unaligned AGIs can be divided into two categories: preventing unaligned AGIs from coming into existence, or stopping already existing unaligned AGIs from causing extinction. The second is the offense/defense balance. The first is what you point at.

If an AGI would prevent people from creating AI, this would likely be against their will. A state would be the only actor who could do so legally, assuming there is regulation in place, and also most practically. Therefore, I think your option falls under what I described in my post as "Types of AI (hardware) regulation may be possible where the state actors implementing the regulation are aided by aligned AIs".

I think this is indeed a realistic option and it may reduce existential risk somewhat. Getting the regulation in place at all, however, seems more important at this point than developing what I see as a pretty far-fetched and - at the moment - intractable way to implement it more effectively.

What do you think are the biggest wins in technical safety so far? What do you see as the most promising strategies going forward?

Great to see attempts to measure impact in such difficult areas. I'm wondering if there's a problem of attribution that looks like this (I'm not up to date on this discussion):

  1. An organisation like the Future Academy or 80,000 Hours or someone says "look, we probably got this person into a career in AI safety, which has a higher impact, and it cost us $x, so our cost-effectiveness is $x per probable career moved into AI safety".
  2. The person goes to do a training program, and they say "we trained this person to do good work in AI safety, which allows them to have
... (read more)
3
SebastianSchmidt
9mo
Hi Riley,

Thanks a lot for your comment. I'll mainly speak to our (Impact Academy) approach to impact evaluation, but I'll also share my impressions of the general landscape. Our primary metric (*counterfactual* expected career contributions) explicitly attempts to take this into account. To give an example of how we roughly evaluate the impact:

Take an imaginary fellow, Alice. Before the intervention, based on our surveys and initial interactions, we expected that she may have an impactful career, but that she is unlikely to pursue a priority path based on IA principles. We rate her Expected Career Contribution (ECC) to be 2. After the program, based on surveys and interactions, we rate her as 10 (ECC) because we have seen that she's now applying for a full-time junior role in a priority path guided by impartial altruism. We also asked her (and ourselves) to what extent that change was due to IA and estimate that to be 10%.

To get our final Counterfactual Expected Career Contribution (CECC) for Alice, we subtract her initial ECC score of 2 from her final score of 10 to get 8, then multiply that score by 0.1 to get the portion of the expected career contribution which we believe we are responsible for. The final score is 0.8 CECC. As a formula: (10 [ECC after the program] − 2 [ECC before the program]) × 0.1 (our counterfactual influence) = 0.8 CECC.

You can read more here: https://docs.google.com/document/d/1Pb1HeD362xX8UtInJtl7gaKNKYCDsfCybcoAdrWijWM/edit#heading=h.vqlyvfwc0v22

I have the sense that other orgs are quite careful about this too. E.g., 80,000 Hours seems to think that they only caused a relatively modest number of significant career changes because they discovered that the people had updated significantly due to reasons not related to 80,000 Hours.
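A minimal sketch of that CECC arithmetic, assuming the scoring described above (the function and variable names are illustrative):

```python
def counterfactual_ecc(ecc_before: float, ecc_after: float, attribution: float) -> float:
    """Counterfactual Expected Career Contribution: the change in expected career
    contribution, scaled by the share of that change attributed to the program."""
    return (ecc_after - ecc_before) * attribution

# Alice from the example: ECC 2 before the program, 10 after, 10% attributed to IA.
print(counterfactual_ecc(2, 10, 0.1))  # 0.8
```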

Thanks for writing such a thoughtful comment. The post has to reflect the content of the paper, so I'm glad your comment can provide extra context. The post now reflects that the paper was written in 2019, and I plan to address the 30x figure soon.

Thanks for pointing this out, the version on the GPI website has been corrected.

Thanks, this is really helpful information about trusts and the 4% rule! 

On self trust: I feel that a common pattern might be that when you're young, you're 'idealistic' and want to do things like donate. When you're older, you feel like spending your money (if you have it) in ways that might not make you particularly happy. I might even decide I would rather give it all to my kids (if I have some). This makes me think there's a good chance I won't donate it later if I haven't pre-committed. 

On safety: I am from Australia, and to some extent my c... (read more)

1
Joseph Lemien
1y
Oh, Australia. I fell prey to the common mistake of "assuming other people are like me." I know a good deal about personal finance in a USA context, but only parts of that are universal: good chunks of it are particular to a specific national context. The national context matters a lot in personal finance issues.

Your idea of "have a little money that's easily accessible and most of it in a trust" does make sense. Have an 'emergency fund' or 'support myself fund' with enough money for a year or two of expenses, and then have everything else in a fund that transfers X% into your 'support myself fund' each year (or 1/12th of X% each month). If you do it right, the trust should grow indefinitely, and the inflow to your 'support myself fund' will be larger than your expenses.

I think that I don't have anything particularly wise or useful to write about the whole 'trusting your future self' topic. But I imagine that there are likely personal finance professionals who have done research about that type of thing. It might take some poking around to find it, though.
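A rough, purely illustrative sketch of that arrangement (the return, withdrawal, and expense figures below are assumptions, not recommendations):

```python
def simulate_trust(trust_balance: float, annual_return: float,
                   withdrawal_rate: float, annual_expenses: float, years: int) -> None:
    """Each year the trust pays X% of its balance into a 'support myself' fund,
    the remaining trust grows at the assumed return, and expenses come out of the fund."""
    support_fund = 0.0
    for year in range(1, years + 1):
        payout = trust_balance * withdrawal_rate
        trust_balance = (trust_balance - payout) * (1 + annual_return)
        support_fund += payout - annual_expenses
        print(f"Year {year}: trust={trust_balance:,.0f}, support fund={support_fund:,.0f}")

# Example: $1m trust, 5% assumed real return, 4% withdrawal, $35k annual expenses.
simulate_trust(1_000_000, 0.05, 0.04, 35_000, 5)
```

With these assumed numbers the annual payout exceeds expenses and the trust keeps growing, which is the property described above.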

Here are some articles I think would make good scripts (I'll also be submitting one script of my own). 

Summaries of the following papers:

... (read more)

This is really great to see!

I think economic growth is rated too highly by this framework. It gets a very high rating on the first criterion because many organisations think it's something worth considering, but none of them rate it as their top priority, or even a particularly high priority (to my knowledge). My intuition is that it wouldn't get such a high rating if the criterion were importance, rather than consensus that it is one of the issues worth considering; and isn't importance what matters here?

1
krystal_h
2y
Thanks Riley! Apologies for our late response.  We incorporated importance in the second part of our criteria after doing the deep-dives, because we wanted to assess the importance of a given issue in the Australian policy context - so it did come through, but a bit later on.  In any case, our deeper policy analyses aren't complete yet, but on what we've found so far we tend to agree that economic growth shouldn't be prioritised too highly.

Ask him about counterfactuals: do his views have any implications for our ideas of counterfactual impact?

Ask him whether relative expectations can help us get out of wagers like this one from Hayden Wilkinson's paper:

Dyson's Wager

You have $2,000 to use for charitable purposes. You can donate it to either of two charities. 

The first charity distributes bednets in low-income countries in which malaria is endemic. With an additional $2,000 in their budget this year, they would prevent one additional death from malaria. You are certain of this. ... (read more)

Recently, I was reading David Thorstad’s new paper “Existential risk pessimism and the time of perils”. In it, he models the value of reducing existential risk on a range of different assumptions.

The headline result is that 1) most plausibly, existential risk reduction is not overwhelmingly valuable: though it may still be quite valuable, it probably doesn't swamp all other cause areas. And 2) thinking that extinction is more likely tends to weaken the case for existential risk reduction rather than strengthen it.

It struck me that one of the results is part... (read more)

Thanks for the post - this seems like a really important contribution! 

[Caveat: I am not at all an expert on this and just spent some time googling.] Producing snake antivenom actually requires milking venom from a snake, and I wonder how much this contributes to the high cost ($55–$640) of antivenom [1]. I wonder if R&D would be a better investment, especially given the potentially high storage and transport costs for antivenom (see below). It would be interesting to see someone investigate this more thoroughly.

Storage costs ... (read more)

5
Jesper Magnusson
2y
I was also thinking of the high production costs as a potential area of intervention. A few minutes of browsing turned up some potential advancements in production methods of antivenom, e.g. using synthetic biology, and I would be interested in learning about the potential cost-effectiveness of implementing or scaling up such alternative production methods. It seems like many of them are still in the R&D stage, but this could be an area to keep a close eye on. A recent article on the topic: https://www.drugdiscoverynews.com/snakebite-antivenoms-step-into-the-future-15378
2
John Litborn
2y
If the lateral-flow test can be cheaply produced, distributed, and stored at smaller clinics, then you might be able to quickly drive patients to larger clinics once they are positively diagnosed, and might not then have to worry as much about the larger costs of the antivenom. It will depend a lot on the time/distance to the nearest larger clinics, though.

Thanks, it looks like you've put a lot of effort into summarising this information (it actually looks better and higher effort than my original post, oop). 

Thank you! I really appreciate the encouragement! 

I'm all for pricing in carbon and sensible policy that regulates in proportion to our best estimate of the risk!

I think my (updated based on the comments so far) conclusion is the same as yours!

Digging into this a bit, I may have gotten the original argument for nuclear wrong: it does seem like some countries would struggle to source their energy from renewables due to space constraints (arguably less of a problem in Australia).

"I’m not even sure it’s physically possible with 100% renewables... if you were to try and just replace oil in a country like Korea or Japan, so a densely populated country without huge amounts of spare land, you have to take up a significant proportion of the entire nation with solar panels... In the UK... if you ... (read more)

  • If someone was looking to work for OPP, would an honours* or master's program be more beneficial than an undergraduate degree?

  • Are there particular questions or areas that could be worked on for a research project in honours/master's that are particularly helpful directly or that develop the right kinds of skills for OPP? (especially in economics, philosophy, or cognitive science)

  • ("Honours" in Australia is a 1 year research/coursework program)

0
lukeprog
6y
Completion of an honours or masters program provides us with a bit more evidence about an applicant's capabilities than an undergraduate degree does, but both are less informative to us than the applicant's performance on the various work samples that are part of our application process. Because our roles are so "generalist," there are few domains that are especially relevant, though microeconomics and statistics are two unusually broadly relevant fields. In general, we find that those with a STEM background do especially well at the kind of work we do, but a STEM background is not required. A couple other things that are likely helpful for getting and excelling in an Open Phil research analyst role are calibration training and practice making Fermi estimates.