All of alexrjl's Comments + Replies

Where is a good place to start learning about Forecasting?

https://youtu.be/e6Q7Ez3PkOw

I put together this video series a while ago to help people get started. The book Superforecasting is also a good intro.

Seeking Math + Programming Accelerated Learning Advice

A couple of things which will be really helpful:

Not at the start, but very useful once you start reading papers.

IMHO the single best educational YouTuber - there's a series on linear algebra, another on neural networks, and even one on calculus.

The videos on their own won't be enough to learn, but they are a fantastic supplement and visualisation prompt.

calebp: I flipping love 3b1b. The linear algebra and calculus series are particularly great.
Why I'm concerned about Giving Green

There was substantial evidence of TSM's rapid growth available at the time I originally wrote this piece, some of which I included in it. It therefore seems somewhat strange that the thing which prompted the de-recommendation is that TSM appeared to grow rapidly. Nonetheless, the de-recommendation itself seems good.

Consider trying the ELK contest (I am)

I am pretty confident that ARC would want you to submit those strategies, especially given your background. Even if both trivially fail, it seems useful for them to know that they did not seem obviously ruled out to you by the provided material.

A guided cause prioritisation flowchart

I think if you present a simplified/summarised thing along with more detailed guidance you should assume that almost nobody will read the guidance.

JackM: Almost nobody? I'd imagine at least some people are interested in making an informed decision on cause area and would be interested in learning. You might be right, though. I'm not getting a huge amount of positive reception on this post (to put it lightly), so it may be that such a guided flowchart is a doomed enterprise. EDIT: you could literally make it so that clicking on a box pops up the guidance, so it could theoretically be very easy to engage with.
A guided cause prioritisation flowchart

" future people are as morally important as those alive now" seems like a very high bar for longtermism. If e.g. you think future people are 0.1% as important, but there's no time discount for when (as long as they don't exist yet), this doesn't prevent you from concluding the future is hugely important. Similarly for some exponential discounts (though they need to be extremely small).

JackM: Absolutely agree with that. My idea of a guided flowchart is that nuances like this would be explained in the accompanying guidance, but not necessarily alluded to in the flowchart itself, which is supposed to stay fairly high-level and simple. It may be, however, that that box can be reworded to something like "Are future people (even in millions of years) of non-negligible moral worth?". Ideally someone would read the guidance for each box to ensure they are progressing through the flowchart correctly.
Aiming for the minimum of self-care is dangerous

"Working really hard carries such a significant risk of burnout that in expectation it's bad" is completely consistent with "I can point to people I know of who are working really hard and really productive".

[Image: the survivorship-bias plane with red dots]
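To spell out why the two claims are consistent (numbers invented purely for illustration): suppose people who work really hard burn out with probability 0.7 and then produce 0.2 units of output, and otherwise thrive and produce 2 units. Then

$$E[\text{output}] = 0.7 \times 0.2 + 0.3 \times 2 = 0.74 < 1,$$

so working really hard is bad in expectation, even though every thriving hard worker you can point to produces twice the baseline - which is exactly what the plane with red dots is about.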

Listen to more EA content with The Nonlinear Library

I would really like this option for the Alignment Forum.

What Small Weird Thing Do You Fund?

I have previously offered to pay for therapy for another member of the community*, and would do so again if the situation arose. I think many people can feel worried/awkward/bad about spending money on their own health, especially mental health, so making this sort of offer can be really worth doing when the situation arises.

I've had people make the same offer to me, and think that the offer made me seek therapy sooner than I otherwise would have, and that this was a great decision.

*In both cases, no money actually changed hands, but the offer was genuine, not just made as a signal. 

HowieL: I've also done this.
EA Should Spend Its “Funding Overhang” on Curing Infectious Diseases

I liked this post overall, but I think it may be optimistic about the effect of challenge trials on speeding up vaccine development in a couple of the cases mentioned. I was in a malaria vaccine challenge trial 11 years ago, and if I remember correctly the same lab was also testing TB vaccines with challenge trials. I think challenge trials are a really good idea, but if they're already being used for some of the diseases mentioned, then we don't have much opportunity to improve things by funding more of them.

Davidmanheim: There are certainly places where HCTs aren't the right tool - and Josh suggested that several approaches are worth pursuing - but they do seem underused compared to their value, and they are almost unique in their ability to allow testing of certain vaccines. For instance, a universal influenza vaccine is difficult to trial naturally, because you can't see what it does or does not protect against, and you only get data about the variants circulating in the current year. And the existence of some HCT work doesn't imply that we're anywhere near the optimal level - yes, there have been a few hundred HCTs in the past 40 years, but they haven't been used much or at all for many diseases, sometimes for relatively justifiable reasons that I would disagree with, and other times for no reason other than that no one has done it. (There are also cases where HCTs are, in fact, actually unethical or impossible - for example, there are some bird flus that are 100% fatal when humans catch them, but which aren't human-to-human transmissible. But I'm leaving those aside for now.) In any case, yes, this isn't a brand new idea, but neither were mosquito nets, direct cash payments, or treating schistosomiasis - and they are effective causes nonetheless.
FTX EA Fellowships

Oh wow, that is a pretty big update.

List of EA funding opportunities

I'm finding this difficult to interpret. I can't find a way of phrasing my question without it seeming snarky, but no snark is intended.

One reading of this offer looks something like:

if you have an idea which may enable some progress, it's really important that you be able to try and I'll get you the funding to make sure you do

Another version of this offer looks more like:

I expect basically never to have to pay out because almost all ideas in the space are useless, but if you can convince me yours is the one thing that isn't useless I guess I'll get y

... (read more)
HowieL: Eliezer gave some more color on this here: https://www.facebook.com/yudkowsky/posts/10159562959764228. There might be more discussion in the thread.
Linch: I interpreted it as the former, FWIW. Skimming his FB timeline, Eliezer has recently spoken positively (https://www.facebook.com/yudkowsky/posts/10159920495664228) of Redwood Research, and in the past about Chris Olah's work on interpretability.
FTX EA Fellowships

It's $10k plus travel plus housing plus co-working space, so it sounds like, other than food, basically the whole $10k would be disposable income. Potentially the housing provides food also. I'm not sure what the cost of living is like in the Bahamas, but that hardly sounds like "really low pay".

Linch: My friend said that prices in the Bahamas for things like groceries and restaurants were ~20-50% higher than in the Bay Area, which I did not expect.

Yep, definitely not intended to impose a financial strain on anyone! Since travel, housing, and office space will be paid for, we intended $10k as a reasonable amount for expenses above that. But it wasn't chosen super carefully, so we can run some numbers and increase it if it seems like more would be helpful.

Listen to more EA content with The Nonlinear Library

This seems roughly consistent with "somewhat unlikely"; I expect the fraction is similar for me.

Listen to more EA content with The Nonlinear Library

If we had to ask each person before converting their text to audio, it just wouldn’t feasibly happen.

 

Which part isn't feasible? If you have the skill and capacity on the team to write something which will scrape forum posts, check whether they have karma above a threshold, convert them to speech, and post them as audio, it seems more likely than not that you'd have the skill/capacity to edit the tool so that it DMs the authors of the posts it wants to scrape, asking them to reply "Yes" or "OK", and then only uploads posts where the author did respond to... (read more)

If I were to receive such messages, I would likely fail to respond (unintentionally) at least 20% of the time.
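As a rough illustration of the opt-in step being proposed (a minimal sketch; the forum client, the TTS interface, and all function names here are hypothetical, not any real API):

```python
# Hypothetical sketch of an opt-in step for the audio pipeline.
# `forum` and `tts` are stand-in interfaces, not a real library.

KARMA_THRESHOLD = 25  # assumed cutoff for "worth converting"

def process_new_posts(forum, tts, already_asked):
    for post in forum.fetch_recent_posts():
        if post.karma < KARMA_THRESHOLD:
            continue
        if post.id not in already_asked:
            # Ask once, and record that we asked, so authors aren't spammed.
            forum.send_dm(post.author,
                          "May we convert your post to audio? Reply 'Yes' to opt in.")
            already_asked.add(post.id)
        elif forum.author_replied_yes(post.author, post.id):
            # Only synthesise and upload once the author has opted in.
            forum.upload_audio(post.id, tts.synthesise(post.body))
```

The non-response problem raised above would show up here as posts stuck waiting in `already_asked`, which is one argument for an opt-out design instead.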

Concerns about AMF from GiveWell reading - Part 1

I think telling someone not to post criticism without having done X, Y, or Z seems bad. But asking someone for a title change, to make clear that this is a set of concerns one person has come up with rather than, e.g., news of an evaluation change from GiveWell, seems reasonable, and that's how I read the request.

On asking organisations ahead of posting criticism specifically: I think this is a good thing to do, but it absolutely shouldn't be required before posting. In this case, I expect asking before posting would have led to a much higher quality post, as the responses from Charles and Linch would almost certainly have come up, and there would have been a chance to discuss them.

Clarifying the Petrov Day Exercise

I interpreted your comment as saying that I was "lambasting the foibles of being a well-intentioned unilateralist", and that I should not be doing so. If that was not the intent, I'm glad.

Linch: I interpreted you as proposing a unilateralist action to demonstrate to others the harm of unilateralist actions. Apologies if I misread.
Clarifying the Petrov Day Exercise

The lesson I would want people to learn is "I might not have considered all the reasons people might do stuff". See comment below.

Clarifying the Petrov Day Exercise

This is closer, I think the framing I might have had in mind is closer to:

  • people underestimate the probability of tail risks.

  • I think one of the reasons why is that they don't appreciate the size of the space of unknown unknowns (which in this case includes people pushing the button for reasons like this).

  • causing them to see something from the unknown unknown space is therefore useful.

  • I think last year's phishing incident was actually a reasonable example of this. I don't think many people would have put sufficiently high probability on it happening, even given the button getting pressed.

Clarifying the Petrov Day Exercise

Yeah, I guess you could read what I'm saying as: I actually think I should have pressed it for these reasons, but my moral conviction is not strong enough to have borne the social cost of doing so.

One reading of that is that the community's social pressure is strong enough to stop bad actors like me from doing stupid, harmful stuff we think is right.

Another is that social pressure is often enough to stop people from doing the right thing, and that we should be extra grateful to Petrov, and others in similar situations, because of this.

Either reading seems reasonable to discuss today.

DanielFilan: But if you actually should press the button, and do so because you correctly understand why you should, then people shouldn't learn the lesson "people will do wild crazy stuff out of misunderstandings or malice", because that won't be what happened.
Clarifying the Petrov Day Exercise

This wasn't intended as a "you should have felt sorry for me if I'd done a unilateralist thing without thinking". It was intended as a way of giving more information about the probability of unilateralist action than people would otherwise have had, which seems well within the spirit of the day.

I also think it's noteworthy that, in the situation being celebrated, the ability to resist social pressure pointed in the opposite direction to the way it points here. That seems like a problem with the current structure, but I didn't end up finding a good way to articulate it, and someone else has already said something similar.

Linch: I think either you misinterpreted my comment or I misinterpreted yours; I'm genuinely confused how you could have gotten that interpretation from my comment. So, to be clearer: Petrov here should naively be read (https://forum.effectivealtruism.org/posts/myp9Y9qJnpEEWhJF9/linch-s-shortform?commentId=no9szQcHeYS94GYgx) as a well-intentioned unilateralist. He happened to be right, and reasonable people can disagree about whether he was wrong but lucky or right all along. Regardless, I think it's not very much in the spirit of the day to talk about or act out all the harms of being a well-intentioned unilateralist, though if you wish to do so, more power to you. On your second point: I agree, and have complained about this before. I'm also complaining about it now, in case that was not previously clear.
Clarifying the Petrov Day Exercise

It seems fairly likely (25%) to me that, had Kirsten not started this discussion (on Twitter), I would have pushed the button, because:

  • actually preventing the destruction of the world is important to me.

  • doing so, especially as a "trusted community member", would hammer home the danger of well intentioned unilateralists in the way an essay can't, and I think that idea is important.

  • despite being aware of LessWrong and having co-authored one post there, I didn't really understand how seriously some people took the game previously.

  • worse, I was in the

... (read more)
DanielFilan: It seems to me that either the decision to push the button is net negative, and you shouldn't do it, or it isn't, and if you do it people should learn the lesson "people in my community will do helpful, net-positive things". There's something strange about the reasoning of "if I do X, people will realize that people do things like X for reasons like Y, even though I would not be doing it for reasons like Y" (compare e.g. "I will lie about Santa to my child because that will teach them that other people in the world aren't careful about only communicating true things", which I am similarly suspicious of).
Linch: Regardless of what you think of the unilateralist's curse, I think Petrov Day is a uniquely bad time to lambast the foibles of being a well-intentioned unilateralist.
Magnitude of uncertainty with longtermism

This talk and paper discuss what I think are some of your concerns about growing uncertainty over longer and longer horizons.

Venky1024: This is a very interesting paper, and while it covers a lot of the ground I described in the introduction, the actual cubic growth model used has a number of limitations, perhaps the most significant of which is the assumption that the causal effect of an intervention diminishes over time and converges towards some inevitable state: more precisely, it assumes $|P(S_t \mid A) - P(S_t \mid B)| \to 0$ as $t \to \infty$, where $S_t$ is some desirable future state and $A$ and $B$ are distinct interventions at present. Please correct me if I am wrong about this. However, the introduction considers not just interventions fading out in their ability to influence future events, but often their sheer unpredictability; in fact, much like I did, it cites the idea from chaos theory. But the model does not consider any of these cases. In any case, by the author's own analysis (which is based on a large number of assumptions), there are several scenarios where the outcome is not favorable to the longtermist. Again, interesting work, but this modeling framework is not very persuasive to begin with (regardless of which way the final results point).
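A tiny simulation can illustrate the chaos-theory point Venky1024 is gesturing at (a sketch under assumed dynamics - the logistic map is a standard toy example, not the paper's model):

```python
import numpy as np

# Two "interventions" A and B differ only by a tiny nudge to the initial
# state of a chaotic system (the logistic map with r = 4). Under chaos,
# the difference between the resulting trajectories grows rather than
# fading towards zero.

def trajectory(x0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(4 * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

a = trajectory(0.3)         # intervention A
b = trajectory(0.3 + 1e-9)  # intervention B: a one-part-in-a-billion nudge

# The absolute difference grows to order 1 instead of converging to 0.
print(np.abs(a - b)[::10])
```

This is only a toy, but it shows why convergence of intervention effects is an assumption rather than a given, which is the complaint about the model above.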
More EAs should consider “non-EA” jobs

In my case it was the opposite - I spent several years considering only non-EA jobs as I had formed the (as it turns out mistaken) impression that I would not be a serious candidate for any roles at EA orgs.

What things did you do to gain experience related to EA?

NB - None of the things below were done with the goal of building prestige/signalling. I did them because they were some combination of interesting, fun, and useful to the world. I doubt I'd have been able to stick with any if I'd viewed them as purely instrumental. I've listed them roughly in the order in which I think they were helpful in developing my understanding. The signalling value ordering is probably different (maybe even exactly reversed), but my experience of getting hired by an EA org is that you should prioritise developing skill/knowledge/und... (read more)

(Sentinel is a system for testing for new diseases, such that unknown pathogens could be recognised from the first sample. Listen to the podcast alexrjl has linked.)

A Sequence Against Strong Longtermism

I don't think the claim from Linch here is that not bothering to edit out snark has led to high value, rather that if a piece of work is flawed both in its level of snark and in the quality of its argument, the latter is more important to fix.

Linch: Yes, this is what I mean. I was unsure how to be diplomatic about it.
Career advice for Australian science undergrad interested in welfare biology

https://www.animaladvocacycareers.org/ seems like a good option to check out if you're set on animal welfare work. Given that you're thinking about keeping AI on the table, you should probably at least consider keeping pandemic prevention similarly on the table; it seems like a smaller step sideways from your current interests. Have you considered applying to speak to someone at 80,000 Hours*?

*I'll be working on the 1-1 team from September, but this is, as far as I can tell, the advice I'd have given anyway, and shouldn't be treated as advice from the team.

ripbennyharvey: Thanks for the link. I have had a bit of a look at that website, but I should have another look. I think I did consider the option of 1-on-1 advice at one point, but I'm not sure why I didn't follow it up, so I really appreciate the suggestion and reminder :) That's a good point about pandemic preparedness; it is definitely less of a move from the biology fields I'm considering. Unfortunately I'm not very knowledgeable about the kind of work required and the need for people there, so I'll definitely follow that up. Again, perhaps a 1-1 session with 80,000 Hours would be a good idea for getting a better understanding of that area. I have been developing an interest in programming and have done a few classes at university, so I think I'll develop that on the side for now, and work on keeping the option of pandemic prevention work open as you've suggested. Congrats on the position on the 1-1 team, and thanks again for your insight; it means a lot.
The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion

How do you approach identity? If ~no future people are "necessary", does this just reduce to critical-level utilitarianism (but still counting people with negative welfare; I can't remember if critical-level utilitarianism does that)? Are you OK with that?

Stijn: My theory would be like critical-level utilitarianism, where necessary people, possible people with negative welfare, and possible people with high positive welfare have zero critical levels, and possible people with low positive welfare have a critical level equal to their own welfare. So people can have different critical levels, and the critical level might depend on the welfare of the person. The problem of identity could become difficult when we consider identity as something fluid or vague. If, for example, copying a person (a kind of teleportation, but without destroying the source person) were possible: which of the two copies is the necessary person and which is the possible person? I guess the two copies have to fight over this for themselves. In general: once person A in state X identifies herself with a unique person B in state Y, and B identifies herself with A, only then are persons A and B considered identical. A necessary person is a person who is able to identify himself with a unique person in each other available state.
The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion

Trying to summarise for my own understanding.

Is the below a reasonable tl;dr?

Total utilitarianism, except you ignore people who satisfy all of:

  • won't definitely exist
  • have welfare between 0 and T

where T is a threshold chosen democratically by them, and lives with positive utility are taken to be "worth living".

If so, does this reduce to total utilitarianism in the case that people would choose not to be ignored if their lives were worth living?

Stijn: That's a good summary, except that the threshold is chosen democratically by those who definitely exist. If these people choose not to ignore those people who don't definitely exist and have welfare between 0 and T, then it reduces to total utilitarianism.
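A hedged formalisation of the resulting view (my notation, combining the summary above with Stijn's correction; not taken verbatim from the post): write $w_i$ for person $i$'s welfare and $T$ for the threshold chosen democratically by those who definitely exist. Outcomes are then ranked by

$$W = \sum_i (w_i - c_i), \qquad c_i = \begin{cases} w_i & \text{if } i \text{ is merely possible and } 0 \le w_i \le T,\\ 0 & \text{otherwise,} \end{cases}$$

so merely possible people with welfare between 0 and T contribute nothing, and everyone else counts exactly as in total utilitarianism - which also shows the reduction Stijn describes when T is set to 0.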
Linch: Since a lot of the feedback in forecasting comes after weeks or months, you can learn forecasting while also learning something else!
What are the 'PlayPumps' of cause prioritisation?

I think plastic straws are a very good option here, when you consider that:

  • paper straws are just a worse experience for ~everyone
  • metal/glass are arguably worse for the environment, given the number of uses and the resources required to produce them (see also reusable bags)
  • some disabled people rely on straws, and paper replacements are terrible for them

 

This is certainly closer to PlayPumps [actively harmful once you think properly about it] than to ALS [not a huge issue, but it's not like stopping ALS would actually be bad in a vacuum].

DavidZhang: Thanks - do you know of any analysis/data behind your three bullet points which I could point to? Instinctively I agree that the costs almost certainly outweigh the benefits, but I anticipate scepticism from readers!
Why did EA organizations fail at fighting to prevent the COVID-19 pandemic?

Is the claim here that EA orgs focusing on GCRs didn't think GoF research was a serious problem and consequently didn't do enough to prevent it, even though they easily could have if they had just tried harder?
 

My impression is that many organisations and individual EAs were both concerned about risks due to GoF research and working on trying to prevent it. A postmortem on the strategies used seems plausibly useful, as does a retrospective on whether it should have been an even bigger focus, but the claim as stated above is, I think, false, and probably unhelpful.
 

A bunch of reasons why you might have low energy (or other vague health problems) and what to do about it

Overall I liked this post, and in particular I very strongly endorse the view that it's worth spending nontrivial time/energy/money to improve your health, energy, productivity etc. I don't have a strong view about how useful the specific pieces of advice were, my impression is that the literature is fairly poor in many of these areas. Partly because of this, my favourite section was:

One thing people sometimes say when I tell them there is a small chance taking some pill will fix their problems is that this seems somehow like cheating because it doesn’t re

... (read more)
alexlintz: Oh yeah, I think you're right on that! I shouldn't have been so down on symptom-reducing treatment. It does seem clearly better to fix root causes, but given they can be so hard to fix, it can often be the case that the best solution is to treat symptoms (and in some cases, like mental health, that can also help improve the root cause). I'll change that language so it's more positive on those.
EA is a Career Endpoint

I agree that a single rejection is not close to conclusive evidence, but it is still evidence on which you should update (though, depending on the field, possibly not very much).

What are things everyone here should (maybe) read?

Agree with this, but would note that "The Signal and the Noise" should probably either be your first intro or likely isn't worth bothering with. It's a reasonable intro, but I got ~nothing out of it when I read it (while already familiar with Bayesian stats).

Some global catastrophic risk estimates

The "metaculus" forecast weights users' forecasts by their track record and corrects for calibration, I don't think the details for how are public. Yes you can only see the community one on open questions.

I'd recommend against drawing the conclusion you did from the second paragraph (or at least, against putting too much weight on it). Community predictions on different questions about the same topic on Metaculus can be fairly inconsistent, due to different users predicting on each.
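For readers unfamiliar with track-record weighting, here is one plausible way such an aggregation could work (pure speculation; as noted above, Metaculus's actual method is not public):

```python
import numpy as np

# Illustrative only: weight each forecaster's probability by a score
# derived from their historical Brier score (lower Brier = better).

def track_record_weight(brier_score: float) -> float:
    return max(0.0, 1.0 - brier_score)

def aggregate(probabilities, brier_scores):
    weights = np.array([track_record_weight(b) for b in brier_scores])
    return float(np.average(np.array(probabilities), weights=weights))

# Three forecasters; the better-calibrated ones dominate the aggregate.
print(aggregate([0.6, 0.7, 0.2], [0.1, 0.2, 0.5]))  # ~0.545
```

A calibration correction might then adjust the aggregate towards or away from the extremes, depending on whether the community has historically been under- or overconfident.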

Benjamin_Todd: Ah, thanks for clarifying (that's a shame!). Maybe we could add another question like "What's the chance it's caused by something that's not one of the others listed?" Or maybe there's a better way of getting at the issue?
On the longtermist case for working on farmed animals [Uncertainties & research ideas]

I already believed it, and had actually been talking to someone about it recently, so I was surprised and pleased to come across the post, but I couldn't find a phrasing which said this without just sounding like "oh yeah, thanks for writing up my idea". Sorry for the confusion!

On the longtermist case for working on farmed animals [Uncertainties & research ideas]

Thanks for writing this. Even accounting for suspicious convergence (which you were right to flag), it just seems really plausible that improving animal welfare now could turn out to be important from a longtermist perspective, and I'd be really excited to hear about more research happening in this field.

MichaelA: Is this just something you already believed, or are you indicating that this post updated you a bit more towards believing this? I initially assumed you meant the latter, which I found slightly surprising, though on reflection it seems reasonable.

Why I found it surprising: when I wrote the original version of this post in 2020, I was actually coming at it mainly from an angle of "Here's an assumption which seems necessary for the standard longtermist case for working on farmed animals, but which is usually not highlighted or argued for explicitly, and which seems like it could easily be wrong." So I guess I would've assumed it'd mostly cause people to update slightly away from believing that longtermist case for working on farmed animals. (But only slightly; this post mainly raises questions rather than strong critiques.)

But I guess it really depends on the reader: while some people are familiar with and at least somewhat bought into that longtermist case for working on farmed animals but have probably paid insufficient attention to the fact that Premise 4 might be wrong, some other people haven't really encountered a clear description of that longtermist case, and some people mostly discuss longtermism as if it is necessarily about humans. So for some people, I think it'd make sense for this post to update them towards that longtermist case for working on farmed animals.
Research suggests BLM protests increase murder overall

released his preliminary findings on the Social Science Research Network as a preprint, meaning the study has yet to receive a formal peer review.

 

It’s worth noting that Campbell didn’t subject the homicide findings to the same battery of statistical tests as he did the police killings since they were not the main focus of his research.

 

I thought there had also been some cautionary tales learned in the last year about widely publicising and discussing headline conclusions from preprint data without appropriate caveats. Apparently not.

Actions to take for a career change towards EA (advice needed)

There's the EA jobs Facebook group, and I'll PM you a Discord link.

It's worth noting that 80k has a lot of useful advice on how to think about career impact, the option to apply for advising, and the jobs board. There's also Probably Good (search for their forum post) and Animal Advocacy Careers.

EA Debate Championship & Lecture Series

I want to echo this. I think my own experience of debating has been useful to me in terms of my ability to intelligence-signal in person, but was pretty bad overall for my epistemics. One interesting thing about BP (the format I competed in most frequently at the highest level) was the importance, in the 4th speaker role, of identifying the cruxes of the debate (usually referred to as "clash"), which I think is really useful. Concluding that the side you've been told to favour has then "won" all of the cruxes is... less so.

Actions to take for a career change towards EA (advice needed)

All this advice seems really good, and I want to particularly echo this bit:

It might be worth reframing how you think about this as "how can I find a job that has the biggest impact", rather than "how can I get an EA job".

JoePeirson: Thanks for the advice! Yes, I agree: looking for a job that has the biggest impact is a better goal. There are a lot of helpful questions here for me to go away and think about. It seems quite a daunting process so far; many jobs I have applied for have come back with responses of "700 people applied for these 5 positions", etc. Can either of you point me in the direction of useful job boards apart from 80,000 Hours?
"Hinge of History" Refuted (April Fools' Day)

This post is already having a huge impact on some of the most influential philosophers alive today! Thanks so much for writing it.

Forget replaceability? (for ~community projects)

Evidence Action are another great example of "stop if you are in the  downside case" done really well.

Any EAs familiar with Partha Dasgupta's work?

I was under the impression CSER was pretty "core EA"! Certainly I'd expect most highly engaged EAs to have heard of them, and there aren't that many people working on x-risk anywhere.

(Disclaimer: am co-director of CSER): EA is a strong influence at CSER, but one of a number. At a guess, I'd say maybe a third to a half of people actively engage with EA/EA-led projects (some ambiguity based on how you define), but a lot are coming from other academic backgrounds relevant to GCR and working in broader GCR contexts, and there's no expectation or requirement to be invoved with EA. We aim to be a broad church in this regard.

Among our senior advisers/board, folks like Martin Rees and Jaan Tallinn engage more actively with EA. There's be... (read more)

MichaelPlant: I'm not sure how to assess what counts as "core EA"! But I don't think the org bills itself as EA, or that the overwhelming majority of its staff self-identify as EAs (cf. the way the staff at, um, CEA probably do...)
How much does performance differ between people?

I've been much less successful than LivB, but I would endorse this, though I'd note that there are substantially better objective metrics than cash prizes for many kinds of online play, and I'd have a harder time arguing that those were less reliable than the subjective judgements of other good players. It somewhat depends on sample size, though: at the highest stakes, the combination of a very small player pool and fairly small samples makes this quite believable.

Is laziness immoral?

Hi Jacob,

I think you might really enjoy and benefit from reading this blog by Julia Wise. While it's great that you have such a strong instinct to help people, we're in this game for the long haul, and you won't have a big impact by feeling terrible about yourself or guilty whenever you don't make sacrifices.

In particular, it's very likely that focusing on doing well in college and then university is going to make a much bigger difference to your lifetime impact than whether you can get a part-time job to donate right now.
