All of Lukas_Finnveden's Comments + Replies

How to succeed as an early-stage researcher: the “lean startup” approach

I'm confused about your FAQ's advice here. Some quotes from the longer example:

Let’s say that Alice is an expert in AI alignment, and Bob wants to get into the field, and trusts Alice’s judgment. Bob asks Alice what she thinks is most valuable to work on, and she replies, “probably robustness of neural networks”. [...]  I think Bob should instead spend some time thinking about how a solution to robustness would mean that AI risk has been meaningfully reduced. [...] It’s possible that after all this reflection, Bob concludes that impact regularization

... (read more)
rohinmshah (9d): In that example, Alice has ~5 min of time to give feedback to Bob; in Toby's case the senior researchers are (in aggregate) spending at least multiple hours providing feedback (where "Bob spent 15 min talking to Alice and seeing what she got excited about" counts as 15 min of feedback from Alice). That's the major difference.

I guess one way you could interpret Toby's advice is to simply get a project idea from a senior person, and then go work on it yourself without feedback from that senior person -- I would disagree with that particular advice. I think it's important to have iterative / continual feedback from senior people.
What is the EU AI Act and why should you care about it?

Thank you for this! Very useful.

The AI act creates institutions responsible for monitoring high-risk systems and the monitoring of AI progress as a whole.

In what sense is the AI board (or some other institution?) responsible for monitoring AI progress as a whole?

MathiasKB (10d): Sorry, I should have said "monitoring AI progress in Europe as a whole" and even then I think it might be misleading. One of the three central tasks of the AI board is to 'coordinate and contribute to guidance and analysis by the Commission and the national supervisory authorities and other competent authorities on emerging issues across the internal market with regard to matters covered by this Regulation;'

For example, if a high-risk AI system is compliant but still poses a risk, the provider is required to immediately inform the AI Board. The national supervisory authorities must also regularly report back to the AI Board about the results of their market surveillance and more.

So the AI Board both gets the mandate and the information to monitor how AI progresses in the EU. And they have to do so to carry out their task effectively, even if it's not directly stated anywhere that they are required to do so. I hope this clears it up, I'm happy that you found the post useful!
How to succeed as an early-stage researcher: the “lean startup” approach

One reason to publish papers (specifically) about AI governance (specifically) is if you want to build an academic field working on AI governance. This is good both to get more brainpower and to get more people (who otherwise wouldn't read EA research) to take the research seriously, in the long term. C.f. the last section here https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact

Moral dilemma

Sorry to hear you're struggling! As others have said, getting to a less tormented state of mind should likely be your top priority right now.

(I think this would be true even if  you only cared about understanding these issues and acting accordingly, because they're difficult enough that it's hard to make progress without being able to think clearly about them. I think that focusing on getting better would be your best bet even if there's some probability that you'll care less about these issues in the future, as you mentioned worrying about in a diffe... (read more)

Most research/advocacy charities are not scalable

With a bunch of unrealistic assumptions (like constant cost-effectiveness), the counterfactual impact should be (impact/resource - opportunity cost/resource) * resource.

If impact/resource is much bigger than opportunity cost/resource (so that the latter is negligible), this is roughly equal to impact/resource * resource, which is one reading of cost-effectiveness * scale.

If so, assuming that resource = $ in this case, this roughly translates to the heuristic "if the opportunity cost of money isn't that high (compared to your project), you should optimise for total impact without thinking much about the monetary costs".
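
As a worked sketch of that approximation (the specific numbers are hypothetical; write i for impact per dollar, c for opportunity cost per dollar, and R for total dollars):

$$
\text{counterfactual impact} = (i - c)\,R \;\approx\; i\,R \quad \text{when } i \gg c .
$$

For instance, i = 100, c = 1, R = 10^6 gives (100 - 1) * 10^6 = 9.9 * 10^7, close to i * R = 10^8.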

MichaelStJules (1mo): Good point. We could also read "impact/resource - opportunity cost/resource" as a cost-effectiveness estimate that takes opportunity costs into account. I think Charity Entrepreneurship has been optimizing for this (at least sometimes, based on the work I've seen in the animal space) and they refer to it as a cost-effectiveness estimate, but I think this is not typical in EA. Also, this is looking more like cost-benefit analysis than cost-effectiveness analysis.
Most research/advocacy charities are not scalable

Based on vaguely remembered hearsay, my heuristic has been that the large AI  labs like DeepMind and OpenAI spend roughly as much on compute as they do on people, which would make for a ~2x increase in costs. Googling around doesn't immediately get me any great sources, although this page says "Cloud computing services are a major cost for OpenAI, which spent $7.9 million on cloud computing in the 2017 tax year, or about a quarter of its total functional expenses for that year".

I'd be curious to get a better estimate, if anyone knows anything relevant.

Most research/advocacy charities are not scalable

There may be reasons why building such 100m+ projects is different both from many smaller "hits based" funding of Open Phil projects (as a high chance of failure is unacceptable) and also different from GiveWell-style interventions.

One reason is that orgs like OpenAI and CSET require such scale just to get started, e.g. to interest the people involved.

This sounds like CSET is a 100m+ project. Their OpenPhil grant was for $11m/year for 5 years, and Wikipedia says they got a couple of million from other sources, so my guess is they're currently sp... (read more)

Charles He (1mo): Thank you for pointing this out. You are right, and I think maybe even a reasonable guess is that CSET funding is starting out at less than 10M a year.
Benjamin_Todd (1mo): Yes, I wouldn't say CSET is a mega project, though more CSET-like things would also be amazing.
Further thoughts on charter cities and effective altruism

This page has some statistics on Open Phil's giving (though it is noted to be preliminary): https://donations.vipulnaik.com/donor.php?donor=Open+Philanthropy

[Future Perfect] How to be a good ancestor

Sweden has a “Ministry of the Future,”

Unfortunately, this is now a thing of the past. It only lasted 2014-2016. (Wikipedia on the minister post: https://en.wikipedia.org/wiki/Minister_for_Strategic_Development_and_Nordic_Cooperation )

What are some key numbers that (almost) every EA should know?

The last two should be 10^11 - 10^12 and 10^11, respectively?

Habryka (3mo): Oops, that's why you don't try to do mental arithmetic that will shape the future of our lightcone at 1AM in the morning.
A ranked list of all EA-relevant (audio)books I've read

This has been discussed on lw here: www.lesswrong.com/posts/xBAeSSwLFBs2NCTND/do-you-vote-based-on-what-you-think-total-karma-should-be

Strong opinions on both sides, with a majority of people thinking about current karma levels occasionally but not always.

Were the Great Tragedies of History “Mere Ripples”?

It seems fine to switch between critiquing the movement and critiquing the philosophy, but I think it'd be better if the switch was made clear.

Agreed.

There are many longtermists that don't hold these views (eg. Will MacAskill is literally about to publish the book on longtermism and doesn't think we're at an especially influential time in history, and patient philanthropy gets taken seriously by lots of longtermists).

Yeah this seems right, maybe with the caveat that Will has (as far as I know) mostly expressed skepticism about this being the most in... (read more)

Were the Great Tragedies of History “Mere Ripples”?

Granted, there are probably longtermists that do hold these views, but these views are not longtermism. I don’t know whether Bostrom (whose views seem to be the focus of the book) holds these views. Even if he does, these views are not longtermism

I haven't read the top-level post (thanks for summarising!); but in general, I think this is a weak counterargument. If most people in a movement (or academic field, or political party, etc) holds a rare belief X, it's perfectly fair to criticise the movement for believing X. If the movement claims that X isn'... (read more)

Alex HT (7mo): Thanks for your comment, it makes a good point. My comment was hastily written and I think my argument that you're referring to is weak, but not as weak as you suggest. At some points the author is specifically critiquing longtermism the philosophy (not what actual longtermists think and do), eg. when talking about genocide. It seems fine to switch between critiquing the movement and critiquing the philosophy, but I think it'd be better if the switch was made clear. There are many longtermists that don't hold these views (eg. Will MacAskill is literally about to publish the book on longtermism and doesn't think we're at an especially influential time in history, and patient philanthropy gets taken seriously by lots of longtermists). I'm also not sure that lots of longtermists (even of the Bostrom/hinge of history type) would agree that the quoted claim accurately represents their views.

But, I do agree that some longtermists do think
* there are likely to be very transformative events soon, eg. within 50 years
* in the long run, if they go well, these events will massively improve the human condition

And there's some criticisms you can make of that kind of ideology that are similar to the criticisms the author makes.
Scope-sensitive ethics: capturing the core intuition motivating utilitarianism

As a toy example, say that S(x) is some bounded sigmoid function, and my utility function is to maximize E[S(x)]; it's always going to be the case that E[S(x1)] ≥ E[S(x2)] ⇔ x1 ≥ x2, so I am in some sense scope sensitive, but I don't think I'm open to Pascal's mugging

This seems right to me.

I think it means that there is something which we value linearly, but that thing might be a complicated function of happiness, preference satisfaction, etc.

Yeah, I have no quibbles with this. FWIW, I personally didn't  interpret the passage as sayi... (read more)

Ben_West (8mo): That makes sense; your interpretation does seem reasonable, so perhaps a rephrase would be helpful.
Lessons from my time in Effective Altruism

I agree it's partly a lucky coincidence, but I also count it as some general evidence. I.e., insofar as careers are unpredictable, up-skilling in a single area may be a bit less reliably good than expected, compared with placing yourself in a situation where you get exposed to lots of information and inspiration that's directly relevant to things you care about. (That last bit is unfortunately vague, but seems to gesture at something that there's more of in direct work.)

richard_ngo (8mo): Yepp, I agree with this. On the other hand, since AI safety is mentorship-constrained, if you have good opportunities to upskill in mainstream ML, then that frees up some resources for other people. And it also involves building up wider networks. So maybe "similar expected value" is a bit too strong, but not that much.
Scope-sensitive ethics: capturing the core intuition motivating utilitarianism

Endorsing actions which, in expectation, bring about more intuitively valuable aspects of individual lives (e.g. happiness, preference-satisfaction, etc), or bring about fewer intuitively disvaluable aspects of individual lives

If this is the technical meaning of "in expectation", this brings in a lot of baggage. I think it implicitly means that you value those things ~linearly in their amount (which makes the second statement superfluous?), and it opens you up to pascal's mugging.

Ben_West (8mo): I think it means that there is something which we value linearly, but that thing might be a complicated function of happiness, preference satisfaction, etc. As a toy example, say that S(x) is some bounded sigmoid function, and my utility function is to maximize E[S(x)]; it's always going to be the case that E[S(x1)] ≥ E[S(x2)] ⇔ x1 ≥ x2, so I am in some sense scope sensitive, but I don't think I'm open to Pascal's mugging. (Correct me if this is wrong though.)
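
A small runnable sketch of this point (the particular bounded S below is a hypothetical choice, not one proposed in either comment):

```python
# A bounded, increasing utility S(x) = x / (1 + x): scope-sensitive but capped at 1.
def S(x):
    return x / (1.0 + x)

def expected_S(outcomes_and_probs):
    return sum(p * S(x) for x, p in outcomes_and_probs)

# Scope sensitivity: a sure larger outcome always gets (weakly) higher expected utility.
assert expected_S([(100.0, 1.0)]) > expected_S([(10.0, 1.0)])

# Pascal's mugging: a tiny chance of an astronomical payoff contributes at most its
# probability, because S never exceeds 1.
mugging = expected_S([(1e30, 1e-10), (0.0, 1 - 1e-10)])
modest = expected_S([(1.0, 1.0)])
print(mugging, modest)  # ~1e-10 vs 0.5: the modest sure thing wins
```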
Lessons from my time in Effective Altruism

when I graduated, I was very keen to get started in an AI safety research group straightaway. But I now think that, for most people in that position, getting 1-2 years of research engineering experience elsewhere before starting direct work has similar expected value

If you'd done this, wouldn't you have missed out on this insight:

I’d assumed that the field would make much more sense once I was inside it, that didn’t really happen: it felt like there were still many unresolved questions (and some mistakes) in foundational premises of the field.

or do you thi... (read more)

richard_ngo (8mo): I do think that this turned out well for me, and that I would have been significantly worse off if I hadn't started working in safety directly. But this was partly a lucky coincidence, since I didn't intend to become a philosopher three years ago when making this decision. If I hadn't gotten a job at DeepMind, then my underestimate of the usefulness of upskilling might have led me astray.
Lessons from my time in Effective Altruism

Great post!

EAs tend to lack experience with more formal or competitive interactions, such as political maneuvering in big organisations. This is particularly important for interacting with prestigious or senior people, who as a rule don’t have much time for naivety, and who we don’t want to form a bad impression of EA.

I can't immediately see why a lack of experience with political maneuvering would mean that we often waste prestigious people's time. Could you give an example? Is this just when an EA is talking to someone prestigious and asks a silly questi... (read more)

"Don't have much time for X" is an idiom which roughly means "have a low tolerance for X". I'm not saying that their time actually gets wasted, just that they get a bad impression. Might edit to clarify.

And yes, it's partly about silly questions, partly about negative vibes from being too ideological, partly about general lack of understanding about how organisations work. On balance, I'm happy that EAs are enthusiastic about doing good and open to weird ideas; I'm just noting that this can sometimes play out badly for people without experience of "normal" jobs when interacting in more hierarchical contexts.

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

When considering whether to cure a billion headaches or save someone's life, I'd guess that people's prioritarian intuition would kick in, and say that it's better to save the single life. However, when considering whether to cure a billion headaches or to increase one person's life from ok to awesome, I imagine that most people prefer to cure a billion headaches. I think this latter situation is more analogous to the repugnant conclusion. Since people's intuition differ in this case and in the repugnant conclusion, I claim that "The repugnance of the repu... (read more)

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

The repugnance of the repugnant conclusion in no way stems from the fact that the people involved are in the future.

It doesn't? That's not my impression. In particular:

There are current generation perfect analogues of the repugnant conclusion. Imagine you could provide a medicine that provides a low quality life to billions of currently existing people or provide a different medicine to a much smaller number of people giving them brilliant lives.

But people don't find these cases intuitively identical, right? I imagine that in the current-generation case, m... (read more)

Halstead (9mo): Hi. The A population and the Z population are both composed of merely possible future people, so person-affecting intuitions can't ground the repugnance. Some impartialist theories (critical level utilitarianism) are explicitly designed to avoid the repugnant conclusion. The case is analogous to the debate in aggregation about whether one should cure a billion headaches or save someone's life.
The Fermi Paradox has not been dissolved

with your preferred parameter choices, the 6% chance of no life in the Milky Way still almost certainly implies that the lack of alien signals is due to the fact that they are simply too far away to have been seen

I haven't run the numbers, but I wouldn't be quite so dismissive. Intergalactic travel is probably possible, so with numbers as high as these, I would've expected us to encounter some early civilisation from another galaxy. So if these numbers were right, it'd be some evidence that intergalactic travel is impossible, or that something else str... (read more)

Davidmanheim (9mo): To respond to your substantive point, intergalactic travel is possible, but slow - on the order of tens of millions of years at the very fastest. And the distribution of probable civilizations is tilted towards late in galactic evolution because of the need for heavier elements, so it's unclear that early civilizations are possible, or at least as likely.

And somewhat similar to your point, see my tweet from a couple years back [https://twitter.com/davidmanheim/status/1017001672500023298]: "We don't see time travelers. This means either time travel is impossible, or humanity doesn't survive. Evidence of the theoretical plausibility of time travel is therefore strong evidence that we will be extinct in the nearer term future."
The Fermi Paradox has not been dissolved

I hadn't seen the Lineweaver and Davis paper before, thanks for pointing it out! I'm sceptical of the methodology, though. They start out with a uniform prior between 0 and 1 of the probability that life emerges in a ~0.5B year time window. This is pretty much assuming their conclusion already, as it assigns <0.1% probability to life emerging with less than 0.1% probability (I much prefer log-uniform priors). The exact timing of abiogenesis is then used to get a very modest Bayesian update (less than 2:1 in favor of "life always happens as soon as possi... (read more)

Thoughts on whether we're living at the most influential time in history

I actually think the negative exponential gives too little weight to later people, because I'm not certain that late people can't be influential. But if I had a person from the first 1e-89 of all people who've ever lived and a random person from the middle, I'd certainly say that the former was more likely to be one of the most influential people. They'd also be more likely to be one of the least influential people! Their position is just so special!

Maybe my prior would be like 30% to a uniform function, 40% to negative exponentials of various slopes, and ... (read more)

"Only using a single, simple function for something so complicated seems overconfident to me. And any mix of functions where one of them assigns decent probability to early people being the most influential is enough that it's not super unlikely that early people are the most influential."

I strongly agree with this. The fact that under a mix of  distributions, it becomes not super unlikely that early people are the most influential, is really important and was somewhat buried in the original comments-discussion. 

And then we're also very distinctive in other ways: being on one planet, being at such a high-growth period, etc. 
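
To spell out why a mixture of priors behaves this way (the 10% figure below is purely illustrative): for any event A and prior weights w_j,

$$
P_{\text{mix}}(A) \;=\; \sum_j w_j\,P_j(A) \;\ge\; w_k\,P_k(A) \quad \text{for each } k,
$$

so, e.g., 40% weight on a negative-exponential prior under which the earliest people have a 10% chance of containing the most influential person already gives the mixture at least a 4% chance.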

Thoughts on whether we're living at the most influential time in history

One way to frame this is that we do need extraordinarily strong evidence to update from thinking that we're almost certainly not the most influential time to thinking that we might plausibly be the most influential time. However, we don't  need extraordinarily strong evidence pointing towards us almost certainly being the most influential (that then "averages out" to thinking that we're plausibly the most influential). It's sufficient to get extraordinarily strong evidence that we are at a point in history which is plausibly the most influential. And ... (read more)

Thoughts on whether we're living at the most influential time in history

I still don’t see the case for building earliness into our priors, rather than updating on the basis of finding oneself seemingly-early.

If we're doing things right, it shouldn't matter whether we're building earliness into our prior or updating on the basis of earliness.

Let the set H="the 1e10 (i.e. 10 billion) most influential people who will ever live"  and let E="the 1e11 (i.e. 100 billion) earliest people who will ever live". Assume that the future will contain 1e100 people. Let X be a randomly sampled person.

For our unconditional prior P(X in... (read more)
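
Spelling out the Bayes step this sets up (a sketch using only the numbers above; the resulting factor of 10 is the one discussed in the quotes below):

$$
P(X\in H \mid X\in E) \;=\; \frac{P(X\in E \mid X\in H)\,P(X\in H)}{P(X\in E)} \;=\; P(X\in E\mid X\in H)\cdot\frac{10^{10}/10^{100}}{10^{11}/10^{100}} \;=\; \frac{P(X\in E\mid X\in H)}{10}.
$$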

"If we're doing things right, it shouldn't matter whether we're building earliness into our prior or updating on the basis of earliness."

Thanks, Lukas, I thought this was very clear and exactly right. 

"So now we've switched over to instead making a guess about P(X in E | X in H), i.e. the probability that one of the 1e10 most influential people also is one of the 1e11 earliest people, and dividing by 10. That doesn't seem much easier than making a guess about P(X in H | X in E), and it's not obvious whether our intuitions here would lead us to expect ... (read more)

richard_ngo (10mo): The question which seems important to me now is: does Will think that the probability of high influentialness conditional on birth rank (but before accounting for any empirical knowledge) is roughly the same as the negative exponential distribution Toby discussed in the comments on his original post?

Getting money out of politics and into charity

Another relevant post is Paul Christiano's Repledge++, which suggests some nice variations. (It might still be worth going with something simple to ease communication, but it seems good to consider options and be aware of concerns.)

As one potential problem with the basic idea, it notes that

I'm not donating to politics, so wouldn't use it.

isn't necessarily true, because if you thought that your money would be matched with high probability, you could remove money from the other campaign at no cost to your favorite charity. This is bad, because it gives p... (read more)

UnexpectedValues (1y): Yeah, I agree this would be bad. I talk a bit about this here: https://ericneyman.wordpress.com/2019/09/15/incentives-in-the-election-charity-platform/

A possible solution is to send only half of any matched money to charity. Then, from an apolitical altruist's perspective, donating $100 to the platform would cause at most $100 extra to go to charity, and less if their money doesn't end up matched. (On the other hand, this still leaves the problem of a slightly political altruist, who cares somewhat about politics but more about charity; I don't know how to solve this problem.)

And yeah, we've run into Repledge++ and are trying a small informal trial with it right now!
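
A toy calculation of that half-matching rule (the model and numbers are hypothetical, simplified versions of the mechanism discussed above):

```python
# My donation is matched against the other side at `match_rate`, and `passthrough` of
# the total matched pot (my matched money plus the other side's equal match) goes to charity.
def extra_to_charity(my_donation, match_rate, passthrough):
    matched = my_donation * match_rate
    return 2 * matched * passthrough

print(extra_to_charity(100, 1.0, 1.0))  # 200.0: full pass-through, an apolitical donor beats direct giving
print(extra_to_charity(100, 1.0, 0.5))  # 100.0: half pass-through caps the benefit at the donation itself
print(extra_to_charity(100, 0.4, 0.5))  # 40.0: if only part of it is matched, charity gets less than a direct donation
```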
Getting money out of politics and into charity

We were discussing the idea back in 2009. Toby Ord has written a relevant paper.

Both links go to the same felicifia page. I suspect you're referring to the moral trade paper: http://www.amirrorclear.net/files/moral-trade.pdf

RyanCarey (1y): Fixed.
How Dependent is the Effective Altruism Movement on Dustin Moskovitz and Cari Tuna?

GiveWell estimates that they directed or influenced about 161 million dollars in 2018. 64 million came from Good Ventures grants. Good Ventures is the philanthropic foundation founded and funded by Dustin and Cari. It seems like the 161 million directed by GiveWell represents a comfortable majority of total 'EA' donation.

If you want to count OpenPhil's donations as EA donations, that majority isn't so comfortable. In 2018, OpenPhil recommended a bit less than 120 million (excluding Good Ventures' donations to GiveWell charities) of which almost all came f... (read more)

AMA: Markus Anderljung (PM at GovAI, FHI)

Thanks, that's helpful.

The fewer competitive organisations there are in the space where you're aiming to build career capital and the narrower the career capital you want to build (e.g. because you're unsure about cause prior or because the roles you're aiming at require wide skillsets), the less frequently changing roles makes sense.

Is this a typo? I expect uncertainty about cause prio and requirements of wide skillsets to favor less narrow career capital (and increased benefits of changing roles), not narrower career capital.

MarkusAnderljung (1y): It is indeed! Editing the comment. Thanks!
AMA: Markus Anderljung (PM at GovAI, FHI)

Hi Markus! I like the list of unusual views.

I think EAs tend to underestimate the value of specialisation. For example, we need more people to become experts in a narrow domain / set of skills and then make those relevant to the wider community. Most of the impact you have in a role comes when you’ve been in it for more than a year.

I would've expected you to cite the threshold for specialisation as longer than a year; as stated, I think most EAs would agree with the last sentence. Do you think that the gains from specialisation keep accumulating after a yea... (read more)

MarkusAnderljung (1y): Thanks for the question, Lukas. I think you're right. My view is probably stronger than this. I'll focus on some reasons in favour of specialisation.

I think your ability to carry out a role keeps increasing for several years, but the rate of improvement presumably tapers off with time. However, the relationship between skill in a role and your impact is less clear. It seems plausible that there could be threshold effects and the like, such that even though your skill doesn't keep increasing at the same rate, the impact you have in the role could keep increasing at the same or an even higher rate. This seems for example to be the case with research. It's much better to produce the very best piece on one topic than to produce 5 mediocre pieces on different topics. You could imagine that the same thing happens with organisations.

One important consideration - especially early in your career - is how staying in one role for a long time affects your career capital. The fewer competitive organisations there are in the space where you're aiming to build career capital and the narrower the career capital you want to build (e.g. because you are aiming to work on a particular cause or in a particular type of role), the less frequently changing roles makes sense.

There's also the consideration of what happens when we coordinate. In the ideal scenario, more coordination in terms of careers should mean people try to build more narrow career capital, which means that they'd hop around less between different roles. I liked this post [https://forum.effectivealtruism.org/posts/bEPBMu2wB2DE4kaku/comparative-advantage-in-the-talent-market] by Denise Melchin from a while back on this topic.

It's also plausible that you get a lot of the gains from specialisation not from staying in the same role, but primarily in staying in the same field or in the same organisation. And so, you can have your growth and still get the gains from specialisation by staying in the same org
Space governance is important, tractable and neglected

Why is that? I don't know much about the area, but my impression is that we currently don't know what space governance would be good from an EA perspective, so we can't advocate for any specific improvement. Advocating for more generic research into space-governance would probably be net-positive, but it seems a lot less leveraged than having EAs look into the area, since I expect longtermists to have different priorities and pay attention to different things (e.g. that laws should be robust to vastly improved technology, and that colonization of other solar systems matter more than asteroid mining despite being further away in time).

saulius's Shortform

If you have images in your posts, you have to upload them somewhere on the internet (e.g. https://imgur.com/)

If you've put the images in a google doc, and made the doc public, then you've already uploaded the images to the internet, and can link to them there. If you use the WYSIWYG editor, you can even copypaste the images along with the text.

I'm not sure whether I should expect google or imgur to preserve their image-links for longer.

What are examples of EA work being reviewed by non-EA researchers?

Since then, the related paper Cheating Death in Damascus has apparently been accepted by The Journal of Philosophy, though it doesn't seem to be published yet.

EAF/FRI are now the Center on Long-Term Risk (CLR)

Good job on completing the rebranding! Do you have an opinion on whether CLR should be pronounced as "see ell are" or as "clear"?

SoerenMind (2y): Just as a data point, "eye clear" took off for the conference ICLR so people seem to find the "clear" pronunciation intuitive.
MaxRa (2y): The downside of "see ell are", as mentioned by JasperGeh, would be that, as I've understood, CEEALAR is supposed to be pronounced "see ale-are". So it would sound similar.
JP Addison (2y): SEE-ler?
Insomnia with an EA lens: Bigger than malaria?

(Nearly) every insomniac I’ve spoken to knows multiple others

Just want to highlight a potential selection effect: If these people spontaneously tell you that they're insomniacs, they're the type of people who will tell other people about their insomnia, and thus get to know multiple others. There might also be silent insomniacs, who don't tell people they're insomniacs and don't know any others. You're less likely to speak with those, so it would be hard to tell how common they are.

8 things I believe about climate change
Climate change by itself should not be considered a global catastrophic risk (>10% chance of causing >10% of human mortality)

I'm not sure if any natural class of events could be considered global catastrophic risks under this definition, except possibly all kinds of wars and AI. It seems pretty weird to not classify e.g. asteroids or nuclear war as global catastrophic risks, just because they're relatively unlikely. Or is the 10% supposed to mean that there's a 10% probability of >10% of humans dying conditioned on some event in th... (read more)

Davidmanheim (2y): The other fairly plausible GCR that is discussed is biological. The Black Death likely killed 20% of the population (excluding the Americas, but not China or Africa, which were affected) in the middle ages. Many think that bioengineered pathogens or other threats could plausibly have similar effects now. Supervolcanoes and asteroids are also on the list of potential GCRs, but we have better ideas about their frequency / probability. Of course, Toby's book will discuss all of this - and it's coming out soon [https://www.amazon.com/Precipice-Existential-Risk-Future-Humanity/dp/0316484911]!
8 things I believe about climate change
It’s very difficult to communicate to someone that you think their life’s work is misguided

Just emphasizing the value of prudence and nuance, I think that this^ is a bad and possibly false way to formulate things. Being the "marginal best thing to work on for most EA people with flexible career capital" is a high bar to clear, that most people are not aiming towards, and work to prevent climate change still seems like a good thing to do if the counterfactual is to do nothing. I'd only be tempted to call work on climate change &... (read more)

Linch (2y): Yeah I think that's fair. I think in practice most people who get convinced to work on eg, biorisk or AI Safety issues instead of climate change often do so for neglectedness or personal fit reasons. Feel free to suggest a different wording on my point above.

EDIT: I changed "misguided"->"necessarily the best thing to do with limited resources"

I also think we have some different interpretations of the connotations of "misguided." Like I probably mean it in a weaker sense than you're taking it as. Eg, I also think selfishness is misguided because closed individualism isn't philosophically sound, and that my younger self was misguided for not being a longtermist.
JP's Shortform

It works fairly well right now, with the main complaints (images, tables) being limitations of our current editor.

Copying images from public Gdocs to the non-markdown editor works fine.

Competition is a sign of neglect in important causes with long time horizons for impact.

It seems to say that the same level of applicant pool growth produces fewer mentors in mentorship-bottlenecked fields than in less mentorship-bottlenecked fields, but I don't understand why.

If a field is bottlenecked on mentors, it has too few mentors per applicants, or put differently, more applicants than the mentors can accept. Assuming that each applicant needs some fixed amount of time with a mentor before becoming senior themselves, increasing the size of the applicant-pool doesn't increase the number of future senior people, because the present m

... (read more)
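
A toy model of that point (the capacity number is made up):

```python
# Toy model (numbers made up): each mentor can train a fixed number of juniors,
# and every future senior researcher must first be mentored as a junior.
def future_seniors(mentors, applicants, juniors_per_mentor=2):
    return min(applicants, mentors * juniors_per_mentor)

print(future_seniors(mentors=10, applicants=15))   # 15: below capacity, more applicants help
print(future_seniors(mentors=10, applicants=150))  # 20: above capacity, extra applicants add nothing
```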
AllAmericanBreakfast (2y): In my OP, I just meant that if the applicant gets in, they can teach. Too many applicants doesn't necessarily indicate that the field is oversubscribed, it just means that there's a mentorship bottleneck. One possible reason is that senior people in the field simply enjoy direct work more than teaching and choose not to focus on it. Insofar as that's the case, candidates are especially suitable if they're willing to focus more on providing mentorship if they get in and a bottleneck remains by the time they become senior. Thanks for the feedback, it helps me understand that my original post may not have been as clear as I thought.
Are we living at the most influential time in history?

Did you make a typo here? "if simulations are made, they're more likely to be of special times than of boring times" is almost exactly what “P(seems like HoH | simulation) > P(seems like HoH | not simulation)” is saying. The only assumptions you need to go between them is that the world is more likely to seem like HoH for people living in special times than for people living in boring times, and that the statement "more likely to be of special times than of boring times" is meant relative to the rate at which special times and boring times appear outside of simulations.

trammell (2y): And that P(simulation) > 0.
Are we living at the most influential time in history?

Ok, I see.

people seem to put credence in it even before Will’s argument.

This is kind of tangential, but some of the reasons that people put credence in it before Will's argument are very similar to Will's argument, so one has to make sure to not update on the same argument twice. Most of the force from the original simulation argument comes from the intuition that ancestor simulations are particularly interesting. (Bostrom's trilemma isn't nearly as interesting for a randomly chosen time-and-space chunk of the universe, because the most likely solution

... (read more)
Are we living at the most influential time in history?

Not necessarily.

P(simulation | seems like HOH) = P(seems like HOH | simulation)*P(simulation) / (P(seems like HOH | simulation)*P(simulation) + P(seems like HOH | not simulation)*P(not simulation))

Even if P(seems like HoH | simulation) >> P(seems like HoH | not simulation), P(simulation | seems like HOH) could be much less than 50% if we have a low prior for P(simulation). That's why the term on the right might be wrong - the present text is claiming that our prior probability of being in a simulation should be large enough that HOH should make us as

... (read more)
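
As a purely illustrative set of numbers for that point: taking P(seems like HoH | simulation) = 1, P(seems like HoH | not simulation) = 0.001, and a prior P(simulation) = 10^-6,

$$
P(\text{simulation}\mid \text{seems like HoH}) \;=\; \frac{1\cdot 10^{-6}}{1\cdot 10^{-6} + 10^{-3}\cdot(1-10^{-6})} \;\approx\; 10^{-3},
$$

so a 1000:1 likelihood ratio in favour of simulation still leaves the posterior far below 50%.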
SoerenMind (2y): Agreed, I was assuming that the prior for the simulation hypothesis isn't very low because people seem to put credence in it even before Will's argument. But I found it worth noting that Will's inequality only follows from mine (the likelihood ratio) plus having a reasonably even prior odds ratio.
Competition is a sign of neglect in important causes with long time horizons for impact.

It's certainly true that fields bottlenecked on mentors could make use of more mentors, right now. If you're already skilled in the area, you can therefore have very high impact by joining/staying in the field.

However, when young people are considering whether they should join in order to become mentors, as you suggest, they should consider whether the field will be bottlenecked on mentors at the time when they would become one, in 10 years time or so. Since there are lots of junior applicants right now, the seniority bottleneck will presumably b... (read more)

AllAmericanBreakfast (2y): In the absence of other empirical information, I think it's a safe assumption that present bottlenecks correlate with future bottlenecks, though your first point is well taken.

I'm not quite following your second argument. It seems to say that the same level of applicant pool growth produces fewer mentors in mentorship-bottlenecked fields than in less mentorship-bottlenecked fields, but I don't understand why. Enlighten me?

Your third point is also correct. Stated generally, finding ways to increase the availability of the primary bottlenecked resource, or accomplish the same goal while using less of it, is how we can get the most leverage.
I find this forum increasingly difficult to navigate
Images can't be added to comments; is that what you were trying to find a workaround for?

It's possible to add images to comments by selecting and copying them from anywhere public (note that it doesn't work if you right click and choose 'copy image'). In this thread, I do it in this comment.

I see how I can't do it manually, though, by selecting text. I wouldn't expect it to be too difficult to add that possibility, though, given that it's already possible in another way?

Habryka (2y): On LessWrong we intentionally didn't want to encourage pictures in the comments, since that provides a way to hijack people's attention in a way that seemed too easy. You can use markdown syntax to add pictures, both in the markdown editor and the WYSIWYG editor.
I find this forum increasingly difficult to navigate

With regards to images, I get flawless behaviour when I copy-paste from googledocs. Somehow, the images automatically get converted, and link to the images hosted with google (in the editor only visible as small cameras). Maybe you can get the same behaviour by making your docs public?

Actually, I'll test copying an image from a google doc into this comment: (edit: seems to be working!)

Jon_Behar (2y): Testing a reply with an image copy/pasted from a public google doc (shows up as camera in the editor)

Edit: it worked! Good to know about this workaround (though the direct google doc import Ben mentioned [https://forum.effectivealtruism.org/posts/m6s7zKaDhYerFNKZf/i-find-this-forum-increasingly-difficult-to-navigate-1#9ZifTuuGLEXBeLj7D] would still be preferable since it'd deal with footnotes too).
I find this forum increasingly difficult to navigate

Copying all relevant information from the lesswrong faq to an EA forum faq would be a good start. The problem of how to make its existence public knowledge remains, but that's partly solved automatically by people mentioning/linking to it, and it showing up in google.

I suggest putting a “help” button in the editor, right next to the “save as draft” and “submit” buttons. This info should be super easy to find when someone’s writing a post.

Relatedly, when the instructions are being refreshed for the planned update I think it’s important to run them by someone non-technical (and probably at least one generation older than the person writing the instructions) to see if they can understand them.

I find this forum increasingly difficult to navigate

There's a section on writing in the lesswrong faq (named Posting & Commenting). If any information is missing from there, you can suggest adding it in the comments.

Of course, even given that such instructions exists somewhere, it's important to make sure that it's findable. Not sure what the best way to do that is.

Lukas_Finnveden (2y): Copying all relevant information from the lesswrong faq to an EA forum faq would be a good start. The problem of how to make its existence public knowledge remains, but that's partly solved automatically by people mentioning/linking to it, and it showing up in google.
Announcing the launch of the Happier Lives Institute

I'm by no means schooled in academic philosophy, so I could also be wrong about this.

I tend to think about e.g. consequentialism, hedonistic utilitarianism, preference utilitarianism, lesswrongian 'we should keep all the complexities of human value around'-ism, deontology, and virtue ethics as ethical theories. (This is backed up somewhat by the fact that these theories' wikipedia pages name them ethical theories.) When I think about meta-ethics, I mainly think about moral realism vs moral anti-realism and their varieties, though the fi... (read more)

Habryka (2y): This seems reasonable. I changed it to say "ethical".