All of D0TheMath's Comments + Replies

How to Talk to Lefties in Your Intro Fellowship

I am skeptical that people so subject to framing effects that they need the strategies in this post to be convinced of these ideas are the kind of people you should be introducing to your EA group.

The reason EA can do as much good as it does is its high level of epistemic rigor. If you pull in people with a lower-than-EA-average level of epistemic rigor, this lowers the ability of EA as a whole to do good. This may be a good idea if we're very people-constrained and can't find anyone with an EA-average or greater-than-EA-average level of ...

1 · utilistrutil · 5d
https://forum.effectivealtruism.org/posts/MCuvxbPKCkwibpcPz/how-to-talk-to-lefties-in-your-intro-fellowship?commentId=YwQme9B2nHoH6fXeo
Recommendations for EA-themed sci-fi and fantasy?

It’s far easier to see the irrationalities and possible exploits in other people’s work than in your own; rationalizing an existing world possibly takes different skills than creating an interesting one; it’s easier to write & build an audience; and you don’t have to spend so much time explaining the setting/magic system/other important info.

Recommendations for EA-themed sci-fi and fantasy?

r/rational put together a spreadsheet of a bunch of rationalist fiction. You should find plenty of EA-related material there: https://docs.google.com/spreadsheets/d/1OEoxYzFeF0UpJmHY5pqHP_Yam-cw9kXDyXZbH6ANJiM/htmlview

4 · JoelMcGuire · 1mo
Do you know why fan fiction appears to be the go-to medium for rationalists? This seems odd.
Every moment of an electron's existence is suffering

Strongly downvoted for reasons stated above.

Every moment of an electron's existence is suffering

I know that you had a paragraph where you said this, but you didn't actually explain why you thought it or why you thought others were wrong; far more of the article was devoted to stating why you thought those arguing in favor were inauthentic in their beliefs. It was also argued in a way that gave no insight into why you thought the issue was intractable.

-7 · MattBall · 2mo
On Deference and Yudkowsky's AI Risk Estimates

Eliezer is cleanly just a major contributor. If he went off the rails tomorrow, some people would follow him (and the community would be better with those few gone), but the vast majority would say “wtf is that Eliezer fellow doing”. I don’t think he sees himself as the leader of the community either.

Probably Eliezer likes Eliezer more than EA/Rationality likes Eliezer, because Eliezer really likes Eliezer. If I were as smart & good at starting social movements as Eliezer, I’d probably also have an inflated ego, so I don’t take it as too unreasonable of a character flaw.

Every moment of an electron's existence is suffering

Seems like your first article doesn’t actually engage with discussions about wild animal suffering in a meaningful way, except to say that you’re unsure whether wild animal suffering people are authentic in their beliefs, but 1) in my experience they are, and 2) if they’re not but their arguments are still valid, then we should prioritize wild animal suffering anyway, and tell the pre-existing wild animal suffering people to take their very important cause more seriously.

I’m glad you liked the post, but I wasn’t actually trying to make any points about EA’s weirdness going too far. Most of the points made about electrons here are very philosophically flawed.

1 · MattBall · 2mo
With regards to wild animal suffering, my main point is tractability.
Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"?

I agree the name is non-ideal, and doesn't quite capture differences. A better term may be conventionalists versus non-conventionalists (or to make the two sides stand for something positive, conventionalists versus longtermists).

Conventionalists focus on cause areas like global poverty reduction, animal welfare, governance reforms, improving institutional decision making, and other things which have (to some extent) been done before.

Non-conventionalists focus on cause areas like global catastrophic risk prevention, s-risk prevention, improving our underst...

4 · david_reinstein · 4mo
I agree that conventionalists versus non-conventionalists may be a thing, but I don't think this captures what people are talking about when they talk about being a long-termist or not a long-termist. This seems a different axis.
AI and impact opportunities

It depends on what you mean. If you mean trying to help developing countries achieve the SDGs, then this won't work for a variety of reasons. The most straightforward is that using data-based approaches to build statistical models is different enough from cutting-edge machine learning or alignment research that it will very likely be useless to that task, and the vast majority of the benefit from such work lies in the standard benefits to people living in developing countries.

If you mean advocating for policies which subsidize good safety rese...

1 · brb243 · 5mo
OK, makes sense - since this is basically mostly a benefit to individuals, it is like AI and impact - interpretability - well, sure, some of the areas can relate to that, such as social media wellbeing optimization. Yes, probably the level of thinking is at the 'governance' level, not technical alignment (e.g. not quite at a place where a poorly coded drone could decide to advance selfish objectives instead of SDGs..).
AI and impact opportunities

It seems a bit misleading to call many of these “AI alignment opportunities”. AI alignment has to do with the relatively narrow problem of solving the AI control problem (i.e., making it so very powerful models don’t decide to destroy all value in the world) and increasing the chances that society decides to use that solution.

These opportunities are more along the lines of using ML to do good in a general sense.

4 · brb243 · 5mo
Ok, AI and impact. Although what about in these ways: developing institutions so that human actors use increasingly powerful AI toward objectives that are better aligned, generating content on which AI can learn methods that always do good, and advancing systems that would prevent even a superintelligent AI from being harmful (e.g. mutual accountability checks).
EA Obligations versus Financial Security

This framing doesn’t clarify the issue much for me. Why do you think this billionaire would want young professionals to build their safety net rather than donate? It seems there are considerations (GCRs, the potential monetary ease of building safety nets, low expected returns on some people’s long-term careers, a low expectation that people stay involved in EA long-term, etc.) which may flip the calculus for any given person.

1 · Yonatan Cale · 5mo
Hey, I'll dive deeper into why I think so (maybe you'll change my mind):
1. Most of the impact comes from late-stage professionals
2. We want people to reach the late-stage part
3. Something that might prevent someone from reaching a high-impact late career is a financial problem (I'm imagining losing 6-12 months of salary) where they don't have a safety net
4. Another failure mode might be the fear of such a financial problem, which would cause the professional to not dare switch jobs or so [I think this is not a theoretical problem, I can elaborate]
5. Or not be able to save time (or improve productivity) by paying money, like:
   1. Not having a reasonable office or computer
   2. Not having a quiet apartment to sleep in
   3. Taking bad cheap public transport

Another (similar) way to think about it:
1. If everyone had a safety net of only 2 months or so (because they'd donate everything else), I think:
   1. EA would have lots more donations from early-career people
   2. EA would have a lot fewer very strong late-career people, because many wouldn't make it

I also want to say you changed my mind and I agree with you:
1. Yes, there's less value in getting donations later:
   1. Because we can't use them today
   2. Because we might not get them at all

Regarding "may flip the calculus for any given person" - I already agree with that. That's part of what I meant by "that would be a complicated project".
2 · ofer · 5mo
Here's one consideration: If someone in EA finds themselves in a financially scary situation, and their ability to earn income depends on them publishing/doing impressive things about anthropogenic x-risks, then it seemingly becomes more likely that they will cause accidental harm due to biased judgment. (By drawing attention to something in a harmful way, etc. [https://80000hours.org/articles/accidental-harm/])
How can someone bet against EA?

They could find someone who agrees with Tyler, work out some measurement for the influence EA has on the world, and bet against each other on where that measurement will be several years or decades down the line.

Plausible influence measures include:

  • The amount of money moved to EA-aligned charities
  • The number of people who browse the forum
  • The number of federal politicians &/or billionaires who subscribe to EA ideas
  • Google trends data
  • The number of people who know about EA
  • The number of people who buy Doing Good Better
  • The number of charities who use
...
2 · Bingo · 5mo
Depending on the terms, I'd be willing to take the pro side of this bet (i.e. the side that agrees with Tyler). It could be set up on https://longbets.org/
The Culture of Fear in Effective Altruism Is Much Worse than Commonly Recognized

There are a lot of articles I've wanted to write for a long time on how those in effective altruism can help each other do more good and overall change effective altruism to even better than it is now. Yet there is a single barrier left stopping me. It's the culture of fear in effective altruism.

I suggest writing the articles anyway. I predict that unless your arguments are bad, the articles (supposing you will write 5 articles) will get >=0 karma each a week after publication, and am willing to bet $10 at 1:10 odds this is the case. We can agree on a neutral party to judge the quality of the articles if you'd like. We can adjust the odds and the karma level if you like.
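For concreteness, and on one reading of those odds (staking $10 to win $1), the implied probability being assigned to the articles clearing the karma bar is

$$ p = \frac{10}{10 + 1} \approx 0.91. $$

If the odds or the karma threshold are adjusted, this implied probability shifts accordingly.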

The Culture of Fear in Effective Altruism Is Much Worse than Commonly Recognized

I’d like to see some data on how prevalent the fear that criticizing major EA orgs will lead to negative outcomes is within EA, as well as why people think this. Anecdotal accounts mean we should include such questions on more surveys, but I’m skeptical of updating too much in favor of one particular hypothesis about the cause based on what could be just a few cases of miscommunication / irrational fears / anti-centralized-org priors / anxiety / other things I can’t think of.

From what I’ve experienced, the opposite is largely true: EAs mostly reward people who make good criticisms with respect & status. Though, granted, I have little first-hand experience interacting with large EA orgs, and even less giving good criticisms of them.

Examples of pure altruism towards future generations?

Many other religions and cults nowadays try to increase the fertility rate among their members so that in a few generations they will have taken over the politics of the country they’re based in. This may or may not count, depending on how much you’d like to hold them to utilitarianism. Though note that if a big reason they do this is all the people being sent to hell for their sins, then they are using utilitarianism (despite having a terribly inaccurate world model).

Examples of pure altruism towards future generations?

The Aztecs performed human sacrifices in an attempt to avert the end of the world. Depending on what motivations you ascribe to them (for instance, status-seeking behavior, perhaps forcible sacrifices, etc.), this may or may not have been purely altruistic.

Prioritization when size matters

Don’t click on this link. It leads to a sketchy website.

I'm from the lead exposure charity mentioned (LEEP: https://leadelimination.org/) - if banning lead paint counts as urban development then feel free to email me clare@leadelimination.org - we can definitely suggest some countries we're beginning work in.  

"Fixing Adolescence" as a Cause Area?

Scanning through the Wikipedia article you linked, very few previous reforms focused much on student suffering; they focused much more on the content of the learning & on performance measures for the teacher. There may be a selection effect going on here, where only ineffective reforms go through. It would be better to look at a list of failed reforms.

Also, I forgot to mention this in my above comment, but really spectacular work writing this up. I always suspected this was the case, but I didn’t know it was as cost effective as it seems.

"Fixing Adolescence" as a Cause Area?

I think it’s likely that a large part of this effect is not bullying, but problems with school itself, independent of interpersonal interactions.

An example:

During adolescence, kids naturally want to go to sleep at 1 am and wake up at 10 am, but we force those kids to wake up at 7 at the latest, likely causing severe sleep deprivation. I think this may be the greatest component. Certainly 6 years of sleep deprivation has some negative long-term effects too.
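A rough back-of-the-envelope version of that claim, assuming the 1 am to 10 am preference reflects a roughly 9-hour sleep need and that falling-asleep time stays around 1 am on school nights:

$$ 9\,\mathrm{h}_{\text{needed}} - 6\,\mathrm{h}_{\text{(1 am to 7 am)}} = 3\,\mathrm{h} \text{ lost per school night} \approx 15\,\mathrm{h} \text{ per school week}. $$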

There are other horrible aspects of the school environment which I’m sure you can think of, which likely have terrible near term ...

3 · kirchner.jan · 7mo
Great points, I agree! I guess I fell prey to the Streetlight effect [https://en.wikipedia.org/wiki/Streetlight_effect] there. I found this [https://www.overcomingbias.com/2016/04/school-is-to-submit.html] article by Robin Hanson interesting, Mason Hartman has interesting thoughts on Twitter [https://threadreaderapp.com/thread/975037425901748224.html] (her most recent stuff is pretty extreme though), and there is a lot on YouTube [https://www.youtube.com/watch?v=XAZrH1wM5wE] on how the School System is broken in many ways. But despite a lot of educational reform [https://en.wikipedia.org/wiki/Education_reform], there are some issues that prove very hard to tackle. But perhaps there is something smart & unorthodox that can be done...
"Fixing Adolescence" as a Cause Area?

Here life satisfaction is scaled to the range between 0 and 1, so we have to multiply them by 10 to compare these values with the decrease in LSP during adolescence. This would put adolescence in the ballpark of "some problem washing or dressing", "moderate pain or discomfort", and "unable to perform usual activities".

“Unable to perform usual activities” scaled 10x gives ~-3 LSP, an order of magnitude above the estimated -0.2 to -0.4 LSP.
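Spelling out that arithmetic (taking the post's 0-to-1 life-satisfaction figure for "unable to perform usual activities" to be roughly -0.3, which is what the ~-3 above implies):

$$ -0.3 \times 10 = -3\ \text{LSP} \quad \text{vs.} \quad -0.2 \text{ to } -0.4\ \text{LSP estimated for adolescence, a factor of} \approx 10. $$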

5 · kirchner.jan · 7mo
True, thanks for spotting! Should be fixed now.
Is EA over-invested in Crypto?

Your reasoning seems sound in the absence of evidence. I don't know how you were trying to signal that it was a question (other than the question mark in the title, which almost never indicates the intent to simply provoke discussion, and more often means "here is the question I explored in a research project, the details & conclusions of which will follow"). Instead, I think you should maybe have had an epistemic status disclaimer near the beginning. Something like,

Epistemic status: I don't actually know whether EA is over-invested in crypto. This post is intended to spark discussion on the topic.

1 · Max Clarke · 7mo
Perfect, that's what I'm looking for
EA Librarian: CEA wants to help answer your EA questions!

This sounds like a very cool & useful service, and I hope enough people take advantage of it to justify its costs! I will certainly direct fellows to it.

1 · calebp · 7mo
Thanks for directing fellows towards the service! We think that there is quite a lot of information value to running this programme, so the breakeven point is likely quite low in terms of justifying costs. That said, I do hope that we get a lot of people using the service!
Is EA over-invested in Crypto?

I think you should have made this post a question. It being a post made me think you actually had an answer, so I read it, and was disappointed you didn’t actually conclude anything.

2 · Max Clarke · 7mo
I was thinking about this too (and tried to signal it was a question, rather than an answer). But since I think that no one has an answer and it's more a post designed to spur discussion, I made it a post.
Forecast procedure competitions

This sounds interesting. Alternatively, you could have the procedure-makers not know what questions will be forecast, and have their procedures given to people or teams with some stake in getting the forecasts right (perhaps they are paid in proportion to their log-odds calibration score).

After doing enough trials, we should get some idea about what kinds of advice result in better forecasts.
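A minimal sketch of one way such a payment rule could work, using a simple logarithmic scoring rule for binary questions (the base payment, penalty cap, and scaling here are illustrative assumptions, not a worked-out incentive design):

```python
import math

def log_score(prob: float, outcome: bool) -> float:
    """Logarithmic score for a binary forecast: ln(p) if the event
    happened, ln(1 - p) otherwise. Closer to 0 is better."""
    p = prob if outcome else 1.0 - prob
    return math.log(p)

def payment(prob: float, outcome: bool, base: float = 100.0, floor: float = -5.0) -> float:
    """Hypothetical payout rule: cap the log score at `floor`, then scale
    it linearly so scores in [floor, 0] map to payments in [0, base]."""
    score = max(log_score(prob, outcome), floor)
    return base * (1.0 - score / floor)

# Two teams follow different forecasting procedures on the same question,
# which resolves True:
print(round(payment(0.8, True), 2))  # 95.54 -- confident and correct
print(round(payment(0.3, True), 2))  # 75.92 -- leaned the wrong way, paid less
```

Averaged over many questions, procedures that yield better-calibrated forecasts earn their teams more, which is the comparison the competition cares about.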

How big are the intra-household spillovers for cash transfers and psychotherapy? Contribute your prediction for our analysis.

Question 7 is a bit confusing. The answer format implies cash transfers have both a 10% and 40% impact, and makes it impossible for (say) cash & psychotherapy to both have a 10% impact.

2 · JoelMcGuire · 7mo
Hi there, Thank you for bringing this to my attention. I should have edited the form to allow only one answer per column and multiple answers per row.
How To Raise Others’ Aspirations in 17 Easy Steps

With a few modifications, all of these are great questions to ask yourself as well.

Sasha Chapin on bad social norms in EA

What makes you think it isn't? To me it seems both like a reasonable interpretation of the quote (private guts are precisely the kinds of positions you can't necessarily justify, and it's talking about having beliefs you can't justify) as well as a dynamic that feels like one that I recognize as one that has been occasionally present in the community.

Because it also mentions woo, so I think it’s talking about a broader class of unjustified beliefs than you think.

Even if this interpretation wasn't actually the author's intent, choosing to steelman the

...
7 · Kaj_Sotala · 9mo
My earlier comment mentioned that "there are also lots of different claims that seem (or even are) irrational but are pointing to true facts about the world [https://www.lesswrong.com/posts/MPj7t2w3nk4s9EYYh/incorrect-hypotheses-point-to-correct-observations] ." That was intended to touch upon "woo"; e.g. meditation used to be, and to some extent still is, considered "woo", but there nonetheless seem to be reasonable grounds to think that there's nonetheless something of value to be found in meditation (despite there also being various crazy claims around it). My above link mentions a few other examples (out-of-body experiences, folk traditions, "Ki" in martial arts) that have claims around them that are false if taken as the literal truth, but are still pointing to some true aspect of the world. Notably, a policy of "reject all woo things" could easily be taken to imply rejecting all such things as superstition that's not worth looking at, thus missing out on the parts of the woo that were actually valuable. IME, the more I look into them, the more I come to find that "woo" things that I'd previously rejected as not worth looking at because of them being obviously woo and false, are actually pointing to significantly valuable things. (Even if there is also quite a lot of nonsense floating around those same topics.) That's fair.
Sasha Chapin on bad social norms in EA

If this is what the line was saying, I agree. But it’s not: having intuitions & a track record (or some other reason to believe) that those intuitions correlate with reality, and having useful but known-to-be-false models of the world, is a far cry from holding unjustified beliefs & believing in woo, and a lack of the latter is what the post actually claims is the toxic social norm in rationality.

6 · Kaj_Sotala · 9mo
What makes you think it isn't? To me it seems both like a reasonable interpretation of the quote (private guts are precisely the kinds of positions you can't necessarily justify, and it's talking about having beliefs you can't justify) as well as a dynamic that feels like one that I recognize as one that has been occasionally present in the community. Fortunately posts like the one about private guts have helped push back against it. Even if this interpretation wasn't actually the author's intent, choosing to steelman the claim in that way turns the essay into a pretty solid one, so we might as well engage with the strongest interpretation of it.
Sasha Chapin on bad social norms in EA

Sure, but that isn’t what the quoted text is saying. Trusting your gut or following social norms are not even on the same level as woo, or adopting beliefs with no justification.

If the harmful social norms Sasha actually had in mind were not trusting your gut & violating social norms with no gain, then I’d agree these actions are bad, and possibly a result of social norms in the rationality community. Another alternative is that the community’s made up of a bunch of socially awkward nerds, who are known for their social ineptness and inability to trust their gut.

But as it stands, this doesn’t seem to be what’s being argued, as the quoted text is tangential to what you said at best.

Sasha Chapin on bad social norms in EA

you must reject beliefs that you can’t justify, sentiments that don’t seem rational, and woo things.

This isn’t a toxic social norm. This is the point of rationality, is it not?

6 · Kaj_Sotala · 9mo
There are a few different ways of interpreting the quote, but there's a concept of public positions and private guts [https://www.lesswrong.com/posts/Zbf2L4ZJf4ykZqmPA/public-positions-and-private-guts]. Public positions are ones that you can justify in public if pressed on, while private guts are illegible intuitions you hold which may nonetheless be correct - e.g. an expert mathematician may have [https://terrytao.wordpress.com/career-advice/theres-more-to-mathematics-than-rigour-and-proofs/] a strong intuition that a particular proof or claim is correct, which they will then eventually translate to a publicly-verifiable proof. As another example, in the recent dialog on AGI alignment [https://forum.effectivealtruism.org/posts/iGYTt3qvJFGppxJbk/ngo-and-yudkowsky-on-alignment-difficulty], Yudkowsky frequently referenced having strong intuitions about how minds work that come from studying specific things in detail (and from having "done the homework"), but which he does not know how to straightforwardly translate into a publicly justifiable argument. Private guts are very important and arguably the thing that mostly guides people's behavior, but they are often also ones that the person can't justify. If a person felt like they should reject any beliefs they couldn't justify, they would quickly become incapable of doing anything at all. Separately, there are also lots of different claims that seem (or even are) irrational but are pointing to true facts about the world [https://www.lesswrong.com/posts/MPj7t2w3nk4s9EYYh/incorrect-hypotheses-point-to-correct-observations].

Ah. In one sense, a core part of rationality is indeed rejecting beliefs you can't justify. Similarly, a core part of EA is thinking carefully about your impact. However, I think one claim you could make here is that naively, intensely optimising these things will not actually win (e.g. lead to the formation of accurate beliefs; save the world). Specifically:

  • Rationality: often a deep integration with your feelings is required to form accurate beliefs--paying attention to a note of confusion, or something you can't explain in rational terms yet. Indeed, som
...
What stops you doing more forecasting?

Overthinking forecasts, which causes writing them down & tracking them diligently to be too much of a mental overhead for me to bother with.

What are the bad EA memes? How could we reframe them?

When I introduce AI risk to someone, I generally start by talking about how we don't actually know what's going on inside our ML systems, how we're bad at making their goals what we actually want, and how we have no way of trusting that the systems actually have the goals we're telling them to optimize for.

Next I say this is a problem because, as the state of the art of AI progresses, we're going to be giving more and more power to these systems to make decisions for us, and if they are optimizing for goals different from ours this could have terrible effec...

D0TheMath's Shortform

I don't know what the standard approach would be. I haven't read any books on evolutionary biology. I did listen to a bit of this online lecture series: https://www.youtube.com/watch?v=NNnIGh9g6fA&list=PL848F2368C90DDC3D and it seems fun & informative.

2 · acylhalide · 9mo
Thanks!
EA Online Learning Buddy Platform

Great idea! Note also the existence of The University of Bayes on Discord. It doesn’t focus specifically on EA-aligned subject areas, but it is doing something similar to your proposal: i.e., you can freely join classes and learn topics like Bayesian statistics, calculus, and machine learning with other members of the Discord.

D0TheMath's Shortform

During this discussion I’ve been using the models I’ve been learning for understanding the problems associated with inner alignment to model evolution, as evolution is a stochastic gradient descent process, so many of the arguments for properties that trained models should have can be applied to evolutionary processes.

So I guess you can start with Hubinger et al’s Risks from Learned Optimization? But this seems a nonstandard approach to trying to learn evolutionary biology.

1 · acylhalide · 9mo
I've read that paper :) I'll take the standard approach then, is there any material you'd recommend?
D0TheMath's Shortform

Do you feel it is possible for evolution to select for beings who care about their copies in Everett branches, over beings that don't? For the purposes of this question let's say we ignore the "simplicity" complication of the previous point, and assume both species have been created, if that is possible.

It likely depends on what it means for evolution to select for something, and for a species to care about its copies in other Everett branches. It's plausible to imagine a very low-amplitude Everett branch which has a species that uses quantum mechanica...

1 · acylhalide · 9mo
Makes sense. Valid. As an aside, would you recommend any material or book on evolutionary bio? Ideally focussed particularly on human behaviour, cooperation, social behaviours, psychology, that kind of stuff. Just out of curiosity, since you seem more knowledgeable than me.
D0TheMath's Shortform

Evolution doesn't select for that, but it's also important to note that such tendencies are not selected against, and the value "care about yourself, and others" is simpler than the value "care about yourself, and others except those in other Everett branches", so we should expect people to generalize "others" as including those in Everett branches, in the same way that they generalize "others" as including those in the far future.

Also, while you cannot meaningfully influence Everett branches which have split off in the past, you can influence Everett branches that will split off some time in the future.

1 · acylhalide · 9mo
Great reply! I'd be keen to know why you say that, although it feels less important in the discussion after reading your other points. Yep this is valid. If I had a deeper understanding about Everett branches maybe I could ascribe non-zero care to them, same as I ascribe non-zero care to far future. Maybe I'm just hesitant to commit in the face of uncertainty. This is valid, I didn't think it through. Do you feel it is possible for evolution to select for beings who care about their copies in Everett branches, over beings that don't? For the purposes of this question let's say we ignore the "simplicity" complication of the previous point, and assume both species have been created, if that is possible. I'm still trying to wrap my head around how evolution even works in such a world.
D0TheMath's Shortform

I’m not certain. I’m tempted to say I care about them in proportion to their “probabilities” of occurring, but if I knew I was on a very low-“probability” branch & there was a way to influence a higher “probability” branch at some cost to this branch, then I’m pretty sure I’d weight the two equally.

1 · acylhalide · 9mo
Got it. I personally find it counter-intuitive to care about infinitely many realities I cannot causally impact. (And there really are practically infinitely many, molecules move at tens if not hundreds of metres per second) I'm pretty sure many people won't take it seriously for the same reason. But maybe some could, if you post more about it. I'm unkeen to comment on whether we should or shouldn't care in an prescriptivist sense. What I will note however is that we are likely not trained to care for them, as part of our genetic training history, in a purely descriptivist sense. Evolution can select for "beings who derive positive neurochemical rewards from ensuring their own survival and the survival of beings with similar genetic code" (which the mind translates as "humans who care about themselves and each other"). Evolution can't select for "beings who care about their copies on other branches" because caring or not caring has no impact on the survival of either you or the copies.
D0TheMath's Shortform

Are there any obvious reasons why this line of argument is wrong:

Suppose the Everett interpretation of QM is true, and an x-risk curtailing humanity's future is >99% certain, with no leads on a solution to it. Then, given a QM bit generator which generates some high number of bits, for any particular combination of bits there exists a universe in which that combination was generated. In particular, the combination of bits encoding actions one can take to solve the x-risk is generated in some world. Thus, one should use such a QM bit generator to genera...
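One way to make the "some world" step quantitative (a sketch, assuming the generator produces n independent, fair quantum bits): every specific n-bit string is realized on some branch, but the Born weight of any particular string, including one encoding a correct solution, is

$$ \Pr(\text{specific } n\text{-bit string}) = 2^{-n}, \qquad \text{e.g. } 2^{-8192} \text{ for a 1 kB (8192-bit) output}. $$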

3 · acylhalide · 9mo
Do you care about Everett branches other than your own? In a moral sense.
Discussion with Eliezer Yudkowsky on AGI interventions

Perhaps its best strategy would be to play nice for the time being so that humans would voluntarily give it more compute and control over the world.

This is essentially the thesis of the Deceptive Alignment section of Hubinger et al's Risks from Learned Optimization paper, and related work on inner alignment.

Hm, if an agent is consequentialist, then it will have convergent instrumental subgoals. But what if the agent isn't consequentialist to begin with? For example, if we imagine that GPT-7 is human-level AGI, this AGI might have human-type common sen

...
D0TheMath's Shortform

I saw this comment on LessWrong

This seems noncrazy on reflection.

10 million dollars will probably have very small impact on Terry Tao's decision to work on the problem. 

OTOH, setting up an open invitation for all world-class mathematicians/physicists/theoretical computer science to work on AGI safety through some sort of sabbatical system may be very impactful.

Many academics, especially in theoretical areas where funding for even the very best can be scarce, would jump at the opportunity of a no-strings-attached sabbatical. The no-strings-attached is

...
Why Undergrads Should Take History Classes

I'm skeptical of your claim that primary sources are better than secondary books. In particular, the insight-to-effort ratio of primary sources seems very small: given a secondary book which comes recommended by people knowledgeable in the field, you can get approximately the same insights as from the primary source, but with far, far less effort.

Can you expand on why you think the fidelity of the insight transfer from primary to secondary sources is low, or why I'm overestimating the difficulty of reading primary sources (or give some other reason I should care more about primary sources which I haven't thought of)?

7 · ThomasW · 9mo
I definitely don't mean to say that classes shouldn't have secondary sources; they should and these sources are important (I am less excited about tertiary sources). I think a key to primary sources is something like the ability to read current sources as primary sources. If you develop the skills to be able to understand primary sources in the context of history, it helps enable you to be able to evaluate primary sources of today. I see history as a good way to learn how to evaluate the world at present, and the world at present has more primary than secondary sources about it.
[Creative writing contest] Blue bird and black bird

Death of the author interpretation: currently there are few large EA-aligned organizations which were created by EAs. Much of the funding for EA-aligned projects just supports smart people who happen to be doing effective altruism.

The blue bird represents the EA community going to smart people, symbolized by the black bird, and asking why they’re working on what they’re working on. If the answer is a good one, the community / blue bird will pitch in and help.

2 · Lizka · 1y
I'm highly enjoying the "death of the author" interpretation (and even just its existence), thanks! :)
2 · Hamish Huggard · 1y
Oh nice. Socratic irony. I like it.
[Creative writing contest] Blue bird and black bird

I felt some cognitive dissonance at the small tree / lumberjack scene. Black Bird could have helped fight the lumberjack, then cut down the sprout. So it doesn’t map very well to actual catastrophic risk tradeoffs. I don’t know how to fix it though.

8 · Matt_Sharp · 1y
Yeah, and I don't think the example of the sprout maps particularly well to catastrophic risks in itself. If the sprout grows into a giant oak tree that is literally right next to their current tree, it seems like they could easily just move to the giant oak tree. It sounds like the 'giant oak' would eventually be bigger than their current tree, meaning more space per bird, allowing for more birds. Oh and some birds eat acorns! In this case I think black bird could be making things worse for future birds.

I did also initially think that it might be good to try to change the lumberjack instance, if possible, although it wasn't for the same reason: I just feel that there is much more of a case to make that the lumberjack deserves a whole-of-community effort, since there is a plausible chance the extra bird could make a difference. But after considering this about the non-urgency of the sprout vs the lumberjack, I especially feel it may not be the best example. Still, I understood the message/idea, and it's hard to know how non-EAs might react to the situation. Just something to keep in mind.

Needed: Input on testing fit for your career

This seems like it could be a very valuable resource, and I will totally use it.

3 · Miranda_Zhang · 1y
Agreed! Most of my EA networking is geared towards answering this question.
In favor of more anthropics research

Ah, thanks. It was a while ago, so I guess I was misremembering.
