You can imagine that OP has limited opportunities, interest, or time to improve, and can only focus on one thing. In that case I'd strongly encourage focusing on higher-quality arguments over better style, as I usually find the lack of the former much more off-putting than the latter.
For what it's worth, I have the opposite reaction: between the OP having higher-quality arguments and having less snark, I would strongly prefer higher-quality arguments.
I have a mild preference for narrations not to show up on the front page of the EA Forum, and instead eg be comments on the relevant posts, or bundled together in a long intro sequence post each time. I don't know how unusual this preference is (eg I'm also maybe in the fifth percentile of EAs for how many podcasts I listen to).
I know this is a really mainstream opinion, but I recently watched a recording of the musical Hamilton and I really liked it. I think Hamilton (the character, not the historical figure, about whom I know very little) has many key flaws (most notably selfishness, pride, and misogyny(?)) but also virtues/attitudes that are useful to emulate. I especially found the song Non-stop (lyrics) highly relatable/aspirational, at least for a subset of EA research that looks more like "reading lots and synthesizing many thoughts quickly" and less like "think ver... (read more)
I thought Decision Problem: Paperclips introduced a subset of AI risk arguments fairly well in gamified form, but I'm not aware of anybody for whom the game sparked enough interest in AGI alignment/risk/safety to work on it. Does anybody else on this forum have data/anecdata?
Do we have strong evidence that "average donors" even have "cause areas," as an accurate/descriptively useful mapping of how they understand the world? My young and pre-EA self feels so distant from me that it's barely worth mentioning, but I vaguely recall that teenage me donated to things as disparate as earthquake relief in Sichuan, local beggars, LGBT stuff and probably something something climate change. I don't think I ever consciously considered until several years later how dumb it was to a) donate to multiple things at the tiny amounts I was donating at the time, and b) have multiple cause areas of very varying cost-effectiveness and theories of change.
I agree that a good Bayesian should grant the hypothesis of continuity nonzero credence, as well as other ways the universe can be infinite. I think the critique would be more compelling if it were framed as "there's a small chance the universe is infinite, Bayesian consequentialism by default will incorporate a small probability of infinity, and the decision theory can potentially blow up under those constraints." Then we see that this is a special unresolved case of infinity (which is likely an issue with many other decision theories) rather than a claim that the... (read more)
Hmm, I think 3 does not follow from 2. If I think there's a 10% chance I will quit my job upon further reflection, and I do the reflection, and then quit my job, this does not mean that before the reflection I cannot make any quantified statements about the expected earnings from my job.
Keen to get feedback on whether I've over/underestimated any variables.
Average person's value of time (USD)
A normal distribution between $20-30 seems too low, since many EA applicants counterfactually have upper-middle-class professional jobs in the US.
I also want to flag that you are assuming that the time is "unpaid labour time", but many EA orgs do in fact pay for work trials. A "trial week" especially should almost always be paid.
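To make the disagreement over the value-of-time variable concrete, here is a minimal Monte Carlo sketch (all numbers hypothetical, chosen only for illustration): if applicants' counterfactual wages look more like US professional salaries, a wide lognormal centered above $20-30/hr may fit better than a narrow normal distribution, and the implied cost of an unpaid trial week changes accordingly.

```python
import random

random.seed(0)

def sample_value_of_time():
    # Lognormal with median ~$40/hr and a wide spread (assumption,
    # not a fitted estimate) to reflect professional-salary counterfactuals.
    return random.lognormvariate(3.7, 0.5)

TRIAL_HOURS = 40  # one "trial week" (assumption)
samples = [sample_value_of_time() * TRIAL_HOURS for _ in range(100_000)]
mean_cost = sum(samples) / len(samples)
print(f"mean counterfactual cost of an unpaid trial week: ${mean_cost:,.0f}")
```

Swapping in different distributions here is a cheap way to check how sensitive the overall estimate is to this one variable.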
Got it, I agree with you that this can be what's going on! When the intuition is spelled out we clearly see the "trick" is comparing individual incomes as if they were comparable to household incomes. Living in the Bay Area, I think some of my friends do forget that in addition to being extremely rich by international standards, they are also somewhere between fairly and extremely rich by American standards as well.
Speaking of the second video, I have my own fan theory that "Blank Space" is based on popular manga and anime series Death Note.
I dunno, I feel like these are two fairly different claims. I also expect the average non-American household to be larger than the average American household, not smaller (so there will be <6 B households worldwide).
I thought you were making an empirical claim with the quoted sentence, not a normative claim.
Not your fault, but "the median American household is comfortably in the top richest 1% globally" does not seem plausible to me, because the US has ~4% of the world population.
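A quick back-of-the-envelope check makes the implausibility concrete (all figures are rough, assumed 2021-ish numbers, not sourced from the original claim):

```python
# Rough global figures (assumptions for illustration only)
world_pop = 7.9e9
avg_household_size = 3.5                 # assumed global average
world_households = world_pop / avg_household_size   # ~2.3B households

us_households = 128e6                    # roughly, per US Census
top_1pct = 0.01 * world_households       # ~23M households

# The median US household has ~64M US households richer than it,
# which alone exceeds the entire global top 1% of households:
richer_us_households = us_households / 2
print(richer_us_households > top_1pct)   # True, so the claim can't hold
```

Even granting every one of those richer US households a global-top-1% slot, the median US household is crowded out by its own compatriots.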
Below this level of consumption, they’ll prefer consuming dollars to donating them, and so they will always consume them. And above it, they’ll prefer donating dollars to consuming them, and so will always donate them. And this is why the GWWC pledge asks you to input the C such that dF(C)/dC = 1, and you pledge to donate everything above it and nothing below it.
Wait, the standard GWWC pledge is 10% of your income, presumably based on cultural norms like tithing, which in themselves might reflect an implicit understanding that (if we assume log utility)... (read more)
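The quoted threshold logic can be sketched numerically. Assuming log utility of consumption and a constant moral value per donated dollar (both simplifications, with made-up numbers), the cutoff falls out directly: consume up to the point where marginal utility 1/c equals the per-dollar value v, i.e. c* = 1/v, and donate everything above it.

```python
# Hedged sketch: u(c) = ln(c), so marginal utility is u'(c) = 1/c.
# If each donated dollar has constant moral value v (an assumption),
# the optimal policy is: consume up to c* = 1/v, donate the rest.
def optimal_split(income, v):
    c_star = 1.0 / v
    consumption = min(income, c_star)
    donation = income - consumption
    return consumption, donation

# Hypothetical numbers: $100k income, v calibrated so c* = $60k.
c, d = optimal_split(income=100_000, v=1 / 60_000)
print(f"consume ${c:,.0f}, donate ${d:,.0f}")
```

Note this threshold policy gives a donation *fraction* that rises with income, unlike a flat 10% pledge, which is part of the tension being pointed at.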
The set of all possible futures is infinite, regardless of whether we consider the life of the universe to be infinite. Why is this? Add to any finite set of possible futures a future where someone spontaneously shouts "1!", and a future where someone spontaneously shouts "2!", and a future where someone spontaneously shouts
Wait are you assuming that physics is continuous? If so, isn't this a rejection of modern physics? If not, how do you respond to the objection that there is a limited number of possible configurations for atoms in our controllable unive... (read more)
This seems unlikely from your description, but do you do or know of any work on biologics by any chance? I ask because I'm writing a report on cultured meat and would like a slightly larger pool of reviewers from adjacent industries (eg people who have experience scaling use of CHO cells).
For the first question, I was one of the forecasters who gave close to the current Metaculus median answer (~30%). I can't remember my exact reasoning, but roughly:
1. Outside view on how frequently things have changed + some estimates on how likely things are to change in the future, from an entirely curve-fitting perspective.
2. Decent probability that the current top charities will go down in effectiveness as the problems become less neglected/we've had stronger partial solutions for them/we discover new evidence about them. Concretely:
Malaria: CRISPR or ... (read more)
EDIT: I'm less certain this is true because I think I didn't fully update on how much the vaccines reduce the risks of covid for young people. I think maybe not getting tested is fine if you aren't likely to be exposed to non-vaccinated people and you aren't in a position to interact heavily with many people.
I was informed 3 days ago that someone at the event now has covid, likely from the event itself.
Dear Linchuan, One of the attendees of the EA Picnic let us know they developed COVID symptoms on Friday July 16th and tested positive on Sunday J
(I work for Rethink Priorities on a different team. I had no input into the charter cities intervention report other than feedback on a very early version of the draft. All comments here are truly my own. Due to time constraints I did not run this by anybody else at the org before commenting.)
The Rethink Priorities report used a 2017 World Bank article on special economic zones as the reference point for potential growth rates for charter cities. The World Bank report concludes, “rather than catalyzing economic development, in the aggregate, most zones’ pe
The belief that micro-credit has good investment ROIs for the typical recipient.
I lend some credence to the trendlines argument, but mostly think that humans are more likely to want to optimize for extreme happiness (or other positive moral goods) than extreme suffering (or other negatives/moral bads), and any additive account of moral goods will in expectation shake out to have a lot more positive moral goods than moral bads, unless you have really extreme inside views to think that optimizing for extreme moral bads is as likely as (or more likely than) optimizing for extreme moral goods. I do think there are nontrivial pro... (read more)
I think this is an interesting point but I'm not convinced that it's true with high enough probability that the alternative isn't worth considering. In particular, I can imagine luck/happenstance to shake out enough that arbitrarily powerful agents on one dimension are less powerful/rational on other dimensions. Another issue is the nature of precommitments. It seems that under most games/simple decision theories for playing those games (eg "Chicken" in CDT), being the first to credibly precommit gives you a strategic edge under most circumsta... (read more)
I think this is a much more plausible view of much of the drop-out phenomena than is “credit constraints.” First, strictly speaking, “credit constraints” is not a very good description of the problem. Let us take the author’s numbers seriously that the return to schooling is, say, 8-10 percent. Let us suppose that families in developing countries could borrow at the prime interest rate. The real interest rate in many countries in the world is around 8 to 10 percent. So given the opportunity to borrow at prime to finance schooling many households would rati
Now, there are many other ways that spending on primary education can be justified—that education is a universal human right, education is a merit good, the demands of political socialization demand universal education. I suspect that the actual positive theory of education has more to do with those than with the economic returns. But for the purposes of the present exercise of comparing alternative uses of public funds across sectors one cannot invoke “human rights” as a reason to spend on schooling without a counter of “intrinsic values” of an unchanged
I'm a generalist researcher at Rethink Priorities. I'm on the longtermism team and that's what I try to spend most of my time doing, but some of my projects touch on global health and some of my projects are relevant to animal welfare as well (I think doing work across cause areas is fairly common at RP, though this will likely decrease with time as the org gets larger and individual researchers become more specialized). I'm happy to talk about my job, but unclear how valuable this is, given that a) "generalist researcher" is probably one of the most wel... (read more)
In addition to what avacyn said about hydrolysates (very important! Amino acids are really expensive!), off the top of my head:
Not all actions humans are capable of doing are good.
I'd want the orange pill, I think.
Can you be less abstract and point, quantitatively, to which numbers I gave seem vastly off to you, and insert your own numbers? I definitely think my numbers are pretty fuzzy, but I'd like to see different ones rather than just arguing verbally. (Also, I think my actual original argument was a conditional claim, so it feels a little weird to be challenged on its premises! :))
Can you give a concrete and detailed (anonymized) example of this? As presented, it feels like the people you're talking to aren't saying something very useful, but I only have your side of the conversation so it might be helpful for us to understand in a bit more detail what was actually going on.
It was a serious question, maybe presented in a slightly glib way.
I too am excited about this! In the "about" page, you say:
Most of the posts on this blog are written at least a month before they're posted, sometimes much longer. I try to post things that are worth posting even so, hence the name "Cold Takes."
So my question here is, what's your preferred feedback policy/commenting norms? Should we bias towards more normal "EA Forum commenting norms" or closer to "write out our comments at least a month before they're posted, sometimes much longer, and only comment if upon >1 month of reflection we still think they're worth your time/attention to read?"
This comment made me laugh out loud, all the more so because I couldn't tell whether you were joking.
This is not exactly the answer you're looking for, and I'm not confident about this, but I think it's maybe good to first refine your reasons for working on AI risk and be clear about what you mean; after you get a good sense of what you mean (at least enough to convince a much more skeptical version of yourself), a more easily explainable version of the arguments may come naturally to you. (Take everything I say here with a huge lump of salt... FWIW I don't know how to explain EA or longtermism or forecasting stuff to my own parents, partially due to the language barrier.)
This seems surprisingly low to me. Do you have some notes or a writeup of the analysis somewhere?
On a semi-related note, Peter Singer appeared on the podcast of a Canadian MP, which I thought was pretty cool.
One additional risk: if done poorly, harsh criticism of someone else's blog post from several years ago could be pretty unpleasant and make the EA community seem less friendly.
I think I agree this is a concern. But just so we're on the same page here, what's your threat model? Are you more worried about
I'm actually super excited about this idea though - let's set some courtesy norms around contacting the author privately before red-teaming their paper and then get going!
Thanks for the excitement! I agree that contacting someone ahead of time might be good (so at least someone doesn't learn about their project being red teamed until social media blows up), but I feel like it might not mitigate most of the potential unpleasantness/harshness. Like I don't see a good cultural way to both incentivize Red Teaming and allow a face-saving way to refuse to let yo... (read more)
Is Microcovid.org or other people in EA tracking Delta and the possibility of scarier variants? Personally, I continue to follow some epidemiologists, virologists, and data people on Twitter, but other than that I've stopped following Covid almost completely. I'm wondering whether it's sane to assume that "the community" (or broader society) has enough of a grip on things and can give us forewarning in case the correct choice later is for (even fully vaccinated) people to go into partial or full lockdowns again.
No, a weaker claim than that: just that P(we spread to the stars | we don't all die or are otherwise curtailed by AI in the next 100 years) > 1%. (I should figure out my actual probabilities on AI and existential risk with at least moderate rigor at some point, but I've never actually done this so far.)
That's great to hear! But to be clear, not for risk adjustment? Or are you just not sure on that point?
Upon (brief) reflection I agree that relying on the epistemic savviness of the mentors might be too much and the best version of the training program will train a sort of keen internal sense of scientific skepticism that's not particularly reliant on social approval. If we have enough time I would float a version of a course that slowly goes from very obvious crap (marketing tripe, bad graphs) into things that are subtler crap (Why We Sleep, Bem ESP stuff) into weasely/motivated stuff (Hickel? Pinker? Sunstein? popular nonfiction in general?) into th... (read more)
We'll likely have at least one more internship round before you graduate, so stay tuned!
Hmm I feel more uneasy about the truthiness grounds of considering some of these examples as "ground truth" (except maybe the Clauset et al example, not sure). I'd rather either a) train people to Red Team existing EA orthodoxy stuff and let their own internal senses + mentor guidance decide whether the red teaming is credible or b) for basic scientific literacy stuff where you do want clear ground truths, let them challenge stuff that's closer to obvious junk (Why We Sleep, some climate science stuff, maybe some covid papers, maybe pull up examples from Calling Bullshit, which I have not read).
Hmm, I think the most likely way downside stuff will happen is by flipping the sign rather than reducing the magnitude; curious why your model is different. I wrote a bit more in the linked shortform.
FWIW I'm also skeptical of naive ex ante differences of >~2 orders of magnitude between causes, after accounting for meta-EA effects. That said, I also think maybe our culture will be better if we celebrate doing naively good things over doing things that are externally high status.* But I don't feel too strongly, main point of the shortform was just that I talk to some people who are disillusioned because they feel like EA tells them that their jobs are less important than other jobs, and I'm just like, whoa, that's just such a weird impression on... (read more)
I agree with this, and also I did try emphasizing that I was only using MIRI as an example. Do you think the post would be better if I replaced MIRI with a hypothetical example? The problem with that is that then the differences would be less visceral.
I think the world either ends (or suffers some other form of implied-permanent x-risk) in the next 100 years or it doesn't. And if the world doesn't end in the next 100 years, we will eventually either a) settle the stars or b) end or be drastically curtailed at some point >100 years out. I guess I assume b) is pretty low probability with AI, like much less than 99% chance. And 2 orders of magnitude isn't much when all the other numbers are pretty fuzzy and span that many orders of magnitude. (A lot of this is pretty fuzzy.)
I do agree with you that client quality and incentives are a serious potential problem here, especially when we consider potential funders other than Open Phil. A potential solution here is for the rest of the EA movement to make it clear that "you are more likely to get future work if you write truthful things, even if they are critical of your direct client/more negative than your client wants or is incentivizing you to write/believe," but maybe this message/nuance is hard to convey and/or may not initially seem believable to people more used to other field norms.
Thanks for the detailed response!
The only factor particular to consulting that I could see weighing against truth-seeking would be the desire to sell future work to the client... but to me that's resolved by clients making clear that what the client values is truth, which would keep incentives well-aligned.
Hmm, on reflection maybe the issue isn't as particular to consulting; I think the issue here isn't that people by default have overwhelming incentives against truth, but just that actually seeking truth is such an unusual preference in t... (read more)