Red teaming papers as an EA training exercise?
I think a plausibly good training exercise for EAs who want to get better at empirical/conceptual research is to deep-dive into seminal papers/blog posts and attempt to identify all the empirical and conceptual errors in past work, especially a) writings by other respected EAs or b) other work that we otherwise think of as especially important.
I'm not sure how knowledgeable you have to be to do this well, but I suspect it's approachable for smart people who have finished high school, and certainly for those who have finished undergrad with a decent science or social science degree.
I think this is good career building for various reasons:
One additional risk: if done poorly, harsh criticism of someone else's blog post from several years ago could be pretty unpleasant and make the EA community seem less friendly.
I'm actually super excited about this idea though - let's set some courtesy norms around contacting the author privately before red-teaming their paper and then get going!
Upon (brief) reflection, I agree that relying on the epistemic savviness of the mentors might be too much, and the best version of the training program would train a keen internal sense of scientific skepticism that's not particularly reliant on social approval. If we have enough time, I would float a version of a course that slowly progresses from very obvious crap (marketing tripe, bad graphs), to things that are subtler crap (Why We Sleep, the Bem ESP stuff), to weaselly/motivated stuff (Hickel? Pinker? Sunstein? popular nonfiction in general?), to things that are genuinely hard judgment calls (papers/blog posts/claims accepted by current elite EA consensus). But maybe I'm just remaking the Calling Bullshit course with a higher endpoint.

(I also think it's plausible/likely that my original program of just giving somebody an EA-approved paper + say 2 weeks to try their best to red team it will produce interesting results, even without all these training wheels.)
Here are some things I've learned from spending the better part of the last 6 months either forecasting or thinking about forecasting, with an eye towards beliefs that I expect to be fairly generalizable to other endeavors.
Note that I assume that anybody reading this already has familiarity with Philip Tetlock's work on (super)forecasting, particularly Tetlock's 10 commandments for aspiring superforecasters.
1. Forming (good) outside views is often hard but not impossible. I think there is a common belief/framing in EA and rationalist circles that coming up with outside views is easy, and that the real difficulties are a) originality in inside views and b) the debate over how much to trust outside views vs inside views.
I think this is directionally true (original thought is harder than synthesizing existing views), but it hides a lot of the details. It's often quite difficult to come up with and balance good outside views that are applicable to a situation. See Manheim and Muehlhauser for some discussions of this.
2. For novel out-of-distribution situations, "normal" people often trust centralized data/ontologies more than is warranted. See here for a discussion...
Consider making this a top-level post! That way, I can give it the "Forecasting" tag so that people will find it more often later, which would make me happy, because I like this post.
General suspicion of the move away from expected-value calculations and cost-effectiveness analyses.
This is a portion taken from a (forthcoming) post about some potential biases and mistakes in effective altruism that I've analyzed by looking at cost-effectiveness analyses. Here, I argue that the general move (at least outside of human and animal neartermism) away from Fermi estimates, expected values, and other explicit calculations just makes those biases harder to see, rather than fixing them.
I may delete this section from the actual post as this point might be a distraction from the overall point.
I'm sure there are very good reasons (some stated, some unstated) for moving away from cost-effectiveness analysis. But I'm overall pretty suspicious of the general move, for a similar reason that I'd be suspicious of non-EAs telling me that we shouldn't use cost-effectiveness analyses to judge their work, in favor of, say, systematic approaches, good intuitions, and specific contexts like lived experiences (cf. Beware Isolated Demands for Rigor):
I'm sure you have specific arguments for why in your case quantitative approaches aren't very necessary and useful, because your uncert...
cross-posted from Facebook.
Sometimes I hear people who caution humility say something like "this question has stumped the best philosophers for centuries/millennia. How could you possibly hope to make any progress on it?" While I concur that humility is frequently warranted, and that in many specific cases that injunction is reasonable, I think the framing is broadly wrong.

In particular, using geologic time rather than anthropological time hides the fact that there probably weren't that many people actively thinking about these issues, especially carefully, in a sustained way, and making sure to build on the work of the past. For background, 7% of all humans who have ever lived are alive today, and living people compose 15% of total human experience so far! It would not surprise me if there are about as many living philosophers today as there were dead philosophers in all of written history.

For some specific questions that particularly interest me (eg population ethics, moral uncertainty), the total research work done on these questions is, by a generous estimate, less than five philosopher-lifetimes. Even for classical age-old philosophical dilemmas/"grand projects...
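To make the demographic claims above concrete, here's a rough back-of-envelope sketch in Python. All inputs are my assumptions (roughly in line with standard demographic estimates like the PRB's), not authoritative figures, and the second output is quite sensitive to the assumed historical lifespan:

```python
# Back-of-envelope check of the "7% alive / 15% of experience" claims.
# All inputs are rough assumptions, not authoritative figures.

ever_born = 108e9          # humans ever born (assumed, PRB-style estimate)
alive_now = 7.8e9          # alive today (assumed)
avg_age_living = 31        # mean age of the living population (assumed)
avg_years_lived_dead = 17  # mean years lived per death, dragged down by
                           # historically high infant mortality (assumed)

share_alive = alive_now / ever_born

living_years = alive_now * avg_age_living  # experience of the living so far
dead_years = (ever_born - alive_now) * avg_years_lived_dead
share_experience = living_years / (living_years + dead_years)

print(f"share of humans ever born who are alive: {share_alive:.1%}")
print(f"living people's share of all human experience: {share_experience:.1%}")
# With these inputs: ~7% and ~12%, in the ballpark of the figures quoted above.
```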
Recently I was asked for tips on how to be less captured by motivated reasoning and related biases, a goal/quest I've slowly made progress on for the last 6+ years. I don't think I'm very good at this, but I do think I'm likely above average, and it's also something I aspire to be better at. So here is a non-exhaustive and somewhat overlapping list of things that I think are helpful:
While talking to my manager (Peter Hurford), I realized that by default, when "life" gets in the way (concretely, last week a fair number of hours were taken up by management training seminars I wanted to attend before I get my first interns; this week I'm losing ~3 hours to a covid vaccination appointment and in expectation will lose ~5 more to side effects), research (ie the most important thing on my agenda, and the thing I'm explicitly being paid to do) is the first to go. This seems like a bad state of affairs.

I suspect that this is more prominent in me than in most people, but also suspect it's normal for others as well. More explicitly, I don't have much "normal" busywork like paperwork or writing grants, and I try to limit my life-maintenance tasks (of course I don't commute, and there's other stuff in that general direction). So all the things I do are at least moderately useful or entertaining: EA/work stuff like reviewing/commenting on others' papers, meetings, mentorship stuff, Slack messages, and reading and doing research, as well as personal entertainment stuff like social media, memes, videogames, etc. (which I do much more than I'm willing to admit...
I liked this, thanks. I hear that this is similar to a common problem for many entrepreneurs: they spend much of their time on urgent/small tasks, and not the really important ones. One solution recommended by Matt Mochary is to dedicate 2 hours per day of your most productive time to working on the most important problems.
I've occasionally followed this, and mean to more.
Thanks salius! I agree with what you said. In addition,
Catalyst (biosecurity conference funded by the Long-Term Future Fund) was incredibly educational and fun.
Random scattered takeaways:
1. I knew going in that everybody there would be much more knowledgeable about bio than I was. I was right. (Maybe more than half the people there had PhDs?)
2. Nonetheless, I felt like most conversations were very approachable and informative for me, from Chris Bakerlee explaining the very basics of genetics to me, to asking Anders Sandberg about some research he did that was relevant to my interests, to Tara Kirk Sell detailing recent advances in technological solutions in biosecurity, to random workshops where novel ideas were proposed...
3. There was a strong sense of energy and excitement from everybody at the conference, much more than at other conferences I've been to (including EA Global).
4. From casual conversations in EA-land, I had gotten the general sense that work in biosecurity is fraught with landmines and information hazards, so it was oddly refreshing to hear so many people talk openly about exciting new possibilities to de-risk biological threats and promote a healthier future, while still being fully cognizant...
Publication bias alert: Not everybody liked the conference as much as I did. Someone I know and respect thought some of the talks weren't very good (I agreed with them about the specific examples, but didn't think it mattered because really good ideas/conversations/networking at an event + gestalt feel is much more important for whether an event is worthwhile to me than a few duds).
That said, on a meta level, you might expect that people who really liked (or hated, I suppose) a conference/event/book are more likely to write detailed notes about it than people who were lukewarm about it.
Reading Bryan Caplan and Zach Weinersmith's new book has made me somewhat more skeptical about Open Borders (from a high prior belief in its value).
Before reading the book, I was already aware of the core arguments (eg, Michael Huemer's right to immigrate, basic cosmopolitanism, some vague economic stuff about doubling GDP).
I was hoping the book would have more arguments, or stronger versions of the arguments I'm familiar with.
It mostly did not.
The book did convince me that the prima facie case for open borders was stronger than I thought. In particular, the section where he argued that a bunch of different normative ethical theories should, all else equal, lead to open borders was moderately convincing. It would have updated me towards open borders if I subscribed to a stronger "weight all mainstream ethical theories equally" version of moral uncertainty, or if I had previously had a strong belief in a moral theory that I believed was against open borders.
However, I already fairly strongly subscribe to cosmopolitan utilitarianism and see no problem with aggregating utility across borders. Most of my concerns with open borders are rel...
Over a year ago, someone asked the EA community whether it’s valuable to become world-class at an unspecified non-EA niche or field. Our Forum’s own Aaron Gertler responded in a post, saying basically that there’s a bunch of intangible advantages for our community to have many world-class people, even if it’s in fields/niches that are extremely unlikely to be directly EA-relevant.
Since then, Aaron became (entirely in his spare time, while working 1.5 jobs) a world-class Magic: The Gathering player, recently winning the DreamHack MtGA tournament and getting $30,000 in prize money, half of which he donated to GiveWell.
I didn't find his arguments overwhelmingly persuasive at the time, and I still don't. But it's exciting to see other EAs coming up with unusual theories of change, actually executing on them, and then being wildly successful.
Something that came up in a discussion with a coworker recently: often internet writers want some (thoughtful) comments, but not too many, since too many comments can be overwhelming. Or at the very least, the marginal value of additional comments is usually lower for authors when there are already many comments. However, the incentives for commenters are very different: by default, people want to comment on the most exciting/cool/wrong thing, so internet posts can easily attract either many comments or none. (I think) very little self-policing is done; if anything, a post with many comments becomes more attractive for generating secondary or tertiary comments, not less.

Meanwhile, internet writers who do great work often do not get the desired feedback. As evidence: for ~a month, I was the only person who commented on What Helped the Voiceless? Historical Case Studies (which later won the EA Forum Prize).

This would be less of a problem if internet communication were primarily about idle speculation and cat pictures. But of course this is not the primary way I and many others on the Forum engage with the internet. Frequently, the primary publication v...
Very instructive anecdote on motivated reasoning in research (in cost-effectiveness analyses, even!):
Back in the 90's I did some consulting work for a startup that was developing a new medical device. They were honest people--they never pressured me. My contract stipulated that I did not have to submit my publications to them for prior review. But they paid me handsomely, wined and dined me, and gave me travel opportunities to nice places. About a decade after that relationship came to an end, amicably, I had occasion to review the article I had published about the work I did for them. It was a cost-effectiveness analysis. Cost-effectiveness analyses have highly ramified gardens of forking paths that biomedical and clinical researchers cannot even begin to imagine. I saw that at virtually every decision point in designing the study and in estimating parameters, I had shaded things in favor of the device. Not by a large amount in any case, but slightly at almost every opportunity. The result was that my "base case analysis" was, in reality, something more like a "best case" analysis. Peer review did not discover any of this during the publication process, because each individual esti...
Do people have advice on how to be more emotionally resilient in the face of disaster?
I spent some time this year thinking about things that are likely to be personally bad in the near future (most salient to me right now is the possibility of a contested election + riots, but this also applies to the ongoing Bay Area fires/smoke, to a lesser extent the ongoing pandemic, and to future events like climate disasters and wars). My guess is that, after a modicum of precaution, the direct objective risk isn't very high, but it'll *feel* like a really big deal all the time.
In other words, being perfectly honest about my own personality/emotional capacity, there's a high chance that if the street outside my house is rioting, I just won't be productive at all (even if I did the calculations and the objective risk is relatively low).
So I'm interested in anticipating this phenomenon and building emotional resilience ahead of time so such issues won't affect me as much.
I'm most interested in advice for building emotional resilience for disaster/macro-level setbacks. I think it'd also be useful to build resilience for more personal setbacks (eg career/relationship/impact), but I naively suspect that this is less tractable.
I think it might be interesting/valuable for someone to create "list of numbers every EA should know", in a similar vein to Latency Numbers Every Programmer Should Know and Key Numbers for Cell Biologists.
One obvious reason against this is that maybe EA is too broad, and the numbers we actually care about are too domain-specific to particular queries/interests; but I nonetheless think it's worth investigating.
I find the unilateralist’s curse a particularly valuable concept to think about. However, I now worry that “unilateralist” is an easy label to tack on, and whether a particular action is unilateralist or not is susceptible to small changes in framing.
Consider the following hypothetical situations:
New Project/Org Idea: Jepsen for EA research or EA org impact assessments
Note: This is an updated version of something I wrote for “Submit grant suggestions to EA Funds”
What is your grant suggestion?
An org or team of people dedicated to red teaming EA research. This can include checks for both factual errors and conceptual ones. Like Jepsen, but for research from/within EA orgs. Maybe start with one trusted person and then expand outwards.
After demonstrating impact/accuracy for, say, 6 months, it could become a "security" consultancy: either a) for EA orgs interested in testing the validity of their own research, or b) an external impact consultancy for the EA community/EA donors interested in vetting, or even conducting, the impact assessments of specific EA orgs. For a), I imagine Rethink Priorities may want to become a customer (speaking for myself, not the org).
Potentially good starting places:
- Carefully comb every chapter of The Precipice
- Go through ML/AI Safety papers and (after filtering on something like prestige or citation count) pick some papers at random to red team
- All of Tetlock's research on forecasting, particularly the ones with factoids most frequently cited in EA circles...
Edit: By figuring out ethics I mean both right and wrong in the abstract but also what the world empirically looks like so you know what is right and wrong in the particulars of a situation, with an emphasis on the latter.

I think a lot about ethics. Specifically, I think a lot about "how do I take the best action (morally), given the set of resources (including information) and constraints (including motivation) that I have?" I understand that in philosophical terminology this is only a small subsection of applied ethics, and yet I spend a lot of time thinking about it.
One thing I learned from my involvement in EA for some years is that ethics is hard. Specifically, I think ethics is hard in the way that researching a difficult question or maintaining a complicated relationship or raising a child well is hard, rather than hard in the way that regularly going to the gym is hard.
When I first got introduced to EA, I believed almost the opposite (this article presents something close to my past views well): that the hardness of living ethically is a matter of execution and will, rather than that of constantly making tradeoffs in a difficult-to-navigate domain.
I still...
I've started trying my best to consistently address people on the EA Forum by username whenever I remember to do so, even when the username clearly reflects their real name (eg Habryka). I'm not sure this is the right move, but overall I think it creates slightly better cultural norms, since it pushes us (slightly) towards pseudonymous commenting/"Old Internet" norms, which I think is slightly better for truth-seeking and for judging arguments by their quality, rather than being too conscious of status-y/social monkey effects. (It's possible I'm more sensitive to this than most people.)
I think some years ago there used to be a belief that people would be less vicious (in the mean/dunking way) and more welcoming if we used real-name policies, but I think reality has mostly falsified this hypothesis.
I've finally read the Huw Hughes review of the CE Delft Techno-Economic Analyses of cultured meat (our summary here) and thought it was interesting commentary on the CE Delft analysis, though less informative on the overall question of cultured meat scaleups than I'd hoped. Overall, their position on CE Delft's analysis was similar to ours, except perhaps more bluntly worded. They were more critical in some parts and less critical in others.

Things I liked about the Hughes review:
What will a company/organization that has a really important secondary mandate to focus on general career development of employees actually look like? How would trainings be structured, what would growth trajectories look like, etc?
When I was at Google, I got the distinct impression that while "career development" and "growth" were common buzzwords, most of the actual programs on offer were more focused on employee satisfaction/retention than growth. (For example, I've essentially never gotten any feedback on my selection of training courses or books that I bought with company money, which at the time I thought was awesome flexibility, but in retrospect was not a great sign of caring about growth on the part of the company).
Edit: Upon a reread I should mention that there are other ways for employees to grow within the company, eg by having some degree of autonomy over what projects they want to work on.
I think there are theoretical reasons to expect employee career growth to be underinvested in by default. Namely, the costs of career growth are borne approximately equally by the employer and the employee (obviously this varies from case to case), while...
I'm worried about a potential future dynamic where an emphasis on forecasting/quantification in EA (especially if it has significant social or career implications) will bias people towards silence/vagueness in areas where they don't feel ready to commit to a probability forecast.
I think it's good that we appear to be moving in the direction of greater quantification and accountability for probability estimates, but there's a very real risk that people will see this and become scared of putting their loose thoughts/intuitive probability estimates on record. This may leave us with overall worse group epistemics, because people hedge too much and are unwilling to commit to public probabilities.
See the analogy to Jeff Kaufman's arguments on responsible transparency consumption:
I'm pretty confused about the question of standards in EA. Specifically, how high should they be? How do we trade off extremely high evidential standards against quantity, either by asking people/ourselves to sacrifice quality for quantity, or by scaling up the number of people doing work while accepting lower quality? My current thinking:
1. There are clear, simple, robust-seeming arguments for why more quantity* is desirable, in far mode.
2. Deference to more senior EAs seems to point pretty heavily towards focusing on quality over quantity.
3. When I look at specific interventions/grant-making opportunities in near mode, I'm less convinced they are a good idea, and lean towards thinking that earlier high-quality work is necessary before scaling.
The conflict between the very different levels of consideration in #1 vs #2 and #3 makes me fairly confused about where the imbalance lies, but it still seems worth considering further, given just how huge a problem a potential imbalance could be (in either direction).
*Note that there was a bit of slippage in my phrasing: while at the frontier there's a clear tradeoff between quantity and average quality at the output level, the function that translates inputs...
Malaria kills a lot more people above age 5 than I would have guessed (still more deaths at <=5 than >5, but a much smaller ratio than I intuitively believed). See C70-C72 of GiveWell's cost-effectiveness estimates for AMF, which themselves come from the Global Burden of Disease study.
I've previously cached the thought that malaria primarily kills people who are very young, but this is wrong.
I think the intuition slip here is that malaria is indeed a lot more fatal for young children; however, there are many more older people than young children.
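To illustrate the intuition slip with deliberately made-up numbers (not GBD data): a 10x higher death rate among under-5s can coexist with only a ~2x higher share of total deaths, because the over-5 population is much larger.

```python
# Toy numbers (purely illustrative, not GBD data): deaths = rate x population,
# so a large rate gap can shrink to a small gap in death counts.

pop_under_5 = 0.15   # fraction of population under 5 (assumed)
pop_over_5 = 0.85    # everyone else

rate_under_5 = 40.0  # malaria deaths per 10k per year (assumed)
rate_over_5 = 4.0    # 10x lower death rate among over-5s (assumed)

deaths_under_5 = pop_under_5 * rate_under_5  # 6.0 per 10k of total population
deaths_over_5 = pop_over_5 * rate_over_5     # 3.4 per 10k

print(deaths_under_5 / deaths_over_5)  # ~1.8x, despite a 10x rate difference
```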
Should there be a new EA book, written by somebody both trusted by the community and (less importantly) potentially externally respected/camera-friendly?

Kind of a shower thought, based on the thinking that Doing Good Better is maybe a bit old by now for the intended use case of conveying EA ideas to newcomers.

I think the 80,000 Hours and EA handbooks were maybe trying to do this, but for whatever reason didn't get a lot of traction?

I suspect that the issue is something like not having a sufficiently strong "voice"/editorial line, and that what you want for a book that's a) bestselling and b) doesn't sacrifice nuance too much is one final author + 1-3 RAs/ghostwriters.
In The Precipice, Toby Ord very roughly estimates that the risk of extinction from supervolcanoes this century is 1/10,000 (as opposed to 1/10,000 from natural pandemics, 1/1,000 from nuclear war, 1/30 from engineered pandemics, and 1/10 from AGI). Should more longtermist resources be put into measuring and averting the worst consequences of supervolcanic eruptions?
More concretely, I know a PhD geologist who's interested in pursuing an EA/longtermist career and is currently thinking of re-skilling for AI policy. Given that (AFAICT) literally zero people in our community currently work on supervolcanoes, should I instead convince him to investigate supervolcanoes, at least for a few weeks/months?
If he hasn't seriously considered working on supervolcanoes before, then it definitely seems worth raising the idea with him.
I know almost nothing about supervolcanoes, but, assuming Toby's estimate is reasonable, I wouldn't be too surprised if going from zero to one longtermist researcher in this area is more valuable than adding an additional AI policy researcher.
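As a hedged back-of-envelope (my model and headcounts, not Ord's): if we assume the marginal value of a researcher scales with the size of the risk divided by the number of people already working on it, the comparison comes out surprisingly close.

```python
# Crude diminishing-returns model (an assumption, not from The Precipice):
# the value of adding the (n+1)-th researcher scales as risk / (n + 1).
# The AI policy headcount is a guess.

def marginal_value(risk, current_researchers):
    """Value of one additional researcher under 1/(n+1) diminishing returns."""
    return risk / (current_researchers + 1)

supervolcanoes = marginal_value(1 / 10_000, current_researchers=0)
ai_policy = marginal_value(1 / 10, current_researchers=300)  # guessed headcount

print(f"supervolcanoes: {supervolcanoes:.2e}")  # 1.00e-04
print(f"AI policy:      {ai_policy:.2e}")       # 3.32e-04
# Same order of magnitude: the answer flips easily under different
# assumed returns curves or headcounts.
```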
Clarification on my own commenting norms:
If I explicitly disagreed with a subpoint in your post/comment, you should assume that I'm only disagreeing with that subpoint; you should NOT assume that I disagree with the rest of the comment and am only being polite. Similarly, if I reply with disagreement to a comment or post overall, you should NOT assume I disagree with your other comments or posts, and I'm almost certainly not trying to admonish you as a person. Conversely, agreement with a subpoint should not be treated as agreement with your overall point; agreement with the overall point of an article should not be treated as an endorsement of your actions/your organization; and so forth.
I welcome both public and private feedback on my own comments and posts, especially notes about where I've said untrue things. I try to only say true things, but we all mess up sometimes, and I expect to mess up in this regard more often than most people, because I'm more public with my output than most people.
Regardless of overarching opinions you may or may not have about the unilateralist's curse, I think Petrov Day is a uniquely bad time to lambast the foibles of being a well-intentioned unilateralist.
I worry that people are updating in exactly the wrong way from Petrov's actions, possibly to fit preconceived ideas of what's correct.
Possibly dumb question, but does anybody actually care whether climate change (or related issues like biodiversity) will be good or bad for wild animal welfare?
I feel like a lot of people treat the answer as a given, but the actual answer relies on getting the right call on some pretty hard empirical questions. I think answering, or at least getting some clarity on, this question is not impossible, but I don't know if anybody actually cares in a decision-relevant way (like I don't think WAW people will switch to climate change if we're pretty sure climate change is bad...
I'm interested in a collection of backchaining posts by EA organizations and individuals that trace back from what we want -- an optimal, safe world -- to specific actions that individuals and groups can take.

These can be at any level of granularity, though the more precise, the better.
Interested in this for any of the following categories:
A corollary of background EA beliefs is that everything we do is incredibly important.
This is covered elsewhere on the Forum, but I think an important corollary of many background EA + longtermist beliefs is that everything we do is (on an absolute scale) very important, rather than useless.
I know some EAs who are dispirited because they donate a few thousand dollars a year when other EAs are able to donate millions. So on a relative scale, this makes sense -- other people are able to achieve >1000x the impact through their donations as you...
I don't know, but my best guess is that "janitor at MIRI"-type examples reinforce a certain vibe people don't like — the notion that even "lower-status" jobs at certain orgs are in some way elevated compared to other jobs, and the implication (however unintended) that someone should be happy to drop some more fulfilling/interesting job outside of EA to become MIRI's janitor (if they'd be good).
I think your example would hold for someone donating a few hundred dollars to MIRI (which buys roughly 10^-4 additional researchers), without triggering the same ideas. Same goes for "contributing three useful LessWrong comments on posts about AI", "giving Superintelligence to one friend", etc. These examples are nice in that they also work for people who don't want to live in the Bay, are happy in their current jobs, etc.
Anyway, that's just a guess, which doubles as a critique of the shortform post. But I did upvote the post, because I liked this bit:
But the "correct" framing (I claim) would look at the absolute scale, and consider stuff like we are a) among the first 100 billion or so people and we hope there will one day be quadrillions b) (most) EAs are unusually well-placed within this already very privileged set and c) within that even smaller subset again, we try unusually hard to have a long term impact, so that also counts for something.
I know this is a really mainstream opinion, but I recently watched a recording of the musical Hamilton and I really liked it. I think Hamilton (the character, not the historical figure, about whom I know very little) has many key flaws (most notably selfishness, pride, and misogyny(?)) but also virtues/attitudes that are useful to emulate.

I especially found the song "Non-Stop" (lyrics) highly relatable/aspirational, at least for the subset of EA research that looks more like "reading lots and synthesizing many thoughts quickly" and less like "think ver...
I'd appreciate a 128kb square version of the lightbulb/heart EA icon with a transparent background, as a Slack emoji.
I continue to be fairly skeptical that the all-things-considered impacts of EA altruistic interventions differ by multiple (say >2) orders of magnitude ex ante (though I think it's plausible ex post). My main crux here is that I believe general meta concerns start dominating once the object-level impacts are small enough.

This is all in terms of absolute value of impact. I think it's quite possible that some interventions have large (or moderately sized) negative impact, and I don't know how the language of impact-as-multiplication best deals with this.
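A toy illustration of the crux, with made-up numbers: if every intervention carries a comparable absolute meta-level impact term, ratios of all-things-considered impact compress even when object-level impacts differ by many orders of magnitude.

```python
# Toy numbers: a shared absolute "meta" impact term compresses impact ratios.

meta = 0.01                      # meta-level impact common to interventions (assumed)
object_a, object_b = 1.0, 1e-4   # object-level impacts, 4 OOMs apart (assumed)

print(object_a / object_b)                    # 10000x at the object level
print((object_a + meta) / (object_b + meta))  # ~100x all-things-considered
```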
Minor UI note: I missed the EAIF AMA multiple times (even after people told me it existed) because my eyes automatically glaze over pinned tweets. I may be unusual in this regard, but thought it worth flagging anyway.
Do people have thoughts on what the policy should be on upvoting posts by coworkers? Obviously telling coworkers (or worse, employees!) to upvote your posts should be verboten, and an EA Forum policy that you can't upvote posts by coworkers would be too draconian (and also hard to enforce). But I think there's a lot of room in between to end up in a situation where, on average, posts by people who work at EA orgs have more karma than posts of equivalent semi-objective quality. Concretely, 2 mechanisms by which this could happen (and almost c...
crossposted from LessWrong
There should maybe be an introductory guide for new LessWrong users coming in from the EA Forum, and vice versa.
I feel like my writing style (designed for EAF) is almost the same as that of LW-style rationalists, but not quite identical, and this is enough to make it substantially less useful for the average audience member there.
For example, this identical question is a lot less popular on LessWrong than on the EA Forum, despite naively appearing to appeal to both audiences (and indeed if I were to guess at the purview of LW, to be cl...
Are there any EAA researchers carefully tracking the potential for huge cost-effectiveness gains in the ag industry from genetic engineering advances in factory-farmed animals? Or (less plausibly) advances from better knowledge/practice/lore in classical artificial selection? As someone pretty far from the field, a priori the massive gains made in biology/genetics in the last few decades seem like something we plausibly have not priced in. So it'd be sad if EAAs get blindsided by animal meat becoming a lot cheaper in the next few decades (if this is indeed viable, which it may not be).
I'm now pretty confused about whether normative claims can be used as evidence in empirical disputes. I generally believed no, with the caveat that for humans, moral beliefs are built on a scaffolding of facts, so when there isn't an immediately accessible empirical counter-claim, it's sometimes easier to respond to an absurd empirical claim with the moral claim that carries the gestalt sense of those underlying empirical beliefs.
I talked to a philosopher who disagreed, and roughly believed that strong normative claims can be used as evidence against more confused/less c...
Updated version on https://docs.google.com/document/d/1BDm_fcxzmdwuGK4NQw0L3fzYLGGJH19ksUZrRloOzt8/edit?usp=sharing
Cute theoretical argument for #flattenthecurve at any point in the distribution
I think it's really easy to get into heated philosophical discussions about whether EAs overall use too much or too little jargon. Rather than try to answer this broadly for EA as a whole, it might be helpful for individuals to conduct a few quick polls to decide for themselves whether they ought to change their lexicon. Here's my Twitter poll as one example.
Economic benefits of mediocre local human preferences modeling.
Epistemic status: Half-baked, probably dumb.
Note: writing is mediocre because it's half-baked.
Some vague brainstorming of economic benefits from mediocre human preferences models.
Many AI Safety proposals include understanding human preferences as one of their subcomponents. While this is not obviously good, human modeling seems at least plausibly relevant and good.
Short-term economic benefits often spur additional funding and research interest [citation not given]. So a possible question...
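The post is cut off above, but to give a concrete picture of what a minimal "human preferences model" can look like in practice: a standard starting point is a Bradley-Terry-style model fit on pairwise comparisons. Below is a self-contained sketch with synthetic data; everything here is illustrative, and not a claim about any specific AI safety proposal.

```python
import numpy as np

# Minimal Bradley-Terry-style preference model: learn one scalar "utility"
# per option from pairwise choices. Synthetic data throughout; a toy picture
# of a "mediocre human preferences model".

rng = np.random.default_rng(0)
n_options = 5
true_utils = rng.normal(size=n_options)  # hidden preferences to recover

# Simulate pairwise comparisons: option i beats j with
# probability sigmoid(u_i - u_j).
pairs = [(i, j) for i in range(n_options) for j in range(n_options) if i != j]
data = []
for i, j in pairs * 100:
    p = 1.0 / (1.0 + np.exp(-(true_utils[i] - true_utils[j])))
    data.append((i, j, rng.random() < p))  # (option_a, option_b, a_won)

# Fit by full-batch gradient ascent on the Bradley-Terry log-likelihood.
utils = np.zeros(n_options)
lr = 5.0
for _ in range(300):
    grad = np.zeros(n_options)
    for i, j, a_won in data:
        p = 1.0 / (1.0 + np.exp(-(utils[i] - utils[j])))
        err = (1.0 if a_won else 0.0) - p
        grad[i] += err
        grad[j] -= err
    utils += lr * grad / len(data)

# Utilities are only identified up to an additive constant; compare rankings.
print(np.argsort(-true_utils))  # true preference ranking
print(np.argsort(-utils))       # recovered ranking (should usually match)
```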
I find it quite hard to do multiple quote blocks in the same comment on the forum. For example, this comment took me 5-10 tries to get right.
On the forum, it appears to have gotten harder for me to do multiple quote blocks in the same comment. I now often have to edit a post multiple times so that quoted sentences are correctly inside quote blocks and unquoted sections are not, whereas in the past I don't recall having this problem.
Cross-posted from Facebook
On the meta-level, I want to think hard about the level of rigor I want to have in research or research-adjacent projects.
I want to say that the target level of rigor I should have is substantially higher than for typical FB or Twitter posts, and way lower than for research papers.
But there's a very wide gulf! I'm not sure exactly what I want to do, but here are some gestures at the thing:
- More rigor/thought/data collection should be put into it than the 5-10 minutes typical of a FB/Twitter post, but much less than a hundred or...