This is a special post for quick takes by Thomas Kwa. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
How do I offset my animal product consumption as easily as possible? The ideal product would be a basket of offsets that:
- is easy to set up-- ideally a single monthly donation equivalent to the animal product consumption of the average American, which I can scale up a bit to make sure I'm net positive
- is based on well-founded impact estimates
- affects a wide variety of animals reflecting my actual diet-- at a minimum my donation would be split among separate nonprofits improving the welfare of mammals, birds, fish, and invertebrates, and ideally it would closely track the suffering created by each animal product within that category
- includes all animal products, not just meat.
I know I could potentially have higher impact just betting on saving 10 million shrimp or whatever, but I have enough moral uncertainty that I would highly value this kind of offset package. My guess is there are lots of people for whom going vegan is not possible or desirable, who would be in the same boat.
Have you seen FarmKind? Seems like they are trying to provide a donation product for precisely this problem. Specifically this.

Thanks, I've started donating $33/month to the FarmKind bonus fund, which is double the calculator estimate for my diet. [1] I will probably donate ~$10k of stocks in 2025 to offset my lifetime diet impact-- is there any reason not to do this? I've already looked at the non-counterfactual matching argument, which I don't find convincing.
[1] I basically never eat chicken, substituting it with other meats, so I reduced the poultry category by 2/3 and allocated that proportionally between the beef and pork categories.
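A sketch of that reallocation, with made-up placeholder amounts standing in for the calculator's actual per-category outputs:

```python
# Placeholder per-category monthly amounts (not FarmKind's actual calculator outputs)
calculator = {"poultry": 6.0, "beef": 4.0, "pork": 2.0, "fish": 3.0, "eggs": 2.0}

moved = calculator["poultry"] * 2 / 3   # I eat roughly 1/3 as much poultry as the average American
calculator["poultry"] -= moved
beef_pork = calculator["beef"] + calculator["pork"]
calculator["beef"] += moved * calculator["beef"] / beef_pork   # reallocate the difference proportionally
calculator["pork"] += moved * calculator["pork"] / beef_pork

monthly_donation = 2 * sum(calculator.values())   # double the adjusted estimate to stay net positive
print(round(monthly_donation, 2))
```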
Ariel Simnegar 🔸
One reason to perhaps wait before offsetting your lifetime impact all at once could be to preserve your capital’s optionality. Cultivated meat could in the future become common and affordable, or your dietary preferences could otherwise change such that $10k was too much to spend.
Your moral views on offsetting could also change. For example, you might decide that the $10k would be better spent on longtermist causes, or that it’d be strictly better to donate the $10k to the most cost-effective animal charity rather than offsetting.
That’s awesome. That probably gets you 90% of the way there already, even if there were no offset!
Suppose that the EA community were transported to the UK and US in 1776. How fast would slavery have been abolished? Recall that the slave trade ended in 1807 in the UK and 1808 in the US, and abolition happened between 1838-1843 in the British Empire and 1865 in the US.
Assumptions:
Not sure how to define "EA community", but some groups that should definitely be included are the entire staff of OpenPhil and CEA, anyone who dedicates their career choices or donates more than 10% along EA principles, and anyone with >5k EA forum karma.
EAs make up the same proportion of the population as they do now, and have the same relative levels of wealth, political power, intelligence, and drive.
EAs forget all our post-1776 historical knowledge, including the historical paths to abolition.
EA attention is split among other top causes of the day, like infectious disease and crop yields. I can't think of a reason why antislavery would be totally ignored by EAs though, as it seems huge in scope and highly morally salient to people like Bentham.
I'm also interested in speculating on other causes; I've just been thinking about abolition recently due to the 80k podcast with Prof. Christopher Brown.
I suspect the net impact would be pretty low. Most of the really compelling consequentialist arguments like "if we don't agree to this there will be a massive civil war in future" and "an Industrial Revolution will leave everyone far richer anyway" are future knowledge that your thought experiment strips people of. It didn't take complex utility calculations to persuade people that slaves experienced welfare loss; it took deontological arguments rooted [mainly] in religious belief to convince people that slaves were actually people whose needs deserved attention. And Jeremy Bentham was already there to offer utilitarian arguments, to the extent people were willing to listen to them.
And I suspect that whilst a poll of Oxford-educated utilitarian pragmatists with a futurist mindset transported back to 1776 would near-unanimously agree that slavery wasn't a good thing, they'd probably devote far more of their time and money to stuff they saw as more tractable like infectious diseases and crop yields, writing some neat Benthamite literature and maybe a bit of wondering whether Newcomen engines and canals made the apocalypse more likely.
I can't imagine the messy political compromise tha... (read more)
I disagree with a few points, especially paragraph 1. Are you saying that people were worried about abolition slowing down economic growth and lowering standards of living? I haven't heard this as a significant concern-- free labor was perfectly capable of producing cotton at a small premium, and there were significant British boycotts of slave-produced products like cotton and sugar.
As for utilitarian arguments, that's not the main way I imagine EAs would help. EA pragmatists would prioritize the cause for utilitarian reasons and do whatever is best to achieve their policy goals, much as we are already doing for animal welfare. The success of EAs in animal welfare, or indeed anywhere other than x-risk, is in implementation of things like corporate campaigns rather than mass spreading of arguments. Even in x-risk, an alliance with natsec people has effected concrete policy outcomes like compute export controls.
To paragraph 2, the number of philosophers is pretty low in contemporary EA. We just hear about them more. And while abolition might have been relatively intractable in the US, my guess is the UK could have been sped up.
I basically agree with paragraph 3, though I would hope if it came to it we would find something more economical than directly freeing slaves.
Overall thanks for the thoughtful response! I wouldn't mind discussing this more.
David T
Absolutely slaveholders and those dependent on them were worried about their own standard of living (and more importantly, specifically not interested in significantly improving the standard of living of plantation slaves, and not because they'd never heard anyone put forward the idea that all people were equal. I mean, some of them were on first name terms with Thomas Paine and signed the Declaration of Independence and still didn't release their slaves!). I'm sure most people who were sympathetic to EA ideas would have strongly disagreed with this prioritisation decision, just like the Quakers or Jeremy Bentham. I just don't think they'd have been more influential than the Quakers or Jeremy Bentham, or indeed the deeply religious abolitionists led by William Wilberforce.
I agree the number of philosophers in EA is quite low, but I'm assuming the influence centre would be similar, possibly even more Oxford-centric, in a pre-internet, status-obsessed social environment where discourse is more centred on physical places and elite institutions[1]. For related reasons I think they'd be influential in the sort of place where abolitionist arguments were already getting a fair hearing, and of little consequence in slaveowning towns in the Deep South. In the UK, I think the political process was held up by the amount of vested interests in keeping it going in Parliament and beliefs that slavery was "the natural order" rather than any lack of zeal or arguments or resources on the abolitionist side (though I'm sure they'd have been grateful for press baron Moskovitz's donations!). I think you could make the argument that slave trade abolition in the UK was actually pretty early considering the revenues it generated, who benefited, and the generally deeply inegalitarian social values and assumption of racial superiority of British society at the time.
I agree this is probably the main way that EAs would try to help, I just don't think abolitionism is an area where this
Worth noting that if there are like 10,000 EAs today in the world with a population of 8,000,000,000, the percentage of EAs globally is 0.000125 percent.
If we keep the same proportion and apply that to the world population in 1776, there would be about 1,000 EAs globally and about 3 EAs in the United States. If they were overrepresented in the United States by a factor of ten, there would be about 30.
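A quick sanity check of those numbers, assuming roughly 800 million people worldwide and 2.5 million in the US in 1776:

```python
ea_fraction = 10_000 / 8_000_000_000        # 1.25e-6, i.e. 0.000125% of today's population
eas_1776_world = ea_fraction * 800_000_000  # ~1,000 EAs worldwide, assuming ~800M people in 1776
eas_1776_us = ea_fraction * 2_500_000       # ~3 EAs in the US; ~30 if overrepresented 10x
print(ea_fraction * 100, round(eas_1776_world), round(eas_1776_us))
```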
I was imagining a split similar to the present, in which over half of EAs were American or British.
Ozzie Gooen
I guess on one hand, if this were the case, then EAs would be well-represented in America, given that its population in 1776 was just 2.5M, vs. the UK population of 8M.
On the other hand, I'd assume that if they were distributed across the US, many would have been farmers / low-income workers / slaves, so wouldn't have been able to contribute much. There is an interesting question on how much labor mobility or inequality there was at the time.
Also, it seems like EAs got incredibly lucky with Dustin Moskovitz + Good Ventures. It's hard to picture just how lucky we were with that, and what the corresponding scenarios would have been like in 1776.
Could make for neat historical fiction.
NickLaing
On the positive front, some surprisingly EA adjacent people were part of the movement which did get slavery banned.
I also think the heavy EA bent against activism and politics wouldn't have helped, as both of those routes were key parts of the pathway to banning slavery, in the UK at least (I don't know much about the US).
EA forum content might be declining in quality. Here are some possible mechanisms:
1. Newer EAs have worse takes on average, because the current processes of recruitment and outreach produce a worse distribution than the old ones.
2. Newer EAs are too junior to have good takes yet. It's just that the growth rate has increased, so there's a higher proportion of them.
3. People who have better thoughts get hired at EA orgs and are too busy to post. There is anticorrelation between the amount of time people have to post on EA Forum and the quality of person.
4. Controversial content, rather than good content, gets the most engagement.
5. Although we want more object-level discussion, everyone can weigh in on meta/community stuff, whereas they only know about their own cause areas. Therefore community content, especially shallow criticism, gets upvoted more. There could be a similar effect for posts by well-known EA figures.
6. Contests like the criticism contest decrease average quality, because the type of person who would enter a contest to win money on average has worse takes than the type of person who has genuine deep criticism. There were 232 posts for the criticism contest, and 158 for the Cause Explora... (read more)
Another possible mechanism is forum leadership encouraging people to be less intimidated and write more off-the-cuff posts -- see e.g. this or this.

Side note: It seems like a small amount of prize money goes a long way. E.g. Rethink Priorities makes their salaries public: they pay senior researchers $105,000 – $115,000 per year. Their headcount near the end of 2021 was 24.75 full-time equivalents. And their publications page lists 30 publications in 2021. So napkin math suggests that the per-post cost of a contest post is something like 1% of the per-post cost of an RP publication. A typical RP publication is probably much higher quality. But maybe sometimes getting a lot of shallow explorations quickly is what's desired. (Disclaimer: I haven't been reading the forum much, didn't read many contest posts, and don't have an opinion about their quality. But I did notice the organizers of the ELK contest were "surprised by the number and quality of submissions".)
A related point re: quality is that smaller prize pools presumably select for people with lower opportunity costs. If I'm a talented professional who commands a high hourly rate, I might do the expected value math o... (read more)
Hey, just want to weigh in here that you can't divide our FTE by our total publication count, since that doesn't include a large amount of work we've produced that either can't be made public or is not yet public but will be. Right now I think a majority of our output is not public for one reason or another, though we're working on finding routes to make more of it public.

I do think your general point, that the per-post cost of a contest post is less (or much less) than that of an RP post, is accurate, though.

-Peter (Co-CEO of Rethink Priorities)
Thanks for the correction!
BTW, I hope it doesn't seem like I was picking on you -- it just occurred to me that I could do math for Rethink Priorities because your salaries are public. I have no reason to believe a cost-per-public-report estimate would be different for any other randomly chosen EA research organization in either direction. And of course most EA organizations correctly focus on making a positive impact rather than maximizing publication count.
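For what it's worth, here's a rough reconstruction of that napkin math; the prize-pool figure is an assumption rather than a number from this thread, and per Peter's point the public publication count understates RP's real output:

```python
# Rethink Priorities side (figures quoted in the comment above)
fte = 24.75
salary = 110_000                            # midpoint of the $105k-$115k senior researcher band
cost_per_publication = fte * salary / 30    # ~$91k per public publication (an overestimate, per Peter)

# Contest side (the prize pool is an assumed round figure, not taken from this thread)
prize_pool = 230_000
posts = 232 + 158
cost_per_contest_post = prize_pool / posts  # ~$590 per post

print(round(cost_per_contest_post / cost_per_publication, 4))  # ~0.0065, i.e. on the order of 1%
```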
We also seem to get a fair number of posts that make basically the same point as an earlier article, but the author presumably either didn't read the earlier one or wanted to re-iterate it.
I'll add another mechanism: I think there are many people who have very high bars for how good something should be to post on the forum. Thus the forum becomes dominated by a few people (often people who aren't aware of or care about forum norms) who have much lower bars to posting.
This is a plausible mechanism for explaining why content is of lower quality than one would otherwise expect, but it doesn't explain differences in quality over time (and specifically quality decline), unless you add extra assumptions such that the proportion of people with low bars to posting has increased recently. (Cf. Ryan's comment)
often people who aren't aware of or care about forum norms
You're quite right, it was left too implicit. EA has grown a lot recently, so I think there are now more people who aren't aware of or care about the "high bar" norm. This is in part due to others explicitly saying the bar should be lower, which (as others here have noted) has a stronger effect on some than on others.
Edit: I don't have time to do this right now, but I would be interested to plot the proportion of posts on the EA forum from people who have been on the forum for less than a year over time. I suspect that it would be trending upwards (but could be wrong). This would be a way to empirically verify part of my claim.
I'm interested in learning how plausible people find each of these mechanisms, so I created a short (anonymous) survey. I'll release the results in a few days [ETA: see below]. Estimated completion time is ~90 seconds.
I broadly agree with 5 and 6.

Re 3, 'There is anticorrelation between the amount of time people have to post on EA Forum and the quality of person.' - this makes me wince. A language point is that I think talking about how 'good quality' people are overall is unkind and leads to people feeling bad about themselves for not having such-and-such an attribute. An object level point is I don't think there is an anticorrelation - I think being a busy EA org person does make it more likely that they'll have valuable takes, but not being a busy-EA-org-person doesn't make it less likely - there aren't that many busy-EA-org-person jobs, and some people aren't a good fit for busy jobs (eg because of their health or family commitments) but they still have interesting ideas.
Re 7: I'm literally working on a post with someone about how lots of people feel too intimidated to post on the Forum because of its perceived high standards! So I think though the Forum team are trying to make people feel welcome, it's not true that it's (yet) optimized for this, imo.
There's a kind of general problem whereby any messaging or mechanism that's designed to dissuade people from posting low-qual... (read more)
I think it's fairly clear which of these are the main factors, and which are not. Explanations (3-5) and (7) do not account for the recent decline, because they have always been true. Also, (6) is a weak explanation, because the quality of contest entries wasn't substantially worse than that of an average post.
On the other hand, (1-2) +/- (8) fit perfectly with the fact that volume has increased over the last 18 months, over the same period as community-building has happened on a large scale. And I can't think of any major contributors outside of (1-8), so I think the main causes are simply community dilution + a flood of newbies.
Though the other factors could still partially explain why the level (as opposed to the trend) isn't better, and arguably the level is what we're ultimately interested in.
I wouldn't be quick to dismiss (3-5) and (7) as factors we should pay attention to. These sorts of memetic pressures are present in many communities, and yet communities vary dramatically in quality. This is because things like (3-5) and (7) can be modulated by other facts about the community:
* How intrinsically susceptible are people to clickbait?
* Have they been taught things like politics is the mind-killer and the dangers of platforms where controversial ideas outcompete broadly good ones?
* What is the variance in how busy people are?
* To what degree do people feel like they can weigh in on meta? To what degree can they weigh in on cause areas that are not their own?
* Are the people on EA Forum mostly trying for impact, or to feel like they're part of a community (including instrumentally towards impact)?
So even if they cannot be solely responsible for changes, they could have been necessary to produce any declines in quality we've observed, and be important for the future.
RyanCarey
I agree that (4) could be modulated by the character of the community. The same is true for (3,5), except that the direction is wrong. Old-timers are more likely to be professional EAs, and know more about the community, so their decreased prevalence should reduce problems from (3,5). And (7) seems more like an effect of the changing nature of the forum, rather than a cause of it.
Gavin
My comment got detached, woops
Chris Leong
Eternal September is a slightly different hypothesis than those listed. It's that if new people come into the community, then there is an erosion of the norms that make the community distinctive.
HaydnBelfield
So as I see it the main phenomenon is that there's just much more being posted on the forum. I think there are two factors behind that: 1) community growth and 2) strong encouragement to post on the Forum. E.g. there's lots of encouragement to post from: the undergraduate introductory/onboarding fellowships, the AGI/etc 'Fundamentals' courses, the SERI/CERI/etc Summer Fellowships, or this or this (h/t John below).
The main phenomenon is that there is a lot more posted on the forum, mostly from newer/more junior people. It could well be the case that the average quality of posts has gone down. However, I'm not so sure that the quality of the best posts has gone down, and I'm not so sure that there are fewer of the best posts every month. Nevertheless, spotting the signal from the noise has become harder.
But then the forum serves several purposes. To take two of them: One (which is the one commenters here are most focussed on) is "signal" - producing really high-quality content - and it's certainly got harder to find that. But another purpose is more instrumental - it's for more junior people to demonstrate their writing/reasoning ability to potential employers. Or it's to act as an incentive/endgoal for them to do some research - where the benefit is more that they see whether it's a fit for them or not, but they wouldn't actually do the work if it wasn't structured towards writing something public.
So the main thing that those of us who are looking for "signal" need to do is find better/new ways to do so. The curated posts are a positive step in this direction, as are the weekly summaries and the monthly summaries.
Quadratic Reciprocity
Are there examples of typical bad takes you've seen newer EAs post?
ChanaMessinger
Small formatting thought: making these numbered instead of bulleted will make it easier to have conversations about them
Thomas Kwa
Done
DirectedEvolution
I’d reframe this slightly, though I agree with all your key points. EA forum is finding a new comparative advantage. There are other platforms for deep, impact-focused research. Some of the best research has crystallized into founding efforts.
There will always be the need for an onboarding site and watering hole, and EA forum is filling that niche.
There are other platforms for deep, impact-focused research.
Could you name them? I'm not sure which ones are out there, other than LW and Alignment Forum for AI alignment research.
E.g. I'm not sure where else is a better place to post research on forecasting, research on EA community building, research on animal welfare, or new project proposals. There are private groups and slacks, but sometimes what you want is public or community engagement.
I was thinking about our biggest institutions, OpenPhil, 80k, that sort of thing - the work produced by their on-staff researchers. It sounds like you're wanting a space that's like the EA forum, but has a higher concentration of impact-focused research especially by independent researchers? Or maybe that you'd like to see the new work other orgs are doing get aggregated in one place?
Not sure how to post these two thoughts so I might as well combine them.
In an ideal world, SBF should have been sentenced to thousands of years in prison. This is partially due to the enormous harm done to both FTX depositors and EA, but mainly for basic deterrence reasons; a risk-neutral person will not mind 25 years in prison if the ex ante upside was becoming a trillionaire.
However, I also think many lessons from SBF's personal statements, e.g. his interview on 80k, are still as valid as ever. Just off the top of my head:
Startup-to-give as a high EV career path. Entrepreneurship is why we have OP and SFF! Perhaps also the importance of keeping as much equity as possible, although in the process one should not lie to investors or employees more than is standard.
Ambition and working really hard as success multipliers in entrepreneurship.
A career decision algorithm that includes doing a BOTEC and rejecting options that are 10x worse than others.
It is probably okay to work in an industry that is slightly bad for the world if you do lots of good by donating. [1] (But fraud is still bad, of course.)
Just because SBF stole billions of dollars does not mean he has fewer virtuous personalit... (read more)
Watch team backup: I think we should be incredibly careful about saying things like, "it is probably okay to work in an industry that is slightly bad for the world if you do lots of good by donating". I'm sure you mean something reasonable when you say this, similar to what's expressed here, but I still wanted to flag it.
I'm surprised by the disagree votes. Is this because people think I'm saying, 'in the case of whether it is ever OK to take a harmful job in order to do more good, one ought not to say what one truly believes'?
To clarify, that's not what I'm trying to say. I'm saying we should have nuanced thoughts about whether it is ever OK to take a harmful job in order to do more good, and we should make sure we're expressing those thoughts in a nuanced fashion (similar to the 80k article I linked). If you disagree with this I'd be very interested in hearing your reasoning!
I noticed this a while ago. I don't see large numbers of low-quality low-karma posts as a big problem though (except that it has some reputation cost for people finding the Forum for the first time). What really worries me is the fraction of high-karma posts that are neither original, rigorous, nor useful. I suggested some server-side fixes for this.
PS: #3 has always been true, unless you're claiming that more of their output is private these days.
Should the EA Forum team stop optimizing for engagement?

I heard that the EA forum team tries to optimize the forum for engagement (tests features to see if they improve engagement). There are positives to this, but on net it worries me. Taken to the extreme, this is a destructive practice, as it would
normalize and encourage clickbait;
cause thoughtful comments to be replaced by louder and more abundant voices (for a constant time spent thinking, you can post either 1 thoughtful comment or several hasty comments. Measuring session length fixes this but adds more problems);
cause people with important jobs to spend more time on EA Forum than is optimal;
prevent community members and "EA" itself from keeping their identities small, as politics is an endless source of engagement;
distract from other possible directions of improvement, like giving topics proportionate attention, adding epistemic technology like polls and prediction market integration, improving moderation, and generally increasing quality of discussion.
I'm not confident that EA Forum is getting worse, or that tracking engagement is currently net negative, but we should at least avoid failing this exercise in Goodhart's Law.
Thanks for this shortform! I'd like to quickly clarify a bit about our strategy. TL;DR: I don't think the Forum team optimizes for engagement.
We do track engagement, and engagement is important to us, since we think a lot of the ways in which the Forum has an impact are diffuse or hard to measure, and they'd roughly grow or diminish with engagement.
But we definitely don't optimize for it, and we're very aware of worries about Goodharting.
Besides engagement, we try to track estimates for a number of other things we care about (like how good the discussions have been, how many people have gotten jobs as a result of the Forum, etc), and we're actively working on doing that more carefully.
And for what it's worth, I think that none of our major projects in the near future (like developing subforums) are aimed at increasing engagement, and neither have been our recent projects (like promoting impactful jobs).
I wasn't counting that as a major project, but Draft Amnesty Day also wasn't aimed at optimizing engagement (and I'd be surprised[1] if it helped or hurt engagement in a significant way). That was motivated by a desire to get people to publish drafts (which could have cool ideas) that they've been sitting on for a while. :)
[1] Edit: can confirm that at a glance, engagement on Friday/this weekend looks normal.
I'm worried about EA values being wrong because EAs are unrepresentative of humanity and reasoning from first principles is likely to go wrong somewhere. But naively deferring to "conventional" human values seems worse, for a variety of reasons:
There is no single "conventional morality", and it seems very difficult to compile a list of what every human culture thinks of as good, and not obvious how one would form a "weighted average" between these.
Most people don't think about morality much, so their beliefs are likely to contradict known empirical facts (e.g. cost of saving lives in the developing world) or be absurd (placing higher moral weight on beings that are physically closer to you).
Human cultures have gone through millennia of cultural evolution, such that values of existing people are skewed to be adaptive, leading to greed, tribalism, etc.; Ian Morris says "each age gets the thought it needs".
However, these problems all seem surmountable with a lot of effort. The idea is a team of EA anthropologists who would look at existing knowledge about what different cultures value (possibly doing additional research) and work with ... (read more)
Thanks for writing this.

I also agree that research into how laypeople actually think about morality is probably a very important input into our moral thinking. I mentioned some reasons for this in this post for example. This project on descriptive population ethics also outlines the case for this kind of descriptive research. If we take moral uncertainty and epistemic modesty/outside-view thinking seriously, and if on the normative level we think respecting people's moral beliefs is valuable either intrinsically or instrumentally, then this sort of research seems entirely vital.
I also agree that incorporating this data into our considered moral judgements requires a stage of theoretical normative reflection, not merely "naively deferring" to whatever people in aggregate actually believe, and that we should probably go back and forth between these stages to bring our judgements into reflective equilibrium (or some such).
That said, it seems like what you are proposing is less a project and more an enormous research agenda spanning several fields of research, a lot of which is ongoing across multiple disciplines, though much of it is in its early stages. For example, there is much w... (read more)
I have recently been thinking about the exact same thing, down to getting anthropologists to look into it! My thoughts on this were that interviewing anthropologists who have done fieldwork in different places is probably the more functional version of the idea. I have tried reading fairly random ethnographies to build better intuitions in this area, but did not find it as helpful as I was hoping, since they rarely discuss moral worldviews in as much detail as needed.
My current moral views seem to be something close to "reflected" preference utilitarianism, but now that I think this is my view, I find it quite hard to figure out what this actually means in practice.
My impression is that most EAs don't have a very preference utilitarian view and prefer to advocate for their own moral views. You may want to look at my most recent post on my shortform on this topic.
If you would like to set up a call sometime to discuss further, please PM!
Is there a reason this isn't being done? Is it just too expensive?
First, neat idea, and thanks for suggesting it!

From where I'm sitting, there are a whole bunch of potentially highly useful things that aren't being done. After several years around the EA community, I've gotten a better model of why that is:
1) There's a very limited set of EAs who are entrepreneurial, trusted by funders, and have the necessary specific skills and interests to do many specific things. (Which respected EAs want to take a 5 to 20 year bet on field anthropology?)
2) It often takes a fair amount of funder buy-in to do new projects. This can take several years to develop, especially for a research area that's new.
3) Outside of OpenPhil, funding is quite limited. It's pretty scary and risky to start something new and go for it. You might get funding from EA Funds this year, but who's to say whether you'll have to fire your staff in 3 years.
On doing anthropology, I personally think there might be lower hanging fruit first engaging with other written moral systems we haven't engaged with. I'd be curious to get an EA interpretation of parts of Continental Philosophy, Conservative Philosophy, and the philosophies and writings of many of the great international traditions. That said, doing more traditional anthropology could also be pretty interesting.
I agree - I'm especially worried that focusing too much on longtermism will make us seem out of touch with the rest of humanity, relative to other schools of EA thought. I would support conducting a public opinion poll to learn about people's moral beliefs, particularly how important and practical they believe focusing on the long-term future would be. I hypothesize that people who support ideas such as sustainability will be more sympathetic to longtermism.
Terminology proposal: a class-n (or tier-n) megaproject reduces x-risk by between 10^-n and 10^-(n+1). This is intended as a short way to talk about the scale of longtermist megaprojects, inspired by 80k's scale-scale but a bit cleaner because people can actually remember how to use it.
Class-0 project: reduces x-risk by >10%, e.g. creating 1,000 new AI safety researchers as good as Paul Christiano
Class-1 project: reduces x-risk by 1-10%, e.g. reducing pandemic risk to zero
Class-2 project: reduces x-risk by 0.1-1%, e.g. the Anthropic interpretability team
Class-3 project: reduces x-risk by 0.01-0.1%, e.g. most of these, though some make it into class 2
The classes could also be non-integer for extra precision, so if I thought creating 1,000 Paul Christianos reduced x-risk by 20%, I could call it a -log10(20%) = class-0.70 megaproject.
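A minimal sketch of the convention as code:

```python
import math

def megaproject_class(xrisk_reduction: float) -> float:
    """Class number of a project that reduces x-risk by the given fraction (e.g. 0.2 for 20%)."""
    return -math.log10(xrisk_reduction)

print(megaproject_class(0.20))   # 0.70: a class-0.70 project, within the class-0 band (>10%)
print(megaproject_class(0.005))  # 2.30: within the class-2 band (0.1%-1%)
```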
I'm still not sure about some details, so leave a comment if you have opinions:
"class" vs "tier"
I originally thought of having the percentages be absolute, but perhaps one could also make the case for relative percentages.
should class-n be between 10^-n and 10^-(n+1), or between 10^-(n-1) and 10^-n?
Are we evaluating outcomes or projects? What should th
I'm looking for AI safety projects with people with some amount of experience. I have 3/4 of a CS degree from Caltech, one year at MIRI, and have finished the WMLB and ARENA bootcamps. I'm most excited about activation engineering, but willing to do anything that builds research and engineering skill.
If you've published 2 papers in top ML conferences or have a PhD in something CS related, and are interested in working with me, send me a DM.
With all the scandals in the last year or two, has anyone looked at which recruitment sources are least likely to produce someone extremely net negative in direct impact or to the community (i.e. a justified scandal)? Maybe this should inform outreach efforts.
Women in longtermism and EA are consistently better with respect to character, responsibility, and diligence (there are outliers in animal welfare, who have been power-seeking for ideological reasons and been destructive, implicated in ACE's fate, but that's probably because of the demographics).
Women do not engage in as much power-seeking or interact as poorly with the social fictions/status/funding dynamics that produce bad outcomes in EA (they tend to do more real things).
As we will see, even Caroline did the "least crime". In the Nonlinear case, my guess is that Kat Woods was more self-involved and highly unqualified as a manager, with less of the systemic malice that Emerson gives off.
Thomas Kwa
Is there any evidence for this claim? One can speculate about how average personality gender differences would affect p(scandal), but you've just cited two cases where women caused huge harms, which seems to argue neutrally or against you.
Defacto
In both cases, the examples of women have an explicit favorable comparison to their male counterparts.
Thomas Kwa
But with no evidence, just your guesses. IMO we should wait until things shake out and even then the evidence will require lots of careful interpretation. Also EA is 2/3 male, which means that even minor contributions of women to scandals could mean they cause proportionate harms.
I might want to become a billionaire for roughly the reasons in this post [1] (tl;dr EV is tens of millions per year and might be the highest EV thing I can do), and crypto seems like one particularly promising way. I have other possible career paths, but my current plan is to
- accumulate a list of ~25 problems in crypto that could be worth $1B if solved
- hire a research assistant to look over the list for ~100 hours and compile basic stats like "how crowded is this" and estimate market size
- talk with experts about the most promising ones
- if one is particularly promising, do the standard startup things (hire smart contract and front-end devs, get funding somehow) and potentially drop out of school
Does this sound reasonable? Can you think of improvements to this plan? Are there people I should talk to?
How much equity does SBF actually have in FTX? Posts like this imply he has 90%, but the first article I found said that he actually had 90% equity in Alameda (which is owned by FTX or something?) and nothing I can find gives a percentage equity in FTX. Also, FTX keeps raising money, so even if he had 90% at one point, surely much of that has been sold.
It's common for people to make tradeoffs between their selfish and altruistic goals with a rule of thumb or pledge like "I want to donate X% of my income to EA causes" or "I spend X% of my time doing EA direct work" where X is whatever they're comfortable with. But among more dedicated EAs where X>>50, maybe a more useful mantra is "I want to produce at least Y% of the expected altruistic impact that I would if I totally optimized my life for impact". Some reasons why this might be good:
Impact is ultimately what we care about, not sacrifice. The new framing shifts people out of a mindset of zero-sum tradeoffs between a selfish and altruistic part.
In particular, this promotes ambition. Thoughts similar to this have helped me realize that by being more ambitious, I can double my impact without sacrificing much personal well-being. This is a much better thing to do than working 70 hours a week at an "EA job" because I think my commitment level is X=85% or something.
It also helps me not stress about small things. My current diet is to avoid chicken, eat fewer eggs, and offset any eggs I eat. Some people around me are vegan, and some people think offsetting is antithetical to EA.
I think I like the thinking that's in this general direction, but just to list some additional counter-considerations:
* almost all of the predictable difference in your realized impact from your theoretical maximum would be due to contingent factors outside of your control.
* You can try to solve this problem somewhat by saying Y% of your ex ante expected value
* But it's hard (but not impossible) to avoid problems with evidential updates here (like there'll be situations where your policy prevents you from seeking evidential updates)
* the toy example that comes to mind is that unless you're careful, this policy would in theory prevent you from learning about much more ambitious things you could've done, because that'd be evidence that your theoretical maximum is much higher than you've previously thought!
* The subjectivity of Y is problematic not just for interpersonal dynamics, but from a motivational perspective. Because it's so hard to know both the numerator and especially the denominator, the figures may be too noisy to optimize for/have a clean target to aim at.
Thomas Kwa
In practice I think this hasn't been too much of a problem for me, and I can easily switch from honest evaluation mode to execution mode. Curious if other people have different experiences.
If I'm capable of running an AI safety reading group at my school, and I learn that someone else is doing it, I might be jealous that my impact is "being taken".
If I want to maximize total impact, I don't endorse this feeling. But what feeling does make sense from an impact maximization perspective? Based on Shapley values, you should
update downwards on the impact they get (because they're replaceable)
update downwards on the impact you get, if you thought this was your comparative advantage (because you're replaceable).
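A toy Shapley calculation for the reading-group case, assuming the group produces the same value whichever of the two of us runs it:

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# A reading group worth 10 "impact units" that either of us could run alone.
v = lambda coalition: 10.0 if coalition else 0.0
print(shapley_values(["me", "them"], v))  # {'me': 5.0, 'them': 5.0} -- half the credit each
```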
Ah... this is something I struggle with. Especially since I've had the same goals for years. It would be a hard transition; I've done it before. I like to think of it as the next thing I find will be better in ways I didn't expect, as long as I'm putting effort in.
What's the right way to interact with people whose time is extremely valuable, equivalent to $10,000-$1M per hour of OpenPhil's last dollar? How afraid should we be of taking up their time? Some thoughts:
Sometimes people conflate time cost with status, and the resulting shyness/fear can prevent you from meeting someone-- this seems unwarranted because introducing yourself only takes like 20 seconds.
The time cost should nevertheless not be ignored; how prepared you are for a 1-hour meeting might be the largest source of variance in the impact you produce/de
I personally have a difficult time with this. Usually in these conversations they are looking for something specific, and it's hard to know what I have to say that would be helpful for them. For me sometimes I see the big picture too much and it's hard to find something meaningful out of that. Ex. I want to solve systemic issues that the person I'm talking to can't change. I don't know how to balance what's realistic and helpful with what's needed.
Is it possible to donate appreciated assets (e.g. stocks) to one of the EA Funds? The tax benefits would be substantially larger than donating cash.
I know that MIRI and GiveWell as well as some other EA-aligned nonprofits do support donating stocks. GiveWell even has a DAF with Vanguard Charitable. But I don't see such an option for the EA Funds.
Probably the easiest way to do this is to give to a donor-advised fund, and then instruct the fund to give to the EA Fund. Even for charities that can accept stock, my experience has been that donating through a donor-advised fund is much easier (it requires less paperwork).
Thomas Kwa
To clarify, you mean a donor-advised fund I have an account with (say Fidelity, Vanguard, etc.) which I manage myself?
I think there are currently too few infosec people and people trying to become billionaires.
Infosec: this seems really helpful for AI safety and biosecurity in a lot of worlds, and I'm guessing it's just much less sexy / popular than technical research. Maybe I'm wrong about the number of people here, but from attendance at an EAGxSF event it didn't seem like we would be saturated.
Entrepreneurship: I think the basic argument for making tens of billions of dollars still holds. Just because many longtermist orgs are well-funded now doesn't mean they will be
What percent of Solana is held by EAs? I've heard FTX holds some, but unknown how much. This is important because if I do a large crypto project on Solana, much of the value might come from increasing the value of the Solana ecosystem, and thus other EAs' investments.
A lot of EAs I know have had strong intuitions towards scope sensitivity, but I also remember having strong intuitions towards moral obligation, e.g. I remember being slightly angry at Michael Phelps' first retirement, thinking I would never do this and that top athletes should have a duty to maximize their excellence over their career. Curious how common this is.
Are there GiveWell-style estimates of the cost-effectiveness of the world's most popular charities (say UNICEF), preferably by independent sources and/or based on past results? I want to be able to talk to quantitatively-minded people and have more data than just saying some interventions are 1000x more effective.
Unfortunately most cost-effectiveness estimates are calculated by focusing on the specific intervention the charity implements, a method which is a poor fit for large diversified charities.
Hmm, that's what I suspected. Maybe it's possible to estimate anyway though-- quick and dirty method would be to identify the most effective interventions a large charity has, estimate that the rest follow a power law, take the average and add error bars upwards for the possibility we underestimated an intervention's effectiveness?
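Something like the following sketch, where every number is a made-up placeholder rather than an estimate for any real charity:

```python
best = 100.0   # assumed cost-effectiveness of the charity's best intervention (arbitrary units)
alpha = 1.5    # assumed power-law exponent for the fall-off across its portfolio
n = 20         # assumed number of interventions

effectiveness = [best / k**alpha for k in range(1, n + 1)]   # rank-ordered power-law fall-off
point_estimate = sum(effectiveness) / n
upper_bound = 2 * point_estimate   # crude upward error bar for a possibly underestimated intervention
print(round(point_estimate, 1), round(upper_bound, 1))
```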
Jørgen Ljønes🔸
One argument against the effectiveness of mega charities that do a bunch of different, unrelated interventions is that, from the Central Limit Theorem (https://en.m.wikipedia.org/wiki/Central_limit_theorem), the average effectiveness of a large sample of interventions is a priori more likely to be close to the population mean effectiveness - that is, the mean effectiveness of all relevant interventions. In other words, it's hard to be one of the very best if you are doing lots of different stuff. Even if some of the interventions you do are really effective, your average effectiveness will be dragged down by the other interventions.
I agree with these points. I've worked in the space for a few years (most notably for IOHK working on Cardano) and am happy to offer some advice. Saying that, I would much rather work on something directly valuable (climate change or food security) than earning to give at the moment...
I want to skill up in pandas/numpy/data science over the next few months. Where can I find a data science project that is relevant to EA? Some rough requirements:
Takes between 1 and 3 months of full-time work
Helps me (a pretty strong CS undergrad) become fluent in pandas quickly, and maybe use some machine learning techniques I've studied in class
About as open-ended as a research internship
Feels meaningful
Should be important enough that I enjoy doing it, but it's okay if it has e.g. 5% as much direct benefit as the highest-impact thing I could be doing
How do I offset my animal product consumption as easily as possible? The ideal product would be a basket of offsets that's
I know I could potentially have higher impact just betting on saving 10 million shrimp or whatever, but I have enough moral uncertainty that I would highly value this kind of offset package. My guess is there are lots of people for whom going vegan is not possible or desirable, who would be in the same boat.
Have you seen farmkind? Seems like they are trying to provide a donation product for precisely this problem.
Specifically this.
Suppose that the EA community were transported to the UK and US in 1776. How fast would slavery have been abolished? Recall that the slave trade ended in 1807 in the UK and 1808 in the US, and abolition happened between 1838-1843 in the British Empire and 1865 in the US.
Assumptions:
Note... (read more)
I suspect the net impact would be pretty low. Most of the really compelling consequentialist arguments like "if we don't agree to this there will be a massive civil war in future" and "an Industrial Revolution will leave everyone far richer anyway" are future knowledge that your thought experiment strips people of. It didn't take complex utility calculations to persuade people that slaves experienced welfare loss; it took deontological arguments versed [mainly] in religious belief to convince people that slaves were actually people whose needs deserved attention. And Jeremy Bentham was already there to offer utilitarian arguments, to the extent people were willing to listen to them.
And I suspect that whilst a poll of Oxford-educated utilitarian pragmatists with a futurist mindset transported back to 1776 would near-unanimously agree that slavery wasn't a good thing, they'd probably devote far more of their time and money to stuff they saw as more tractable like infectious diseases and crop yields, writing some neat Benthamite literature and maybe a bit of wondering whether Newcomen engines and canals made the apocalypse more likely.
I can't imagine the messy political compromise tha... (read more)
Worth noting that if there are like 10,000 EAs today in the world with a population of 8,000,000,000, the percentage of EAs globally is 0.000125 percent.
If we keep the same proportion and apply that to the world population in 1776, there would be about 1,000 EAs globally and about 3 EAs in the United States. If they were overrepresented in the United States by a factor of ten, there would be about 30.
EA forum content might be declining in quality. Here are some possible mechanisms:
- Newer EAs have worse takes on average, because the current processes of recruitment and outreach produce a worse distribution than the old ones
- Newer EAs are too junior to have good takes yet. It's just that the growth rate has increased so there's a higher proportion of them.
- People who have better thoughts get hired at EA orgs and are too busy to post. There is anticorrelation between the amount of time people have to post on EA Forum and the quality of person.
- Controversial content, rather than good content, gets the most engagement.
- Although we want more object-level discussion, everyone can weigh in on meta/community stuff, whereas they only know about their own cause areas. Therefore community content, especially shallow criticism, gets upvoted more. There could be a similar effect for posts by well-known EA figures.
- Contests like the criticism contest decrease average quality, because the type of person who would enter a contest to win money on average has worse takes than the type of person who has genuine deep criticism. There were 232 posts for the criticism contest, and 158 for the Cause Explora
... (read more)Another possible mechanism is forum leadership encouraging people to be less intimidated and write more off-the-cuff posts -- see e.g. this or this.
Side note: It seems like a small amount of prize money goes a long way.
E.g. Rethink Priorities makes their salaries public: they pay senior researchers $105,000 – $115,000 per year.
Their headcount near the end of 2021 was 24.75 full-time equivalents.
And their publications page lists 30 publications in 2021.
So napkin math suggests that the per-post cost of a contest post is something like 1% of the per-post cost of a RP publication. A typical RP publication is probably much higher quality. But maybe sometimes getting a lot of shallow explorations quickly is what's desired. (Disclaimer: I haven't been reading the forum much, didn't read many contest posts, and don't have an opinion about their quality. But I did notice the organizers of the ELK contest were "surprised by the number and quality of submissions".)
A related point re: quality is that smaller prize pools presumably select for people with lower opportunity costs. If I'm a talented professional who commands a high hourly rate, I might do the expected value math o... (read more)
Hey just want to weigh in here that you can't divide our FTE by our total publication count, since that doesn't include a large amount of work we've produced that is not able to be made public or is not yet public but will be. Right now I think a majority of our output is not public right now for one reason or another, though we're working on finding routes to make more of it public.
I do think your general point though that the per-post cost of a contest post is less / much less than an RP post is accurate though.
-Peter (Co-CEO of Rethink Priorities)
We also seem to get a fair number of posts that make basically the same point as an earlier article, but the author presumably either didn't read the earlier one or wanted to re-iterate it.
I'll add another mechanism:
I think there are many people who have very high bars for how good something should be to post on the forum. Thus the forum becomes dominated by a few people (often people who aren't aware of or care about forum norms) who have much lower bars to posting.
This is a plausible mechanism for explaining why content is of lower quality than one would otherwise expect, but it doesn't explain differences in quality over time (and specifically quality decline), unless you add extra assumptions such that the proportion of people with low bars to posting has increased recently. (Cf. Ryan's comment)
You're quite right, it was left too implicit.
EA has grown a lot recently, so I think there are more people recently who aren't aware of or care about the "high bar" norm. This is in part due to others explicitly saying the bar should be lower, which (as others here have noted) has a stronger effect on some than on others.
Edit: I don't have time to do this right now, but I would be interested to plot the proportion of posts on the EA forum from people who have been on the forum for less than a year over time. I suspect that it would be trending upwards (but could be wrong). This would be a way to empirically verify part of my claim.
I'm interested in learning how plausible people find each of these mechanisms, so I created a short (anonymous) survey. I'll release the results in a few days [ETA: see below]. Estimated completion time is ~90 seconds.
I broadly agree with 5 and 6.
Re 3, 'There is anticorrelation between the amount of time people have to post on EA Forum and the quality of person.' - this makes me wince. A language point is that I think talking about how 'good quality' people are overall is unkind and leads to people feeling bad about themselves for not having such-and-such an attribute. An object level point is I don't think there is an anticorrelation - I think being a busy EA org person does make it more likely that they'll have valuable takes, but not being a busy-EA-org-person doesn't make it less likely - there aren't that many busy-EA-org-person jobs, and some people aren't a good fit for busy jobs (eg because of their health or family commitments) but they still have interesting ideas.
Re 7: I'm literally working on a post with someone about how lots of people feel too intimidated to post on the Forum because of its perceived high standards! So I think though the Forum team are trying to make people feel welcome, it's not true that it's (yet) optimized for this, imo.
There's a kind of general problem whereby any messaging or mechanism that's designed to dissuade people from posting low-qual... (read more)
I think it's fairly clear which of these are the main factors, and which are not. Explanations (3-5) and (7) do not account for the recent decline, because they have always been true. Also, (6) is a weak explanation, because the quality wasn't substantially worse than an average post.
On the other hand, (1-2) +/- (8) fit perfectly with the fact that volume has increased over the last 18 months, over the same period as community-building has happened on a large scale. And I can't think of any major contributors outside of (1-8), so I think the main causes are simply community dilution + a flood of newbies.
Though the other factors could still partially explain why the level (as opposed to the trend) isn't better, and arguably the level is what we're ultimately interested in.
Could you name them? I'm not sure which ones are out there, other than LW and Alignment Forum for AI alignment research.
E.g. I'm not sure where else is a better place to post research on forecasting, research on EA community building, research on animal welfare, or new project proposals. There are private groups and slacks, but sometimes what you want is public or community engagement.
Not sure how to post these two thoughts so I might as well combine them.
In an ideal world, SBF should have been sentenced to thousands of years in prison. This is partially due to the enormous harm done to both FTX depositors and EA, but mainly for basic deterrence reasons; a risk-neutral person will not mind 25 years in prison if the ex ante upside was becoming a trillionaire.
However, I also think many lessons from SBF's personal statements e.g. his interview on 80k are still as valid as ever. Just off the top of my head:
Just because SBF stole billions of dollars does not mean he has fewer virtuous personalit... (read more)
Watch team backup: I think we should be incredibly careful about saying things like, "it is probably okay to work in an industry that is slightly bad for the world if you do lots of good by donating". I'm sure you mean something reasonable when you say this, similar to what's expressed here, but I still wanted to flag it.
I noticed this a while ago. I don't see large numbers of low-quality low-karma posts as a big problem though (except that it has some reputation cost for people finding the Forum for the first time). What really worries me is the fraction of high-karma posts that neither original, rigorous, or useful. I suggested some server-side fixes for this.
PS: #3 has always been true, unless you're claiming that more of their output is private these days.
Should the EA Forum team stop optimizing for engagement?
I heard that the EA forum team tries to optimize the forum for engagement (tests features to see if they improve engagement). There are positives to this, but on net it worries me. Taken to the extreme, this is a destructive practice, as it would
I'm not confident that EA Forum is getting worse, or that tracking engagement is currently net negative, but we should at least avoid failing this exercise in Goodhart's Law.
Thanks for this shortform! I'd like to quickly clarify a bit about our strategy. TL;DR: I don't think the Forum team optimizes for engagement.
We do track engagement, and engagement is important to us, since we think a lot of the ways in which the Forum has an impact are diffuse or hard to measure, and they'd roughly grow or diminish with engagement.
But we definitely don't optimize for it, and we're very aware of worries about Goodharting.
Besides engagement, we try to track estimates for a number of other things we care about (like how good the discussions have been, how many people have gotten jobs as a result of the Forum, etc), and we're actively working on doing that more carefully.
And for what it's worth, I think that none of our major projects in the near future (like developing subforums) are aimed at increasing engagement, and neither have been our recent projects (like promoting impactful jobs).
I'm worried about EA values being wrong because EAs are unrepresentative of humanity and reasoning from first principles is likely to go wrong somewhere. But naively deferring to "conventional" human values seems worse, for a variety of reasons:
However, these problems all seem surmountable with a lot of effort. The idea is a team of EA anthropologists who would look at existing knowledge about what different cultures value (possibly doing additional research) and work with ... (read more)
Thanks for writing this.
I also agree that research into how laypeople actually think about morality is probably a very important input into our moral thinking. I mentioned some reasons for this in this post for example. This project on descriptive population ethics also outlines the case for this kind of descriptive research. If we take moral uncertainty and epistemic modesty/outside-view thinking seriously, and if on the normative level we think respecting people's moral beliefs is valuable either intrinsicaially or instrumentally, then this sort of research seems entirely vital.
I also agree that incorporating this data into our considered moral judgements requires a stage of theoretical normative reflection, not merely "naively deferring" to whatever people in aggregate actually believe, and that we should probably go back and forth between these stages to bring our judgements into reflective equilibrium (or some such).
That said, it seems like what you are proposing is less a project and more an enormous research agenda spanning several fields of research, a lot of which is ongoing across multiple disciplines, though much of it is in its early stages. For example, there is much w...
I have recently been thinking about the exact same thing, down to getting anthropologists to look into it! My thoughts on this were that interviewing anthropologists who have done fieldwork in different places is probably the more functional version of the idea. I have tried reading fairly random ethnographies to build better intuitions in this area, but did not find it as helpful as I was hoping, since they rarely discuss moral worldviews in as much detail as needed.
My current moral views seem to be something close to "reflected" preference utilitarianism, but now that I think this is my view, I find it quite hard to figure out what this actually means in practice.
My impression is that most EAs don't have a very preference utilitarian view and prefer to advocate for their own moral views. You may want to look at my most recent post on my shortform on this topic.
If you would like to set up a call sometime to discuss further, please PM!
First, neat idea, and thanks for suggesting it!
From where I'm sitting, there are a whole bunch of potentially highly useful things that aren't being done. After several years around the EA community, I've gotten a better model of why that is:
1) There's a very limited set of EAs who are entrepreneurial, trusted by funders, and have the necessary specific skills and interests to do many specific things. (Which respected EAs want to take a 5 to 20 year bet on field anthropology?)
2) It often takes a fair amount of funder buy-in to do new projects. This can take several years to develop, especially for a research area that's new.
3) Outside of OpenPhil, funding is quite limited. It's pretty scary and risky to start something new and go for it. You might get funding from EA Funds this year, but who's to say whether you'll have to fire your staff in 3 years?
On doing anthropology, I personally think there might be lower hanging fruit first engaging with other written moral systems we haven't engaged with. I'd be curious to get an EA interpretation of parts of Continental Philosophy, Conservative Philosophy, and the philosophies and writings of many of the great international traditions. That said, doing more traditional anthropology could also be pretty interesting.
Terminology proposal: a class-n (or tier-n) megaproject reduces x-risk by between 10^-n and 10^-(n+1). This is intended as a short way to talk about the scale of longtermist megaprojects, inspired by 80k's scale-scale but a bit cleaner because people can actually remember how to use it.
Class-0 project: reduces x-risk by >10%, e.g. creating 1,000 new AI safety researchers as good as Paul Christiano
Class-1 project: reduces x-risk by 1-10%, e.g. reducing pandemic risk to zero
Class-2 project: reduces x-risk by 0.1-1%, e.g. the Anthropic interpretability team
Class-3 project: reduces x-risk by 0.01-0.1%, e.g. most of these, though some make it into class 2
The classes could also be non-integer for extra precision, so if I thought creating 1,000 Paul Christianos reduced x-risk by 20%, I could call it a -log10(20%) = class-0.70 megaproject.
I'm still not sure about some details, so leave a comment if you have opinions:
- "class" vs "tier"
- I originally thought of having the percentages be absolute, but perhaps one could also make the case for relative percentages.
- should class-n be between 10^-n and 10^-(n+1), or between 10^-(n-1) and 10^-n?
- Are we evaluating outcomes or projects? What should th...
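To make the -log10 conversion above concrete, here's a minimal sketch, assuming the absolute-percentage reading and the 10^-n to 10^-(n+1) convention:

```python
import math

def megaproject_class(xrisk_reduction: float) -> float:
    """Map an absolute x-risk reduction (e.g. 0.20 for 20%) to a (possibly non-integer) class number."""
    return -math.log10(xrisk_reduction)

print(megaproject_class(0.20))              # ~0.70 -> class-0.70 (falls in class 0: >10%)
print(megaproject_class(0.005))             # ~2.30 -> class-2.30 (falls in class 2: 0.1-1%)
print(math.floor(megaproject_class(0.005))) # 2, the integer class under the 10^-n to 10^-(n+1) convention
```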
I'm looking for AI safety projects with people who have some amount of experience. I have 3/4 of a CS degree from Caltech, one year at MIRI, and have finished the WMLB and ARENA bootcamps. I'm most excited about activation engineering, but willing to do anything that builds research and engineering skill.
If you've published 2 papers in top ML conferences or have a PhD in something CS related, and are interested in working with me, send me a DM.
Who tends to be clean?
With all the scandals in the last year or two, has anyone looked at which recruitment sources are least likely to produce someone who is extremely net negative, whether in direct impact or to the community (i.e. a justified scandal)? Maybe this should inform outreach efforts.
I might want to become a billionaire for roughly the reasons in this post [1] (tl;dr EV is tens of millions per year and might be the highest EV thing I can do), and crypto seems like one particularly promising way. I have other possible career paths, but my current plan is to
- accumulate a list of ~25 problems in crypto that could be worth $1B if solved
- hire a research assistant to look over the list for ~100 hours and compile basic stats like "how crowded is this" and estimate market size
- talk with experts about the most promising ones
- if one is particularly promising, do the standard startup things (hire smart contract and front-end devs, get funding somehow) and potentially drop out of school
Does this sound reasonable? Can you think of improvements to this plan? Are there people I should talk to?
[1]: https://forum.effectivealtruism.org/.../an-update-in...
How much equity does SBF actually have in FTX? Posts like this imply he has 90%, but the first article I found said that he actually had 90% equity in Alameda (which is owned by FTX or something?) and nothing I can find gives a percentage equity in FTX. Also, FTX keeps raising money, so even if he had 90% at one point, surely much of that has been sold.
It's common for people to make tradeoffs between their selfish and altruistic goals with a rule of thumb or pledge like "I want to donate X% of my income to EA causes" or "I spend X% of my time doing EA direct work" where X is whatever they're comfortable with. But among more dedicated EAs where X>>50, maybe a more useful mantra is "I want to produce at least Y% of the expected altruistic impact that I would if I totally optimized my life for impact". Some reasons why this might be good:
- Impact is ultimately what we care about, not sacrifice. The new framing shifts people out of a mindset of zero-sum tradeoffs between a selfish and altruistic part.
- In particular, this promotes ambition. Thoughts similar to this have helped me realize that by being more ambitious, I can double my impact without sacrificing much personal well-being. This is a much better thing to do than working 70 hours a week at an "EA job" because I think my commitment level is X=85% or something.
- It also helps me not stress about small things. My current diet is to avoid chicken, eat fewer eggs, and offset any eggs I eat. Some people around me are vegan, and some people think offsetting is antithetical to EA.
...
Epistemic status: showerthought
If I'm capable of running an AI safety reading group at my school, and I learn that someone else is doing it, I might be jealous that my impact is "being taken".
If I want to maximize total impact, I don't endorse this feeling. But what feeling does make sense from an impact maximization perspective? Based on Shapley values (see the toy sketch after the list below), you should
- update downwards on the impact they get (because they're replaceable)
- update downwards on the impact you get, if you thought this was your comparative advantage (because you're replaceable).
- want ...
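A toy version of that Shapley calculation, with a made-up two-person game just to illustrate the "update downwards" direction: if either of us could run the group and the group's impact is the same either way, each person's Shapley value is half the total.

```python
from itertools import permutations

# Toy coalition game with made-up numbers: either of two people can run the reading group,
# and the group produces 1 unit of impact as long as at least one of them does.
players = ["me", "them"]

def value(coalition):
    return 1.0 if coalition else 0.0  # impact exists iff someone runs the group

def shapley(player):
    """Average marginal contribution of `player` over all orderings of the players."""
    total = 0.0
    orderings = list(permutations(players))
    for order in orderings:
        before = set()
        for p in order:
            if p == player:
                total += value(before | {p}) - value(before)
                break
            before.add(p)
    return total / len(orderings)

print(shapley("me"), shapley("them"))  # 0.5 0.5 -- each of us gets credit for half the impact
```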
What's the right way to interact with people whose time is extremely valuable, equivalent to $10,000-$1M per hour of OpenPhil's last dollar? How afraid should we be of taking up their time? Some thoughts:
- Sometimes people conflate time cost with status, and the resulting shyness/fear can prevent you from meeting someone-- this seems unwarranted because introducing yourself only takes like 20 seconds.
- The time cost should nevertheless not be ignored; how prepared you are for a 1-hour meeting might be the largest source of variance in the impact you produce/de...
Is it possible to donate appreciated assets (e.g. stocks) to one of the EA Funds? The tax benefits would be substantially larger than donating cash.
I know that MIRI and GiveWell as well as some other EA-aligned nonprofits do support donating stocks. GiveWell even has a DAF with Vanguard Charitable. But I don't see such an option for the EA Funds.
edit: DAF = donor-advised fund
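For intuition on why the tax benefits are larger, here is a back-of-the-envelope sketch with made-up numbers and assumed US rates; it illustrates the general mechanism only and is not tax advice:

```python
# Back-of-the-envelope with made-up numbers and assumed rates; illustration only, not tax advice.
fmv = 10_000            # fair market value of the stock to donate
cost_basis = 4_000      # what it was originally bought for
cap_gains_rate = 0.15   # assumed long-term capital gains rate
income_tax_rate = 0.32  # assumed marginal income tax rate

# Route 1: sell the stock, pay capital gains tax, donate the cash, deduct the donation.
tax_on_sale = (fmv - cost_basis) * cap_gains_rate
cash_route_cost = fmv + tax_on_sale - income_tax_rate * fmv

# Route 2: donate the appreciated shares directly (if the fund accepts them); no gain is realized.
stock_route_cost = fmv - income_tax_rate * fmv

print(cash_route_cost, stock_route_cost)  # 7700.0 vs 6800.0 net cost to the donor
```

In both routes the charity ends up with the same $10,000 (assuming it can accept and sell the shares); the difference is entirely in the donor's net cost, and it grows with the size of the unrealized gain.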
I think there are currently too few infosec people and people trying to become billionaires.
- Infosec: this seems really helpful for AI safety and biosecurity in a lot of worlds, and I'm guessing it's just much less sexy / popular than technical research. Maybe I'm wrong about the number of people here, but from attendance at an EAGxSF event it didn't seem like we would be saturated.
- Entrepreneurship: I think the basic argument for making tens of billions of dollars still holds. Just because many longtermist orgs are well-funded now doesn't mean they will be ...
What percent of Solana is held by EAs? I've heard FTX holds some, but it's unknown how much. This is important because if I do a large crypto project on Solana, much of the value might come from increasing the value of the Solana ecosystem, and thus other EAs' investments.
A lot of EAs I know have had strong intuitions towards scope sensitivity, but I also remember having strong intuitions towards moral obligation, e.g. I remember being slightly angry at Michael Phelps' first retirement, thinking I would never do this and that top athletes should have a duty to maximize their excellence over their career. Curious how common this is.
Are there GiveWell-style estimates of the cost-effectiveness of the world's most popular charities (say UNICEF), preferably by independent sources and/or based on past results? I want to be able to talk to quantitatively-minded people and have more data than just saying some interventions are 1000x more effective.
Unfortunately most cost-effectiveness estimates are calculated by focusing on the specific intervention the charity implements, a method which is a poor fit for large diversified charities.
I agree with these points. I've worked in the space for a few years (most notably for IOHK working on Cardano) and am happy to offer some advice. That said, I would much rather work on something directly valuable (climate change or food security) than earn to give at the moment...
I want to skill up in pandas/numpy/data science over the next few months. Where can I find a data science project that is relevant to EA? Some rough requirements:
- Takes between 1 and 3 months of full-time work
- Helps me (a pretty strong CS undergrad) become fluent in pandas quickly, and maybe use some machine learning techniques I've studied in class
- About as open-ended as a research internship
- Feels meaningful
- Should be important enough that I enjoy doing it, but it's okay if it has e.g. 5% as much direct benefit as the highest-impact thing I could be doing
- I'm ...