
Holocron

42 karma · Joined Mar 2019

Posts: 1


Comments: 10

Thanks Aaron, I'm happy to see this is an actively enforced community norm. I must admit that, not knowing it was a norm, I did downvote comments in response, and then subsequently proposed a monitoring system (or even a system that warns against this as it's being done). I will undo the downvotes (or feel free to undo them on my behalf).

If you have feedback on how to make my comments better, please let me know!

Consider protections to prevent "pattern downvoting," that is, a single forum user downvoting every one of another user's comments and posts, particularly ones that have received few views.

Usually it's not the case that every comment made by a single person contains zero value and is detrimental to online discourse, yet some people seem inclined to downvote all of another user's contributions anyway. Rather than allowing one person to suppress another's views right after publication, it would be better to require votes from several other users before content is suppressed.

See this thread as an example.

I firmly believe that such "suppression," especially if done unilaterally by a single person, is exceptionally likely to be harmful. I strongly condemn such actions.

Furthermore, this is an ineffective strategy given that I can simply (and probably should) write up additional top-level posts that contain other informed views on EA.
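To make the monitoring idea above concrete, here is a rough sketch of the kind of check a forum could run. The threshold, time window, and vote-record format are assumptions for illustration, not the EA Forum's actual schema or algorithm.

```
from collections import Counter
from datetime import timedelta

# Sketch of a "pattern downvoting" monitor. The threshold, window, and
# vote-record shape below are illustrative assumptions only.

PATTERN_THRESHOLD = 5          # flag if one voter downvotes 5+ items by the same author...
WINDOW = timedelta(hours=24)   # ...within a 24-hour window

def flag_pattern_downvoters(votes, now):
    """votes: iterable of dicts with keys 'voter', 'author', 'direction', 'time'."""
    recent_downvotes = Counter(
        (v["voter"], v["author"])
        for v in votes
        if v["direction"] == "down" and now - v["time"] <= WINDOW
    )
    return [pair for pair, count in recent_downvotes.items()
            if count >= PATTERN_THRESHOLD]

# A flagged (voter, author) pair could trigger a warning to the voter, or route
# the affected content to other users for review instead of hiding it outright.
```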

Hi Charles,

I am very sorry you feel that way. I hope that by starting off my comment with "My apologies for the extended delay in response! I appreciate the engagement" I'm not indicating I'm "toxic" and "borderline abusive."

It's very concerning to see continued inaction on these matters, since I care about the future of this movement and the world. So when bringing up long-unaddressed problems (which you seem to implicitly recognize as somewhat valid), I don't think it's unreasonable to take a more critical tone. I'm fairly confident I could find plenty of content written in a much more severe manner on this forum, and most certainly on LessWrong.

How exactly am I "raiding EA norms?" In my communication style and terminology? That doesn't seem like a problem to me even if it were the case.

I literally just don't want Aaron G. or someone else to spend time on this (Aaron's comment was lavish and I think other eyeballs and thoughts will spend time on this).

You wish for people to ignore legitimate feedback, and to suppress it with downvoting? That doesn't sound like a way for a movement to improve. I do appreciate Aaron's engagement. While I think he may have misunderstood certain points I was trying to make, his information was nonetheless genuinely helpful, as indicated by the other commenter on this thread.

While this may not get much engagement, as you seem to recognize, there are very legitimate issues here, and a complete lack of action on multiple fronts. My views certainly mirror others in the community; recent EA Forum posts have alluded to similar ideas.

I think that pointing out legitimate areas of potential improvement should be valued in communities, and it should be acceptable to take a somewhat critical tone as long as the intent is not to cause emotional harm.

Unfortunately I don't have an unlimited amount of time every day to refine my tone and write detailed writeups on the lack of progress happening on several key fronts, given my low confidence that this is sufficient to induce change, as evidenced by all of the "existing criticism" that you are talking about.

My apologies for the extended delay in response! I appreciate the engagement.

I recommend this below, but I'll also say it here: If you have questions or uncertainties about something in EA (for example, how EA funders model the potential impact of donations), try asking questions!

Contrary to your assumption, I have a lot of information on EA, and I'm aware that the problems I'm pointing out aren't being addressed. There is likely a gap in understanding here of the kind that is common in written communication.

This communication gap would be less of a problem if the broader issue "EA lacks a system to suggest, discuss, and evaluate improvements to EA community strategy and recommendations issued to the community" was addressed. As a specific example, where are the exact criticisms of longtermism or the precise strategies and tactics of specific EA organizations laid out? There should be a publicly available argument map for this, as a rudimentary example of what such a proposed system should look like. There's a severe gap in coordination and collective intelligence software in EA.

It's easy to cherry-pick from among the world's tens of thousands of charities and find a few that seem to have better models than GiveWell's recommendations. The relevant questions are:

That was just an example of a charity that would have failed to get funded due to a combination of factors: poor assumptions about the cause area or intervention type (that it is ineffective, not in 100% of instances but presumably on average), a failure to consider revenue-positive models, and so on.

EA misses out on entire classes of strategies that are likely pretty good, revenue-positive models for one. That's not to say there isn't a single EA working on them, but there's a marked lack of interest and support, which is just as bad, since the community's mission is to advance high-impact efforts.

Could we have predicted Samasource's success ahead of time and helped it scale faster? If so, how? Overall, job/skills-training programs haven't had much success, and since only GiveWell was doing much charity research when Samasource was young (2008), it's understandable that they'd focus on areas that were more promising overall.

That's assuming 100% of jobs/skills-training programs are doomed to failure... kind of like assuming 100% of charities are doomed to be ineffective. But if we used that logic, EA wouldn't exist. Doing this analysis at the level of whole causes and intervention types could be fundamentally problematic.
 

Could someone in EA found a program as successful as Samasource? If so, how? A strategy of "take the best thing you can find and copy it" doesn't obviously seem stronger than "take an area that seems promising and try to found an unusually good charity within that area", which people in EA are already doing.

Yes, EAs could certainly do such a thing, which would be easier if entrepreneurship were more encouraged. With the prevailing philosophy that only a few career areas are promising at any given time (and they do shift with time, very annoyingly; there should be much less confidence about this), it is hard for people to even pursue this strategy, let alone get funding.

There's a lack of evaluation resources available for assessing new models that don't align with what's already been identified, which is a huge problem.

Also, have you heard of Wave? It's a for-profit startup co-founded by a member of the EA community, and it has at least a few EA-aligned staffers. They provide cheap remittances to help poor people lift their families out of poverty faster, and as far as I know, they haven't had to take any donations to do so. That's the closest thing to an EA Samasource I can think of.

The existence of something within EA (as a minority pursuit, for example) does not mean that it is adequately represented.

(If you have ideas for other self-sustaining projects you think could be very impactful, please post about them on the Forum!)

Using a Forum is a terrible mechanism for collective intelligence and evaluation (not to say this issue is unique to EA).

Under these circumstances, projects like "founding the next Samasource" seem a lot less safe, and it's hard to fault early adopters for choosing "save a couple of lives every year, reliably, while holding down a steady job and building career capital for future moves".

The method by which strategy shifts percolate is rather problematic. A few people at 80K change their minds and the entire community shifts within a few years, causing many people to lose career capital, effort spent specializing in certain fields, etc. This is likely to continue in the future. Even ignoring the disruption those shifts cause, the career strategies currently being advocated are likely not optimal at all and will shift again. The fix is both to reduce the cultural reliance on such top-down guidance and to completely rethink the mechanism by which career strategy changes are made.

There are quite a few people in EA who work full-time on donor relations and donor advisory. As a result of this work, I know of at least three billionaires who have made substantial contributions to EA projects, and there are probably more that I don't know of (not to mention many more donors at lower but still-stratospheric levels of wealth).

Again, the presence of some individuals and teams working on this does not mean it's the optimal allocation.

Also, earning to give has outcomes beyond "money goes to EA charities". People working at high-paid jobs in prestigious companies can get promoted to executive-level positions, influence corporate giving, influence colleagues, etc.

Those direct and indirect consequences should all be factored into a quantitative model of impact for specific careers. 80K doesn't do this at all; they merely have a few bubbles for the "estimated earnings" of a job, with no public-facing holistic comparison methodology. That's surprising given how quantitative the community is.
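As a purely illustrative sketch of the kind of public-facing, holistic comparison I have in mind (every number below is invented, and this is not 80K's methodology):

```
# Illustrative career-impact model combining direct donations with indirect
# influence effects. All figures are made up; the point is the structure.

def career_ev(annual_donations, career_years, p_exec_influence, exec_giving_moved,
              p_colleague_influenced, colleagues_reached, value_per_colleague):
    """Expected value of a career path, in donation-equivalent dollars."""
    direct = annual_donations * career_years
    indirect_exec = p_exec_influence * exec_giving_moved
    indirect_peers = p_colleague_influenced * colleagues_reached * value_per_colleague
    return direct + indirect_exec + indirect_peers

earning_to_give = career_ev(
    annual_donations=50_000, career_years=30,
    p_exec_influence=0.05, exec_giving_moved=5_000_000,
    p_colleague_influenced=0.02, colleagues_reached=200, value_per_colleague=100_000)

direct_work = career_ev(
    annual_donations=5_000, career_years=30,
    p_exec_influence=0.0, exec_giving_moved=0,
    p_colleague_influenced=0.10, colleagues_reached=50, value_per_colleague=100_000)

print(f"Earning to give: ${earning_to_give:,.0f}")
print(f"Direct work:     ${direct_work:,.0f}")
```

Even a toy model like this makes the assumptions explicit and comparable, which a few "estimated earnings" bubbles do not.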
 

For example, employees of Google Boston organize a GiveWell fundraiser that brings in hundreds of thousands of dollars each year on top of their normal jobs (I'd guess this requires a few hundred hours of work at most).

Saying earning to give has benefits does not mean it's the best course of action...

Another example: in his first week on the job, the person who co-founded EA Epic with me walked up to the CEO after her standard speech to new employees and handed her a copy of a Peter Singer book. The next Monday, he got a friendly email from the head of Epic's corporate giving team, who told him the CEO had enjoyed the book and asked her to get in touch. While his meeting with the corporate giving head didn't lead to any concrete results, the CEO was beginning to work on her foundation this year, and it's possible that some of her donations may eventually be EA-aligned. Things like that won't happen unless people in EA put themselves in a position to talk to rich/powerful people, and not all of those people use philanthropic advisory firms.

Yep, I mentioned influence in my post, but the question is whether this is the optimal way to achieve it (alongside all of the other things individual EAs could be doing).
 

This isn't to say that we couldn't have had a greater focus on reaching high-net-worth advisory offices earlier on in the movement, but it didn't take EA very long to move in that direction.

Again, this is the allocation objection. The fact that these keep springing up is rather remarkable. Are we far from diminishing returns, or past them already? This should all be part of movement-strategy considerations handled by a dedicated team, ideally with dedicated resources combined with collective intelligence software.

It's also worth mentioning that 80K does list philanthropic advising as one of their priority paths. My guess is that there aren't many jobs in that area, and that existing jobs may require luck/connections to get, but I'd love to be proven wrong, because I've thought for a long time that this is a promising area. (I myself advise a small family foundation on their giving, and it's been a rewarding experience.)

Evidently jobs and the organizations that create them can be created if the movement so chooses.

There is some EA research on the psychology of giving (the researchers I know of here are Stefan Schubert and Lucius Caviola), but this is an area I think we could scale if anyone were interested in the subject -- maybe this is a genuine gap in EA?

I'd be interested to see you follow up on this specific topic.

Analysis of the various gaps and their merits could be done, as a crude example, with some sort of voting mechanism (not quite a prediction market) over candidate gaps.

Which activities? If you point out an opportunity and make a compelling case for it, there's a good chance that you'll attract funding and interested people; this has happened many times already in the brief history of EA. But so far, EA projects that tried to scale quickly with help from people who weren't closely aligned generally haven't done well (as far as I know; I may be forgetting or not know of more successful projects).

I do not think this is true, given observations as well as people generally leaning away from novel ideas, smaller projects, etc.

This is true, but considering that the movement literally started from scratch ten years ago, and is built around some of the least marketable ideas in the world (don't yield to emotion! Give away your money! Read long articles!), it has gained strength at an incredible pace.

I have concerns about movement growth, but anyway, the point was what it could have been, not what it is.

(As I noted above, though, I think you're right that we could have paid more attention to certain ideas early on.)

This is a problem that continues to this day: a complete lack of investment in exploratory research (Charity Entrepreneurship has recommended this as an area), and promising new ideas and causes popping up all the time that go ignored and unfunded due to many factors, including a lack of interest in anything that isn't being promoted by key influencers.

A lack of specificity (a problem is noted, but no solution is proposed, or a solution is proposed with very little detail / no modeling of any kind)

The fix is to have a centralized system storing both potential problems and solutions, in a manner anyone can contribute to or vote on. Maybe in 5–10 years the EA Forum will look like this, or maybe not, given the slow (but steady) pace of useful feature development.
 

A lack of knowledge of the full scope of the present-day movement (it's easy to reduce EA to consisting of GiveWell, Open Phil, 80K, and CEA, but there's a lot more going on than that; I often see people propose ideas that are already being implemented)

This itself is the problem, and years later there is still no effort to fix collective knowledge and coordination issues.


"Someone should do X" syndrome (an idea is proposed which could go very well, but then no one ever follows up with a more detailed proposal or a grant application). In theory, EA orgs could pick up these ideas and fund people to work on them, but if your idea doesn't fit the focus of any particular organization, some individual will have to pick it up and run with it.

Answer: pay people. But then there's no interest in doing so. Most funding decisions are made by a very small number of evaluators who are biased in favor of certain theories of change, causes, larger organizations, existing organizations, interventions, etc. This is the consequence: lots of people agreeing that something should be done, and no mechanism for that knowledge to become a funding and resourcing decision.


2. The apt comparison is not "funding Samasource vs. funding GiveDirectly". The apt comparison is "funding the average early-stage Samasource-like thing vs. funding GiveDirectly". Most of the money put into Samasource-like things probably won't have nearly as much impact as money given directly to poor people. We might hit on some kind of fantastically successful program and get great returns, but that isn't guaranteed or even necessarily likely.

For all we know, revenue-generating poverty alleviation models are vastly better... there is no existing work to date exploring this idea, which is one of many thousands of strategic possibilities.

While Charity Entrepreneurship is doing this at a problem/solution level, that isn't sufficient in the slightest; huge assumptions are getting made. Sufficient resources exist for this to be done more robustly, but there's no appetite to fund it.
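A back-of-the-envelope framing of that comparison, with invented numbers, might look like this:

```
# Invented numbers: hits-based portfolio of early-stage, revenue-positive
# projects vs. a GiveDirectly-style baseline, per $1M granted.

grant_size = 1_000_000          # dollars per early-stage project
p_big_success = 0.03            # chance a project becomes "the next Samasource"
value_if_success = 150_000_000  # donation-equivalent value of a big success
value_if_failure = 200_000      # residual value of a failed project

ev_early_stage = p_big_success * value_if_success + (1 - p_big_success) * value_if_failure

# Baseline: treat $1M to GiveDirectly as roughly $1M of transfer value
# (ignoring any multiplier, for simplicity).
ev_givedirectly = grant_size

print(f"EV per $1M early-stage grant: ${ev_early_stage:,.0f}")
print(f"EV per $1M to GiveDirectly:   ${ev_givedirectly:,.0f}")
```

Whether the early-stage option wins depends entirely on the probability and value assumptions, which is exactly the modeling I'm saying no one is resourced to do properly.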

We will definitely find that some organizations have been more effective than top EA charities, but as I've said already, this cherry-picking won't help us unless we learn general lessons that help us make future funding decisions. Open Phil does some of this already with their History of Philanthropy work.

No, cherry-picking wasn't the point at all. This is a way to identify potential strategic opportunities being missed (replicable models that could succeed), to backtest evaluation criteria (e.g., was it truly impossible to identify that this would have worked?), etc.

(Also, EA funding goes well beyond "top charities" at this point: GiveWell's research is expanding to cover a lot more ground, and the latest grant recommendations from the Long-Term Future Fund included a lot of experimental research and ideas.)

Again, the fact that something is happening doesn't mean we've reached the optimal level of exploration. We're probably investing thousands of times fewer resources than we should.

Did you write to any funding entities before writing this post to ask about their models?

No comment, but regardless, those models are very biased by certain evaluators' opinions, with lots of groupthink.

Generally, these organizations are happy to share at least the basics of their approach, and I think this post would have benefited from having concrete models to comment on (rather than guesses about how Fermi estimates and decision analysis might compare to whatever funders are doing).

Hearing what they say they're doing is pretty useless; it does not map to what's actually happening.

No EA organization in the world will try to stop you from "strategically thinking about career impact". 80K's process explicitly calls on individuals to consider their options carefully, with a lot of self-reflection, before making big decisions. I'm not sure what you think is missing from the "standard" EA career decision process (if such a thing even exists).

People just listen to whatever 80K, Holden, etc. tell them, rather than reaching career decisions independently. I don't care what 80K tells you to do; they tell people to do a very large number of things. We need to look at what's actually happening.

The higher-EV option in this scenario is Career B, and it isn't close.

You're missing the point; you're reasoning about averages. The average officer likely doesn't save many counterfactual lives. What matters is the EV of your specific strategy, not of the overall career. With Petrov, this wasn't predicted, but it could have been with that specific officer strategy. Whether the officer in question did that calculation in advance is irrelevant (he probably didn't think about it at all); the fact is that it could have been foreseeable.
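To illustrate the strategy-level calculation I mean, with made-up numbers rather than any claim about the actual probabilities:

```
# Made-up numbers: EV of a rare, high-stakes "Petrov-like" strategy within a
# career, compared with a reliably safe alternative.

p_critical_decision_arises = 1e-4      # chance you ever face the decision
p_your_judgment_changes_outcome = 0.1  # chance you act differently than a replacement would
lives_at_stake = 100_000_000           # lives affected if escalation proceeds

ev_rare_event_strategy = (p_critical_decision_arises
                          * p_your_judgment_changes_outcome
                          * lives_at_stake)

ev_safe_career = 2 * 40  # reliably save ~2 lives/year for 40 years

print(f"Rare-event strategy: {ev_rare_event_strategy:,.0f} expected lives")
print(f"Safe career:         {ev_safe_career:,.0f} expected lives")
```

The point is not these particular figures but that the comparison has to be made at the level of the specific strategy, not the average outcome of the career.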

I agree! This is one of the reasons I'm enthusiastic about earning-to-give: if people in EA enter a variety of influential/wealthy fields and keep their wits about them, they may notice opportunities to create change. On the other hand, studying these professions and trying to change them from the outside seems less promising.

Lots of EAs actually have these ideas, and they're not listed anywhere. Right, outside analysis should be used to foresee these opportunities, not to change fields from the outside. But no effort is being spent on doing this outside analysis for career areas considered "low impact." There's no awareness of, or attempt to evaluate, things like dental interventions that are vastly more effective than existing interventions, for example. There are so many options, and they could all benefit from EV calculations.

It would require a big shift for EA to start doing these EV calculations for many areas. Again, a lot of work, but the people and money are here.

There needs to be this paradigm shift, followed by resource allocation, though it's probably never going to happen.

Remember also that problems must be tractable as well as large-scale. Taking your example of "accounting", one could save Americans tens of millions of hours per year by fighting for tax simplification. But in the process, you'd need to:

Tractability calculations should be done case by case, not generalized. And that's just one of many possibilities within accounting, none of which has any awareness or EV calculation attached. But yeah, that one might be hard.

Disregarding academic evidence on a cause level (x-risk) is not the same as doing so on an intervention level.

There most certainly is a substantial gap in evaluations of programs, particularly novel ideas, which has gone unaddressed throughout the entire existence of the EA community. Those efforts are crippled by the dominant view that only a few careers and courses of action are appropriate (see how 80K thinks about careers, for example), which very much reduces the interest in evaluating other programs.

Along the lines of "EA lacks a system to suggest, discuss, and evaluate improvements to EA community strategy and recommendations issued to the community," this is exemplary of the lack of community-level coordination and knowledge management in EA. Efforts like the EA Wiki, which received no substantial support from the community whatsoever, have failed. The entire area has minimal interest and attention allocated to it. The EA Forum lacks some extremely obvious features: a registry of everyone in EA, for starters (edit: which was relegated to the volunteer-run and poorly resourced EA Hub for years).

better to critic specific points rather than something broad like ‘all strategy of EA affiliated orgs’.

I'm mentioning broad concerns I have about the movement's strategy, primarily a potential underemphasis on acquiring resources and an overemphasis on established courses of impact. How exactly would I critique specific points? I do mention potential examples of problems and associated optimizations, such as relying more on decision analysis than on RCTs.

generally, if it seems like a large number of really smart people in EA appear to be missing something, you should have a strong prior that you are the one missing something. Took me a long time to accept this. It’s not wrong to shine a light on things of course, but a touch more humility in your writing would go a long way.

I don't claim to be correct, just wanted to document my thoughts and see if anyone had other views.

reasoning and evidence aren’t exclusive things, evidence is part of reasoning.

I separated the two for rhetorical effect, using "evidence" to refer more to established routes of impact and "reasoning" to refer to reasoning about unproven routes of impact. I agree evidence and reasoning are linked, and that reasoning should use both academic evidence and other factual data.

this said, I don’t think the criticism of “too evidence based” sticks anyway, have you read much academic ea research recently? Maybe in poverty.. but that’s a very busy area with loads of evidence where most approaches don’t work so it would be pretty crazy not to put heavy weight on evidence in that cause area.

Why exactly do you not think this sticks? My point is that there may be research on, say, the effect of ads on animal protein consumption, but there are many courses of action without supporting evidence that may be much higher impact than courses of action with supporting evidence. For instance, starting Impossible Foods to create good-tasting alternatives. Why is that not considered EA? It seems pretty high impact to me.

Jude’s spends 2.1m a day but given the differences between the impact p dollar of projects easily gets into the order of 100s-1000s this isn’t very relevant.

I completely agree that EA may spend money significantly more effectively than St. Jude's. My main point is that the movement could be influence-constrained: it may lack the influence to actually affect the long-term future or make a significant dent in global poverty, but a change in strategy (perhaps toward directly or indirectly acquiring more resources) may increase the likelihood of creating significant impact.

OpenPhil could spend that. There are complex reasons why it doesn’t but the main thing to note that total spend is a terrible terrible signal.

It cannot spend that, because it would run out of money. St. Jude's has a revenue stream from its fundraising branch that enables it to continually spend much more than the EA movement has in its entire lifetime. I understand OPP is, among other reasons, waiting for more epistemic certainty on which causes/interventions are most impactful. That may be great, but distributing 0.5% of $100 billion a year could be much better than distributing 0.5% of $10 billion a year, particularly given the urgency of some cause areas and the theoretically compounding returns of altruism.
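A toy calculation of what that endowment difference means for annual grants (the payout rate and figures are illustrative only; the $2.1M/day figure is the St. Jude number quoted above):

```
# Toy arithmetic: annual grants at a fixed payout rate under two endowment sizes.
payout_rate = 0.005  # 0.5% per year

for endowment in (10e9, 100e9):
    print(f"Endowment ${endowment / 1e9:.0f}B -> ${endowment * payout_rate / 1e6:.0f}M granted per year")

# For comparison, spending $2.1M/day works out to roughly:
print(f"St. Jude-style spending: ${2.1e6 * 365 / 1e6:.0f}M per year")
```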

for profit models have been explored numerous times, while still promising, little really great stuff has been found. People are working on it but it’s not a slam dunk.

Is that true? This seems like an opinion—there are certainly many financially self-sustaining/for-profit models that have enormous positive impacts on the world. I mentioned Impossible Foods earlier, and within companies, the impact of projects like Apple introducing blue light reduction in iPhones affects hundreds of millions of people.

earning to give is a great way to build career capital and do good.

Is it? This is an opinion. What if it's exceptionally low impact compared to other possible career courses of action? Or what if it is a good idea, but more emphasis should be placed on career strategy in addition to donating money because both have expected impacts in the same range?

advocacy and philanthropic advisory is really hard. People in that area are going as fast as they sensibly can.

I'm not necessarily suggesting the EA movement actually focus on acquiring more HNW individuals or actually pursue these tactics. These were example possibilities to consider, meant to emphasize the point that movement strategy can have big effects on movement impact, and that EA may not currently be pursuing the optimal strategy.

Also, I think this objection is rather broad. Lots of things can be considered really hard, and something seeming hard doesn't mean it's lower EV than something seeming easy.

it takes a long time to become a chief of staff at a powerful org

I think there are easier ways to come into contact with ultra-high-net-worth individuals. Again, just an idea, not a recommendation.

policy / lobbying approaches are really hard, and people are again working on it as fast as they can.

Allocating more resources to these approaches would have some sort of impact, whether positive or negative. How do we know our current allocation is optimal?

This reasoning makes sense to me. I think it's difficult to measure the net impact of the global advertising industry, but that might not be relevant. Thinking counterfactually, if we assume you are purely executing a plan that others at Google created with programming skills that Google could hire other engineers to replace, the marginal impact of doing software engineering for Google Ads is essentially zero. I would be more concerned about the impact of your work if you were making high level business strategy or product decisions that could affect millions of people or the state of the ad industry and Google's role in it.

One interesting consideration is that while digital advertising might be net positive, it is net negative compared to other advertising models that could otherwise exist. For example, a hypothetical "ethical ads" business that recommends products and services that actually improve people's lives would be both profitable for advertisers and beneficial to society. The current advertising model involves things like advertising e-cigarettes to smokers and teenagers alike; switching could be extremely positive for smokers (extending their lifespan) but negative for teenagers. I would personally be interested in the expected value of pursuing an ethical advertising venture.

I am personally not a fan of the strong upvote and strong downvote system. I think problems with that system may be coming into play here. I'm not sure how the algorithm actually works, but it seems like a small number of voters can dramatically reduce the total vote count of a comment or post, and that scenario reflects that minority's opinion much more than it may reflect overall perceptions. Highly penalizing posts that are generally perceived as fine by many but perceived as problematic by a few is a serious concern to take into account.

I liked the old system better where votes were weighted equally, and the proportion of positive and negative votes was transparently disclosed to everyone. Anyone who disagrees strongly with a position can simply write a comment, and if that comment is more upvoted than the original post, that typically reflects the strength of the opposing argument. Strong downvotes might reduce the incentive to have informed discussion in favor of blind disagreement.
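A toy illustration of the concern; the vote weights below are assumptions for the sake of the example, not the Forum's actual algorithm:

```
# Toy comparison of equal-weight voting vs. karma-weighted strong votes.
# The weights are assumptions; I don't know the Forum's real formula.

votes = (["up"] * 12) + (["strong_down"] * 3)  # 12 mild approvals, 3 strong objections

# Old-style equal weighting: every vote counts +1 or -1.
equal_score = sum(1 if v == "up" else -1 for v in votes)

# Assumed strong-vote weighting: a strong downvote from a high-karma user counts -8.
weights = {"up": 1, "strong_down": -8}
weighted_score = sum(weights[v] for v in votes)

print(f"Equal-weight score: {equal_score:+d}   (12 for, 3 against)")
print(f"Strong-vote score:  {weighted_score:+d}   (same votes, weighted)")
```

Under assumptions like these, a post most readers mildly approve of can end up with a sharply negative score because of a handful of strong downvotes, which is the scenario I'm worried about.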
