All of tyleralterman's Comments + Replies

This dialogue with my optimizer-self, showing it evidence that it was undermining its own values by ignoring my other values, was very helpful for me too.

4
Brad West
2y
I am a moral realist believing agents should act to create the greatest net well-being (utility).

Just want to say that even though this is my endorsed position, it often goes out the window when I encounter tangible cases of extreme suffering. Here in Berlin, there was a woman I saw on the subway the other day who walked around dazedly with an open wound, seemingly not in touch with her surroundings, walking barefoot, and wearing an expression that looked like utter hopelessness. I don't speak German, so I wasn't able to interact with her well.

When I run into a case like this, the "preliminary answer" I wrote above is hard to keep in mind, especially when I think of the millions who might be suffering in similar ways, yet invisibly.

Hi Braxton – I feel moved by what you wrote here and want to respond later in full.

For now, I just want to thank you for your phrase here: "How could I possibly waste my time on leisure activities once I’ve seen the dark world?" I think this might deserve its own essay. I think it poses a challenge even for anti-realists who aren't confused about their values, in the way Spencer Greenberg talks about here (worth a read).

Through the analogy I use in my essay, it might be nonsensical, in the conditions of a functional society, to say something like "It's mor... (read more)

4
tyleralterman
2y
Just want to say that even though this is my endorsed position, it often goes out the window when I encounter tangible cases of extreme suffering. Here in Berlin, there was a woman I saw on the subway the other day who walked around dazedly with an open wound, seemingly not in touch with her surroundings, walking barefoot, and wearing an expression that looked like utter hopelessness. I don't speak German, so I wasn't able to interact with her well. When I run into a case like this, the "preliminary answer" I wrote above is hard to keep in mind, especially when I think of the millions who might be suffering in similar ways, yet invisibly.

Agree so much with the antidote of silliness! I’m happy to see that EA Twitter is embracing it.

Excited to read the links you shared, they sound very relevant.

Thank you, Oliver. May your fire burn into the distance.

Thank you, David! I also worry about this:

When we model to the rest of the world that "Effective" "Altruism" looks like brilliant (and especially young) people burning themselves out in despair, we are also creating a second-order effect where we broadcast the image of Altruism as one tiled with suffering for the greater good. This is not quite inviting or promising for long-term engagement on the world's most pressing problems.

Of course, if one believes that AGI is coming within a few years, one might not care about these potential second-order effects.... (read more)

Ah, got it. My current theory is that maximizing caused suffering (stress), which caused gut motility problems, which caused bacterial overgrowth, which caused suffering, or some other crazy feedback phenomenon like that.

Sometimes positive feedback loops are anything but. 😓

Moreover: 

It's not obvious to me that severe sacrifice and tradeoffs are necessary. I think their apparent necessity might be a byproduct of our lack of cultural infrastructure for minimizing tradeoffs. That's why I wrote this analogy:

To say that [my other ends] were lesser seemed to say, “It is more vital and urgent to eat well than to drink or sleep well.” No – I will eat, sleep, and drink well to feel alive; so too will I love and dance as well as help.

Once, the material requirements of life were in competition: If we spent time building shelter i

... (read more)

If you believe, as I do, that there aren't sturdy philosophical grounds for de-valuing my other ends, then life becomes a puzzle of how to fulfill all of your ends, including – for me – both my EA ends and making art for its own sake (i.e., not primarily for the sake of instrumentally useful psychological health so I can do the greater good).

Hi Brad, I appreciate this reply. I wonder if we might have a fundamental disagreement!

I personally don't regard my non-EA ends as "beastly" – or, if I do, my valuing of EA ends is just as beastly as my valuing of other ends. I can adopt a moral or cultural framework that disagrees with my pre-existing "value function," and what it deems valuable. But something about this is a bit awkward: Wasn't it my pre-existing value function that deemed EA ends to be valuable?

2
Brad West
2y
Yeah, we probably do have a fundamental disagreement. I think you were essentially correct when you were in the dark night. The weight you put on your own conscious experiences should not exceed the weight you put on those of other beings throughout space and time. Thus, the wonders and joys of your own conscious experience have intrinsic value, but it is not clear that satisfaction of these joys is the most effective use of the resources you have as an agent, in an obvious sense (i.e., it seems like you can enable greater net experiences by privileging other entities).

I think there is a nonobvious reason to (seemingly) privilege yourself sometimes as an Effective Altruist, in that concessions to your own psychological desires can facilitate your most effective operation and minimize the likelihood that you will abandon or weaken your commitment to maximize well-being. This is what I mean by feeding the beast.

Your seeming reconciliation is value pluralism, which appears, in this case, to simply mean placing the value of some of your own conscious experiences in a superpriority above the conscious experiences of others. I would think your framing, an elevation of your own conscious experience, makes less sense than mine. Other beings' conscious experiences are no less important than my own. I would make concessions which seemingly prioritize me, but ultimately, if I am acting morally, this preference is only illusory.
5
tyleralterman
2y
Moreover: It's not obvious to me that severe sacrifice and tradeoffs are necessary. I think their apparent necessity might be a byproduct of our lack of cultural infrastructure for minimizing tradeoffs. That's why I wrote this analogy:

I believe it's possible to find and build synergies that reduce tradeoffs. For instance, as a lone ancient human in the wilderness, time spent building shelter might jeopardize daylight that could have been spent foraging for food. However, if you joined a well-functioning tribe, you're no longer forced to choose between [shelter-building] and [foraging]. If you forage, the food you find will power the muscles of your tribesmate to build shelter. Similarly, your tribesmate's shelter will give you the good night's rest you need to go out and forage. Unless there's a pressing emergency, it would be a mistake for the tribe to allocate everyone only to foraging or only to shelter-building.

I think we're in a similar place with our EA ends. They seem like they demand the sacrifice of our other ends. But I think that's just because we haven't set up the right cultural infrastructure to create synergies and minimize tradeoffs. In the essay, I suggest one example piece of infrastructure that might help with this: a fractal altruist community. But I'm excited to see what other people come up with. Maybe you'll be one of them.
2
tyleralterman
2y
If you believe, as I do, that there aren't sturdy philosophical grounds for de-valuing my other ends, then life becomes a puzzle of how to fulfill all of your ends, including – for me – both my EA ends and making art for its own sake (i.e., not primarily for the sake of instrumentally useful psychological health so I can do the greater good).

I'm not sure of the right way to address this. My burnout and disabling depression predated my gut condition by something like 9 months. My doctors have told me that inciting events for gut symptoms like mine very often include a period of severe stress/burnout. Given that I was quite fit and healthy, without any history of chronic illness, it seems likely the causality of what you're suggesting was reversed. However, it's true that once my gut symptoms started, this made my subsequent suffering much worse. =(

3
JakubK
2y
That makes sense. Btw, I was suggesting that maximizing caused suffering and gut problems caused suffering, not that maximizing couldn't lead to gut problems.

Has anything else been written on this topic?

I'm curious to hear more about how critiques have been processed historically by the EA movement. Shortform post here: https://forum.effectivealtruism.org/posts/boYH7XH4xE9iugxWi/tyleralterman-s-shortform?commentId=RJYzym2mwrnXP9amn

Apropos of the "Criticism and Red Teaming Contest," I am curious about how critiques have historically shaped EA:

a. What critiques have resulted in large and tangible changes in the movement?

b. What were the means by which these critiques were "metabolized"? E.g., did they require a prestigious champion? Was there a widely shared article that changed people's minds? Etc.

5
Gavin
2y
I have no proof it mattered, but a few years before the big pivot to longtermism, 80k debated some leftists who emphasised the sheer scope of systemic change and measurability bias. And we moved.

Hmmm I think it’s actually really hard to critique EA in a way that EAs will find convincing. I wrote about this below. Curious for feedback: https://twitter.com/tyleralterman/status/1511364183840989194?s=21&t=n_isE2vL3UIJsassqyLs8w

2
Sophia
2y
Your description of practical critiques being difficult to steelman with only anecdata available feels like the classic challenge of balancing type I and type II error when reality is underpowered.

In the context of a contest encouraging effective altruism critiques, I think we maybe want to have a much higher tolerance than usual for type I error in order to get less type II error (I am thinking of the null hypothesis as "critique is false", so a type I error would be accepting a false critique and a type II error would be rejecting a true critique).

Obviously, there needs to be some chance that the critique holds. However, it seems very valuable to encourage critiques that would be a big deal if true, even if we're very uncertain about the assumptions, especially if the assumptions are clear and possible to test with some amount of further investment (e.g., by adding a question to next year's EA survey or getting some local groups to ask their new attendees to fill out an anonymous survey on their impressions of the group).

This makes me think that maybe a good format for EA critiques is a list of assumptions (maybe even with the authors' credences that they all hold, and their reasoning), and then the outlined critique if those assumptions are true. If criticisms clearly lay out their assumptions, then even if we guess that there is, say, a 70% chance that the assumptions don't hold, in the 30% of possible worlds where they do hold up (assuming our guess was well-calibrated :P), having the hypothetical implications written up still seems very valuable: it helps us work out whether it's worth investigating these assumptions further, gets us to pay more attention to evidence for and against the hypothesis that we live in that 30% world, and gets us to think about whether there are low-cost actions we can take just in case we live in that 30% world.
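This tolerance-for-type-I-error point can be put in rough expected-value terms. A minimal sketch follows; all of the numbers (the 30% credence, the payoff and cost figures) are hypothetical placeholders, not values from the thread:

```python
# A rough expected-value model of engaging with an uncertain critique.
# The null hypothesis is "critique is false"; tolerating more type I
# error means engaging even when p_true is well below 50%.

def expected_value_of_engaging(p_true, value_if_true, cost_if_false, investigation_cost):
    """EV of writing up / investigating a critique whose assumptions may not hold."""
    return p_true * value_if_true - (1 - p_true) * cost_if_false - investigation_cost

# Hypothetical numbers: a critique that would be a big deal if true,
# and is cheap to test (e.g., one extra EA-survey question).
ev = expected_value_of_engaging(
    p_true=0.30,          # credence that the stated assumptions hold
    value_if_true=100.0,  # value of surfacing a true, important critique
    cost_if_false=5.0,    # time wasted if the assumptions don't hold
    investigation_cost=2.0,
)
print(ev)  # 0.3*100 - 0.7*5 - 2 = 24.5, so engaging is worth it here
```

Under these illustrative numbers, the EV stays positive even when the critique is probably false, which is the asymmetry the comment describes.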
4
Sophia
2y
“Not being easy to criticise even if the criticism is valid” seems like an excellent critique of effective altruism.  

I would expect there to be higher quality submissions if the team running this were willing to compile a list of (what they consider to be) all the high quality critiques of EA thus far, from both the EA Forum and beyond. Otherwise I expect you’ll get submissions rehashing the same or similar points.

I think a list like this might be useful for other purposes too:

  • raising the profile of critiques that matter amongst EAs, thus hopefully improving people's thinking
  • signalling that criticism genuinely is welcome and seen as useful

Fwiw, re “… Tyler Alterman who don't glom with EA”: just to clarify, I glom very much with broad EA philosophy, but I don’t glom with many cultural tendencies inside the movement, which I believe make the movement an insufficient vehicle for implementing the philosophy. There seems to be an increasing number of former hardcore EA movement folks with the same stance. (Though this is what you might expect as movements grow, change, and/or ossify.)

(I used to do EA movement-building full time. Now I think of myself as an EA who collaborated with the movement from the outside, rather than the inside.)

Planning to write up my critique and some suggested solutions soon.

5
Milan_Griffes
2y
This orientation resonates with me too fwiw. 
8
Gavin
2y
Yeah that's what I meant. Looking forward to reading it!

+1

Though I suspect it will be difficult to get to a sufficient threshold of EAs using LinkedIn as their social network without something similar to a marketing campaign. Any takers?

3
MichaelDickens
8y
Why would it be difficult? LinkedIn is already quite popular, and the groups Ben named have lots of members.
1
Benjamin_Todd
8y
Yes, though if people just join the group it's already very useful, since then you're searchable. The group doesn't need to be highly active to be useful.
3
melissa
8y
LinkedIn addict here but somewhat new to effective altruism. Please let me know how I can help!

I agree with Owen's comments and the others. The basic message of my post, however, seems to be something like, "Make sure you compare your plans to reality" while emphasizing the failure mode I see more often in EA (that people overestimate the difficulty of launching their own project).

Would it be correct to say that your comments don't disagree with the underlying message, but rather reflect a belief that my framing will have net harmful effects, because you predict that many people reading this forum will be incited to take unwise actions?

7
RyanCarey
8y
I agree that people should try to reason about the factors actually constraining the success of their project and try to unblock those. But I don't agree that most people need to spend less time qualifying themselves to complete their projects. Sure, people should question whether their success is constrained by their knowledge and qualifications, but often it is. I'd rather say:

  • Do you need a degree to emigrate?
  • Do you want to work for an existing policy organisation?
  • Do you want to work as an academic?
  • Do you want to do targeted movement-building with academics?
  • Do you want to network with many researchers?

If so, your success will probably be limited if you don't get an undergraduate degree. You may need graduate education also. I guess you'd agree. Most of what's done by leading EAs like Bostrom/Ord/MacAskill needs (or can be greatly helped by) one or more degrees. Even for people like Musk or Yudkowsky who are seemingly completing their own projects, degrees can help in many ways. People who are devoted enough to drop out of their degree at a young age based on reading complex online arguments are the same sorts of people whose influence in policy I'd want to preserve. These people need to be encouraged to make a 5+ year plan, and to stay the course.

Sure, there are plenty of people who reach their thirties and forties and never quite manage to pull the trigger and start their own project, or move to that nonprofit job. And for that audience, you need to deliver your kind of message. And sure, if you have extensive transferable skills, and want to perform activities like grassroots outreach and fundraising or starting a company of a less technical variety, then going in with few qualifications is appropriate. But that's a hard way and it's not the only way.

Another way of thinking about it is that there are intrapreneurs (people who pursue influence within an organisation) and entrepreneurs (people who go off and start their own thing). Peo…
-1
Gleb_T
8y
I can only speak for myself, but I would say your message could have been delivered much more optimally, and would have made a significant net positive impact by doing so. Rather than telling people "go start stuff up," it could have said "go learn the skills and knowledge necessary to start stuff up, which are much less extensive than you think them to be, while also ensuring you build up the emotional and cognitive tools needed to deal with failure and with updating your beliefs about your project." My thoughts, in other words, are not a rejection of the fundamental message, simply a desire for a more optimal delivery of it.

Fascinating - this ranks as both my most downvoted and most shared post of all time.

1
MalcolmOcean
8y
= controversy sells.

Yup, this is an important thing to keep in the background of expert assessment.

I'm glad you think it's nonsense, since - in some strange state of affairs - a certain unnamed person has been crushing it on the communal Pom sheet lately. =P

Well-observed! Here's my guess on where I rank on the various conditions above:

  • P - Process: Medium. I think my explicit process is still fairly decent, but my implicit processes still need work. E.g., I might perform well at identifying an expert if you gave me a decent amount of time to check markers with my framework, but I'm not fluent enough in my explicit models to do expertise assessments on the fly very well, Sherlock Holmes-style.
  • I - Interaction: Medium. I've spent dozens of hours interacting with expertise assessment tasks, as mentioned in the
... (read more)

Potential improvement: Rather than a binary pass/fail for experts, we would like a metric that grades the material they present.

Agreed. I tried to make it binary for the sake of generating good examples, but the world is much more messy. In the spreadsheet version I use, I try to assign each marker a rating from "none" to "high."
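A minimal sketch of what such a graded marker sheet could look like; the marker names, the four-point scale, and the equal weighting are illustrative assumptions, not the contents of the actual spreadsheet:

```python
# Graded (non-binary) expertise markers: each marker gets an ordinal
# rating from "none" to "high", then ratings are averaged onto a 0-1 scale.

RATING = {"none": 0, "low": 1, "medium": 2, "high": 3}

def expertise_score(ratings):
    """Average the ordinal marker ratings and normalize to [0, 1]."""
    values = [RATING[r] for r in ratings.values()]
    return sum(values) / (3 * len(values))

# Hypothetical candidate assessment (marker names are made up):
candidate = {
    "detailed_causal_models": "high",
    "tangible_accomplishments": "medium",
    "calibrated_uncertainty": "low",
}
print(round(expertise_score(candidate), 2))  # (3 + 2 + 1) / 9 ≈ 0.67
```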

The Cambridge Handbook of Expertise

How worthwhile do you think it would be for someone to read the handbook?

0
RomeoStevens
8y
I think a skim/outline is worthwhile. It includes lots of object-level data, which isn't a great use of time.

Issue: It seems like the model might have trouble filtering people who have detailed but wrong models.

100%. The model above is only good for assessing necessary conditions, not sufficient ones. I.e., someone can pass all four conditions above and still not be an expert.

I imagine there is another class of experts who have decades of experience, rich implicit models and impressive achievements, but who would struggle to present concise, detailed answers if you asked them to share their wisdom. I suspect that quiet observation of such a person in their work environment, rather than asking them questions, would yield a better measure of their level of expertise, but this requires considerable skill on the part of the observer.

Indeed: tacit experts. The way I assess this now is basically by looking at indirect signs around... (read more)

0
Gleb_T
8y
One idea for learning the skills of tacit experts that I found works is to copy their behaviors regarding the domain, without necessarily understanding the reasons behind their behaviors. It sounds strange to us as people who are very intellectually-oriented and seek to understand the reasons behind why something works. I know it did to me when I first tried to do it. Moreover, there is a danger of copying behaviors that are incidental and do not lead to the desired outcome. Still, given that tacit experts often don't know themselves why they do well at what they do, simply copying their behaviors seems to work.

I tested my predictions against the experts by rating applications for the top 5 candidates myself, then getting the domain expert to rank them and comparing scores, watching them do so.

Ah! This sounds like a great feedback mechanism for one's expert assessment abilities. I'm going to steal this. =)

Tyler’s model seems somewhat helpful here, and adding the components from John’s model improves it again.

+1 - you definitely want to use more signs than the ones I mentioned above to be confident that you have identified sufficient markers of expertise. The ones listed above are only intended to be necessary markers. A good way of generating markers beyond the necessary ones: think about a few people who you can confidently say are experts. What do they have in common? (Please send me any cool markers you've come up with! My own list has over 30 now, and it doesn't seem like the ceiling has been hit.)

While it seems possible to make some progress on the problem of independently assessing expertise, I want to stress that we should still expect to fail if we proceed to do so entirely independently, without consulting a domain expert

Right, I should have mentioned this. Your job is much, much easier if you can identify a solid "seed" expert in the domain with a few caveats:

  • If the seed expert becomes your primary input to expertise identification, you should be confident that their expertise checks are good. I'm tempted to think that the skill
... (read more)

"Check to see whether the field has tangible external accomplishments."

This is a good one. I think you can decently hone your expertise assessment by taking an outside view which incorporates base rates of strong expertise in the field amongst average practitioners, as well as the variance. (Say that five times fast.) For example:

  • Forecasters: very low base rate, high variance
  • Doctors: high base rate, low-medium variance
  • Normal car repairpeople: medium base rate, low-medium variance (In this case, there is a more salient and practical ceiling to ex
... (read more)
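To see why the base rate matters so much here, a minimal Bayes-rule sketch; the base rates and the marker check's sensitivity and false-positive rate are hypothetical numbers, not measurements:

```python
# The same positive marker check implies very different posteriors in
# fields with different base rates of strong expertise (Bayes' rule).

def posterior_expert(base_rate, sensitivity, false_positive_rate):
    """P(expert | passes marker check)."""
    true_pos = sensitivity * base_rate
    false_pos = false_positive_rate * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Hypothetical marker check: 90% sensitive, 20% false positives.
print(round(posterior_expert(0.02, 0.9, 0.2), 2))  # forecasters (low base rate): ~0.08
print(round(posterior_expert(0.60, 0.9, 0.2), 2))  # doctors (high base rate):    ~0.87
```

With a low enough field base rate, even a candidate who passes a strong marker check is still probably not an expert, which is the outside-view point above.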

It will come from CEA's EA Outreach budget. Winners may choose to re-donate to CEA if they think that we're the best target of funds, or donate somewhere else they think is a better target. That said, we think the main reason someone would be motivated to enter the contest would be to have the thousands of future people who are introduced to EA be introduced by the best content.

Just changed it to a Creative Commons Attribution 4.0 International License, so posting it elsewhere is fine (or even encouraged).

4
AlasdairGives
8y
I think this is the way to go - but a CC attribution license is very different from an assignment of intellectual property (in a good way!). You will need to provide attribution on the about page and in any subsequent usage of the entry, for one thing (the whole point of an attribution license is to protect those moral rights!), so you should update your FAQ to reflect this.

Very much support the thrust of this post. Oliver Habryka on the EA Outreach team is currently chatting with the Good Judgment Project team about implementing a prediction market in EA.

0
Jess_Riedel
8y
Update: the Good Judgment Project has just launched Good Judgment Open. https://www.gjopen.com/

What about the following simple argument? "If you look at many many (most?) movements or organizations, you see mission creep or Goodharting."

Do you think there is anything that puts us in a different reference class?

1
Stefan_Schubert
8y
I agree that lots of movements have changed target over time. But they haven't necessarily changed in the watering-down direction. Some have turned more extreme (cf. Stalinism, IS), others have become altogether different. The EA movement is a highly intellectual and idealistic movement. My hunch is that such movements have normally run a higher risk of turning too extreme than of becoming too watered down. (I haven't conducted any detailed historical studies of these issues, but think such studies should be carried out.)

Hi Julia - I wholeheartedly agree with your semantic point: the words "hardcore" and "softcore" seem potentially harmful.

However, I wonder if the stronger thesis is true: "Having strictly defined categories of involvement doesn’t seem likely to help."

It seems plausible, but I can think of worlds in which categories of involvement actually do play an important role. (For instance, there is a reason galas will do things like sort donors into silver, gold, and platinum levels based on their level of contribution.) Since one coul... (read more)

-1
Julia_Wise
8y
I've thought about this question for two days, and in the end I feel sure that "hardcore" and "softcore" are not the terms we want, but not sure about whether using category words for EAs is helpful or not. People seem to self-sort pretty well even without the words. In any in-person group, even when all the people have the same official title (“member,” “parishioner,” etc), everybody knows who just shows up sometimes and who writes the newsletter, serves on all the committees, etc. Because so much of EA happens outside of face-to-face communities, perhaps we struggle more to figure out who is who.

I was chatting with Julia Wise about this post. It seems plausible that which types of people we prioritize recruiting isn't such a black-and-white issue. For instance, it seems likely that EA can better take advantage of network effects with some mass-movement-style tactics.

That said, it seems likely that there might be a lot of neglected low-hanging fruit in terms of outreach to people with extreme influence, talent or net worth.

3
Diego_Caleiro
8y
I'm not claiming this is optimal, but I might be claiming that what I'm about to say may be more optimal than anything else that 98% of EAs are actually doing.

There are a couple thousand billionaires on the planet. There are also about as many EAs. Let's say 500 billionaires are EA-friendly under some set of conditions. Then it may well be that the best use of the top 500 EAs is to meticulously study individual billionaires. Understand their values, where they come from, what makes them tick. Draw their CT-chart, find out their attachment style, personality disorder, and childhood nostalgia. Then approach them to help them, and while solving many of their problems faster than they can even see it, also show them the great chance they have of helping the world.

Ready, set, go: http://www.forbes.com/billionaires/list/

EA Ventures would be very interested in hearing ideas for donor coordination. Feel free to email us about it at tyler@centreforeffectivealtruism.org.

It's a pretty tricky problem that probably requires the team solving it to have a good understanding of social dynamics from having solved similar issues in the past, so the ideal solution would factor this in.

0
Dawn Drescher
8y
I’ll do that. Starting around February, I’ll be looking for a piece of software that I can implement with a team of another three or four CS students as part of my master’s degree. If the design has progressed far enough at that point, we can take on donor coordination.

+1 I'd avoid over-associating EA with just effective giving. E.g., startup-founding, political advocacy, and scientific research can all be undertaken with EA ideas in mind.

I would place quite a bit of emphasis on epistemic tools, since valuing (and ideally exercising) reason and evidence is the primary thing which differentiates EA and unites people across different causes.

Things to be covered might include:

  • Prioritization

  • Building models about relevant parts of the world

  • Epistemic humility (being open to changing your mind, steelmanning other people's arguments, etc)

People to contact for these things:

... (read more)
0
Sindre Tuset
9y
This is good feedback, Tyler. Thanks!

Thanks for the comments, all! I pretty much agree with the bulk of them so far, and have added an edit to the post above.

Thoughts on how favorably or unfavorably pursuing movement-building compares to other EA career paths?

Yearly salary range (helpful for getting sponsorships for future EA events if the average yearly salary turns out to be high)

The difference between this and vegan flyering is that you're targeting groups that have already self-selected for one aspect of EA. That said, I could definitely see a much lower than .1% rate being the case. Though the cost-effectiveness still seems competitive even at a conversion rate of .01% or even .001%. That's 10 days and 100 days, respectively, of work for a year of earn-to-give.

That said, as Peter alluded to, earn-to-give still seems competitive if, e.g., you're funding that much more of this work happening. Unless, by doing the work, you're recruiting EtGers who will fund the work. Unless... [mind explodes]
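For concreteness, a minimal sketch of the arithmetic behind those day counts; the contacts-per-day volume is an assumption backed out from the comment's own numbers (a .1% rate yielding roughly one recruit per day of work), not a measured figure:

```python
# Days of outreach work needed to recruit one earn-to-give year,
# as a function of conversion rate.

CONTACTS_PER_DAY = 1000  # hypothetical outreach volume, inferred above

def days_per_recruit(conversion_rate):
    """Days of work per person recruited (one year of earn-to-give)."""
    return 1 / (conversion_rate * CONTACTS_PER_DAY)

for rate in (0.001, 0.0001, 0.00001):  # .1%, .01%, .001%
    print(f"{rate:.3%} -> {days_per_recruit(rate):g} day(s)")
# prints: 0.100% -> 1 day(s); 0.010% -> 10 day(s); 0.001% -> 100 day(s)
```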

0
jayd
9y
Do vegan leafleters ever try to target groups they think would be responsive? Does anyone (e.g., Peter Hurford) know what conversion rate they get from those, on average?

Peter Buckley attempted to hire some virtual assistants from ODesk. They were way too slow. My guess would be that EAs have a much better sense of what types of groups to look for and where to find them. The task also requires a decent amount of research, which is a comparative advantage of many EAs.

Would love to get tons of VAs on this though if you can think of a better way to use them.

1
tomstocker
9y
Have you tried hiring a temp and an Oxford student and putting them in the same room and seeing who can get the most good entries each day - checking a sample of 15 or something at the end of each day - for some kind of reward?