James Montavon · 6h · National Year of Service for Free College as an EA Idea
This is mostly anecdotal and n of 1; interested to hear the community's
thoughts.
1. During high school, I went on mission trips through my church. One I credit with starting me thinking about EA-type trade-offs and values: we went to Guatemala, and spent the first week with La Casa del Alfarero, working with the absolute poorest of the poor, people living and working inside the Guatemala City garbage dump. This organization did real
research into what interventions they could do to most help these people,
and we did things like build stoves with chimneys so that they wouldn't
inhale plastic fumes in their house and gave kids free healthy meals if they
came to school. They also explicitly taught us that they took time out of
their more effective work to babysit us rich white kids because some of us
would be moved to donate and that would pay off in the end, and that what
they really needed was more funding to hire local poor people who were
better at these tasks anyway, and then we would be helping someone with a job, too. The next week we spent with a missionary named Bob, who lived in a beautiful villa overlooking a rainforest canyon, had really good food, and had people come to play classical Spanish guitar; we also occasionally went to
an orphanage and sang some songs and took lots of pictures. The contrast was
so stark that it affected all of us, and we did not return to Bob's in
future years.
2. Currently, I am about to finish my undergraduate degree and will enter graduate school for my MPH in Epidemiology in the fall. I am on the GI Bill,
and the VA is also paying for my grad school. This funding of my college
means a) I can plan to work in a lower-paying but more effective field like
public health because I don't have to worry about college debt and b) I have
been able to take exactly the research
DanielFilan · 1d
Sounds like if you could cheaply get rid of anti-money-laundering laws, this
would be pretty effective altruism:
> Necessarily applying a broad brush, the current anti-money laundering policy
prescription helps authorities intercept about $3 billion of an estimated $3
trillion in criminal funds generated annually (0.1 percent success rate), and
costs banks and other businesses more than $300 billion in compliance costs,
more than a hundred times the amounts recovered from criminals.
Found at this Marginal Revolution post.
[https://marginalrevolution.com/marginalrevolution/2021/01/the-anti-money-laundering-fraud.html]
Prabhat Soni · 1d · SOCRATES' CASE AGAINST DEMOCRACY
https://bigthink.com/scotty-hendricks/why-socrates-hated-democracy-and-what-we-can-do-about-it
Socrates makes the following argument:
1. Just as we only allow skilled pilots to fly airplanes, licensed doctors to operate on patients, or trained firefighters to use fire engines, we should only allow informed voters to vote in elections.
2. "The best argument against democracy is a five minute conversation with the
average voter". Half of American adults don’t know that each state gets two
senators and two thirds don’t know what the FDA does
[http://www.latimes.com/opinion/opinion-la/la-oe-goldberg31jul31-column.html]
.
3. (Whether a voter is informed can be evaluated by a short test on the basics
of elections, for example.)
Pros: better quality of candidates elected; would give uninformed voters a strong incentive to learn about elections.
Cons: would be crazy unpopular; possibility of the small group of informed voters acting in self-interest, which would worsen inequality.
(I did a shallow search and couldn't find something like this on the EA Forum or
Center for Election Science [https://electionscience.org/].)
Chi · 3d
I just wondered whether there is a systematic bias in how much advice there is in EA for people who tend to be underconfident and people who tend to be appropriately confident or overconfident. Anecdotally, when I think of memes/norms in effective altruism that I feel at least conflicted about, it's mostly because they seem to be harmful for underconfident people to hear.
Way in which this could be true and bad: people tend to post advice that would
be helpful to themselves, and underconfident people tend to not post
advice/things in general.
Way in which this could be true but unclear in sign: people tend to post advice that would be helpful to themselves, and there are more appropriately confident or overconfident people in the community than underconfident ones.
Way in which this could be true but appropriate: advice that would be harmful
when overconfident people internalize it tends to be more harmful than advice
that's harmful to underconfident people. Hence, people post proportionally less of the former.
(I don't think the vast space of possible advice just has more advice that's
harmful for underconfident people to hear than advice that's harmful for
overconfident people to hear.)
Maybe memes/norms that could be harmful for underconfident people to hear, or the properties that make them harmful to underconfident people, are also just more salient to me.
WilliamKiely · 3d · #DonationRegret #Mistakes
Something that occurred to me might be useful to tell others, and that I haven't yet said anywhere:
The only donation I've really regretted making was one of the first significant
donations I made: On May 23, 2017, I donated $3,181.00 to Against Malaria
Foundation. It was my largest donation to date and my first donation after
taking the GWWC pledge (in December 2016).
I primarily regretted and regret making this donation not because I later
updated my view toward realizing/believing that I could have done more good by
donating the money elsewhere (although that too is a genuine reason to feel
regret about making a donation, and I have indeed since updated my view toward
thinking other donation opportunities are better). Rather, I primarily regretted
making the donation because six months after donating the money I learned that
if I had saved that money and donated it instead on Giving Tuesday 2017, I could
have gotten the money counterfactually matched by Facebook
[https://www.eagivingtuesday.org/#h.md45fep1oihk], thereby directing twice as
much money toward the effective charity of my choice and doing almost twice as
much good. (I say 'almost' as much good because I think a smaller but nontrivial
amount of good would have been done by Facebook's money had it gone to other
nonprofits instead). (I in fact donated $4,000 on Giving Tuesday 2017 and got it
all matched. I got all my donations matched in 2018 and 2019 too, and probably
most of my donations in 2020, though matches have yet to be announced by
Facebook. Other mistakes around this will go in a separate comment sometime.)
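To make the matching trade-off concrete, here is a minimal back-of-the-envelope sketch (my own illustration; every number in it is a made-up assumption, not from the comment above):

```python
# Hypothetical comparison in "units of good", normalized so that
# $1 to AMF does 1.0 units. All values below are illustrative assumptions.

donation = 3181.00        # the 2017 donation (USD)
amf_value = 1.0           # good per dollar at AMF (normalization)
fb_counterfactual = 0.1   # assumed good per dollar that Facebook's money
                          # would have done at other nonprofits anyway
better_org_value = 2.5    # assumed good per dollar at a >2x better org

unmatched_amf = donation * amf_value
# A match doubles the money directed to AMF, but Facebook's dollars would
# have done *some* good elsewhere, so the net gain is "almost" 2x.
matched_amf = 2 * donation * amf_value - donation * fb_counterfactual
better_org = donation * better_org_value

print(f"Unmatched to AMF:        {unmatched_amf:,.0f} units")
print(f"Matched to AMF:          {matched_amf:,.0f} units")
print(f"Unmatched to better org: {better_org:,.0f} units")
```

Under these assumed numbers, getting matched does almost twice the good of donating unmatched, but an unmatched donation to an organization believed to be more than twice as effective does even more, which is the comparison the next paragraph turns to.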
Reflecting on this more: Since I think marginal donations to some organizations
do more than twice as much good as donations to other organizations (including
AMF) in expectation, there is a sense in which missing a counterfactual matching
opportunity was not as significant a mistake as giving to the wrong giving
opportunity / cause area. Yet on the other
finm · 2d
I think it can be useful to motivate longtermism by drawing an analogy to the
prudential case — swapping out the entire future for your future, and only
considering what would make your life go best.
Suppose that one day you learned that your ageing process had stopped. Maybe
scientists identified the gene for ageing, and found that your ageing gene was
missing. This amounts to learning that you now have much more control over how
long you live than previously, because there's no longer a process imposed on
you from outside that puts a guaranteed ceiling on your lifespan. If you die in
the next few centuries, it'll most likely be due to an avoidable, and likely
self-imposed, accident. What should you do?
To begin with, you might try a bit harder to avoid those avoidable risks to your
life. If previously you had adopted a laissez faire attitude to wearing
seatbelts and helmets, now could be time to reconsider. You might also begin to
spend more time and resources on things which compound their benefits over the
long-run. If you'd been putting off investing because of the hassle, you now
have a much stronger reason to get round to it. 5% returns for 30 years
multiplies your original investment just over fourfold. 5% returns for 1,000
years works out at a significantly more attractive multiplier of more than
1,000,000,000,000,000,000,000. If keeping up your smoking habit is likely to
lead to lingering lung problems which are very hard or costly to cure, you might
care much more about kicking that habit soon. And you might begin to care more
about 'meta' skills, like learning how to learn. While previously such skills
seemed frivolous, now it's clear there's time for them to pay dividends.
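As a quick sanity check on those multipliers, here is the compounding arithmetic in a couple of lines (a sketch only; the 5% figure is the comment's own assumption):

```python
# Multiplier from r annual returns over t years: (1 + r) ** t
for years in (30, 1000):
    multiplier = 1.05 ** years
    print(f"{years} years at 5%: x{multiplier:.4g}")

# 30 years   -> ~4.322   ("just over fourfold")
# 1000 years -> ~1.5e21  (more than 10^21, as quoted)
```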
Finally, you might want to set up checks against some slide into madness,
boredom, or destructive behaviour which living so long could make more likely.
So you think carefully about your closest-held values, and write them down as a
guide. You draw up plans for quickly kicking an ad
flowo · 2d
I can also highly recommend Deep Work by Cal Newport. His main thesis is that 'real' work only happens, and productivity is only high, when you work for a few hours at a time instead of in 15-minute blocks with constant interruptions. Edit: should have read the linked post first haha, so see this as another vote for Cal Newport.
antimonyanthony · 2d · Crosspost: "Tranquilism Respects Individual Desires" [https://tobeanythingatallblog.wordpress.com/2021/01/10/tranquilism-respects-individual-desires/]
I wrote a defense of an axiology [https://longtermrisk.org/tranquilism/] on
which an experience is perfectly good to the extent that it is absent of craving
for change. This defense follows in part from a reductionist view of personal
identity, which is usually considered in EA circles to be in support of total
symmetric utilitarianism, but I argue that this view lends support to a form of
negative utilitarianism.
Chi · 3d · Observation about EA culture and my journey to develop self-confidence:
Today I noticed an eerie similarity between things I'm trying to work on to
become more confident and effective altruism culture. For example, I am trying
to reduce my excessive use of qualifiers. At the same time, qualifiers are very
popular in effective altruism. It was very enlightening when a book asked me to
guess whether the following piece of dialogue was from a man or woman:
'I just had a thought, I don't know if it's worth mentioning...I just had a
thought about [X] on this one, and I know it might not be the right time to pop
it on the table, but I just thought I'd mention it in case it's useful.'
and I just immediately thought 'No, that's an effective altruist'. I think what
the community actually endorses is communicating the degree of epistemic
certainty and making it easy to disagree, while the above quote is anxious
social signalling. I do think the community does a lot of the latter, though, and it's partly rewarded because it's confounded with the former. (In the above example it's obvious, but I think anxious social signalling is also often the
place where 'I'm uncertain about this', 'I haven't thought much about this', and
'I might be wrong' (of course you might be wrong) come from. That's certainly
the case for me.) Tangentially, there is also a strong emphasis on deference and
a somewhat conservative approach to not causing harm, esp. with new projects.
Overall, I am worried that this communication norm and the two memes I mentioned
foster under-confidence, a tendency to keep yourself small, and the feeling that
you need permission to work on important problems or to think through important
questions. The communication norm and memes I mentioned also have upsides, esp.
when targeted at overconfident people, and I haven't figured out yet what my
overall take on them is. I just thought it was an interesting observation that
certain things I'm trying to decrease are particularl
Chi · 3d
Should we interview people with high status in the effective altruism community
(or make other content) featuring their (personal) story, how they have overcome
challenges, and live into their values?
Background: I think it's no secret that effective altruism has some problems
with community health. (This is not to belittle the great work that is done in
this space.) Posts that talk about personal struggles, for example related to
self-esteem and impact, usually get highly upvoted. While many people agree that
we should reward dedication and that the thing that really matters is to try
your best given your resources, I think that, within EA, the main thing that gives you status - the thing many people admire, desire, and tie their self-esteem to - is being smart.
Other altruistic communities seem to do a better job at making people feel
included. I think this has already been discussed a lot, and there seem to be
some reasons for why this is just inherently harder for effective altruism to
do. But one specific thing I noticed is what I associate with leaders of
different altruistic communities.
When I think of most high status people in effective altruism, I don't think of
their altruistic (or other personal) virtues, I think 'Wow, they're smart.' Not
because of a lack of altruistic virtues, I assume, but because smartness is
just more salient to me. On the other hand, when I think of other people, for
example Michelle Obama or Melinda Gates or even Alicia Keys for that matter, I
do think "Wow, these people are so badass. They really live into their values."
I wouldn't want to use them as role models for how to have impact, but I do use
them as role models for what kind of person I would like to be. I admire them as
people, and they inspire me to work on myself to become like them in relevant
respects, and they make me think it's possible. I am worried that people look at
high status people in effective altruism for what kind of person they would like
to be, but the
vaidehi_agarwalla · 4d · Reasons for/against Facebook & plans to migrate the community out of there
Epistemic status: my very rough thoughts. I am confident of the reasons for/against, but the last section is mostly speculation, so I won't attempt to clarify my certainty levels.
Reasons for moving away from Facebook
* Facebook promotes bad discussion norms (see Point 4 here
[https://forum.effectivealtruism.org/posts/p7EWkqa8TogNskXu5/suggestions-for-online-ea-discussion-norms#Being_an_active_bystander_]
)
* Poor movement knowledge retention
* Irritating to navigate: it's easy not to be aware that certain groups exist (since there are dozens), and it's annoying to filter through all the other stuff on Facebook to get to them
Reasons against
* Extremely high switching costs:
  * start-up costs (see Neels' comment)
  * harder to pay attention to a new platform
  * easier to integrate with existing social media
* Off-putting/intimidating to newer members
* Past attempts haven't taken off (e.g. the EA London Discussion Board
[https://forum.effectivealtruism.org/posts/H3nmq4M46W4k7qwDY/ea-directory-and-groups-discussion-board-1]
, but that was also not promoted super hard)
* Existing online space (the Forum) is a bit too formal/intimidating
How would we make the switch? In order of increasing speculativeness:
* One subcommunity at a time. It seems like most EA groups are already more active in spaces other than Facebook, but it would be interesting to
see this replicated on the cause area level by understanding what the
community members' needs are and seeing if there's a way to have
alternatives.
* Moving certain services found on Facebook to other sites: having a good opportunities board so people go to another place for EA jobs & volunteer opportunities, moving the editing & review group to the Forum (?), making it
easier for people to reach out to each other (e.g. EA Hub Community
directory). Then it may be easier to mov
Aidan O'Gara · 5d · Three Scenarios for AI Progress
How will AI develop over the next few centuries? Three scenarios seem
particularly likely to me:
* "Solving Intelligence": Within the next 50 years, a top AI lab like Deepmind
or OpenAI builds a superintelligent AI system, by using massive compute
within our current ML paradigm.
* "Comprehensive AI Systems": Over the next century or few, computers keep
getting better at a bunch of different domains. No one AI system is incredible at everything; each new job requires fine-tuning, domain knowledge, and human-in-the-loop supervision, but soon enough we hit annual
GDP growth of 25%.
* "No takeoff": Looks qualitatively similar to the above, except growth remains
steady around 2% for at least several centuries. We remain in the economic
paradigm of the Industrial Revolution, and AI makes an economic contribution
similar to that of electricity or oil without launching us into a new period
of human history. Progress continues as usual.
To clarify my beliefs about AI timelines, I found it helpful to flesh out these
concrete "scenarios" by answering a set of closely related questions about how
transformative AI might develop:
* When do we achieve TAI? AGI? Superintelligence? How fast is takeoff? Who
builds it? How much compute does it require? How much does that cost? Agent
or Tool? Is machine learning the paradigm, or do we have another fundamental
shift in research direction? What are the key AI Safety challenges? Who is
best positioned to contribute?
The potentially useful insight here is that answering one of these questions
helps you answer the others. If massive compute is necessary, then TAI will be
built by a few powerful governments or corporations, not by a diverse ecosystem
of small startups. If TAI isn't achieved for another century, that affects which
research agendas are most important today. Follow this exercise for a while, and
you might end up with a handful of distinct scena
Awah Eric · 5d · Updated question (slight update in wording):
I am a Christian pastor for several rural communities of the West Region of
Cameroon. I regularly come across opportunities to counterfactually save lives (people are dying because I do not have the funds to help) with a few hundred USD. I can select these opportunities.
I am looking for Christian donations: either money not otherwise pledged, or money that would otherwise go to lower-utility uses. Please let me know if you know of a local Christian church that could be interested in cost-effective global health interventions.
I will be very happy to schedule a Skype call with someone from the church. My
Skype is (live:.cid.4e972b7084b621) and my e-mail is (awaheric001@yahoo.com
[awaheric001@yahoo.com]). You can also call my phone (+237 676367876). I am much better at explaining in person, so please call or schedule one.
SiebeRozendal · 9d
This is a small write-up of when I applied for a PhD in Risk Analysis 1.5 years
ago. I can elaborate in the comments!
I believed doing a PhD in risk analysis would teach me a lot of useful skills to
apply to existential risks, and it might allow me to directly work on important topics. I worked as a Research Associate on the qualitative side of systemic risk
for half a year. I ended up not doing the PhD because I could not find a
suitable place, nor do I think pure research is the best fit for me. However, I
still believe more EAs should study something along the lines of risk analysis,
and it's an especially valuable career path for people with an engineering
background.
Why I think risk analysis is useful:
EA researchers rely a lot on quantification, but use a limited range of methods
(simple Excel sheets or Guesstimate models). My impression is also that most EAs
don't understand these methods enough to judge when they are useful or not (my
past self included). Risk analysis expands this toolkit tremendously, and
teaches stuff like the proper use of priors, underlying assumptions of different
models, and common mistakes in risk models.
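As a rough illustration of the kind of tool this adds (my sketch, not from the original post; every distribution and number below is a made-up assumption), a Guesstimate-style model is essentially a Monte Carlo simulation over uncertain quantities, and writing one out makes the priors explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo samples

# Hypothetical annual risk model: expected loss = p(event) * damage.
p_event = rng.beta(2, 200, n)  # illustrative prior on annual event probability
damage = rng.lognormal(mean=np.log(1e6), sigma=1.0, size=n)  # damage in USD

expected_loss = p_event * damage

# Report a distribution rather than a single point estimate.
p5, p50, p95 = np.percentile(expected_loss, [5, 50, 95])
print(f"annual expected loss (USD): 5th={p5:,.0f}, median={p50:,.0f}, 95th={p95:,.0f}")
```

Judging when a model like this is useful comes down to exactly the things mentioned above: whether the priors are defensible and whether the model's underlying assumptions fit the risk.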
The field of Risk Analysis
Risk analysis is a pretty small field, and most of it is focused on risks of limited scope and risks that are easier to quantify than the risks EAs commonly look at.
There is a Society for Risk Analysis (SRA), which publishes Risk Analysis (the main journal of this field). I found most of their study topics
not so interesting, but it was useful to get an overview of the field, and there
were some useful contacts to make (1). The EA-aligned org GCRI is active and
well-established in SRA, but no other EA orgs are.
Topics & advisers
I hoped to work on GCR/X-risk directly, which substantially reduced my options.
It would have been useful to just invest in learning a method very well, but I
was not motivated to research something not directly relevant. I think it's
generally difficult to make an ac