In this comment
[https://forum.effectivealtruism.org/posts/QkaexL2tcxMQMeNGq/evolutionary-debunking-arguments-about-human-moral?commentId=TjtuFXEzzyfcyPNgq]
I was going to quote the following from R. M. Hare:
I remember this being quoted in Mackie's Ethics during my undergraduate degree,
and it's always stuck with me as a powerful argument against moral
non-naturalism and a close approximation of my thoughts on moral philosophy and
meta-ethics.
But after some Google-Fu I couldn't actually track down the original quote. Most
people think it comes from the essay "Nothing Matters" in Hare's Applications of
Moral Philosophy. While this definitely seems to be in the same spirit as the
quote, the scanned PDF version of "Nothing Matters" that I found online doesn't
contain this quote at all. And I don't have academic institutional access to
check other versions of the paper or book.
Maybe I just missed the quote by skim-reading too quickly? Are there multiple
versions of the article? Is it possible that this is a case of citogenesis
[https://xkcd.com/978/]? Perhaps Mackie misquoted what R. M. Hare said, or
perhaps misattributed it and it actually came from somewhere else? Maybe it
was Mackie's quote all along?
Help me EA Forum, you're my only hope! I'm placing a £50 bounty to a charity of
your choice for anyone who can find the original source of this quote, in R. M.
Hare's work or otherwise, as long as I can verify it (e.g. a screenshot of the
quote if it's from a journal/book I don't have access to).
THE NETHERLANDS PASSED A LAW THAT WOULD BAN FACTORY FARMING.
[https://www.rvo.nl/onderwerpen/dierenwelzijn/wet-dieren]
It was introduced by the Party for the Animals
[https://en.wikipedia.org/wiki/Party_for_the_Animals] and passed in 2021.
However, it only passed because the government had just fallen and the Senate
was preoccupied with passing COVID legislation, which meant they were very busy
and didn't hold a debate about it
[https://www.trouw.nl/duurzaamheid-economie/onrust-over-de-gewijzigde-dierenwet-heeft-een-dier-recht-op-geluk~bc616929/?referrer=https%3A%2F%2Fwww.google.com%2F].
Since the law is rather vague, there's a good chance it wouldn't have passed
without the COVID crisis.
It was supposed to take effect this year, but the minister of agriculture has
decided he will straight up ignore the law
[https://www.nrc.nl/nieuws/2022/11/10/minister-adema-schoffeert-de-kamer-vindt-de-partij-voor-de-dieren-a4147957].
The current government is not in favor of this law, so they're looking at ways
to circumvent it.
It's very unusual for the Dutch government to ignore laws, so they might get
sued by animal rights activists. I expect they will introduce a new law rather
quickly that repeals this ban, but the fact that it passed at all and that this
will now become a big issue in the news is very promising for the 116 million
[https://longreads.cbs.nl/the-netherlands-in-numbers-2021/how-many-farm-animals-are-there-in-the-netherlands/#:~:text=In%202021%2C%20the%20total%20pig,grew%20slightly%20to%20482%20thousand.]
Dutch farm animals.
Jason (2d)
Today's shower thought: When strong downvoting a post, one should be required to
specify a reason for the strong downvote (either from a list, or through text
entry if no pre-made reason fits). These reasons should be publicly displayed,
but not linked to individual voters. [Alternative: They should be displayed to
the commenter who is being downvoted only.] This is not intended to apply to
strong disagreevotes.
Getting downvoted isn't fun, and I've seen a number of follow-up comments
recently along the lines of "why am I being downvoted for this?" Right now, we
generally don't provide any meaningful feedback for people who are being
downvoted. In some cases (including some where I didn't vote at all), I've tried
to provide feedback -- e.g., that particular language could be seen as
"premature and unfriendly without allowing [an organization] time for a
response" -- which I hope has been sometimes helpful. But I'm wondering whether
there is a broader way to give people some feedback.
The other reason that I think a reasons-giving requirement might make sense is
that it serves as a tiny stop-and-think moment. It interrupts the all-too-human
tendency to reach for the strong-downvote icon when one viscerally disagrees
with a post, and reminds the user what the generally appropriate reasons for a
strong downvote are.
Eleni_A (2d)
My upskilling study plan:
1. Math
i) Calculus (derivatives, integrals, Taylor series)
ii) Linear Algebra (this video series
[https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab])
iii) Probability Theory
2. Decision Theory
3. Microeconomics
i) Optimization of individual preferences
4. Computational Complexity
5. Information Theory [https://www.youtube.com/watch?v=bkLHszLlH34]
6. Machine Learning theory with a focus on deep neural networks
[https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi]
7. The Alignment Forum [https://www.alignmentforum.org/]
8. Arbital [https://arbital.greaterwrong.com/explore/Arbital/]
Nathan Young (2d)
I did a podcast where we talked about EA; it would be great to hear your
criticisms of it: https://pca.st/i0rovrat
Should I do more podcasts?
Eleni_A (3d)
"Find where the difficult thing hides, in its difficult cave, in the difficult
dark." Iain S. Thomas
I notice I am pretty skeptical of much longtermist work and the idea that we can
make progress on this stuff just by thinking about it.
I think future people matter, but I will be surprised if, after x-risk reduction
work, we can find tens of billions of dollars of work that isn't busywork and
that shouldn't instead be spent on learning how to get, e.g., nations out of
poverty.
Eleni_A (4d)
The Collingridge dilemma: it is difficult to predict the impacts of a technology
before it is developed and widely adopted, but once the technology is
entrenched, it becomes difficult to control or change.
Nathan Young (4d)
Why do some shortforms have agree voting and others don't?
Hello! I'm an EA in university, currently studying engineering. I've previously
worked at CEA, done the in-depth fellowship, and am currently founding a startup
alongside my studies.
I'm looking for a mentor or partner in the EA space to meet with weekly, briefly, to
help with setting goals and following up, as I think I'd really benefit from
support with weekly prioritisation. A founder or engineering background is a
plus but not necessary! Happy to talk more or be referred. Just send a private
message.
I know this short form might be a shot in the dark but wanted to put it out
there. Thanks!
People complained about how the Centre for Effective Altruism (CEA) had said
they were trying not to be like the "government of Effective Altruism" but then
they kept acting exactly like they were the Government of EA for years and
years.
Yet that's wrong. The CEA was more like the police force of effective altruism.
The de facto government of effective altruism was, for the longest time (maybe
from 2014 to 2020), Good Ventures/Open Philanthropy. All of that changed with the
rise of FTX. All of that changed again with the fall of FTX.
I've put everything above in the past tense because that was the state of things
before 2022. There's no such thing as a "government of effective altruism"
anymore, whether anyone wants one or not. Neither CEA, Open Philanthropy, nor
Good Ventures could fill that role now.
We can't go back. We can only go forward. There is no backup plan anyone in
effective altruism had waiting in the wings to roll out in case of a
movement-wide leadership crisis. It's just us. It's just you. It's just me. It's
just left to everyone who is still sticking around in this movement together. We
only have each other.
Ramiro (6d)
An objection to the non-identity problem: shouldn't disregarding the welfare of
non-existent people preclude most interventions on child mortality and
education?
One objection against favoring the long-term future is that we don't have duties
towards people who don't yet exist. However, I believe that, when someone
presents a claim like that, what they probably want to state is that we should
discount future benefits (for some reason), or that we don't have a duty towards
people who will only exist in the far future. But it turns out that such a claim
apparently proves too much; it proves, for instance, that we have no obligation
to invest in reducing the mortality of infants less than one year old over the
next two years.
The most effective interventions in saving lives often do so by saving young
children. Now, imagine you deploy an intervention similar to those of the Against
Malaria Foundation - i.e., distributing bednets to reduce contagion. At the
beginning, you spend months studying and then preparing; then you go to the field
and distribute bednets; and one or two years later you evaluate how many
malaria cases were prevented in comparison to a baseline. It turns out that most
cases of averted deaths (and disabilities and years of life gained) correspond
to kids who had not yet been conceived when you started studying.
Similarly, if someone starts advocating an effective basic education reform
today, they will only succeed in enacting it some years from now - thus we can
expect that most of the positive effects will happen many years later.
(Actually, for anyone born in the last few years, we can expect that most of
their positive impact will affect people who are not born yet. If there's any
value in positively influencing these children, most of it will accrue to people
who are not yet born.)
This means that, at the beginning of this project, most of the impact
corresponded to people who didn't exist yet - so you were under no moral
obligation to help them.
ChanaMessinger (5d)
Been flagging more often lately that decision-relevant conversations work poorly
if only A is sayable (including "yes we should have this meeting") and not-A
isn't.
At the same time, I've been noticing the skill of saying not-A with grace and
consideration, breezily rather than with "I know this is going to be unpopular,
but..." energy. It's an extremely useful skill.
Aaron Bergman (6d)
EVENTS AS EVIDENCE VS. SPOTLIGHTS
Note: inspired by the FTX+Bostrom fiascos and associated discourse. May
(hopefully) develop into longform by explicitly connecting this taxonomy to
those recent events (but my base rate of completing actual posts counsels
humility).
EVENT AS EVIDENCE
* The default: normal old Bayesian evidence
* The realm of "updates," "priors," and "credences"
* Pseudo-definition: Induces[1] a change to or within a model (of whatever the
  model's user is trying to understand)
* Corresponds to models that are (as is often assumed):
  1. Well-defined (i.e. specific, complete, and without latent or hidden
     information)
  2. Stable except in response to 'surprising' new information
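To make the "evidence" mode above concrete, here is a minimal sketch of a single
Bayesian update in Python. The hypothesis, prior, and likelihoods are
hypothetical numbers chosen purely for illustration (loosely echoing the hiking
example further down), not anything taken from the post.

```python
# Toy Bayesian update: how one piece of evidence shifts a credence.
# All numbers below are made up for illustration.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) from the prior P(H) and the likelihoods of the evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical hypothesis H: "we are just south of the first road on the map".
prior = 0.30                # credence in H before the event
p_traffic_if_h = 0.80       # chance of hearing traffic if H is true
p_traffic_if_not_h = 0.10   # chance of hearing traffic if H is false

posterior = bayes_update(prior, p_traffic_if_h, p_traffic_if_not_h)
print(f"P(H) moves from {prior:.2f} to {posterior:.2f} after hearing traffic")
# -> P(H) moves from 0.30 to 0.77
```

On the post's framing, a "spotlight" event (next section) would leave numbers
like these untouched; it changes which hypotheses and parts of the model are
salient enough to be reasoned about at all.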
EVENT AS SPOTLIGHT
* Pseudo-definition: Alters how a person views, understands, or interacts with
  a model, just as a spotlight changes how an audience views what's on stage
* In particular, spotlights change the salience of some part of a model
* This can take place both/either:
  * At an individual level (think spotlight before an audience of one); and/or
  * To a community's shared model (think spotlight before an audience of many)
* They can also change which information latent in a model is functionally
  available to a person or community, just as restricting one's field of vision
  increases the resolution of whichever part of the image shines through
EXAMPLE
1. You're hiking a bit of the Appalachian Trail with two friends, going north,
   using the following map (the "external model")
2. An hour in, your mental/internal model probably looks like this:
3. Event: ~~the collapse of a financial institution~~ you hear traffic
   1. As evidence, this causes you to change where you think you are - namely,
      a bit south of the first road you were expecting to cross
   2. As spotlight, this causes the three of you to stare at the same map as
      before, but in such a
Pat Myron (6d)
Is it possible to re-collapse a shortform after expanding it on /allPosts? If
so, how? If not, feature request :)
Hot take: For posts that don't involve breaking news and might benefit from
cooler reflection, the Forum should trial allowing a poster to petition the mods for a
quiet period of 1 to 7 days before comments are allowed. This would need to be
done pre-posting and a header would need to appear at the top of the post. For
example, I think discussion on the Doing EA Better post would have benefitted
from a quiet period.
I'm hesitant to extend the trial to breaking news because the risk of the delay
being viewed as mods cutting off discussion is higher, and because I think
people have a stronger interest in being able to promptly discuss stuff that
happens externally to the Forum. Finally, no one "owns" breaking news, and the
requirement for both the post author + a mod to concur is a safeguard against
erroneous imposition of quiet periods.
Nathan Young (7d)
Unbalanced karma is good actually. It means that the moderators have to do less.
I like the takes of the top users more than those of the median user, and I want
them to have more influence, but not total influence.
Appeals to fairness don't interest me - why should voting be fair?
I have more time for transparency.
Nathan Young (7d)
I would like to see posts give you more karma than comments (which would hit me
hard). A highly upvoted post seems waaaaay more valuable than 3 upvoted comments
on that post, but pretty often the latter gives more karma than the former.
Nathan Young (7d)
* It is unclear to me that if we chose cause areas again, we would choose
  global development
* The lack of a focus on global development would make me sad
* This issue should probably be investigated and mediated to avoid a huge
community breakdown - it is naïve to think that we can just swan through this
without careful and kind discussion