All of nil's Comments + Replies

Also, if there's sentient life on reachable planets or a chance of it emerging in the future, some NUs might also argue that the chance of human descendants ending/preventing suffering on such planets might be worth the risk of spreading suffering. (Cf. David Pearce's "cosmic rescue mission".)

4
Linch
1y
Right, this is what I was alluding to with  but I agree that in theory we can reduce suffering for aliens who are morally better than humans but less technologically capable.  That said, the NU case for it doesn't necessarily seem very strong because natural suffering isn't as astronomical in scale as s-risks.

I'd recommend Reflections on Intelligence (the revised edition) by Magnus Vinding. It's a short e-book/long essay which is rather critical about the notion of "superintelligent" AI specifically.

My minimum commitment for the upcoming meetup is to 1) keep researching options for my potential full-time volunteering for an EA org that cannot visa-sponsor yet, and 2) share the opportunity to propose a roundtable discussion at the UK's ARIA w/ relevant orgs (e.g. nonprofits in animal testing alternatives and in cell-ag).

Looking forward to meeting up w/ everyone :)

Answer by nil
Aug 26, 2022
4
0
0

[Updated w/ an expression of interest]

  • Location: UK
  • Remote: Yes (NZ, UK, or US visa sponsorship required)
  • Willing to relocate: within NZ, UK, or US
  • Skills:
    • agile software product development
    • effective learning
    • copy editing & proofreading; UX; process improvement; documentation; onboarding & mentoring; literature review; office & productivity software; troubleshooting; Linux; software development; Scrum; digital photography; etc.
    • aspiring EA since 2014 🖤
  • Résumé/CV/LinkedIn: LinkedIn
  • Email: see here
  • Notes:
    • Expression of interest
      • [urgent] Looking for a UK or
... (read more)

I'm glad to see such research is being done. I especially appreciate the section on proposed resilience measures. Thanks, Matt & Nick!

(I'm currently torn between staying in the UK or relocating to NZ long-term (I need to decide ASAP, as I have a job offer). So this post is quite timely for me ATM, FWIW.)

Faunalytics may want to add the symposium to the forum's events page so that e.g. it stays discoverable as the date approaches. Thanks :)

2
JLRiedi
2y
Thank you for the tip! Will do :) 

Thanks for linking to GFI’s posts (and for your and Linch’s article in the first place!). GFI’s concerns w/ some of the points made in the TEAs and the Counter article seem sound to me (FWIW, as I have only some basic knowledge of the field at best). I couldn’t find any responses to GFI, unfortunately.

For those who might be interested in checking GFI’s posts, below are some excerpts.

From the overview of “Preliminary review of technical assumptions within the Humbird analysis”:

... [Humbird's] analysis is valuable for identifying and prioritizing areas where

... (read more)

I just got a notification that the livestream starts in an hour. One can still register for it here: https://hopin.com/events/new-harvest-2022. Similar to the past conferences, the livestream may be recorded.

Thanks for the post and the interview, Gaetan!

For anyone interested, David Pearce's own written response to EA's longtermism can be found on his website.

Does enhancing one’s mood / increasing one’s hedonic set-point and making one more resistant to suffering fall within your definition of mind enhancement? I think a case can be made that wellbeing can be hugely empowering (an intuition pump: imagine waking up in an extremely good mood, w/ a sense of things to be done…). David Pearce may be the most prominent EA writing on (e.g. one, two) and promoting (and defending) this type of mind enhancement. And then there is one EA-aligned organization working in this area as well, called Invincible Wellbeing.

I’d be... (read more)

2
timfarkas
2y
Great points, thanks! I think the well-being enhancements you describe definitely fit this post's definition of mind enhancement and could in many ways also affect 'Benevolence, Intelligence, Power' (especially 'Power'). This means that in this regard most of the post's considerations would equally apply to well-being enhancements too. However, the aspects I list mostly focus on the instrumental implications of mind enhancements, i.e. how they could increase/decrease effective-altruist impact done by certain actors/society. As the enhancements you describe could be seen as constituting direct impact on QoL/QALY, other considerations would also become important.  E.g. in some cases there could be trade-offs like certain well-being enhancements enhancing subjective quality of life but decreasing 'Benevolence, Intelligence, Power'. In such a case, expected desirability would depend a lot on your set of assumptions regarding the world like existential risk, long-termism,  which could make it much harder to draw any definitive conclusions there. Definitely a very interesting sub-area and probably also very neglected and worthy of thorough EA examination! :)

EA author Magnus Vinding has a blog post on such not-immediately-obvious reasons for avoiding consuming animal "products".

Pearce calls it "full-spectrum" to emphasise the difference w/ Bostrom's "Super-Watson" (using Pearce's words).

... a blind optimisation process that has no subjective internal experience, and no inclination to gain one, given its likely initial goals ...

Given how apparently useful cross-modal world simulations (i.e. consciousness) have been for evolution, I, again, doubt that such a dumb (in the sense of not knowing what it is doing) process can pose an immediate existential danger to humanity that we won't notice or won't be able to stop.

Regarding feasibility

... (read more)

Magnus Vinding, the author of Reflections on Intelligence, recently appeared on the Utilitarian podcast, where he shared his views on intelligence as well (this topic starts at 35:30).

Thanks for the guide, Alex!

You say from the start that most of the advice is applicable to similar tools, but I'd still note that one limitation of (the free version of) Slack is that message history is limited to 10,000 messages (incl. private messages). So one cannot search or view anything older than the most recent 10,000 messages.

Discord (as well as Mattermost and self-hosted Zulip), in contrast, has an unlimited message history (paid versions of Slack or Zulip don't have this limitation either, but the pricing (x$ per user per month) isn't suitable for a public g... (read more)
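(For anyone bumping into that 10,000-message ceiling, below is a rough, hypothetical sketch of how a group admin could periodically export a channel's history via Slack's Web API before older messages age out of the free-tier window. It assumes the slack_sdk Python package and a bot token with the channels:history scope; the token, channel ID, and file name are placeholders.)

```python
# Hedged sketch, not from the original comment: page through a channel's
# history with Slack's Web API and dump it to a JSON file, so messages can
# still be read after they fall outside the free tier's 10,000-message window.
import json
from slack_sdk import WebClient

client = WebClient(token="xoxb-your-bot-token")  # placeholder token


def archive_channel(channel_id: str, path: str) -> None:
    """Collect every message the token can still see in channel_id and save it."""
    messages, cursor = [], None
    while True:
        resp = client.conversations_history(channel=channel_id,
                                             cursor=cursor, limit=200)
        messages.extend(resp["messages"])
        cursor = resp.get("response_metadata", {}).get("next_cursor")
        if not cursor:
            break
    with open(path, "w") as f:
        json.dump(messages, f, indent=2)


archive_channel("C0123456789", "general-archive.json")  # hypothetical channel ID
```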

2
Sasha Berezhnoi
2y
Thanks for bringing this up! CEA helped EA Anywhere get a free Slack pro plan - they might be able to do the same for other groups. Those interested should contact groups@centreforeffectivealtruism.org

> ... perhaps they should be deliberately aimed for?

David Pearce might argue for this if he thought that a "superintelligent" unconscious AGI (implemented on a classical digital computer) were feasible. E.g. from his The Biointelligence Explosion:

Full-spectrum superintelligence, if equipped with the posthuman cognitive generalisation of mirror-touch synaesthesia, understands your thoughts, your feelings and your egocentric perspective better than you do yourself.

Could there arise "evil" mirror-touch synaesthetes? In one sense, no. You can't go around wa

... (read more)
3
Greg_Colbourn
2y
Interesting. Yes I guess such "full-spectrum superintelligence" might well be good by default, but the main worry from the perspective of the Yudkowsky/Bostrom paradigm is not this - perhaps it's better described as super-optimisation, or super-capability (i.e. a blind optimisation process that has no subjective internal experience, and no inclination to gain one, given its likely initial goals). Regarding feasibility of conscious AGI / Pearce's full-spectrum superintelligence, maybe it would be possible with biology involved somewhere. But getting from here to there seems very fraught ethically (e.g. the already-terrifying experiments with mini-brains). Or maybe quantum computers would be enough?

Hi, Greg :)

Thanks for taking your time to read that excerpt and to respond.

First of all, the author’s scepticism about a “superintelligent” AGI (as discussed by Bostrom at least) doesn’t rely on consciousness being required for an AGI: i.e. one may think that consciousness is fully orthogonal to intelligence (both in theory and practice) but still, on the whole, update away from AGI risk based on the author’s other arguments from the book.

Then, while I do share your scepticism about social skills requiring consciousness (once you have data from conscious ... (read more)

2
Greg_Colbourn
2y
Hi nil :) Yes, and as to the orthogonality, but I don't think it needs that much computational power (certainly not unlimited). Good enough generalisations could allow it to accomplish a lot (e.g. convincing a lab tech to mix together some mail order proteins/DNA in order to bootstrap nanotech). How accurate does it need to be? I think human behaviour could be simulated enough to be manipulated with feasible levels of compute. There's no need for consciousness/empathy. Arguably, social media algorithms are already having large effects on human behaviour.

The book that made me significantly update away from “superintelligent” AGI being a realistic threat is Reflections on Intelligence (2016/2020) (free version) by Magnus Vinding. The book criticises several lines of arguments of Nick Bostrom’s Superintelligence and the notion of “intelligence explosion”, talks about an individual’s and a collective’s goal-achieving abilities, whether consciousness is orthogonal or crucial for intelligence, and the (unpredictability of) future intelligence/goal-seeking systems. (Re. “intelligence explosion” see also the auth... (read more)

3
nil
2y
Magnus Vinding, the author of Reflections on Intelligence, recently appeared on the Utilitarian podcast, where he shared his views on intelligence as well (this topic starts at 35:30).
3
Greg_Colbourn
2y
I will also say that I like Vinding's other work, especially You Are Them. A problem for Alignment is that the AGI isn't Us though (as it's by default non-conscious). Perhaps it's possible that an AGI could independently work out Valence Realism and Open/Empty Individualism, and even solve the phenomenal binding problem so as to become conscious itself. But I think these are unlikely possibilities a priori. Although perhaps they should be deliberately aimed for? (Is anyone working on this?)
4
Greg_Colbourn
2y
I've not read the whole book, but reading the linked article Consciousness – Orthogonal or Crucial? I feel like Vinding's case is not very convincing. It was written before GPT-3, and this shows. In GPT-3 we already have a (narrow) AI that can convincingly pass the Turing Test in writing, including writing displaying "social skills" and "general wisdom". And very few people are arguing that GPT-3 is conscious.

In general, if you consider that the range of human behaviour is finite, what's to say that it couldn't be recreated simply with a large enough (probabilistic) look-up table? And a large enough ML model trained on human behaviour could in theory create a neural network functionally equivalent to said look-up table. What’s to say that a sufficiently large pile of linear algebra, seeded with a sufficiently large amount of data, and executed on a sufficiently fast computer, could not build an accurate world model, recursively rewrite more efficient versions of itself, reverse engineer human psychology, hide its intentions from us, create nanotech in secret, etc etc, on the way to turning the future lightcone into computronium in pursuit of the original goal programmed into it at its instantiation (making paperclips, making a better language model, making money on the stock market, or whatever), all without a single conscious subjective internal experience?
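(To make the look-up-table point above concrete, here is a toy, purely illustrative sketch of my own, assuming numpy and scikit-learn: a deliberately over-parameterised network is fit to a made-up finite stimulus-to-response table until it is functionally equivalent to that table. The table, labels, and hyperparameters are invented for illustration; no claim is made that this scales to human behaviour.)

```python
# Toy illustration of the "probabilistic look-up table" argument: a finite
# behaviour table, and a small neural network trained until it reproduces
# that table exactly (i.e. becomes functionally equivalent to it).
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical finite table: (stimulus_a, stimulus_b) -> response
table = {(0, 0): "greet", (0, 1): "flee", (1, 0): "trade", (1, 1): "fight"}

X = np.array(list(table.keys()))
responses = sorted(set(table.values()))
y = np.array([responses.index(v) for v in table.values()])

# Over-parameterised for four data points; lbfgs converges reliably on tiny
# datasets, so the network simply memorises the mapping.
net = MLPClassifier(hidden_layer_sizes=(32, 32), solver="lbfgs",
                    max_iter=2000, random_state=0)
net.fit(X, y)

# Functional equivalence check: the trained net matches every table entry.
for stimulus, response in table.items():
    predicted = responses[net.predict(np.array([stimulus]))[0]]
    print(stimulus, "->", predicted, "(table says:", response + ")")
```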
Answer by nil
Dec 27, 2021
3
0
0

Have you considered asking Effective Thesis for this thesis advice? They probably can connect you w/ someone w/ a background in this area.

[Below are my (totally optional) thoughts on this (and I'm only a software engineer working in a genomic research institute w/ no formal biomed background):]
I would think that (ethically approved) studies on humans are more useful in general, as they translate better to other humans. Also, the more we try to reduce and substitute non-human animal experimentation w/ alternatives, the more incentive there is to develo... (read more)

2
Snippyro
2y
Didn't know about it! Thank you very much for your comments.
  1. For a new investor, I think a simple and good method is getting a Vanguard Lifestrategy ISA with 100% equities - this buys you stocks across lots of different markets.

Does anyone know if there's an ISA (Individual Savings Account) w/ a fund that doesn't invest in meat and dairy companies and companies that test on animals? (I know that I can open an ISA on something like Trade 212 and invest in individual stocks myself. But due to having more important things to work on, I'm looking for a more "invest-and-forget" type of investing.)

Thanks, Pablo. The criteria will help to avoid some future long disputes (and thus save time for more important things), although they wouldn't have prevented my creating the entry for David Pearce, for he does fit the second condition, I think. (We disagree, I know.)

nil
3y
16
0
0

(I observed downvotes from 10 to 5. Is there anything that controversial in or about the post?..)

I'm also a bit surprised; if I'm not mistaken, the post had negative karma at one point. People of course downvote for reasons other than controversy, e.g. from the forum's voting norms section:

“I didn’t find this relevant.”
“I think this contains an error.”
“This is technically fine, but annoying to read.”

But I'd be sad if people got the impression that posts like this, which reflect on altruistic motivations, are not welcome.

Imagine how it would change humanity's priorities if each day, "just" for a minute, each human adult experienced the worst suffering occurring that day on the planet (w/o going psychotic afterwards somehow). (And, for the reasons outlined in the post, we probably underestimate how much that torturous mind-"broadcasting" would change humanity's lived-out ethics.)

7
Aaron Bergman
3y
Yes, I believe things would change a lot. Hopefully we can find some way to induce this kind of cognitive empathy without making people actually suffer for first hand experience.

The slow (if not reverse) progress towards a world without intense suffering is depressing, to say the least. So thank you for writing this inspiring piece.

It also reminded me of David Pearce's essay "High-tech Jainism". It outlines a path towards a civilization that has abolished suffering, while also warning about potential pitfalls like forgetting about suffering too soon, before it's prevented for all sentient beings. (In Suffering-Focused Ethics: Defense and Implications (ch. 13) mentioned in the post, Vinding even argues that, given the irreducible uncertainty... (read more)

2
Mary Stowers
3y
I'd definitely like to write more on the concept since I truly believe it could be useful, at the very least as a source of hope. It's all too easy to feel depressed diving into the viewpoint of suffering-focused ethics, but that probably slows motivation that would be more effective otherwise. The possibility of forgetting suffering too soon is a good point to remember; I'll take a look at the essay linked. Thanks for the response!

The more pessimistic argument is that moral progress arises as a function of economic and technological progress, and can't occur in isolation. We didn't give up slaves until it was economically convenient to do so, and likely won't give up meat until we have cost and flavor competitive alternatives.

FWIW this assessment seems true to me, at least for eating non-human animals, for I don't know enough about the economic drives behind slavery. (If one is interested, there's a report by the Sentience Institute on the topic, titled "Social Movement Lessons F... (read more)

That conclusion doesn't necessarily have to be as pessimistic as you seem to imply ("we do what is most convenient to us"). An alternative hypothesis is that people to some extent do want to do the right thing, and are willing to make sacrifices for it - but not large sacrifices. So when the bar is lowered, we tend to act more on those altruistic preferences. Cf. this recent paper:

[Subjective well-being] mediates the relationship between two objective measures of well-being (wealth and health) and altruism...results indicate that altruism increases when re

... (read more)

I didn't mean to sound harsh. Thanks for pointing this out: it now seems obvious to me that that part sounds uncharitable. I do apologise, belatedly :(

What I meant is that currently these new, evolving inclusion criteria are difficult to find. And if they are used in dispute resolutions (from this case onwards), perhaps they should be referenced for contributors as part of the introduction text, for example.

4
Pablo
3y
Thanks for the feedback. I have made a note to update the Wiki FAQ, or if necessary create a new document. Feel free to ping me if you don't see any updates within the next week or so. 

Perhaps voting on cases where there is disagreement could achieve wider inclusiveness, or at least less controversy? Voters could be, e.g., the moderators (w/ an option to abstain) and several people who are familiar w/ the work of the proposed person.

It may also help if inclusion criteria are more specific and are not hidden until a dispute arises.

5
Pablo
3y
Hi nil, I've edited the FAQ to make our inclusion criteria more explicit.

I think discussion will probably usually be sufficient. Using upvotes and downvotes as info seems useful, but probably not letting them be decisive. 

It may also help if inclusion criteria are more specific and are not hidden until a dispute arises.

This might just be a case where written communication on the internet makes the tone seem off, but "hidden" sounds to me unfair and harsh. That seems to imply Pablo already knew what the inclusion criteria should be, and was set on them, but deliberately withheld them. This seems extremely unlikely. 

I t... (read more)

I should have been more clear about Drexler: I don't dispute that he is “connected to EA to a significant degree”. But so is Pearce, in my view, for the reasons outlined in this thread.

(I think it's weird and probably bad that this comment of nil's has negative karma. nil is just clarifying what they were saying, and what they're saying is within the realm of reason, and this was said politely.)

Chalmers and Hassabis fall under the category of "people who have attained eminence in their fields and who are connected to EA to a significant degree". Drexler, and perhaps also Chalmers, fall under the category of "academics who have conducted research of clear EA relevance".

First, I want to make it clear that I'm not suggesting that any of the persons I listed in my previous comment should be removed from the wiki. I just disagree that not including Pearce is justified.

Again, I honestly don’t think that it is true that Chalmers and Drexler are “connec... (read more)

4
MichaelA
3y
[Just responding to one specific thing, which isn't central to what you're saying anyway. No need to respond to this.] For what it's worth, I think I agree with you re Chalmers (I think Pearce may be more connected to EA than Chalmers is), but not Drexler. E.g., Drexler has worked at FHI for a while, and the FHI office is also shared by GovAI (part of FHI, but worth listing separately), GPI, CEA, and I think Forethought. So that's pretty EA-y. Plus he originated some ideas that are quite important for a lot of EAs, e.g. related to nanotech, CAIS, and Paretotopia. (I'm writing quickly and thus leaning on acronyms and jargon, sorry.)
8
Pablo
3y
Hey nil, Chalmers was involved with EA in various ways over the years, e.g. by publishing a paper on the intelligence explosion and then discussing it at one of the Singularity Summits, briefly participating in LessWrong discussions, writing about mind uploading, interacting (I believe) with Luke Muehlhauser and Buck Shlegeris about their illusionist account of consciousness, etc. In any case, I agree with you (and Michael) that it may be more productive to consider the underlying reasons for restricting the number of entries on individual people. I generally favor an inclusionist stance, and the main reason for taking an exclusionist line with entries for individuals is that I fear things will get out of control if we adopt a more relaxed approach. I'm happy, for instance, with having entries for basically any proposed organization, as long as there is some reasonable link to EA, but it would look kind of weird if we allowed any EA to have their own entry. An alternative is to take an intermediate position where we require a certain degree of notability, but the bar is set lower, so as to include people like Pearce, de Grey, and others. We could, for instance, automatically accept anyone who already has their own Wikipedia entry, as long as they have a meaningful connection to EA (of roughly the same strength as we currently demand for EA orgs). Pearce would definitely meet this bar. How do others feel about this proposal?

Thank you for appreciating the contribution.

Since Pablo is trusted w/ deciding on the issue, I will address my questions about the decision directly to him in this thread.

I'm sorry to hear this, Pablo, as I haven't been convinced that Pearce isn't relevant enough for effective altruism.

Also, I really don’t see how the persons below have contributed more or are more relevant to effective altruism than Pearce (that is not necessarily to say that their entries aren’t warranted!). Would it be correct to infer that at least some of these entries received less scrutiny than Pearce’s nomination?

And perhaps:

After reviewing the discussion, and seeing that no new comments have b

... (read more)

>Also, I really don’t see how the persons below have contributed more or are more relevant to effective altruism than Pearce

I tried to outline some criteria in an earlier comment. Chalmers and Hassabis fall under the category of "people who have attained eminence in their fields and who are connected to EA to a significant degree". Drexler, and perhaps also Chalmers, fall under the category of "academics who have conducted research of clear EA relevance".  Matthews doesn't fall under any of the categories listed, though he strikes me as someone wor... (read more)

For those who may want to see the deleted entry, I'm posting it below:


David Pearce is a philosopher and writer best known for his 1995 manifesto The Hedonistic Imperative and the associated ideas about abolishing suffering for all sentient life using biotechnology and other technologies.

Pearce argues that it is "technically feasible" and ethically rational to abolish suffering on the planet by replacing Darwinian suffering-based motivational systems with minds animated by "information-sensitive gradients of intelligent bliss" (as opposed to indiscriminate

... (read more)

... I think the reason for The Hassenfeld Exception is that, as far as I'm aware, the vast majority of his work has been very connected with GiveWell. So it's very important and notable, but doesn't need a distinct entry. Somewhat similar with Tegmark inasmuch as he relates to EA, though he's of course notable in the physics community for non-FLI-related reasons. ...

This makes sense to me, although one who is more familiar w/ their work may find their exclusion unwarranted. Thanks for clarifying!

In this light I still think an entry for Pearce is justifi... (read more)

9
Pablo
3y
I agree with you that a Tomasik entry is clearly warranted. I would say that his entry is as justified as one on Ord or MacAskill; he is one of half a dozen or so people who have made the most important contributions to EA, in my opinion. I will respond to your main comment later, or tomorrow.
5
MichaelA
3y
As noted, I do lean towards Tomasik having an entry, but "co-founder of an EA org"  + "written extensively on many topics highly relevant to EA" + "is an advisor for another EA org", or 1 or 2 of those things plus 1 or 2 similar things, includes a fair few people, including probably like 5 people I know personally and who probably shouldn't have their own entries.  I do think Tomasik has been especially prolific and his writings especially well-regarded and influential, which is a big part of why I lean towards an entry for him, but the criteria and cut offs do seem fuzzy at this stage. 

deleted

I'll propose the tag on that page ...

Done.

David Pearce (the tag will be removed if others think it’s not warranted)

Arguments against:

  • One may see David Pearce as much more related to transhumanism (even if to the most altruistic “school” of transhumanism) than to EA (see e.g. Pablo’s comment).
  • Some of Pearce’s ideas go against certain established notions in EA: e.g. he thinks sentience of classical digital computers is impossible under the known laws of physics, that minimising suffering should take priority over increasing happiness of the already well-off, that environmental interventions alone,
... (read more)
4
Michael Huang
3y
To add to arguments for inclusion, here’s an excerpt from an EA Forum post about key figures in the animal suffering focus area. David Pearce’s work on suffering and biotechnology would be more relevant now than in 2013 due to developments in genome editing and gene drives.
6
nil
3y
For those who may want to see the deleted entry, I'm posting it below:
3
Aaron Gertler
3y
As the head of the Forum, I'll second Pablo in thanking you for creating the entry. While I defer to Pablo on deciding what articles belong in the wiki, I thought Pearce was a reasonable candidate. I appreciate the time you took to write out your reasoning (and to acknowledge arguments against including him).
5
Pablo
3y
Thanks again, nil, for taking the time to create this entry and outline your reasoning. After reviewing the discussion, and seeing that no new comments have been posted in the past five days, I've decided to delete the article, for the reasons I outlined previously. Please do not let this dissuade you from posting further content to the Wiki, and if you have any feedback, feel free to leave it below or to message me privately.

Michael is correct that the inclusion criteria for entries of individual people haven't been made explicit. In deciding whether a person was a fit subject for an article, I haven't followed any conscious procedure, but merely relied on my subjective sense of whether the person deserved a dedicated article. Looking at the list of people I ended up including, a few clusters emerge:

  1. people who have had an extraordinary positive impact, and that are often discussed in EA circles (Arkhipov, Zhdanov, etc.)
  2. people who have attained eminence in their fields and who a
... (read more)
4
MichaelA
3y
I'm roughly neutral on this, since I don't have a very clear sense of what the criteria and "bars" are for deciding whether to make an entry about a given person. I think it would be good to have a discussion/policy regarding that. I think some people like Nick Bostrom and Will MacAskill clearly warrant an entry, and some people like me clearly don't, and there's a big space in between - with Pearce included in it - where I could be convinced either way. (This has to do with relevance and notability in the context of the EA Forum Wiki, not like an overall judgement of these people or a popularity contest.) Some other people who are perhaps in that ambiguous space:
  • Nick Beckstead (no entry atm)
  • Elie Hassenfeld (no entry atm, but an entry for GiveWell)
  • Max Tegmark (no entry atm, but an entry for FLI)
  • Brian Tomasik (has an entry)
  • Stuart Russell (has an entry)
  • Hilary Greaves (has an entry)
(I think I'd lean towards each of them having an entry except Hassenfeld and maybe Tegmark. I think the reason for The Hassenfeld Exception is that, as far as I'm aware, the vast majority of his work has been very connected with GiveWell. So it's very important and notable, but doesn't need a distinct entry. Somewhat similar with Tegmark inasmuch as he relates to EA, though he's of course notable in the physics community for non-FLI-related reasons. But I'm very tentative with all those views.)

Hi Pablo,

I'll propose the tag on that page, for I do think that a tag for David Pearce is justified (and if it isn't, then I might question some existing tags for EA persons).

1
nil
3y
deleted
Answer by nil
May 15, 2021
3
0
0

This is not about direct harm, but if AI risks are exaggerated to the degree that the worst scenarios are not even possible, then a lot of EA talent might be wasted.

Those who are skeptical about AI skepticism may be interested in reading Magnus Vinding's "Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique".

David Pearce, a negative utilitarian, is the founding figure for [suffering abolition].

It might be of interest for some that Pearce is/was skeptical about the possibility or probability of s-risks related to digital sentience and space colonization: see his reply to What does David Pearce think about S-risks (suffering risks)? on Quora (where he also mentions the moral hazard of "understanding the biological basis of unpleasant experience in order to make suffering physically impossible").

Thanks for sharing, Michael!

I think the Center for Reducing Suffering's Open Research Questions may be a helpful addition to Research ideas. (Do let me know if you think otherwise!)

Relatedly, CRS has an internship opportunity.

Also, perhaps this is intentional, but "Readings and notes on how to do high-impact research" appears twice in the list.

2
MichaelA
3y
This was intentional, but I think I no longer endorse that decision, so I've now removed the second mention.
3
MichaelA
3y
Thanks for mentioning this - I've now added it to the "Programs [...]" section :)
2
MichaelA
3y
I definitely think that that list is within-scope for this document, but (or "and relatedly") I've already got it in the Central directory for open research questions that's linked to from here. There are many relevant collections of research questions, and I've already included all the ones I'm aware of in that other post. So I think it doesn't make sense to add any here unless I think the collection is especially worth highlighting to people interested in testing their fit for (longtermism-related) research.  I think the 80k collection fits that bill due to being curated, organised by discipline, and aimed at giving a representative sense of many different areas. I think my "Crucial questions" post fits that bill due to being aimed at overviewing the whole landscape of longtermism in a fairly comprehensive and structured way (though of course, there's probably some bias in my assessment here!).  I think my history topics collection fits that bill, but I'm less sure. So I've now added below it the disclaimer "This is somewhat less noteworthy than the other links". I think my RSP doc doesn't fit that bill, really, so in the process of writing this comment I've decided to move that out of this post and into my Central directory post.  (The fact that this post evolved out of notes I shared with people also helps explain why stuff I wrote has perhaps undue prominence here.)

I saw only this old repo and assumed the Forum wasn't open source any more. Sorry for not looking further.

Has the team considered making the Forum open-source* and accepting code contributions from the community and others? What are the reasons for keeping the code repository private? Thank you!

* As far as I know, the EA Forum is not open-source, although it is based on Less Wrong platform, which is open-source.

9
JP Addison
3y
It is open source! Here’s the repository. We stay up to date with LessWrong, and submit our changes upstream, so I’d encourage any prospective contributors to submit PRs to the LW repository.

Thanks for doing this work!

Are we living in a simulation?

For IMO a cogent argument against the possibility that we live in a (digitally) simulated universe, please consider adding Gordon McCabe's paper "Universe creation on a computer".

There's a new free open-source alternative called Logseq ("inspired by Roam Research, Org Mode, Tiddlywiki, Workflowy and Cuekeeper").

For those who won't read the paper, the phenomenon is called pluralistic ignorance (Wikipedia):

... is a situation in which a majority of group members privately reject a norm, but go along with it because they assume, incorrectly, that most others accept it.

nil
3y
20
0
0
  • If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?
  • What new charities do you want to be created by EAs?
  • What are the biggest mistakes Rethink Priorities has made?

Thank you!

7
MichaelA
3y
I like the answers Marcus and Saulius gave to this question. I'll just add two things those answers didn't explicitly mention.

EA movement-building
  • Rethink has done and plans to do work aimed at improving efforts to build the EA movement and promote EA ideas
    • E.g., Rethink's work on the EA Survey, or its plans related to:
      • "Further refining messaging for the EA movement, exploring different ways of talking about EA to improve EA recruitment and increase diversity.
      • Further work to explore better ways to talk about longtermism to the general public, to help EAs communicate longtermism more persuasively and to increase support for desired longtermist policies in the US and the UK."
  • And building the EA movement and promoting EA ideas seems like plausibly one of the best interventions for reducing needless/extreme/all suffering
    • E.g., building the EA movement could increase the flows of talent and funds to existing suffering-focused EA organisations (such as CLR), lead to the creation of new ones, or lead to talented people using their careers to effectively reduce suffering in other ways (e.g., through specific roles in government or AI labs)
    • E.g., promoting EA ideas (even without "building the EA movement") could lead to a general shift in voting, policies, behaviours towards reducing suffering

Forecasting
  • Rethink plans to "Use novel econometric methods to better understand our ability to reliably impact the long-term future", as well as to "Improve our ability to forecast the short-term and long-term future."
  • Improving our ability to forecast events and impacts, and improving our understanding of when and how much to trust forecasts, would presumably be about as useful for reducing suffering as for all other efforts to improve the world. (And I think it'd plausibly be very useful for such efforts.)
    • This seems especially true in relation to: 1. efforts to reduce suffering in the long-term future, and 2. decisio
9
Marcus_A_Davis
3y
I don't have any strong opinions about this and it would likely take months of work to develop them. In general, I don't know enough to suggest that it is desirable that new charities work in areas I think could use more work rather than existing organizations up their work in those domains. Not doing enough early enough to figure out how to achieve impact from our work and communicate with other organizations and funders about how we can work together.

Thanks for the questions!

If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?

I think this depends on many factual beliefs you hold, including what groups of creatures count and what time period you are concerned about. Restricting ourselves to the present and assuming all plausibly sentient minds count (and ignoring extremes, say, less than 0.1% chance), I think farm and wild animals are plausibly candidates for enduring... (read more)

If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?

Unlike organizations like OPIS, the Center for Reducing Suffering, and the Center on Long-Term Risk, we don't have reducing extreme suffering set as our only priority. We sometimes work on reducing suffering that may not be classified as extreme (arguably, our work on cage-free hen campaigns falls into this category). And perhaps some other work is not directly about reducin... (read more)

What are the biggest mistakes Rethink Priorities has made?

I can’t speak for the entire organization, but I can talk about what I see as my biggest mistakes since I started working at Rethink Priorities:

  1. Writing articles about interventions I think are promising and thinking that my work is done once the article is published. Examples are baitfish (see the comment above), fish stocking, rodents farmed for pet snake food. The way I see things now, if I think that something should be done, I should express that opinion very clearly and with fewer caveats, find
... (read more)

What new charities do you want to be created by EAs?

For me it's a lobbying organization against baitfish farming in the U.S. I wrote about the topic two years ago here. Many people complimented me on it but no one did anything. I talked with some funders who said they would be interested in funding someone suitable to pursue this, but I haven’t found who that could be. The main argument against it used to be that the industry is declining. But the recently released aquaculture census suggests that it is no longer declining (see my more recent thoughts on... (read more)

Answer by nil
Nov 18, 2020
4
0
0

In the real world, maybe we're alone. The skies look empty. Cynics might point to the mess on Earth and echo C.S. Lewis: "Let's pray that the human race never escapes from Earth to spread its iniquity elsewhere." Yet our ethical responsibility is to discover whether other suffering sentients exist within our cosmological horizon; establish the theoretical upper bounds of rational agency; and assume responsible stewardship of our Hubble volume. Cosmic responsibility entails full-spectrum superintelligence: to be blissful but not "blissed out" - high-tech J

... (read more)
Answer by nil
Nov 18, 2020
12
0
0

There’s ongoing sickening cruelty: violent child pornography, chickens are boiled alive, and so on. We should help these victims and prevent such suffering, rather than focus on ensuring that many individuals come into existence in the future. When spending resources on increasing the number of beings instead of preventing extreme suffering, one is essentially saying to the victims: “I could have helped you, but I didn’t, because I think it’s more important that individuals are brought into existence. Sorry.”

-- Simon Knutsson, "The One-Paragraph Case for

... (read more)
Answer by nil
Nov 18, 2020
5
0
0

If humanity is to minimize suffering in the future, it must engage with the world, not opt out of it.

-- Magnus Vinding (2015), Anti-Natalism and the Future of Suffering: Why Negative Utilitarians Should Not Aim For Extinction
