All of weeatquince's Comments + Replies

Using TikTok to indoctrinate the masses to EA

Nice videos. Well done.

I thought the first one was really good – very impressive!!

On infinite ethics

Going to type and think at the same time – let's see where this goes (sorry if it ends up with a long reply).

 

Well firstly, as long as you still have a non-zero chance of the universe not being infinite, then I think you will avoid most of the paradoxes mentioned above (zones of happiness and suffering, locating value and rankings of individuals, etc.). But it sounds like you are claiming you still get the "infinite fanatics" problems.

 

I am not sure how true this is. I find it hard to think through what you are saying without a concrete moral dilem... (read more)

On infinite ethics

I would disagree.

Let me try to explain why by turning your argument back on itself. Imagine with me for a minute that we live in a world where the vast majority of physicists believe in a big bounce and/or infinite time etc.

Ok got that, now consider:

The infinite ethics problems still do not arise as long as you have any non-trivial credence in time being finite. For more recent consequences always dominate later ones as long as the latter have any probability above 0 of not happening.

Moreover, you should have such a non-trivial credence. For example, although w

... (read more)
3[anonymous]25d
That’s a clever response! But I don’t think it works. It does prove that we shouldn’t be indifferent between w1 and w2, but both are infinite in expectation. So if your utility function is unbounded, then you will still prefer any non-zero probability of w1 or w2 to certainty of any finite payoff. (And if it’s bounded then infinite stuff doesn’t matter anyway.)
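A minimal sketch of the expected-value point in that reply, under its assumptions (an unbounded utility function, a world $w_1$ of infinite value, and an arbitrary finite payoff $u_{\mathrm{fin}}$) – the formalisation here is a rough illustration, not the commenter's own:

$$\mathbb{E}[U] \;=\; p\,U(w_1) + (1-p)\,u_{\mathrm{fin}} \;=\; p\cdot\infty + (1-p)\,u_{\mathrm{fin}} \;=\; \infty \quad \text{for any } p>0,$$

so any lottery with a non-zero chance of $w_1$ has infinite expected utility and is preferred to certainty of any finite payoff.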
How to set up a UK organisation (Limited Company version)

Hi. I have a bunch of notes on how to start a UK registered charity that I can share on request if useful to people. Message me if needed.

2james1mo
Cool! Have you considered turning those notes into a post? Could be a great way for more people to see them
Longtermist EA needs more Phase 2 work

One worry I have is the possibility that the longtermist community (especially the funders) is actively repelling and pushing away the driver types – people who want to dive in and start doing (Phase 2 type) things. 

This is my experience. I have been pushing forward Phase 2 type work (here) but have been told various things like: not to scale up, that phase 1.5 work is not helpful, that we need more research first to know what we are doing, that any interaction with the real world is too risky. Such responses have helped push me away. And I know I a... (read more)

I agree with you, and with John and the OP. I have had exactly the same experience of the Longtermist community pushing away Phase 2 work as you have - particularly in AI Alignment. If it's not purely technical or theoretical lab work then the funding bodies have zero interest in funding it, and the community has barely more interest in discussing it. This creates a feedback loop of focus.

For example, there is a potentially very high-impact opportunity in the legal sector right now to contribute to AI Alignment. There are curr... (read more)

My GWWC donations: Switching from long- to near-termist opportunities?

I came to say the same thing. I was (not that long ago) working on longtermist stuff and donating to neartermist stuff (animal welfare). I think this is not uncommon among people I know.

EA can be hard: links for that

Sorry to hear you are struggling. It is difficult. So do look after yourself. I am sure those around you in EA really appreciate and value what you are doing and that you are not at all being net-negative; do talk to the people you know if you feel like that.

Some extra links that might be of use to you:

2smallsilo1mo
Thank you!
How I failed to form views on AI safety

I thought this post was wonderful. Very interestingly written, thoughtful, and insightful. Thank you for writing it. And good luck with your next steps of figuring out this problem. It makes me want to write something similar: I have been in EA circles for a long time now and to some degree have also failed to form strong views on AI safety. Also I thought your next steps were fantastic and very sensible; I would love to hear your future thoughts on all of those topics.

 

On your next steps, picking up on:  

To evaluate the importance of AI risk ag

... (read more)
1Ada-Maaria Hyvärinen1mo
Thanks! And thank you for the research pointers.
How I failed to form views on AI safety

my default hypothesis is that you're unconvinced by the arguments about AI risk in significant part because you are applying an unusually high level of epistemic rigour

This seems plausible to me, based on:

  • The people I know who have thought deeply about AI risk and come away unconvinced often seem to match this pattern.
  • I think some of the people who care most about AI risk apply a lower level of epistemic rigour than I would, e.g. some seem to have much stronger beliefs about how the future will go than I think can be reasonably justified.
Free-spending EA might be a big problem for optics and epistemics

Thank you for the comment. I edited out the bit you were concerned about as that seemed to be the quickest/easiest solution here. Let me know if you want more changes. (Feel free to edit / remove your post too.)

2Charles He1mo
Hi, this is really thoughtful. On the principle of being consonant with your actions in your reply, and following your lead, I edited my post. However, I didn’t intend to create an edit to this thread and I especially did not intend to undo discussion. It seems more communication is good. It seems like raising the issue is good, as long as that is balanced with good judgement and proportionate action and beliefs. It seems like a good action was to understand and substantiate or explore issues.
Free-spending EA might be a big problem for optics and epistemics

One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism

Agree.

Fully agree we need new hard-to-fake signals. Ben's list of suggested signals is good. Other signals I would add are being vegan and cooperating with other orgs / other worldviews. But I think we can do more as well as increase the signals. Other suggestions of things to do are:

  • Testing for altruism in hiring (and promotion) processes. EA orgs could put greater weight on various ways to test or look for evidence of altruism and kindness in their hiring process
... (read more)

Random, but in the early days of YC they said they used to have a "no assholes" rule, which meant they'd try not to accept founders who seemed like assholes, even if they thought they might succeed, due to the negative externalities on the community.

4Charles He1mo
Hey, do you happen to know me in real life and would be willing to talk about these issues offline? I’m asking because it seems unlikely you will be able to be more specific publicly (but it would be good if you were and were to just write here) and so it would be good to talk about the specific examples or perceptions in a private setting. I know someone who went to EAG who is sort of skeptical and looks for these things, but they didn’t see a lot of bad things at all. (Now, a caveat is that selection is a big thing. Maybe a person might miss these people for various idiosyncratic factors.) But I’m really skeptical about major issues, and in the absence of substantive issues (which, by the way, don’t need hard data to establish), it seems negative EV to generate a lot of concern or use this sort of language. One issue is that problems are self-fulfilling: start pointing vaguely at bad actors a lot and you’ll find that you start losing the benefits of the community. As long as these people don’t enter senior levels or community building roles you’re pretty good. Another issue is that trust networks are how these issues are normally solved, and yet there’s pressure to open these networks, which runs into the teeth of these issues. To be clear, I’m saying that this funding and trust problem is probably being worked on. Having a lot of noise about this issue, or people poking the elephant, or just bad vibes, but not substantiated, can be net negative.
Free-spending EA might be a big problem for optics and epistemics

I think for difficult questions it is helpful to form both an inside view (what do I think) and an outside view (what does everyone else think). Pay is an indicator of the outside view. In an altruistic market how good an indicator it is depends on how much you trust a few big grantmakers to be making good decisions. 

Open Philanthropy Shallow Investigation: Civil Conflict Reduction

Hi Lauren, this post was fantastic!!! An incredibly well-researched and well-written look at a really important topic. I think it is amazing to see things like this on the EA Forum and I am sure it will be useful to people. (For example, talking for myself, reading this and getting a better understanding of the scale of this issue makes it more likely that I will nudge Charity Entrepreneurship (where I work) to look into this area in future.)

In the spirit of trying to provide useful feedback, a suggestion and a question:

 

A suggested intervention

Police ... (read more)

3Lauren Gilbert1mo
Policing reform is a topic near and dear to my heart, so I am happy to talk about this ad nauseam. One of the papers in my now-on-pause dissertation was on policing, and I also RAed on a study on community policing [https://pubmed.ncbi.nlm.nih.gov/34822276/] in the Global South. (It didn't work.) I agree that better policing is desperately needed in the developing world; functionally, there really aren't police in much of the world. But I don't know that the literature is yet mature enough for this kind of overview; policing in the developing world has really only taken off as a research area in the last few years. My wild speculation would be that police reform is really hard - changing incentives for police can be very difficult in under-resourced environments. The field is really growing, though, so I'm excited to see what comes out of that field in the future. Travis Curtice [https://traviscurtice.com/] and Rob Blair [https://robblair.net/writing/] are two of my favorite scholars of policing.
Free-spending EA might be a big problem for optics and epistemics

Extra ideas for the idea list: 

  • Altruistic perks, rather than personal perks. E.g. 1: turn up at this student event and get $10 donated to a charity of your choice. E.g. 2: donation-matching schemes mentioned in job adverts, perhaps funded by offering slightly lower salaries. Anecdotally, I remember the first EA-ish event I went to offered money to charity for each attendee and free wine; it was the money to charity that attracted me to go, and the free wine that attracted my friend, and I am still here and they are not involved.
  • Frugality options, like an
... (read more)

I would love frugality options!

Ideal governance (for companies, countries and more)

One challenge you might find with examining the literature in my space is a lack of prioritisation – in particular I think this leads to an overly strong focus on voting mechanisms above other issues.

To me it feels like how animal charities focus mostly on pets*. Sure, pets are the most obvious animals that we engage with in our daily life, but the vast majority of animal suffering happens on farms. Sure, voting is the most obvious part of the system that we engage with in our daily life, but the vast majority of system improvements are m... (read more)

Ideal governance (for companies, countries and more)

For "empirical research" 

The thing I have found most useful is the work of the UK's Institute for Government – both their reports and podcasts. I often find I pick up useful things on ideal system design. For example, it may well be that a mix of private and public services is better than 100% one or the other, as you can compare the two, see which is working better, and take best practice from both (this was from their empirical work on prisons). The caveat is that if you are not into UK policy there may be too much context to wade through to reach the interesting concl... (read more)

Making Community Building a more attractive career path

Hello. Ex-community builder here sharing my two cents. Some ideas you might want to consider are:

  • Supporting people leaving the field to stay on as mentors/advisers/trustees. I stopped full-time community building in London in 2017 but have stayed on in an advisory/trustee capacity for EA London ever since. Boosting the status of this, making it easy and fun for people to do, or expecting this of people, would help future community builders have someone to talk to on a regular basis who knows their region/community and can offer support.
  • Try hiri
... (read more)
3Vilhelm Skoglund1mo
Thank you for the input! I really like the mentoring idea. My intuition is that many would be up for this, if it was easier. Hiring mid-career CBs also seems like a good idea, both because they are likely to stick around longer and have more life experience / career capital and might be able to give more relevant guidance, contacts etc. Though I think it is good to have young people in many contexts. Support with boring tasks would be beneficial and I do think it could be done "centralized", like Markus Amalthea Magnuson is doing with altruistic.agency.
EA and Global Poverty. Let's Gather Evidence

Potentially relevant post here: https://forum.effectivealtruism.org/posts/GFkzLx7uKSK8zaBE3/we-need-more-nuance-regarding-funding-gaps.

The post's author makes the claim that there is lots of funding for big global poverty orgs but less for smaller, newer, innovative orgs, whereas farmed animal welfare and AI have more funding available for small new projects and individuals.

This could mean that just looking at the total amount of funding available is not a complete measure of how prioritised an area is.

The Vultures Are Circling

I might be in the minority here but I liked the style this post was written in, emotive language and all. It was flowery language but that made it fun to read, and I did not find it to be alarmist (e.g. it clearly says “this problem has yet to become an actual problem”).

And more importantly I think the EA Forum is already a daunting place and it is hard enough for newcomers to post here without having to face everyone upvoting criticisms of their tone / writing style / post title. It is not the perfect post (I think there is a very valid criti... (read more)

I also thought that the post provided no support for its main claim, which is that people think that EAs are giving money away in a reckless fashion. 

 Even if people are new, we should not encourage poor epistemic norms. 

I feel anxious that there is all this money around. Let's talk about it

I think that's a fair point. I normally mean the former (the impact-maximising one) but in this context was probably reading it as the OP used it, more like the latter (the EA-stamp one). Good to clarify what was meant here, sorry for any confusion.

I feel anxious that there is all this money around. Let's talk about it

Sometimes people in EA will target scalability and some will target cost-effectiveness. In some cause areas scalability will matter more and in some cost-effectiveness will matter more. E.g. longtermists seem more focused on the scalability of new projects than those working on global health. Where scalability matters more there is more incentive for higher salaries (oh, we can pay twice as much and get 105% of the benefit – great). As such I expect there to be an imbalance in salaries between cause areas.

This has certainly been my experience with the people fundi... (read more)

5Linch2mo
As an aside, I wonder if you (and OP) mean a different thing by "cause impartial" than I do. I interpret "cause impartial" as "I will do whatever actions maximize the most impact (subject to personal constraints [https://www.britannica.com/topic/ought-implies-can]), regardless of cause area." Whereas I think some people take it to mean a more freeform approach to cause selection that's more like "Oh I don't care what job I do as long as it has the "EA" stamp?" (maybe/probably I'm strawmanning here).
$100 bounty for the best ideas to red team

Red team: There are no easy ways that [EA org*] strategy can be better optimised towards achieving that organisation's stated goals and/or the broader goal of doing the most good.

Honestly, not the easiest question but practically quite useful if anyone listens.

 

* E.g. CEA, OpenPhil, GiveWell, FTX Future Fund, Charity Entrepreneurship, HLI, etc.

A Landscape Analysis of Institutional Improvement Opportunities

I still expect we would have some disagreement on how likely it is for this concentrated opportunities hypothesis to be true.

An interesting cheap (but low-veracity) test of this hypothesis could be to list out a handful of institutions (or collections of institutions) that you think would certainly NOT be considered as the "most powerful" but might matter (e.g.: university career services, EA community institutions, the Biological Weapons Convention implementation unit, AI/tech regulatory bodies, top business schools, tech start-ups, etc., etc.) and ... (read more)

Awards for the Future Fund’s Project Ideas Competition

Hi Nick, Great work getting so much interest and so many ideas. 

I am super curious to know how much prioritisation and vetting is going on behind the scenes for the ideas on the  FTX Fund project list and how confident you are in the specific ideas listed.

One way to express this would be: Do you see the ideas on your list as likely to be in the top 100 longtermist project ideas or as likely to be in the top 10,000 longtermist project ideas or somewhere in between?*  I think knowing this could be useful for anyone looking to start a... (read more)

I suspect a lot of the "very best" ideas in terms of which things are ex ante the best to do, if we don't look at other things in the space (including things not currently done), will look very similar to each other.

Like 10 extremely similar AI alignment proposals. 

So I'd expect any list to have a lot of regularization for uniqueness/side constraint optimizations, rather than thinking of the FTX project ideas list as a ranked list of the most important x-risk reducing projects on the margin. Arguably, the latter ought to be closer to how altruistic individuals should be optimizing for what projects to do, after adjusting for personal fit 

A Landscape Analysis of Institutional Improvement Opportunities

Hi Ian. Great you have done this. Prioritisation is a crucial and challenging part of EA decision making and it is great to see an exercise dedicated to openly trying to prioritise where to focus time and attention.

 

I did have one question / doubt about one of your key conclusions. You said:

Our findings suggest that ... opportunities [to drive better world outcomes] are highly concentrated in a fairly small number of organizations.

However as far as I can tell you do not at any point in the post seem to justify this or give the reader a reason to b... (read more)

1IanDavidMoss2mo
Hi Sam, the key passage was in this text from the outset of the "Top Institutions" section: Granted, we are only talking about the 41 organizations that were included in the model, but having spent some time with those numbers, I would be at least somewhat surprised if the 70% figure were to go down by all that much (say, below 55%) if we expanded the analysis to our entire short list of 77 organizations, and so on and so forth. However, I would say that my overall credence in the claim that you quoted is moderately low, which was intended to be conveyed by the use of the verb "suggest" rather than a stronger word. Probably a better way to characterize it is that it's something like a working hypothesis that I believe to be true based on this analysis but am still seeking to confirm in a larger sense. Regarding your specific counterarguments:
  • More public and powerful organizations being harder to influence – this assumption is already taken into account in our analysis, albeit with a lot of uncertainty around the relevant variables.
  • Path dependencies – I think you are right about the general point, but I don't think it really challenges the claim about concentration of opportunity. The basic case we're talking about is when you invest in one or a few demonstration projects with lower-profile institutions, where the hope is that this will help grease the wheels for spreading the idea or intervention being demonstrated to a higher-profile institution where it matters more. I think if we're serious about that being the goal, it probably makes more sense to view those investments as part of a longer-term strategy targeting the higher-profile institution, and one could even budget for the demonstration projects as part of the same $100M (or whatever amount) philanthropic investment.
  • Non-targeted institutional reforms – yes, this is along the lines of what I called the "product-centered" theory of change in the "Insig
EA directory of ideas

I support. I think this would be helpful and would use this. (I work for Charity Entrepreneurship).

When did EA miss a great opportunity to do good?

EA missed:

  • EA community building.
    It might seem odd to say that EA missed EA community building – but even until 2016/16 there was no or minimal support or funding for community building. That is about 4-5 years from EA being a thing to EA community building being widely accepted as valuable. When I talked to people, such as senior CEA staff, about it back at EAG Oxford in 2015, it felt like the key question was: should EA risk outreach to any significant number of people, or just build a very narrow, small community of super-talented people? To get a
... (read more)
6Linch2mo
Personal take: I strongly(?) agree with the high-level texture of both of these points. The first point seems especially egregious ex post. Though I wouldn't frame your timing quite the same way – 2016(?)-2019(?) feels more dead re: CB than either before or after. For a while, a) many EA orgs didn't believe in scale, and b) entrepreneurship was underemphasized in EA advice (so creating new orgs didn't happen as often as it could), which didn't help. I feel like most of the years I've been in EA have been in "keep EA small" mode, and "don't do irreversible growth" memes. I'd be interested in whether people who strongly believed this before think that a) reality has changed vs b) their beliefs have changed vs c) they think the current wave of growth is ill-advised.
How might a herd of interns help with AI or biosecurity research tasks/questions?

Idea: You could take a long list of project ideas and have interns prioritise them. If you listed out 200-300 bio or AI or EA meta projects, you could have 3 interns each do a separate 1-day review of each project. It could be done with minimal oversight, listing the ideas could be quick, and in theory it could create a useful resource.

Of course I am not sure how well it would match an expert take on the topic, and there are lots of challenges and potential problems with unpaid intern labour.

If someone wants to organise this and has an intern army I would be happy to discuss / help.

Update from Open Philanthropy’s Longtermist EA Movement-Building team

Really helpful. Good to get this broader context. Thank you!!

Update from Open Philanthropy’s Longtermist EA Movement-Building team

Hi Claire,

Thank you for the write-up. I have a question I would love to hear your (and other people's) thoughts on. You said:

I should have hired more people, more quickly. And, had a slightly lower bar for hiring in terms of my confidence that someone would be a good fit for the work, with corresponding greater readiness to part ways if it wasn’t a good fit.

This is really interesting as it goes against the general tone of advice that I hear, which suggests being cautious about hiring. That said I do feel at times that the EA community is perhaps more cauti... (read more)

So to start, that comment was quite specific to my team and situation, and I think historically we've been super cautious about hiring (my sense is, much more so than the average EA org, which in turn is more cautious than the next-most-specific reference class org).

Among the most common and strongest pieces of advice I give grantees with inexperienced executive teams is to be careful about hiring (generally, more careful than I think they'd have been otherwise), and more broadly to recognize that differences in people's skills and interests lead to ... (read more)

Is transformative AI the biggest existential risk? Why or why not?

Ah, sorry, I misunderstood. Thank you for the explanation :-)

Is transformative AI the biggest existential risk? Why or why not?

I think the answer depends on the timeframe you are asking over. I give some example timeframes you might want to ask the question over, and a plausible answer for the biggest x-risk in each.

  • 1-3 year: nuclear war
    Reasoning: we are not close enough to building TAI that it will happen in the next few years. Nuclear war this year seems possible.
  • 4-20 years: TAI
    Reasoning: Firstly you could say we are a bit closer to TAI than to building x-risk level viruses  (very unsure about that). Secondly the TAI threat is most worrying in scenarios where it happens very qu
... (read more)
Is transformative AI the biggest existential risk? Why or why not?

I am not sure if "all else equal" (by which I think you mean if we don’t have good likelihood estimates) that "AI alignment is the most impactful object-level x-risk to work on" applies to people without relevant technical skills.

If there is some sense of "all risks are equal" then for people with policy skills I would direct them to focus their attention right now on pandemics (or on general risk management), which is much more politically tractable and where it is much clearer what kinds of policy changes are needed.

6Rohin Shah2mo
By "all else equal" I meant to ignore questions of personal fit (including e.g. whether or not people have the relevant technical skills). I was not imagining that the likelihoods were similar. I agree that in practice personal fit will be a huge factor in determining what any individual should do.
Concerns with the Wellbeing of Future Generations Bill

Keen to chat and see what we can come up with between us. At this point I think I have thought about it enough that I would be surprised if we could develop ideas better than the core ideas of the bill – but keen to try.

6John_Myers2mo
I think it is likely that even if you and I can't come up with improvements (although I suspect we can), a broader number of people getting involved could improve on the core ideas – looking forward to working on it together!
Concerns with the Wellbeing of Future Generations Bill

It is very unclear what Quangos have to do with the rest of this whole post and why they get a random mention in the "make EA look bad" section. Looking into it, it appears that someone on Twitter criticised this WoFG bill, and also EA, on the grounds that this bill includes a Quango.

 

Now, I am not sure that anyone denies that Quangos can be useful tools. The authors (Larks and James) do not at any point make a case against Quangos or say they cannot be useful in the correct circumstances. In fact they make a case for more Quangos (with the three ... (read more)

1John_Myers2mo
I can’t speak for the Twitter author you mention but I think our comment about quangos was primarily intended to add a minor element of humour to lighten a very long piece. I apologize if that was a poor choice on our part. Quangos were extensively joked about in the old UK television series and book Yes Minister. My personal view about the quango (the Commission) suggested in the Bill is that without substantial revision it risks doing net damage. I certainly agree that some forms of quango may be useful, although I think it is often difficult to design them to ensure that the benefits exceed the costs. It would be great if the next quango proposed generates much greater consensus that the proposal is likely to be net beneficial.
Concerns with the Wellbeing of Future Generations Bill

Good point. To clarify, I think when I say "give future generations a voice" I was thinking of empowerment – general mechanisms that would allow someone to speak for the future, some sort of representation or consideration across the board (not just specific policies). I think broad empowerment is valuable and we should not give up on any and all general approaches to empowering future generations (e.g. WoFGB) in order to only focus on sector-specific policies. (I get the impression you think otherwise.)

 

(Also to clarify, given how ridiculously short-term our politics is, when I say empower future generations I would include within that the future views of current generations.)

1John_Myers2mo
I have an open mind on that. I think it’s an empirical question and it depends partly on how it is done. I could envisage many circumstances where a mechanism allowing someone purportedly to speak for future generations could in fact harm those future generations.
CE Research Report: Road Traffic Safety

[I work for CE as Director of Research] 

Hi Ramiro – really good point. We will try to get more of our lists of ideas out into the EA realm; keep your eyes on the Forum.

(Unfortunately I can't promise a list of ideas from the 2020 work on family planning, as that was before my time and the staff who worked on that have now left, but I hope to get some other idea lists up online.)

Concerns with the Wellbeing of Future Generations Bill

Hi, me again with another comment.

Last night I read for the first time Holden's post on "vetocracy", which you link to. I thought it was very good.

He basically says that empowerment of any actor will to some degree increase the risk of vetocracy. If you give a voice to a disadvantaged actor, it provides an extra channel for that actor to give input and maybe even stop changes in the world. Holden is pretty clear that he thinks this is worth it – that looking over history to date, empowering people who have been left out of decision making is ove... (read more)

Thank you, I think that’s very constructive.

Where I would slightly disagree: I don’t think that every mechanism to give future generations’ interests more of a voice must necessarily result in more costs or red tape for any change. It may be possible to construct mechanisms that give them more of a voice for positive change. (The analogy here would be street votes.) We could see the “three lines of defence” proposals as an example of that. I think it would be good to see if we can find more of those mechanisms.

Concerns with the Wellbeing of Future Generations Bill

Hi John, Thank you for the healthy debate. I am finding it very interesting.

I just want to pick up on one point that is perhaps an important crux where I think my views are quite different from yours. You say:

 

I think that getting the Government as currently formed to select and act on long term plans, unless the subjects of those long term plans are very carefully guided, could be highly damaging

You seem to be saying that the bill would encourage government to do more long-term decision making but government decision making about the future is just s... (read more)

Concerns with the Wellbeing of Future Generations Bill

TL;DR

Lots of good critical points in this post. However I would want readers to note that:

  • None of the criticisms in the post really pertain to the core elements of the bill. The theory of change for the bill is: government doesn’t make long term plans > tell government to make long term plans (i.e. set a long term vision and track progress towards it) > then government will make long term plans. This approach has had research and thought put into it.
  • This draft of the bill makes much more sense when you see it as a campaigning tool, a showcase of idea
... (read more)

Many thanks for the thoughtful and constructive response. I agree with many of your comments.

(First, I note that our published text does not include the sentence you quoted:

We think the base rate is that most efforts to improve governance have ended in failure or worse

Instead it says this:

Ultimately, we think the base rate is that many efforts to improve policy in the last 50 years have ended in failure or worse. 

We agreed with your comments on that and amended accordingly before publishing. Thank you again for giving them.)

Responding to your two main... (read more)

7Nathan Young3mo
I can sympathise with both parties here but I think the EA Comms part of this bill could have been better. It seems consistent with both sides to have done some comms saying "this is a first draft, it is a rallying point, it has some EA involvement but is not an EA bill". I think saying that on Twitter, for instance, would have avoided a lot of blowback. Thanks to weeatquince for their work on the bill and the authors of this article for their response.

Really excellent comment - could be its own post, arguably.

I work on government legislation in the civil service and can confirm that private members' bills wouldn't have any access to the Office for Parliamentary Counsel (government specialist lawyers) unless the government supports the private member's bill, which is rare. Private members' bills are primarily used as campaigning tools for demonstrating political support for a particular idea and getting it debated in Parliament.

We should consider funding well-known think tanks to do EA policy research

For longtermist stuff directing think tank folk to the SFF would probably be best, and perhaps helping them think of a proposal that is aligned with longtermist ideas. I don’t think LTFF would fund this kind of thing (based on their payout reports etc).

For animal rights stuff probably the Animal Welfare Fund (or Farmed Animal Funders) might be interested in funding think tank folk outside of Europe / North America.

For other topics I have no good ideas.

 

If you have, say, £250k+ you could just put out a proposal (e.g. on the EA Forum) saying that you would fund think tanks, share it with people in the EA policy space, and see who gets in touch.

Modelling Great Power conflict as an existential risk factor

Thank you Stephen – really interesting to read. Keep up the good work.

Some quick thoughts.

 

1.

There was less discussion than I expected of mutual assured destruction type dynamics.

My uninformed intuition suggests that the largest source of risk of an actually existential catastrophe comes from scenarios where some actor has an incentive to design weapons that would be globally destructive and also to persuade the other side that they would use those weapons if attacked, in order to have a mutually assured destruction deterrent. (The easiest way to persuade y... (read more)

What's your prior probability that "good things are good" (for the long-term future)?

You could of course ask this question the other way round. What is the probability that things that are good for the long run future (for P(utopia)) are good in the short run as well?

For this I would put a very high probability as:

  • Most of what I have read generally about how to affect the long-run future suggests you need to have feedback loops to show things are working, which suggests short-run improvements (e.g. you want your AI interpretability work etc. to help in real-world cases today)
  • Many but not all of the examples I know of people doing things that
... (read more)
Bounty for your best 2 minute answer to an EA 'frequently asked question'

Imagine you had a time machine. A little box that you could climb inside and use to explore past and future worlds. And so you set off to see what the future may bring. And you find a future of immense wonders: glittering domes, flying cars, and strange planets with iridescent moons. But not all is perfect. On a space station you find a child lost and separated from her family. Under a strange sun you find a soldier lying for days injured on a battlefield with no hope of help. In a virtual world you find an uploaded mind trapped alone in a b... (read more)

1james4mo
Thanks for your submission!
On infinite ethics

My take (think I am less of an expert than djbinder here)

  1. This view allows that.
  2. This view allows that. (Although, entirely separately, considerations of entropy etc. would not allow infinite value.)
  3. No I don’t think identical questions arise. Not sure. Skimming the above post, it seems to solve most of the problematic examples you give. At any point a moral agent will exist in a universe with finite space and finite time that will tend to infinity going forward. So you cannot have infinite starting points, so no zones of suffering etc. Also I think you don’t get prob
... (read more)
On infinite ethics

Agree with djbinder on this, that "infinities should only be treated as 'idealized limits' of finite processes". 


To explain what I mean:

Infinities outside of limiting sequences are not well defined (at least that is how I would describe it). Sure, you can do some funky set-theory maths on them, but from the point of view of physics they don’t work and cannot be used.

(My favorite example (HT Jacob Hilton): a man throws tennis balls numbered 1, 2, 3, 4, ... into a room once every second, and you throw them out once every 2 seconds – how many balls are in the ro... (read more)
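For anyone who has not met this puzzle (it is essentially the Ross–Littlewood paradox), here is a rough sketch of why the answer is ill-defined, assuming the ball thrown out each time is the lowest-numbered one still in the room:

$$N(2n) \;=\; \underbrace{2n}_{\text{thrown in}} - \underbrace{n}_{\text{thrown out}} \;=\; n \;\to\; \infty,$$

yet on that removal rule ball $m$ leaves the room at the $m$-th removal (time $2m$), so for every fixed $m$ the indicator "ball $m$ is in the room at time $t$" tends to $0$. Count totals and the room fills up without bound; track individual balls and in the limit it is empty. The "infinite" answer only makes sense as the limit of a specified finite process, which is the point about idealized limits above.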

On infinite ethics

This is a really, really well-written piece; you go into great depth, explain things well, and it is marvellous that someone (in this case you) has done this work. But before you or other people on this forum put too many resources into infinite ethics, I think it is important to note the extent to which it is, as you say, "sci-fi stuff".

(I do worry you overstate the extent to which infinities might be a thing. For example, you end the section on "maybe infinities are not just a thing" by saying that "modern cosmology says that our actual co... (read more)

3[anonymous]1mo
I know that this is a necro but I just wanted to point out that the problem still arises as long as you have any non-trivial credence in your actions having infinite consequences. For infinite consequences always dominate finite ones as long as the former have any probability above 0. Moreover, you should have such a non-trivial credence. For example, although we have pretty good evidence that the universe is not going to end in a Big Bounce scenario, it’s certainly not totally ruled out (definitely not to the point where you should have credence of 1 that it doesn’t happen). Plenty of cosmologists are still kicking around that and other cyclic cosmologies, which do theoretically allow for literally infinite (morally-relevant!) effects from individual actions, even if they’re in the minority.
Introducing Animal Empathy Philippines

This sounds amazing! Well done for starting something :-)

9Ging Geronimo4mo
Thank you for the kind words. :)
What questions relevant to EA could be answered by surveying the public?

Is this question asked with the intention of maybe doing such surveys?

I do plan to do surveys of the public's view of what a good future is and would really appreciate support on that. I hope to be able to fund any such work but that is yet to be confirmed. Would you be interested in collaborating?
