Jan_Kulveit

Comments

Neglected EA Regions

I'm not sure you've read my posts on this topic? (1,2)

In the language used there, I don't think the groups you propose would help people meet the minimum recommended resources; rather, they risk creating the appearance that some criteria vaguely in that direction are met.

  • e.g., in my view, the founding group must have a deep understanding of effective altruism and, essentially, the ability to go through the whole effective altruism prioritization framework, taking local specifics into account to reach conclusions valid for their region. This is basically impossible to implement as a membership requirement in a Facebook group
  • or strong link(s) to the core of the community ... this is not fulfilled by someone from the core hanging around in many Facebook groups with otherwise unconnected people

Overall, I think sometimes small obstacles - such as having to find EAs from your country in the global FB group, on the EA Hub, or by other means - are a good thing!

Neglected EA Regions

FWIW, the post Why not to rush to translate effective altruism into other languages was quite influential, but in my opinion it is often wrong or misleading, and it advocates a very strong prior toward inaction.

Neglected EA Regions

I don't think this is actually neglected

  • in my view, bringing effective altruism into new countries/cultures is, in its initial phases, best understood as strategy/prioritisation research, not as "community building"
    • the importance of this increases with increasing distance (cultural / economic / geographical / ...) from places like Oxford or the Bay Area

(more on the topic here)

  • I doubt the people who are plausibly good founders would actually benefit from such groups, and even less from some vague coordination via Facebook groups
    • actually, I think on the margin, if there are people who would move forward with localization efforts if such FB groups existed and other similar people expressed interest, but would not do so otherwise, their impact could easily be negative

AI safety scholarships look worth-funding (if other funding is sane)

  • I don't think it's reasonable to think about FHI DPhil scholarships, and even less so RSP, as mainly a funding program (maybe ~15% of the impact comes from the funding).
  • If I understand the funding landscape correctly, both EA Funds and the LTFF are potentially able to fund a single-digit number of PhDs. Has anyone actually approached these funders with a request like "I want to work on safety with Marcus Hutter, and the only thing preventing me is funding"? Maybe I'm too optimistic, but I would expect such a request to have a decent chance of success.

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

Sure

a)

For example, CAIS and something like the "classical superintelligence in a box" picture disagree a lot on the surface level. However, if you look deeper, you will find many similar problems. A simple-to-explain example is the problem of manipulating the operator, which has (in my view) a "hard core" involving both math and philosophy: you want the AI to communicate with humans in a way which at the same time a) allows the human to learn from the AI if the AI knows something about the world, b) does not "overwrite" the operator's values, and c) does not prohibit moral progress. In CAIS language this is connected to so-called manipulative services.

Or: one of the biggest hits of the past year is the mesa-optimisation paper. However, if you are familiar with prior work, you will notice that many of the solutions proposed for mesa-optimisers are similar or identical to solutions previously proposed for so-called "daemons" or "misaligned subagents". This is because the problems partially overlap (the mesa-optimisation framing is clearer and makes a stronger case for "this is what to expect by default"). Also, while on the surface level there is a lot of disagreement between e.g. MIRI researchers, Paul Christiano and Eric Drexler, you will find a "distillation" proposal targeted at the above-described problem in Eric's work from 2015 and many connected ideas in Paul's work on distillation; and while I find Eliezer harder to understand, I think his work also reflects an understanding of the problem.

b)

For example: you can ask whether the space of intelligent systems is fundamentally continuous, or not. (I call it "the continuity assumption".) This is connected to many agendas - if the space is fundamentally discontinuous, this would cause serious problems for some forms of IDA, debate, interpretability, and more.

(An example of discontinuity would be the existence of problems which are impossible to meaningfully factorize; there are many more ways in which the space could be discontinuous. The sketch below tries to make the factorization point concrete.)

There are powerful intuitions going both ways on this.
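
To illustrate the factorization point, here is a toy sketch of my own (the names `solve_sum` and `find_preimage` are made up, and the real question concerns factorizing cognitive work, not arithmetic or hashing):

```python
import hashlib

def solve_sum(numbers):
    # Factorizable: summing decomposes along the problem's structure into
    # independent halves whose answers compose trivially, so weaker
    # subagents can share the work.
    if len(numbers) == 1:
        return numbers[0]
    mid = len(numbers) // 2
    return solve_sum(numbers[:mid]) + solve_sum(numbers[mid:])

def find_preimage(target_hex, candidates):
    # Hard to factorize along the problem's structure: knowing the hashes
    # of *parts* of a string gives you essentially no information about the
    # hash of the whole, so subanswers don't compose; only blind search
    # over whole candidates works.
    for c in candidates:
        if hashlib.sha256(c.encode()).hexdigest() == target_hex:
            return c
    return None

print(solve_sum([1, 2, 3, 4, 5]))                           # 15, via easy subcalls
target = hashlib.sha256(b"42").hexdigest()
print(find_preimage(target, (str(i) for i in range(100))))  # "42"
```

The first task decomposes into subtasks an amplification scheme could delegate; in the second, subanswers carry no information about the whole, which is the kind of property that would worry factorization-based approaches.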

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

I think the picture is somewhat correct, and that, perhaps surprisingly, we should not be too concerned about the dynamic.

My model for this is:

1) there are some hard and somewhat nebulous problems "in the world"

2) people try to formalize them using various intuitions/framings/kinds of math; also using some "very deep priors"

3) the resulting agendas look extremely different on the surface level, and create the impression you describe

but actually

4) if you understand multiple agendas deeply enough, you get a sense of

  • how they sometimes "reflect" the same underlying problem
  • whether they are based on some "deep priors", how deep those priors are, and how hard they can be to argue about
  • how much they are based on "tastes" and "intuitions" ~ one way to think about this is that people have something comparable to the policy net in AlphaZero: a mental black box which spits out useful predictions but is not interpretable in language

Overall, given our current state of knowledge, I think running these multiple efforts in parallel is a better approach, with a higher chance of success, than the idea that we should invest a lot in resolving disagreements/prioritizing so that everyone works on the "best agenda".

This seems to go against a core EA heuristic ("compare the options, take the best"), but it is actually more in line with what rational allocation of resources in the face of uncertainty looks like.
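
A toy numerical sketch of that last point (all numbers and the functional form are made up): if effort on each agenda has diminishing returns and we are uncertain which framing is right, spreading resources beats going all-in on the single most promising agenda.

```python
from itertools import product

# Hypothetical numbers: the chance each of three framings of the problem
# is the "right" one, assuming exactly one of them is.
p_right = [0.5, 0.3, 0.2]

def p_solves(hours):
    # Diminishing returns: each extra unit of effort helps less.
    return 1 - 0.5 ** hours

def p_success(alloc):
    # Chance the community solves the problem: the right framing must be
    # one we invested in, AND the effort spent on it must pan out.
    return sum(p * p_solves(h) for p, h in zip(p_right, alloc))

TOTAL = 6  # total researcher-units to allocate
best = max(
    (a for a in product(range(TOTAL + 1), repeat=3) if sum(a) == TOTAL),
    key=p_success,
)
print(best, round(p_success(best), 3))     # (3, 2, 1) -> ~0.762: spreading wins
print(round(p_success((TOTAL, 0, 0)), 3))  # ~0.492: all-in on the single "best" agenda
```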


Update on CEA's EA Grants Program

Re: future of the program & ecosystem influences.

What bad things would happen if the program were just closed:

  • for the area overlapping with something "community building-ish", CBG will become the sole source of funding, as the Meta Fund does not fund that. I think at least historically CBG had some problematic influence on the global development of effective altruism, not because of the direct impact of the funding, but because of putting money behind a specific set of advice/evaluation criteria. (To clarify what I mean: I would expect the space to be healthier if exactly the same funding decisions were made but less specific advice on what people should do was associated with them; the problem is also not necessarily on the program side, but can be thought of as goodharting on the side of grant applicants/recipients.)
  • for x-risk, LTFF could become too powerful a source of funding for new/small projects. In practice, while there are positive impacts of transparency, I would expect some problematic impacts from mainly Oli's opinions and advice being associated with a lot of funding. (To clarify: I'm not worried about the funding decisions, but about indirect effects of the type "we are paying you, so you'd better listen to us", and about people intentionally or unintentionally goodharting on views expressed in grant justifications)
  • for various things falling between the gaps of the funds' scopes, it may become less clear what to do
  • it increases the risks of trying to found something like "EA startups"
  • it can make the case for individual donors funding things stronger

All of that could be somewhat mitigated if the rest of the funding ecosystem adapts, e.g. by creating more funds with intentional overlap, or by creating other streams of funding organized e.g. along geographical structures.


Which Community Building Projects Get Funded?

As a side note: in the case of the Bay Area, I'd expect some funding-displacement effects. BERI grant-making is strongly correlated with geography, and historically BERI funded some things which could be classified as community building. LTFF is also somewhat Bay-centric, and there also seem to be some LTFF grants which could hypothetically be funded by several orgs. Some things were also likely funded informally by local philanthropists.

To make the model more realistic, one should note that

  • there is some underlying distribution of "worthy things to fund"
  • some of the good projects could likely be funded from multiple sources; all other things being equal, I would expect the funding to come more likely from the nearest source (a toy simulation of this is sketched below)
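
A minimal simulation of that displacement dynamic (entirely my own sketch; the funder positions on a 1-D "geography" and the linear distance penalty are invented for illustration):

```python
import random

random.seed(0)

# Made-up toy model: funders and projects live on a 1-D "geography", and a
# funder's enthusiasm for a project is its quality minus the distance to it.
funders = {"BERI": 0.0, "LTFF": 0.2, "Meta Fund": 0.9}  # hypothetical positions
projects = [
    {"quality": random.random(), "location": random.random()} for _ in range(30)
]

def enthusiasm(funder_pos, project):
    return project["quality"] - abs(funder_pos - project["location"])

# Each project is picked up by whichever funder likes it most; nearby funders
# displace distant ones even for projects several funders would happily back.
for p in sorted(projects, key=lambda p: -p["quality"])[:5]:
    winner = max(funders, key=lambda f: enthusiasm(funders[f], p))
    print(f"quality={p['quality']:.2f} location={p['location']:.2f} -> {winner}")
```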

EA Hotel Fundraiser 6: Concrete outputs after 17 months

Meta: I considered commenting, but instead I'm just flagging that I find it somewhat hard to have an open discussion about the EA Hotel on the EA Forum in the fundraising context. The feelings part is:

  • there is a lot of emotional investment in the EA Hotel,
  • it seems that if the hotel runs out of runway, for some people it could basically mean losing their home.

Overall, my impression is that posting critical comments would be somewhat antisocial, while posting just positives or endorsements is against good epistemics, so the personally safest thing for many is to not say anything.

At the same time, it is blatantly obvious there must be some scepticism about both the project and its outputs: the situation where the hotel seems to be almost out of runway keeps repeating. While e.g. EA Funds collect donations of basically millions of $ per year, the EA Hotel struggles to collect low tens of thousands of $.

I think this equilibrium where

  • people are mostly silent but also mostly not supporting the hotel, at least financially
  • the financial situation of the project is somewhat dire
  • talks with EA Grants and the EA Long Term Future Fund are in progress but the funders are not funding the project yet

is not good for anyone, and has some bad effects on the broader community. I'd be interested in ideas for how to move out of this state.

Only a few people decide about funding for community builders world-wide

In practice, it's almost never the only option - e.g. CZEA was able to find some private funding even before CBG existed, and several other groups were at least partially professional before CBG. In general, it's more that it's better if national-level groups are funded from EA
