All of Jan_Kulveit's Comments + Replies

Impact markets may incentivize predictably net-negative projects

If the main problem you want to solve is "scaling up grantmaking", there are probably many other ways to do it besides "impact markets".

(Roughly, you can amplify any "expert panel of judges" evaluations with judgemental forecasting.)

3Denis Drescher2d
We’ve considered a wide range of mechanisms and ended up most optimistic about this one. When it comes to prediction markets on funding decisions, I’ve thought about this in two contexts in the past:

1. During the ideation phase, I found that it was already being done (by Metaculus?) and not as helpful because it doesn’t provide seed funding.
2. In Toward Impact Markets, I describe the “pot” safety mechanism that, I surmised, could be implemented with a set of prediction markets.

The implementation that I have in mind that uses prediction markets has important gaps, and I don’t think it’s the right time to set up the pot yet. But the basic idea was to have prediction markets whose payouts are tied to decisions of retro funders to buy a particular certificate. That action resolves the respective market. But the yes votes on the market can only be bought with shares in the respective cert, or by people who also hold shares in the respective cert and in proportion to them. (In Toward Impact Markets I favor the product of the value they hold in either as the determinant of the payout.)

But maybe you’re thinking of yet another setup: Investors buy yes votes on a prediction market (e.g. Polymarket, with real money) about whether a particular project will be funded. Funders watch those prediction markets and participants are encouraged to pitch their purchases to funders. Funders then resolve the markets with their actual grants, doing minimal research and mostly trusting the markets. Is that what you envisioned? I see some weaknesses in that model. I feel like it’s maybe a bit over 10x as good as the status quo, vs. our model, which I think is over 100x as good. But it is an interesting mechanism that I’ll bear in mind as a fallback!
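A minimal sketch of the product-based payout rule described above (my own reading of it; the function names and numbers are hypothetical, not a specification of the pot mechanism):

```python
# Hypothetical reading of the "pot" payout rule: each participant's share of
# the pot is proportional to the product of the value they hold in the
# certificate and in the yes votes of the corresponding prediction market.
def distribute_pot(pot: float, holdings: list[tuple[float, float]]) -> list[float]:
    """holdings: (cert value, yes-vote value) per participant."""
    weights = [cert * yes for cert, yes in holdings]
    total = sum(weights)
    return [pot * w / total for w in weights]

# Two participants: one cert-heavy, one market-heavy.
print(distribute_pot(100.0, [(10.0, 5.0), (2.0, 20.0)]))  # ~[55.6, 44.4]
```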
On Deference and Yudkowsky's AI Risk Estimates

(i.e. most people who are likely to update downwards on Yudkowsky on the basis of this post, seem to me to be generically too trusting, and I am confident I can write a more compelling post about any other central figure in Effective Altruism that would likely cause you to update downwards even more)


My impression is the post is a somewhat unfortunate attempt to "patch" the situation in which many generically too trusting people updated a lot on AGI Ruin: A List of Lethalities and Death with Dignity, and the subsequent deference/update cascades.

I... (read more)

What’s the theory of change of “Come to the bay over the summer!”?

I. It might be worth reflecting upon how large a part of this seems tied to something like "climbing the EA social ladder".

E.g. just from the first part, emphasis mine
 

Coming to Berkeley and, e.g., running into someone impressive at an office space already establishes a certain level of trust since they know you aren’t some random person (you’ve come through all the filters from being a random EA to being at the office space).
If you’re in Berkeley for a while you can also build up more signals that you are worth people’s time. E.g., be involved in

... (read more)

Yeah, it would probably be good if people redirected this energy to climbing ladders in the government/civil service/military or important powerful corporate institutions. But I guess these ladders underpay you in terms of social credit/inner ringing within EA. Should we praise people aiming for 15y-to-high-impact careers more?

3Atticus Hempel14d
I think the most costly hidden impact is the perception of gatekeeping that occurs with such a system as this. Gatekeeping happens in two ways: for one, those who are less able to travel, for reasons such as having to provide for their family or even being homesick, are put at a disadvantage. And two, those who are less able to schmooze (fun word!) and climb that ladder are also put at a disadvantage.

I agree, I think this is a problem, but I am not sure if the cost of solving the problem (i.e. replacing the system) is too high? Much like grades in undergraduate institutions, whether one agrees with their ethicality or not, they are a fairly accurate assessment of how one might do in graduate school because they are so similar in nature. Now, disregarding the argument as to whether or not grades should be used in either, what I am trying to say is that the social ladder that exists within EA exists because the skills that are required to climb this social ladder are skills that are valued within EA. Thus, I do not think we need to care so much about the system, because I think it is actually solving for an efficiency problem that is addressed above.

You brought up specifically the opportunity cost; the essay above said that there are a million projects going on always and not enough people to staff them. I think this opportunity cost is apt in order to weed out the people who aren’t that serious about an idea or who just aren’t yet skilled enough. Furthermore, even if this wasn’t the case, I do think that EA people are pretty productive when motivated enough; from experience I can say (I could be wrong on this in general, but for me at least) all you really need to know is one well-connected EA in order to have access to 100 more — and even then you can get access to many many more at online or in person events. You may call this time consuming schmoozing, but if thought about impactfully and effectively (qualities EA wants) I maintain that this could be d
7tamgent15d
Yeah I also had a strong sense of this from reading this post. It reminded me of this short piece by C. S. Lewis called The Inner Ring [https://www.lewissociety.org/innerring/], which I highly recommend. Here is a sentence from it that sums it up pretty well, I think: "In the whole of your life as you now remember it, has the desire to be on the right side of that invisible line ever prompted you to any act or word on which, in the cold small hours of a wakeful night, you can look back with satisfaction?"
6Lukas_Finnveden16d
This feels like a surprisingly generic counterargument, after the (interesting) point about ladder climbing. "This could have opportunity costs" could be written under every piece of advice for how to spend time. In fact, it applies less to this post than to most advice on how to spend time, since the OP claimed that the environment caused them to work harder. (A hidden cost that's more tied to ladder climbing is Chana's point that some of this can be at least somewhat zero-sum.)

I agree with you, being "a highly cool and well networked EA" and "doing things which need to be done" are different goals. This post is heavily influenced by my experience as a new community builder and my perception that, in this situation, being "a highly cool and well networked EA" and "doing things which need to be done" are pretty similar. If I wasn't so sociable and network-y, I'd probably still be running my EA reading group with ~6 participants, which is nice but not "doing things which need to be done". For technical alignment researchers, this is probably less the case, though still much more than I would've expected.

(strongly upvoted because I think this is a clean explanation of an underrated point at the current stage, particularly among younger EAs).

Getting GPT-3 to predict Metaculus questions

Suggested variation, which I'd expect to lead to better results: use raw "completion probabilities" for different answers.

E.g. with the prompt "Will Russia invade Ukrainian territory in 2022?", extract the completion likelihoods of the next tokens "Yes" and "No", and normalize them so they sum to one.
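A minimal sketch of this variation (my own, not from the post), assuming the OpenAI completions API with `logprobs`; the model name and prompt format are placeholders, and exact token spellings may differ by tokenizer:

```python
# Extract P("Yes") vs P("No") from the model's next-token log-probabilities,
# then normalize over the two answers.
import math
import openai

def yes_no_probability(question: str) -> float:
    """P("Yes") normalized against P("No") for a binary question."""
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=f"Question: {question}\nAnswer (Yes or No):",
        max_tokens=1,
        temperature=0,
        logprobs=10,  # return the top-10 token log-probabilities
    )
    # Log-probabilities of candidate first tokens; note the leading space.
    top = response["choices"][0]["logprobs"]["top_logprobs"][0]
    p_yes = math.exp(top.get(" Yes", float("-inf")))
    p_no = math.exp(top.get(" No", float("-inf")))
    total = p_yes + p_no
    return p_yes / total if total > 0 else 0.5  # normalize over the two answers

print(yes_no_probability("Will Russia invade Ukrainian territory in 2022?"))
```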

6MathiasKB2mo
man you just blew my mind, will give it a try next time I feel an urge to play around with GPT!
Case for emergency response teams

Also, the direction of ALERT is generally more on "doing". Doing often seems very different from forecasting and often needs different people - some of the relevant skills are plausibly even anticorrelated.

2Alex D2mo
Y'all are fully complementary I think. From Linch's proposal:
Emergency response

Crisis response is a broader topic. I would probably suggest creating an additional tag for crisis response (most of our recent sequence would fit there).

"Long-Termism" vs. "Existential Risk"

I don't have a strong preference. There are some aspects in which longtermism can be the better framing, at least sometimes.

I. In a "longtermist" framework, x-risk reduction is the most important thing to work on across many orders of magnitude of uncertainty about the probability of x-risk in the next e.g. 30 years (due to the weight of the long-term future). Even if AI-related x-risk is only 10^-3 in the next 30 years, it is still an extremely important problem, or the most important one. In a "short-termist" view with, say, a discount rate of 5%, it is not nearly so ... (read more)
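To make the contrast concrete, a back-of-the-envelope sketch (my own illustration; v, p, r and T are placeholder symbols, not figures from the post):

```latex
% Longtermist (undiscounted) view: annual value v, horizon T, 30-year x-risk p.
\mathbb{E}[\text{loss}] \approx p \sum_{t=0}^{T} v = p\,T\,v
\qquad (p = 10^{-3},\ T \sim 10^{9}\ \text{years} \Rightarrow \text{dominates anyway})

% Short-termist view, discount rate r = 5\%: the future's value is bounded.
\mathbb{E}[\text{loss}] \le p \sum_{t=0}^{\infty} v\,(1-r)^{t} = \frac{p\,v}{r} = 20\,p\,v
\qquad (\text{comparable to near-term causes for small } p)
```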

Off Road: Interviews with EA College Dropouts

The title EA Dropouts seems a bit confusing, because it can naturally be interpreted as people who dropped out of EA.

3Daystar Eld3mo
Woops, good point. Fixed!
What we tried

I had little influence over the 1st wave; credit goes elsewhere.

What happened in subsequent waves is complicated. The one-sentence version is that Czechia changed its minister of health 4 times; only some of them were reasonably oriented, and how much they were interested in external advice varied a lot over time.

Note that the "death tolls per capita in the world" stats are misleading, due to differences in reporting. Czechia had average or even slightly lower than average mortality compared to the "Eastern Europe" reference class, but much better reporting. For more reliable data, see https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(21)02796-3/fulltext

Awards for the Future Fund’s Project Ideas Competition

Both "EA translation service" and "EA spaces everywhere" seem like ideas which can take good forms, but also many bad or outright harmful ones.

A few years ago, I tried to describe how to establish a robustly good "local effective altruism" in a new country or culture (other than the Anglosphere).

The super brief summary is:
1. it's not about translating a book, but about “transmission of a tradition of knowledge”
2. what is needed is a highly competent group of people 
3. who can, apart from other things, figure out what the generative question of EA means... (read more)

2RyanCarey2mo
Regarding coworking spaces everywhere, I strongly agree that you need competent people to set the culture locally, but the number of places with such people is rapidly increasing. And there's also something to the fact that if you try to run a project far away from current cultural hubs, then it can drift away from your intended purpose. But if there is decent local culture, some interchange of people between hubs, and some centralised management, I think it would work pretty well. The considerations for centralising office management and gatekeeping seem strong overall - freeing up a lot of EA organisers' time, improving scalability, and improving ability to travel between offices.
How we failed

Mass-reach posts came later, but sooner than the US mainstream updates:

https://www.lesswrong.com/posts/h4vWsBBjASgiQ2pn6/credibility-of-the-cdc-on-sars-cov-2
https://slatestarcodex.com/2020/03/23/face-masks-much-more-than-you-wanted-to-know/
 

How we failed

The practical tradeoff was between what, where, and when to publish. The first version of the preprint, which is on medRxiv, contains those estimates. Some version with them could probably have been published in a much worse journal than Science, and would have had much less impact.

We could have published them separately, but a paper is a lot of work, and it's not clear to me whether, for example, sacrificing some of the "What we tried" work to get this done would have been a good call.

It is possible to escape from the game in specific cases - in the case of covid, fo... (read more)

How we failed

NPI & IFR: thanks, it's now explained in the text.

Re: Rigour

I think much of the problem is due not to our methods being "unrigorous" in any objective sense, but to interdisciplinarity. For example, in the survey case, we used mostly standard methods from a field called "discrete choice modelling" (btw, some EAs should learn it - it's a pretty significant body of knowledge on "how to determine people's utility functions").
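A minimal sketch of the core idea (my own illustration, not from our paper): in a random-utility model, a respondent's choice among options follows a softmax over option attributes, and the underlying taste weights can be recovered by maximum likelihood.

```python
# Discrete choice sketch: simulate choices under U = X @ beta + Gumbel noise,
# then recover beta (the "utility function") by maximum likelihood.
# All data and parameters here are synthetic and illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
true_beta = np.array([1.5, -0.8])   # taste weights we will try to recover
X = rng.normal(size=(1000, 3, 2))   # 1000 choice sets, 3 options, 2 attributes

# Simulated respondents pick the option with the highest random utility.
utility = X @ true_beta + rng.gumbel(size=(1000, 3))
choices = utility.argmax(axis=1)

def neg_log_likelihood(beta: np.ndarray) -> float:
    v = X @ beta                                               # systematic utilities
    log_p = v - np.logaddexp.reduce(v, axis=1, keepdims=True)  # log-softmax
    return -log_p[np.arange(len(choices)), choices].sum()

fit = minimize(neg_log_likelihood, x0=np.zeros(2))
print(fit.x)  # approximately recovers true_beta
```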

Unfortunately, it's not something commonly found in the field of, for example, "mathematical modeling of infectious diseases"... (read more)

What we tried

Hi, as the next post in the sequence is about 'failures', I think it would be more useful after that is published.

Hinges and crises

Sorry, but this seems to me to confuse the topic of the post "Experimental Longtermism" and the topic of this post. Note that the posts are independent, and about different topics.

The table in this post is about timescales of OODA loops (observe–orient–decide–act), not about feedback. For example, in a situation which is unfolding on a timescale of days and weeks, as was the early response to covid, some actions are just too slow to have an impact: for example, writing a book, or funding basic research. The same is true for some decision and observat... (read more)

1lukehmiles3mo
The first O in OODA implies something new to observe, no? And within the OODA loop of a city there are many smaller loops where e.g. you see if your friend wears a mask if you ask them. And with the ToC and such, I thought the first post was kind of an introduction/abstract. Anyway, I'm looking forward to these posts and am very curious what the OODA loop of a city looks like.
The Future Fund’s Project Ideas Competition

Note that CSER is running a project roughly in this direction.

4Sean_o_h1mo
An early output from this project: Research Agenda (pre-review) Lessons from COVID-19 for GCR governance: a research agenda [https://f1000research.com/articles/11-514]

The Lessons from Covid-19 Research Agenda offers a structure to study the COVID-19 pandemic and the pandemic response from a Global Catastrophic Risk (GCR) perspective. The agenda sets out the aims of our study, which is to investigate the key decisions and actions (or failures to decide or to act) that significantly altered the course of the pandemic, with the aim of improving disaster preparedness and response in the future. It also asks how we can transfer these lessons to other areas of (potential) global catastrophic risk management, such as extreme climate change, radical loss of biodiversity and the governance of extreme risks posed by new technologies.

Our study aims to identify key moments - 'inflection points' - that significantly shaped the catastrophic trajectory of COVID-19. To that end this Research Agenda has identified four broad clusters where such inflection points are likely to exist: pandemic preparedness, early action, vaccines and non-pharmaceutical interventions. The aim is to drill down into each of these clusters to ascertain whether and how the course of the pandemic might have gone differently, both at the national and the global level, using counterfactual analysis.

Four aspects are used to assess candidate inflection points within each cluster:
1. the information available at the time;
2. the decision-making processes used;
3. the capacity and ability to implement different courses of action, and
4. the communication of information and decisions to different publics.

The Research Agenda identifies crucial questions in each cluster for all four aspects that should enable the identification of the key lessons from COVID-19 and the pandemic response.
2Sean_o_h3mo
https://www.cser.ac.uk/research/lessons-covid-19/
Where would we set up the next EA hubs?

Thanks for sharing! We plan to announce a significant new effort in Prague in the next ~1 month, and will also likely offer some aid to people moving to Prague. If anyone is interested in moving somewhere in Europe, send me a PM.

The basic reasoning is that Prague is Pareto-optimal on some combination of 'existing community (already 10-20 FTE people working on longtermist projects, another ~10 FTE of job openings this year)', 'better quality of living', 'costs to relocate', 'in Europe', and 'cultural sanity'. There wasn't much effort going into promoting Prague in ... (read more)

6mariushobbhahn3mo
Thanks for the comment. I'll add it to the post :)
Experimental longtermism: theory needs data

Millions is probably a safe bet/lower bound: the majority won't be via direct Twitter reads, but via mainstream media using it in their writing.

With Twitter, we have a better overview in the case of our other research, on seasonality (still in review!). The Altmetric estimate is that it was shared by accounts with an upper bound of 13M followers. However, in this case, almost all the shares were due to people retweeting my summary. Per Twitter stats, it got 2M actual impressions. Given that the NPI research was shared and referenced more, it's probably ... (read more)

2MaxRa3mo
Thanks for the response, seems like a safe bet, yeah. :) Re forecasting, "making low-probability events happen" is a very interesting framing, thanks! I still am maybe somewhat more positive about forecasting:
* many questions involve the actions of highly capable agents and therefore require at least some thinking in the direction of this framing
* the practice of deriving concrete forecasting questions from my models seems very valuable for my own thinking; and some feedback from a generalist crowd about how likely some event will happen, seeing in the comments what variables they believe are relevant, and having some people post new info that relates to the question seems fairly valuable too, because you can easily miss important things
How big are risks from non-state actors? Base rates for terrorist attacks

Handy reference! Apart from the average rate, it also seems interesting to note the variance in the table, spread over 4 orders of magnitude. This may point to something like 'global sanity' being an important existential risk factor.

 

The Cost of Rejection

I mostly agree with the problem statement.

With the proposed solution of giving people feedback: I've historically proposed this on various occasions, and from what I have heard, one reason organizations don't give feedback is something like "feedback opens up space for complaints, drama on social media, or even litigation". The problem looks very different from the side of the org: when evaluating hundreds of applications, it is basically certain that some errors are made, some credentials misunderstood, experiences not counted as they shou... (read more)

5Daystar Eld9mo
Yeah, this seems a hard problem to do well and safely from an organizational standpoint. I'm very sympathetic to the idea that it is an onerous cost on the organization's side; what I'm uncertain about is whether it ends up being more beneficial to the community on net.

My vague understanding is that there are likely no legal issues with giving feedback as long as it's impartial. It's instead one of those things where lawyers reasonably advise against doing anything not required, since literally anything you do exposes you to risk. Of course you could give feedback that would obviously land you in trouble, e.g. "we didn't hire you because you're [ethnicity]/[gender]/[physical attribute]", but I think most people are smart enough to give feedback of the form "we didn't hire you because of legible reason X".

And it's quickly becom... (read more)

How to succeed as an early-stage researcher: the “lean startup” approach

I would guess the 'typical young researcher fallacy' also applies to Hinton - my impression is he is basically advising his past self, similarly to Toby. As a consequence, the advice is likely sensible for people-much-like-past-Hinton, but not good general advice for everyone.

In ~3 years most people are able to re-train their intuitions a lot (which is part of the point!). This seems particularly dangerous in cases where expertise in the thing you are actually interested in does not exist, but expertise in something so... (read more)

3Rohin Shah9mo
I agree substituting the question would be bad, and sometimes there aren't any relevant experts, in which case you shouldn't defer to people. (Though even then I'd consider doing research in an unrelated area for a couple of years, and then coming back to work on the question of interest.)

I admit I don't really understand how people manage to have a "driving question" overwritten -- I can't really imagine that happening to me and I am confused about how it happens to other people. (I think sometimes it is justified, e.g. you realize that your question was confused, and the other work you've done has deconfused it, but it does seem like often it's just that they pick up the surrounding culture and just forget about the question they cared about in the first place.) So I guess this seems like a possible risk. I'd still bet pretty strongly against any particular junior researcher's intuition being better, so I still think this advice is good on net.

(I'm mostly not engaging with the quantum example because it sounds like a very just-so story to me and I don't know enough about the area to evaluate the just-so story.)
How to succeed as an early-stage researcher: the “lean startup” approach

Let's start with the third caveat: maybe the real crux is what we think are the best outputs. What I consider some of the best outputs by young researchers in AI alignment is easier to point at via examples - e.g. the mesa-optimizers paper, or multiple LW posts by John Wentworth. As far as I can tell, none of these seems to follow the proposed 'formula for successful early-career research'.

My impression is PhD students in AI at Berkeley need to optimise, and actually do optimise a lot, for success in an established field (ML/AI),... (read more)

2Rohin Shah9mo
I think the mesa optimizers paper fits the formula pretty well? My understanding is that the junior authors on that paper interacted a lot with researchers at MIRI (and elsewhere) while writing that paper.

I don't know John Wentworth's history. I think it's plausible that if I did, I wouldn't have thought of him as a junior researcher (even before seeing his posts). If that isn't true, I agree that's a good counterexample.

I agree the advice is particularly suited to this audience, for the reasons you describe.

That sounds like the advice in this post? You've added a clause about being picky about the selection of people, which I agree with, but other than that it sounds pretty similar to what Toby is suggesting. If so, I'm not sure why a caveat is needed. Perhaps you think something like "if someone [who is better or who is comparable and has spent more time thinking about something than you] provides feedback, then you should update, but it isn't that important and you don't need to seek it out"? I agree that's more clearly targeting the right thing, but still not great, for a couple of reasons:
* The question is getting pretty complicated, which I think makes answers a bit more random.
* Many students are too deferential throughout their PhD, and might correctly say that they should have explored their own ideas more -- without this implying that the advice in this post is wrong.
* Lots of people do in fact take an approach that is roughly "do stuff your advisor says, and over time become more independent and opinionated"; idk what they would say. I do predict though that they mostly won't say things like "my ideas during my first year were good, I would have had more impact had I just followed my instincts and ignored my advisor". (I guess one exception is that if they hated the project their advisor suggested, but slogged through it anyway, then they might say that -- but I feel like that's more about motivation rather than impact.)
Announcing the launch of EA Impact CoLabs (beta) + request for projects, volunteers and feedback

It's good to see a new enthusiastic team working on this! My impression, based on working on the problem ~2 years ago, is that this has good chances of providing value in global health and poverty, animal suffering, or parts of the meta cause areas; in the case of x-risk focused projects, something like a 'project platform' seems almost purely bottlenecked by vetting. In the current proposal this seems to mostly depend on the "Evaluation Commission" -> as a result, the most important part for x-risk projects seems to be the judgement of the members of this commission and/or its ability to seek external vetting.

8Mats Olsen10mo
Thanks Jan! Yes, we even reference your post in our detailed write-up [https://docs.google.com/document/d/1SVO-pKcVqBNI4HWTq4bVhAisD0FsrCN5u668jqxexG0/edit?usp=sharing] and agree that vetting will be critical and a bottleneck to maximum positive impact, particularly related to x-risk. Currently we have implemented a plan that we believe is manageable exclusively by a small group of volunteers, and have included a step in the process that involves CEA's Community Health team. Having said that, we don't think that is an ideal stopping point; we hope to expand into other forms of vetting pending general interest in the project, vetting volunteer interest, and the building of other functionality or establishment of partnerships with outside orgs. You can read more in sections IV.9 and VI.11 of the write-up about our thinking on these topics. Lastly, given your fantastic analysis in the past, if you would like to help out we would welcome any new team members that are interested in or familiar with this metaproject -- you can email info@impactcolabs.com anytime!
How to succeed as an early-stage researcher: the “lean startup” approach

In my view this text should come with multiple caveats.

- Beware 'typical young researcher fallacy'. Young researchers are very diverse, and while some of them will benefit from the advice, some of them will not. I do not believe there is a general 'formula for successful early-career research'. Different people have different styles of doing research, and even different metrics for what 'successful research' means. While certainly many people would benefit from the advice 'your ideas are bad', some young researchers actually have great ideas, s... (read more)

I'm not going to go into much detail here, but I disagree with all of these caveats. I think this would be a worse post if it included the first and third caveats (less sure about the second).

First caveat: I think > 95% of incoming PhD students in AI at Berkeley have bad ideas (in the way this post uses the phrase). I predict that if you did a survey of people who have finished their PhD in AI at Berkeley, over 80% of them would think their initial ideas were significantly worse than their later ideas. (Note also that AI @ Berkeley is a very selective p... (read more)

2tobyshevlane10mo
Thanks for the caveats Jan, I think that's helpful. It's true that my views have been formed from within the field of AI governance, and I am open to the idea that they won't fully generalise to other fields. I have inserted a line in the introduction that clarifies this.
EA Group Organizer Career Paths Outside of EA

Contrary to what seems to be an implicit premise of this post, my impression is:

- most EA group organizers should have this as a side-project, and should not think about "community building" as a "career path" in which they could possibly continue in a company like Salesforce
- the label "community building" is unfortunate for what most EA group organizing work should consist of
- most of the tasks in "EA community building" involve skills which are pretty universal and generally usable in most other fields, like "strategizin... (read more)

How much does performance differ between people?

1.

For a different take on a very similar topic, check this discussion between me and Ben Pace (my reasoning was based on the same Sinatra paper).


For practical purposes, in the case of scientists, one of my conclusions was:

Translating into the language of digging for gold, the prospectors differ in their speed and ability to extract gold from the deposits (Q). The gold in the deposits actually is randomly distributed. To extract exceptional value, you have to have both high Q and be very lucky. What is encouraging in selecting the talent is the Q se

... (read more)
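A minimal simulation of this picture (my own sketch of the Q-model from the Sinatra paper; all parameters are illustrative, not fitted values):

```python
# Toy Q-model (Sinatra et al.): the impact of a researcher's i-th paper is
# c_i = Q * p_i, where Q is a stable per-person ability ("speed and ability
# to extract gold") and p_i is per-paper luck ("where the gold happens to be").
import numpy as np

rng = np.random.default_rng(0)

def career_impacts(n_papers: int, q: float) -> np.ndarray:
    """Impacts of one researcher's papers: ability Q times per-paper luck."""
    luck = rng.lognormal(mean=0.0, sigma=1.0, size=n_papers)
    return q * luck

# Two prospectors digging in the same randomly distributed "gold":
low_q = career_impacts(n_papers=50, q=1.0)
high_q = career_impacts(n_papers=50, q=5.0)

# Exceptional value needs both a high Q and a lucky draw.
print(f"best find, low Q:  {low_q.max():.1f}")
print(f"best find, high Q: {high_q.max():.1f}")
```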
Some thoughts on EA outreach to high schoolers

The first EuroSPARC was in 2016. Given it targets 16-19 year olds, my prior would be that participants should still mostly be studying, and not working full-time on EA, or only exceptionally.

Long feedback loops are certainly a disadvantage.

Also, in the meantime ESPR underwent various changes, and is actually not optimising for something like a "conversion rate to an EA attractor state".

The case of the missing cause prioritisation research

Quick reaction:

I. I did spend a considerable amount of time thinking about prioritisation (broadly understood).

My experience so far is

  • some of the foundations / low hanging sensible fruits were discovered
  • when moving beyond that, I often run into questions which are some sort of "crucial consideration" for prioritisation research, but the research/understanding is often just not there.
  • often work on these "gaps" seems more interesting and tractable than trying to do some sort of "lets try to ignore this gap and move on" move

f... (read more)

'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions

I posted a short version of this, but I think people found it unhelpful, so I'm trying to post a somewhat longer version.

  • I have seen some number of papers and talks broadly in the genre of "academic economy"
  • My intuition based on that is, often they seem to consist of projecting complex reality into a space of single-digit real number dimensions and a bunch of differential equations
  • The culture of the field often signals solving the equations is profound/important, and how you do the projection "world -> 10d" is less interestin
... (read more)
5Michael_Wiebe6mo
>academic economy Do you mean "academic economics"?
Neglected EA Regions

I'm not sure you've read my posts on this topic? (1,2)

In the language used there, I don't think the groups you propose would help people meet the minimum recommended resources, but they are at risk of creating the appearance that some criteria vaguely in that direction are met.

  • e.g., in my view, the founding group must have a deep understanding of effective altruism, and, essentially, the ability to go through the whole effective altruism prioritization framework, taking into account local specifics to reach conclusions valid in their region.
... (read more)
7DavidNash2y
I think I agree with the minimum recommended resources you suggest, but I don't see Facebook group membership requirements as the only filter. It's more likely to be based on seeing what people write/projects they do/future attendance at EA events. Sometimes obstacles can be good but maybe there are people who would be really great organisers if they just knew one other person who was interested or were encouraged to go to EAG. A tangential issue that might be part of this disagreement is that anyone can decide to become a group leader, create a meetup page and start telling people about their version of EA as there is no official licence/certification. That would require more thought as to whether having official groups is a good idea.
Neglected EA Regions

FWIW, the Why not to rush to translate effective altruism into other languages post was quite influential, but is often wrong / misleading / advocating a very strong prior toward inaction, in my opinion.

Neglected EA Regions

I don't think this is actually neglected

  • in my view, bringing effective altruism into new countries/cultures is, in its initial phases, best understood as strategy/prioritisation research, not as "community building"
    • the importance of this increases with increasing distance (cultural / economic / geographical / ...) from places like Oxford or the Bay

(more on the topic here)

  • I doubt the people who are plausibly good founders would actually benefit from such groups, and even less from some vague coordination via Facebook groups
    • actually I think on the marg
... (read more)
3DavidNash2y
I agree that Facebook groups are most likely not the ideal coordination tool, but I haven't found a platform that is as widely used without having bigger flaws. I also agree that the impact could be negative if there are people who would build communities just because they met via Facebook but I think a lot of that depends on how it is used. One check is ensuring that people who join understand EA and have a connection to that region. Another is having filters and coaching for people do want to organise, which should reduce the chance of a negative outcome whilst making it easier for a positive one. I think having someone involved in EA create the various focal points means that we are less likely in the future to see groups appear that have no connection to the wider EA network and research but have already become the default organisation in their area.
AI safety scholarships look worth-funding (if other funding is sane)
  • I don't think it's reasonable to think about FHI DPhil scholarships, and even less so RSP, as mainly a funding program (maybe ~15% of the impact comes from the funding).
  • If I understand the funding landscape correctly, both EA Funds and the LTFF are potentially able to fund a single-digit number of PhDs. Actually, has someone approached these funders with a request like "I want to work on safety with Marcus Hutter, and the only thing preventing me is funding"? Maybe I'm too optimistic, but I would expect such requests to have a decent chance of success.
I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

Sure

a)

For example, CAIS and something like the "classical superintelligence in a box" picture disagree a lot on the surface level. However, if you look deeper, you will find many similar problems. A simple-to-explain example: the problem of manipulating the operator - which has (in my view) some "hard core" involving both math and philosophy, where you want the AI to somehow communicate with humans in a way which at the same time allows a) the human to learn from the AI if the AI knows something about the world b) the operator's values are ... (read more)

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

I think the picture is somewhat correct, and, surprisingly, we should not be too concerned about the dynamic.

My model for this is:

1) there are some hard and somewhat nebulous problems "in the world"

2) people try to formalize them using various intuitions/framings/kinds of math; also using some "very deep priors"

3) the resulting agendas look extremely different at the surface level, and create the impression you have

but actually

4) if you understand multiple agendas deeply enough, you get a sense

  • how they are sometimes "reflecting" t
... (read more)

Thanks for the reply! Could you give examples of:

a) two agendas that seem to be "reflecting" the same underlying problem despite appearing very different superficially?

b) a "deep prior" that you think some agenda is (partially) based on, and how you would go about working out how deep it is?

Update on CEA's EA Grants Program

Re: future of the program & ecosystem influences.

What bad things will happen if the program is just closed:

  • for the area overlapping with something "community building-ish", CBG will become the sole source of funding, as the meta fund does not fund that. I think at least historically CBG had some problematic influence on the global development of effective altruism, not because of the direct impact of funding, but because of putting money behind some specific set of advice/evaluation criteria. (To clarify what I mean: I would expect the space would be he
... (read more)
3Nicole_Ross3y
Thanks for the thoughtful comment. I agree with most of your points, (though am a bit confused on your first one and would like to understand it better if you’d have the time to elaborate. EA Grants didn’t, when I was involved, have an overlapping funding mandate with CBGs, although I think that the distinction was a bit blurrier in the past). I am keen to work with others in the funding ecosystem so it can adapt in a good, healthy way. If you have more specific thoughts on how to make this happen, would love to hear them here or in a call.
Which Community Building Projects Get Funded?

As a side-note: in the case of the Bay Area, I'd expect some funding-displacement effects. BERI grant-making is strongly correlated with geography, and historically BERI funded some things which could be classified as community building. LTFF is also somewhat Bay-centric, and there also seem to be some LTFF grants which could hypothetically be funded by several orgs. Also, some things were likely funded informally by local philanthropists.

To make the model more realistic one should note

  • there is some underlying distribution of "worthy things to fund"
... (read more)
EA Hotel Fundraiser 6: Concrete outputs after 17 months

meta: I considered commenting, but instead I'm just flagging that I find it somewhat hard to have an open discussion about the EA Hotel on the EA Forum in the fundraising context. The feeling part is:

  • there is a lot of emotional investment in EA hotel,
  • it seems that if the hotel runs out of runway, for some people it could mean basically losing their home.

Overall my impression is that posting critical comments would be somewhat antisocial, while posting just positives or endorsements is against good epistemics, so the personally safest thing for many to do is not to s... (read more)

7Greg_Colbourn3y
Regarding emotional investment, I agree that there is a substantial amount of it in the EA Hotel. But I don't think there is significantly more than there is for any new EA project that several people put a lot of time and effort into. And for many people, not being able to do the work they want to do (i.e. not getting funded/paid to do it) is at least as significant as not being able to live where they want to live. Still, you're right in that critical comments can (often) be perceived as being antisocial. I think part of the reason that EA is considered by new people/outsiders to not be so welcoming can be explained by this.

Flagging that there has been a post specifically soliciting reasons against donating to the EA Hotel:

$100 Prize to Best Argument Against Donating to the EA Hotel

And also a Question which solicited critical responses:

Why is the EA Hotel having trouble fundraising?

I agree that the "equilibrium" you describe is not great, except I don't think it is an equilibrium; more that, due to various factors, things have been moving slower than they ideally should have.

EA hotel struggles to collect low tens of $

I'm guessing you meant tens-of-thousan... (read more)

5RomeoStevens3y
Thanks for fleshing this out.

I agree that the epistemic dynamics of discussions about the EA Hotel aren't optimal. I would guess that there are selection effects; that critics aren't heard to the same extent as supporters.

Relatedly, the amount of discussion about the EA Hotel relative to other projects may be a bit disproportionate. It's a relatively small project, but there are lots of posts about it (see OP). By contrast, there is far less discussion about larger EA orgs, large OpenPhil grants, etc. That seems a bit askew to my mind. One might wonder about the cost-effectiveness of relatively long discussions about small donations, given opportunity costs.

Only a few people decide about funding for community builders world-wide

In practice, it's almost never the only option - e.g. CZEA was able to find some private funding even before CBG existed; several other groups were at least partially professional before CBG. In general, it's more that it's better if national-level groups are funded from EA

CZEA was able to find some private funding even before CBG existed

Interesting! Up until now, my intuition was that private funding is only feasible after the group has been around for a few years, gathered sufficient evidence of its impact, and some (former student) members earn enough to donate to it (at least this was the case for EA Norway, as far as I know).

Somewhat off-topic, but if you have time, I'd be curious to hear how CZEA managed to secure early private funding. How long had CZEA been active when it first received funding, what kind ... (read more)

Long-Term Future Fund: August 2019 grant recommendations

The reason may be somewhat simple: most AI alignment researchers do not participate (post or comment) on LW/AF, or participate only a little. For more understanding of why, check this post by Wei Dai and the discussion under it.

(Also: if you follow just LW, your understanding of the field of AI safety is likely somewhat distorted)

With hypotheses 4 & 5: I expect at least Oli to have a strong bias toward being more enthusiastic about funding people who like to interact with LW (all other research qualities being equal), so I'm pretty sure that's not the case.

2.... (read more)

The reason may be somewhat simple: most AI alignment researchers do not participate (post or comment) on LW/AF or participate only a little.

I'm wondering how many such people there are. Specifically, how many people (i) don't participate on LW/AF, (ii) don't already get paid for AI alignment work, and (iii) do seriously want to spend a significant amount of time working on AI alignment or already do so in their free time? (So I want to exclude researchers at organizations, random people who contact 80,000 Hours for advice on how to get involved, people

... (read more)
Long-Term Future Fund: August 2019 grant recommendations

In my experience, teaching rationality is trickier than the reference class 'education', and is an area which is kind of hard to communicate to non-specialists. One of the main reasons seems to be that many people have a somewhat illusory idea of how much they understand the problem.

Get-Out-Of-Hell-Free Necklace

I've suggested something similar for happiness (https://www.lesswrong.com/posts/7Kv5cik4JWoayHYPD/nonlinear-perception-of-happiness). If you don't want to introduce the weird asymmetry where negative counts and positive does not, what you get out of that could be somewhat surprising - it possibly recovers more "common folk" altruism, where helping people who are already quite well off could be good; and if you allow more speculative views on the space of mind-states, you are at risk of recovering something closely resembling some sort of "Buddhist utilitarian calculus".

EA Forum 2.0 Initial Announcement

As humans, we are quite sensitive to signs of social approval and disapproval, and we have some 'elephant in the brain' motivation to seek social approval. This can sometimes mess with epistemics.

The karma represents something like the sentiment of the people voting on a particular comment, weighted in a particular way. For me, this often did not seem to be a signal adding any new information - when following the forum closely, I would usually have been able to predict what would get downvoted or upvoted.

What seemed problematic to me was 1. a numbe... (read more)

EA Forum 2.0 Initial Announcement

It's not an instance of a complaint, but take it as a datapoint: I've switched off the karma display on all comments and my experience improved. The karma system tends to mess with my S1 processing.

It seems plausible karma is causing harm in some hard-to-perceive ways. (One specific way is people updating on karma patterns, mistaking them for some voice of the community / EA movement / ...)

2MichaelDickens2y
Can you elaborate on how you turned off karma display? I would love to use your code if you're willing to share it. I strongly dislike posting on the EA Forum because of how the karma system works, and my experience would be vastly improved if I couldn't see post/comment karma.
2Denise_Melchin3y
>> I've switched off the karma display on all comments and my experience improved. The karma system tends to mess up with my S1 processing. Fully understand if you don't want to, but I'm curious if you could elaborate on this. I'm not entirely sure what you mean.
Is there an analysis that estimates possible timelines for arrival of easy-to-create pathogens?

I would expect that if organizations working in the area have reviews of expected technologies and how they enable individuals to manufacture pathogens - which is likely the background necessary for constructing timelines - they would not publish overly specific documents.

What new EA project or org would you like to see created in the next 3 years?

If people think this is generally a good idea, I would guess CZEA can get it running in a few weeks. Most of the work likely comes from curating the content, not from setting up the service.

Long-Term Future Fund: April 2019 grant recommendations

To clarify - I agree with the benefits of splitting the discussion threads for readability, but I was unenthusiastic about the motivation being voting.

Long-Term Future Fund: April 2019 grant recommendations

I don't think the karma/voting system should be given that much attention, or should be used as highly visible feedback on project funding.

I do think that it would help independently of that by allowing more focused discussion on individual issues.
