Jan_Kulveit

2949 karma · Joined Dec 2017

Bio

Studying behaviour and interactions of boundedly rational agents, AI alignment and complex systems.

Research fellow at Future of Humanity Institute, Oxford. Other projects: European Summer Program on Rationality. Human-aligned AI Summer School. Epistea Lab.

Sequences (1)

Learning from crisis

Comments (161)

Just flagging that, in my view, the goal of having just one EAGx per region, and making each EAGx regionally focused with very few people taken from outside the region, is really bad. The reasons are the effects on network topology, and subsequently on the core/periphery dynamic.

I find the argument about the cost of "flying everyone from around the world to one location" particularly puzzling, because this is not what happens by default: even if you don't try to push events toward being regional at all, they naturally are, just because people will choose the event located more conveniently close to them. So it's not that everyone is flying everywhere all the time (which may be the experience of the events team, but not of typical participants).

Not that important, but... in terms of what intuitions people have, the split of the computation into neurons/environment is not a reasonable model of how life works. Simple organisms do a ton of non-neuron-based computation distributed across many cells, and are able to solve pretty complex optimization problems. The neurons/environment split pushes this into the environment, which means the environment is complex in a way for which people don't have good intuitions (e.g. instead of mostly thinking about the costs of physics simulation, they should include stuff like immune system simulations).

It seems to me that there is some subtle confusion going on here. 

0. It's actually more about the 'Season'.

1. This isn't really "a push to establish a community outside of The Bay or Oxford", as that community has already existed in Prague for some time. E.g. Prague has had its coworking space since ca. 2017, sooner than almost anywhere else, and already has something like ~15 FTE people working on EA/longtermist-relevant projects, etc. I think to some extent what happened over the past few years was that the existing Prague hub focused too much on 'doing the work' and comparably less on 'promoting the place' or 'writing posts about how it is a hub on the EA Forum'. So, in terms of hub dynamics, more than 'establishing something', perhaps you can view this as 'creating common knowledge about something' / an 'upgrade'.

2. I think a structure with 'one giant hub' is bad not only for surviving a physical catastrophe, but mainly because of more subtle memetic and social effects, talent-routing, and overall robustness. For example: if the US culture-wars stuff escalated and EA became the subject of the wrath of one of the sides, it could have large negative effects not only directly due to a hostile environment, but also due to secondary reactions within EA, induced opinion polarization, etc.

3. On a practical level, I think the strongest current developments toward a multi-hub network structure are often clearly sensible - for example, not having a visible presence on the East Coast was in my view a bug, not a feature.

I do broadly agree with the direction and the sentiment: on the margin, I'd typically be much more interested in forecasts other than the "year of AGI".

For example: at the time we get "AGI" (according to your definition) ... how large a fraction of GDP do AI companies account for? ... how big is AI as a political topic? ... what does the public think?

Strongly upvoted. 

In my view, part of the problem is the feedback loops in the broader EA scene, where a focus on "marketing" was broadly rewarded and copied. (Your uni group grew so large so fast! How can we learn what you did and emulate it?)

Also - I'm not sure what metrics are now evaluated by central orgs when people ask for grants or grant renewals, but I suspect something like "number of highly engaged EAs produced" is/was prominent, and an optimizer focusing on this metric will tend to converge on marketing, and will try to bring in more (highly engaged) marketers. 

If the main problem you want to solve is "scaling up grantmaking", there are probably many ways to do it other than "impact markets".

(Roughly, you can amplify any "expert panel of judges" evaluations with judgemental forecasting.)

(i.e. most people who are likely to update downwards on Yudkowsky on the basis of this post, seem to me to be generically too trusting, and I am confident I can write a more compelling post about any other central figure in Effective Altruism that would likely cause you to update downwards even more)


My impression is the post is a somewhat unfortunate attempt to "patch" the situation in which many generically-too-trusting people updated a lot on AGI Ruin: A List of Lethalities and Death with Dignity, and on the subsequent deference/update cascades.

In my view the deeper problem here is that instead of working through disagreements about model internals, many of these people do some sort of "averaging conclusions" move, based on signals like seniority, karma, vibes, etc.

Many of these signals are currently wildly off from truth-tracking, so you get attempts to push the conclusion-updates directly. 


 

I. It might be worth reflecting upon how large a part of this seems tied to something like "climbing the EA social ladder".

E.g. just from the first part, emphasis mine:

Coming to Berkeley and, e.g., running into someone impressive at an office space already establishes a certain level of trust since they know you aren’t some random person (you’ve come through all the filters from being a random EA to being at the office space).
If you’re in Berkeley for a while you can also build up more signals that you are worth people’s time. E.g., be involved in EA projects, hang around cool EAs.

Replace "EA" by some other environment with prestige gradients, and you have something like a highly generic social climbing guide. Seek cool kids, hang around them, go to exclusive parties, get good at signalling.

II. This isn't to say this is bad. Climbing the ladder to some extent could be instrumentally useful, or even necessary, for the ability to do some interesting things, sometimes.

III. But note the hidden costs. Climbing the social ladder can trade off against building things. Learning all the Berkeley vibes can trade off against, e.g., learning the math actually useful for understanding agency.

I don't think this has any clear bottom line - I do agree that for many people caring about EA topics it's useful to come to the Bay from time to time. Compared to the original post, I would probably mainly suggest also consulting virtue ethics and thinking about what sort of person you are changing yourself into, and whether you, for example, most want to become "a highly cool and well networked EA" or, e.g., "do things which need to be done" - which are different goals.
 

Suggested variation, which I'd expect to lead to better results: use raw "completion probabilities" for different answers.

E.g. with the prompt "Will Russia invade Ukrainian territory in 2022?", extract the completion likelihoods of the next few tokens being "Yes" or "No", and normalize.
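A minimal sketch of this variation, assuming a HuggingFace causal LM; the model name ("gpt2"), the "Answer:" prompt suffix, and the leading-space token handling are illustrative assumptions, not part of the original suggestion:

```python
# Rough sketch: score "Yes" vs "No" by raw next-token probability, then normalize.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model; any causal LM exposing token probabilities works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "Will Russia invade Ukrainian territory in 2022? Answer:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]  # logits over the vocabulary

probs = torch.softmax(next_token_logits, dim=-1)

# Note the leading space: GPT-2-style tokenizers encode " Yes" / " No" as single tokens.
yes_id = tokenizer.encode(" Yes")[0]
no_id = tokenizer.encode(" No")[0]
p_yes, p_no = probs[yes_id].item(), probs[no_id].item()

# Normalize so the two candidate answers sum to 1.
print(f"P(Yes) = {p_yes / (p_yes + p_no):.3f}")
```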
