james_aung's Comments

Differential technological development - summarised introduction

Indeed. Although there is still an upper limit, since there is surely some limit to how much value we can extract from a resource, and there are only a finite number of atoms in the universe.

Differential technological development - summarised introduction

Unfortunately I have wrist pain and am trying to avoid developing RSI. If you'd like to contribute to this, I think it would be cool to add a reading list, perhaps drawing on the top readings from MichaelA's collection. It would also be cool if someone could get a PDF of Superintelligence Chapter 14 and host it on Google Drive so that we can link to it.

AI Governance Reading Group Guide

Do you have a template of the shared document that you used? Or was it just a fairly unstructured blank document?

Differential technological development - summarised introduction

I wrote this up because I wanted a single resource I could send to people that explained differential technological development.

I made it quite quickly in about 1 hour, so I'm sure it's quite lacking and would appreciate any comments and suggestions people may have to improve it. You can also comment on a GDoc version of this here: https://docs.google.com/document/d/1HcLcu-WObHO8y45yEMICfmqNpeugbmUX9HdRfeu7foM/edit?usp=sharing

Problems with EA representativeness and how to solve it

Just wanted to say that I'd be really excited to read more of your thoughts on this. As mentioned above, I think many considerations and counter-considerations against x-risk work deserve more attention and exposure in the community.

I encourage you to write up your thoughts in the near-term rather than far future! :P

Heuristics from Running Harvard and Oxford EA Groups

I think that makes sense and I agree with you. We have also run the sorts of things you describe in Oxford.

'Maybe don't teach' can be understood as 'prefer using resources as a way of conveying ideas, rather than teaching people yourself'.

I agree that we should aim to 'outreach' in '(on-topic) introductory' EA talks, so no disagreement here.

Heuristics from Running Harvard and Oxford EA Groups

I think there are easy ways to make it not weird. Some tips:

1) Email from an official email account, rather than a personal one, if you've never met the person before.

2) Mention explicitly that this is 'something you do' and that you'd like to welcome newcomers into the community. This makes it less strange that you're reaching out to them personally.

3) Mention explicitly that you'll be talking about EA, and not other stuff.

4) It's useful to meet people in person at an event first, say hello, and introduce yourself there.

5) Don't feel like you have an agenda; keep it informal. Treat it as if you were getting to know a friend better and having an enjoyable time.

6) Absolutely don't pressure people; just reach out and offer to meet up if they'd find it useful.

Heuristics from Running Harvard and Oxford EA Groups

Thanks for the comment JoshP!

I've spoken a lot with the Cambridge lot about this. I guess the cruxes of my disagreement with their approach are:

1) I think their committee model selects more for willingness to do menial tasks for the prestige of being on the committee than for actual enthusiasm for effective altruism. So something like what you described happens, where "a section become more high-fidelity later, and it ends up not making that much difference", as people who aren't actually interested drop out. But it comes at the cost of more engaged people spending time on management.

2) From my understanding, Cambridge viewed the 1-year roles as a way to 'lock people in' to engaging with EA for a year and to create a norm of committee members attending events. But my model of someone who ends up being very engaged in EA is that excitement about the content drives most of the motivation, rather than external commitment devices. So I suspect formal roles do only a limited amount to commit people to engage, but come at the cost of people spending X hours on admin when they could have spent X hours learning more about EA.

It's worth noting that I think Cambridge have recently been thinking hard about this, and I also expect their models for how their committee provides value to be much more nuanced than I present them here. Nevertheless, I think (1) and (2) capture useful points of disagreement I've had with them in the past.

Heuristics from Running Harvard and Oxford EA Groups

Hey! Thanks for the comment.

I think it captures a few different notions. I'll try to spell out a few salient ones:

1) It pushes back against the idea that an outreach talk needs to cover all aspects of EA. e.g. I think some 45-minute intro EA talks end up being really unsatisfactory because they only have time to lightly skim across loads of different concepts and cause areas. Instead, I think it can be OK, and even better, to give outreach talks that don't introduce all of EA but do demonstrate a cool and interesting facet of EA epistemology. e.g. I could imagine a talk on differential vs absolute technological progress being a way to attract new people.

2) It pushes back against running introductory discussion groups. Sometimes it feels like you need to guide someone through the basics, but I've found that often you can just lend people books or send them articles, and they'll be able to pick up the same material without it taking up your time.

3) It reframes particular community niches, such as a technical AI safety paper reading group, as also being potential entry points into the broader community. e.g. people find out about the AI group because they study computer science and find it interesting, and then get introduced to EA.
