Hi, I am a Physicist, Effective Altruist and AI safety student/researcher/organiser
Resume - Linda Linsefors - Google Docs
EA Forum feature request
(I'm not sure where to post this, so I'm writing it here)
1) Being able to filter for multiple tags simultaneously. Mostly I want to be able to filter for "Career choice" plus any other tag of my choice, e.g. AI or Academia, to get career advice specifically for those career paths. But there are probably other useful combinations too (see the sketch below for the behaviour I have in mind).
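Here is a minimal sketch (in Python, with made-up posts and tag names, not the forum's actual data model or API) of the AND-style filtering I mean, just to make the request concrete:

```python
# Minimal sketch of the multi-tag filter I mean (illustrative only; the
# posts below are made up and this is not the EA Forum's real data model).
posts = [
    {"title": "Career advice for AI researchers", "tags": {"Career choice", "AI"}},
    {"title": "Academia vs industry", "tags": {"Career choice", "Academia"}},
    {"title": "Biosecurity overview", "tags": {"Biosecurity"}},
]

def filter_by_tags(posts, required_tags):
    """Keep only posts that carry every one of the required tags (AND, not OR)."""
    required = set(required_tags)
    return [post for post in posts if required <= post["tags"]]

# Filtering for "Career choice" + "AI" should return only the first post.
print(filter_by_tags(posts, {"Career choice", "AI"}))
```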
- Someone could set up a leadership fast-track program.
How is this on the decentralisation list?
Reading this post is very uncomfortable in an uncanny valley sort of way. A lot of what is said is true and needs to be said, but the overall feeling of the post is off.
I think most of the problem comes from blurring the line between how EA functions in practice for people who are close to the money and how it functions for the rest of us.
Like, sure, EA is a do-ocracy, and I can do whatever I want, and no one is stopping me. But also, every local community organiser I talk to talks about how CEA is controlling and how their funding comes with lots of strings attached. Which I guess is ok, since it's their money. No one is stopping anyone from getting their own funding and doing their own thing.
Except for the fact that 80k (and other thought leaders? I'm not sure who works where) have told the community for years that funding is solved and no one else should worry about giving to EA, which has stifled all alternative funding in the community.
The type of AI we are worried about is an AI that pursues some kind of goal, and if you have a goal, then self-preservation is a natural instrumental goal, as you point out in the paperclip maximiser example.
It might be possible that someone builds a superintelligent AI that doesn't have a goal. Depending on your exact definition, GPT-4 could be counted as superintelligent, since it knows more than any human. But it's not dangerous (by itself), since it's not trying to do anything.
You are right that it is possible for something intelligent to not be power-seeking, or even to not try to preserve itself. But we are not worried about those AIs.
Almost as soon as people got GPT access, they created AutoGPT and ChaosGPT. I don't expect AIs to be goal-directed because they spontaneously develop goals. I expect them to be goal-directed because lots of people are trying to make them goal-directed.
If the first ever superintelligent AGI decides to commit suicide, or just wirehead and then not do anything, this doesn't save us. Probably someone will just tweak the code to fix this "bug". An AI that doesn't do anything is not very useful.
Also, this post might help:
Abstracting The Hardness of Alignment: Unbounded Atomic Optimization - LessWrong
In addition, if I were getting career-related information from a community builder, that community builder's future career prospects depended on getting people like me to choose a specific career path, and that fact was neither disclosed nor reasonably implied, I would feel misled by omission (at best).
As far as I know, this is exactly what is happening.
Can we address critiques of the DALY framework by selecting moral weighting frameworks that are appropriate for our particular applications, addressing methodological critiques when they get raised, and taking care to contextualize our usage of a particular framework? - Maybe.
I'm pretty sure the answer is "No, we can't". The whole point of DALYs is that they let us compare completely different interventions. If you replace them with something that is different in each context, you have not actually replaced what they do.
I think the best we can do is to calibrate it better, by asking actual disabled people about their quality of life. I think the answer will be very different depending on the disability, and also on the surrounding support and culture. This can be baked in, but you can't change the weights around for different interventions.
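To make the "common unit" point concrete, here is a small illustration with entirely made-up numbers and hypothetical interventions (not based on any real cost-effectiveness estimates):

```python
# Illustrative only: every number here is made up, just to show why a single
# shared disability-weight scale is what makes cross-intervention comparison
# possible in the first place.

def cost_per_daly_averted(cost, dalys_averted):
    """Cost-effectiveness in currency units per DALY averted."""
    return cost / dalys_averted

# Two hypothetical interventions, both evaluated with the SAME weight scale:
a = cost_per_daly_averted(cost=100_000, dalys_averted=2_000)  # 50 per DALY
b = cost_per_daly_averted(cost=100_000, dalys_averted=500)    # 200 per DALY

# Because both figures are in the same unit, "A averts a DALY four times more
# cheaply than B" is a meaningful statement. If each intervention used its own
# context-specific weighting scheme, the two numbers would no longer share a
# unit, and the comparison would stop being meaningful.
print(a, b)
```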
I recently had a conversation with a local EA community builder. Like many local community builders, they got their funding from CEA. They told me that their continued funding was conditioned on scoring high on the metric of how many people they directed towards longtermist career paths.
If this is in fact how CEA operates, then I think this is bad, for the reasons described in this post. Even though I'm in AI Safety, I value EA being about more than X-risk prevention.
I think the specific list of orgs you picked is a bit ad hoc, but also ok.
It looks like you've chosen to focus on research orgs specifically, plus overview resources. I think this is a reasonable choice.
Some orgs that would fit on the list (i.e. other research orgs) are:
* Conjecture
* Orthogonal
* CLR
* Convergence
* Aligned AI
* ARC
There are also several important training and support orgs that are not on your list (AISC, SERI MATS, etc.). But I think it's probably the right choice to just link to aisafety.training and let people find the various programs from there.
I'm confused why there is an arrow from The Future Society to aisafety.world.
aisafety.world is created and maintained by Alignment Ecosystem Development.
As far as I know, the reason AISS shut down was 100% because of lack of funding. However, it's not so easy to just start things up again. People who don't get paid tend to quit and move on.