Linda Linsefors

Comments

EA Survey 2019 Series: Community Information

Thanks for doing this!

When comparing whites and non-whites, did you do anything to control for location?

I noticed non-whites ranked EAG as less important. Could this be because they are more likely to live far away from EAG events?

Or maybe there are so few EAs living in non-white-majority countries that they don't skew the statistic? I.e., non-white EAs in majority-white countries massively outnumber non-white EAs in non-white-majority countries?

The Case for Impact Purchase | Part 1

That would also give you all the drawbacks of grants.

See "Reasons to evaluate a project after it is completed" in the original post.

If you want to give me a living wage without me first having to prove myself in some way, please give me money.

For most people, grants aren't simply "available". There has to be some evidence. This can be provided either by arguing your case (normal grant application) or by just doing the work. I think many people (including me) would prefer to just do the work, and let that speak for itself (for the reasons explained in the original post).

Long-Term Future Fund: Ask Us Anything!

But I'd love to be proven wrong here.

I claim we have proof of concept. The people who started the existing AI Safety research orgs did not have AI Safety mentors. Current independent researchers have more support than those founders had. In a way, an org is just a crystallized collaboration of previously independent researchers.

I think that there are some PR reasons why it would be good if most AI Safety researchers were part of academia or other respectable orgs (e.g. DeepMind). But I also think it is good to have a minority of researchers who are disconnected from the particular pressures of that environment.

However, being part of academia is not the same as being part of an AI Safety org. MIRI people are not part of academia, and someone doing AI Safety research as part of a "normal" (not AI Safety focused) PhD program is sort of an independent researcher.

The main way I could see myself getting more excited about long-term independent research is if we saw flourishing communities forming amongst independent researchers.

We are working on that. I'm not optimistic about current orgs keeping up with the growth of the field, and I don't think it is healthy for the career path to be too competitive, since this will lead to Goodharting on career incentives. But I do think a looser structure, built on personal connections rather than formal org employment, can grow in a much more flexible way, and we are experimenting with various methods to make this happen.

Ecosystems vs Projects in EA Movement Building

I'm not going to lead this, but would be happy to join.

Ecosystems vs Projects in EA Movement Building

I've been told a few times that I belong in the group organizers' Slack, but I never actually felt at home there, because I feel like I'm doing something very different from most group organizers.

The main requirement of such a chat is that it attracts other ecosystem organizers, which is a marketing problem more than a logistical problem. There are lots of platforms that would be adequate.

Making a separate ecosystem channel in the group organizers' Slack, and marketing it here, may work (30% chance of success), and since it is low effort, it seems worth a try.

A somewhat higher-effort option, but one with a higher expected payoff, would be to find all the ecosystem organizers, contact them personally, and invite them to a group call. Or invite them to fill in a when2meet to decide when to have said group call.

Ecosystems vs Projects in EA Movement Building

We (AI Safety Support) are literally doing all of these things.

There is no CEA for people working on AI safety, that creates websites, discussion platforms, conferences, connects mentors, surveys members etc.


I don't blame DavidNash for not knowing about us. I did not know about the EA Consultancy Network. So maybe what we need is a meta ecosystem for ecosystems? There is a Slack group for local group organizers, and a local group directory at EA Hub. Similarly, it would be nice to have a dedicated chat somewhere for ecosystem organizers, and a public directory somewhere.

CEA has said that they are currently not focusing on supporting this type of project (source: private conversation). So if someone wants to set it up, just go for it! And let me know if I can help.

Long-Term Future Fund: Ask Us Anything!

That's surprisingly short, which is great, by the way.

I think most grants are not like this. That is, you can increase your chance of funding by spending a lot of time polishing an application, which leads to a sort of arms race among applicants, where more and more time is wasted on polishing applications.

I'm happy to hear that the LTFF does not reward such behavior. On the other hand, the same dynamic will still happen as long as people don't know that more polish will not help.

You can probably save a lot of time on the side of the applicants by:

  • Stating how much time you recommend people spend on the application
  • Sharing some examples of successful applications (with the permission of the applicants) to show others what level and style of writing to aim for.

I understand that no one application will be perfectly representative, but even just one example would still help, and several examples would help even more. Preferably the examples would be examples of good-enough rather than optimal writing, assuming that you want people to be satisficers rather than maximizers with regard to application-writing quality.

Long-Term Future Fund: Ask Us Anything!

What do you think is a reasonable amount of time to spend on an application to the LTFF?

Long-Term Future Fund: Ask Us Anything!

What percentage of people who apply for a transition grant from something else to AI Safety get approved? Anything you want to add to put this number in context?

What percentage of people who apply for funding for independent AI Safety research get approved? Anything you want to add to put this number in context?

For example, if there is a clear category of people who don't get funding because they clearly want to do something other than saving the long-term future, then this would be useful contextual information.
