No need to apologize! I think your idea might be even better than mine :)
Mmm, that's not what I meant. There are good and bad ways of doing it. In 2019, someone reached out to me before EA Global to check whether it would be OK to get feedback on an application I had rejected (as part of an evaluation team). And I was happy to meet and give feedback. In any case, I think there is no harm in asking.
Also, it's not about networking your way in; it's about learning, for example, why people did or didn't like a proposal, or how to improve it. So I think there are good ways of doing this.
A small comment: if feedback is scarce because of a lack of time, this increases the usefulness of going to conferences where you can meet grantmakers and speak to them.
I also think that it would be worth exploring ways to give feedback with as little time cost as possible.
A closely related idea that seems slightly more promising to me: asking other EAs, other grantmakers and other relevant experts for feedback - at conferences or via other means - rather than the actual grantmakers who rejected your application. Obviously the feedback will usually be less relevant, but it could be a way to talk to less busy people who could still offer a valuable perspective and avoid the "I don't want to be ambushed by people who are annoyed they didn't get money, or prospective applicants who are trying to network their way into a more fa...
[I]f feedback is scarce because of a lack of time, this increases the usefulness of going to conferences where you can meet grantmakers and speaking to them.
That sounds worse to me. Conferences are rare and hence conference-time is more valuable than non-conference time. Also, I don't want to be ambushed by people who are annoyed they didn't get money, or prospective applicants who are trying to network their way into a more favourable decision.
I don't think we have ever said this, but this is what some people (e.g. Timnit Gebru) have come to believe. That is why, as the EA community grows and becomes more widely known, it is important to get the message about what we believe right.
See also the link by Michael above.
My intuition is that there is also some potential cultural damage, not from the money the community has, but from not communicating well that we also care a lot about many standard problems such as third-world poverty. I feel that too often the cause prioritization step is taken for granted or treated as obvious, which can lead to a culture where "cool AI Safety stuff" is the only thing worth doing.
Can you give an example of communication that you feel suggests "only AI safety matters"?
Thanks for posting! My current belief is that EA has not become purely about longtermism. In fact, it has recently been argued in the community that longtermism is not necessary to justify the kinds of things we currently do, since work on pandemics or AI Safety can also be justified in terms of preventing global catastrophes.
That being said, I'd very much prefer the EA community's bottom line to be about doing "the most good" rather than subscribing to longtermism or any other cool idea we might come up with. These are all subject to change and debate, whether doing the...
Without thinking much about it, I'd say yes. I'm not sure buying a book will get it more coverage in the news though.
I would not put it so strongly. My personal experience is a bit of a mixed bag: the vast majority of people I have talked to are caring and friendly, but I still (rarely) have moments that feel a bit disrespectful. And really, this is the kind of thing that can push new people out of the movement.
Hey James!
I think there are degrees, as with everything: we can focus our community-building efforts on more elite universities without rejecting or being dismissive of people in the community on the basis of their potential impact.
I agree with the post, and similar points have been made before.
However, there is also a risk here: as a community, we have to work hard to avoid being elitist, and we should be welcoming to everyone, even those whose personal circumstances are not ideal for changing the world.
Thanks!
Thanks!
Hey Sjlver! Thanks for your comments and for sharing your experience. That's my assessment too; I will try. I have also been considering how to create an EA community at the startup. Any pointers? Thanks!
At Google, most employees who came into contact with EA-related ideas did so thanks to Google's donation matching program. Essentially, Google has a system where people can report their donations, and then the company will donate the same amount to the same charity (there's an annual cap, but it's fairly high, around US$10k or so).
There is a yearly fundraising event called "giving week" to increase awareness of the donation matching. On multiple occasions during this week, we had people from the EA community come and give talks.
When considering starting an EA c...
Thanks, Juan!
Thanks for sharing Jasper! It's good to hear the experience of other people in a similar situation. 🙂 What do you plan to do? Also, good luck with the thesis!
Thanks a lot Max, I really appreciate it.
So viewpoint diversity would be valuable. Definitely. In particular, this is valuable given that the community also pivots around cause neutrality. So I think it would be good to have people with different opinions on which cause areas are best to support.
I recall reading that top VCs are able to outperform the startup investing market, although the causal relationship may run the other way around. That being said, the very fact that superforecasters are able to outperform prediction markets should signal that there are (small groups of) people able to outperform the average, shouldn't it?
On the other hand, prediction markets are useful; I'm just wondering how much of a feedback signal there is for altruistic donations, and whether it is sufficient for some level of efficiency.
One advantage of centralized grantmaking, though, is that it can convey more information, due to the experience of the grantmakers. In particular, centralized decision-making allows for better comparisons between proposals. This can lead to only the most effective projects being carried out, as would be the case with startups if one restricted oneself to only the top venture capitalists.
Makes sense, and I agree
EA aims to be cause neutral, but there is actually quite a lot of consensus in the EA movement about what causes are particularly effective right now.
Actually, notice that the consensus might be based more on internal culture, because founder effects are still quite strong. That being said, I think the community puts effort into remaining cause neutral, and that's good.
Indeed! My plan was to move back to Spain after the postdoc, because there is already one professor interested in AI Safety and I could build a small hub here.
Thanks acylhalide! My impression was that I should work in person more at the beginning; once I know the tools and the intuitions, this can be done remotely. In fact, I am pretty much doing my Ph.D. remotely at this point. But since it's a postdoc, I think the speed of learning matters.
In any case, let me say that I appreciate you poking at assumptions; it is helpful and may help me find acceptable solutions :)
Hey Lukas!
If the concrete problems are too watered down compared to the real thing, you also won't solve AI alignment by misleading people into thinking it's easier.
Note that even MIRI sometimes does this:
- We could not yet create a beneficial AI system even via brute force. Imagine you have a Jupiter-sized computer and a very simple goal: Make the universe contain as much diamond as possible. The computer has access to the internet and a number of robotic factories and laboratories, and by “diamond” we mean carbon atoms covalently bound to four other carbon atoms...
Yes, I do indeed :)
You can frame it, if you want, as: funders should aim to expand the range of academic opportunities and engage more with academics.
Hi Steven,
Possible claim 2: "We should stop giving independent researchers and nonprofits money to do AGI-x-risk-mitigating research, because academia is better." You didn't exactly say this, but sorta imply it. I disagree.
I don't agree with possible claim 2. I just say that we should promote academic careers more than independent research, not that we should stop giving independent researchers money. I don't think money is the issue.
Thanks
Sure, acylhalide! Thanks for proposing ideas. I've done a couple of AI Safety camps and one summer internship. I think the issue is that to make progress I need to become an expert in ML as well, which I am not at the moment. That was my main motivation for this. That's perhaps the reason why I think it is beneficial to do some kind of in-person postdoc, even if I could work part of the time from home. But long-distance relationships are also costly, so that's the issue.
Hey Simon, thanks for answering!
We won't solve AI safety by just throwing a bunch of (ML) researchers on it.
Perhaps we don't need to bring ML researchers on board (although I think we should at least try), but I think it is even more likely that we won't solve AI Safety if we don't get more concrete problems in the first place.
AGI will (likely) be quite different from current ML systems.
I'm afraid I disagree with this. For example, if this were true, interpretability work from Chris Olah or the Anthropic team would be automatically doomed; Value Learning work from CHAI would al...
I think it is easier to convince someone to work on topic X by arguing it would be very positive than by warning them that everyone could literally die if they don't. If someone comes to me with that kind of argument I will get defensive really quickly, and they'll have to spend a lot of effort to convince me there is even a slight chance they're right. And even if I have the time to listen to them all the way through and give them the benefit of the doubt, I will come away with awkward feelings, not precisely the ones that make me want to put effort into their topic.
Thanks Dan!
My question is more about what the capabilities of a superintelligence would be once equipped with a quantum computer
I think it would be an AGI very capable of chemistry :-)
one might even wonder what learnable quantum circuits / neural networks would entail.
Right now they just mean lots of problems :P More concretely, there are some results indicating that quantum NNs (or variational circuits, as they are called) are not likely to be more efficient at learning classical data than classical NNs are. Although I agree this is still a bit up in the air...
From your description, it seems like you might be more likely to end up in the tail of ability for quantum computing, if one of the best quantum computing startups is trying to hire you.
I think this is right.
You don't say that some of the top AI safety orgs are trying to hire you.
I was thinking of trying an academic career. So yeah, no one is really seeking me out; it was more me trying to go to Chicago to learn from Victor Veitch and change careers.
Then you have to consider how useful quantum algorithms are to existential risk.
I think it is qu...
Ah, dang. And how difficult would it be to reject the startup offer, independently and remotely work on concretizing AI safety problems full-time for a couple of months to test your fit, and then, if you don't feel like this is clearly the best use of your time, (I imagine) very easily get another job offer in the quantum computing field?
The thing that worries me is working on some specific technical problem, not being able to make sufficient progress, and feeling stuck. But I think this would happen after more than 2 months, perhaps after a ...
Unfortunately, this is not feasible: I am finishing my Ph.D. and have to decide in the next couple of weeks what I am doing next. In any case, my impression is that to pose good questions I need a couple of years of understanding the field, so that the problems are tractable, state of the art, and concretely defined...
Have you considered doing this for a while if you think it's possibly the most important problem, i.e. for example trying to develop concrete problems that can then be raised to the fields of ML and AI?
Indeed, I think that would be a good objective for the postdoc. It's also true that I think this is the kind of thing we need to do to make progress in the field, and my intuition is that aiming for academic papers is probably necessary to increase quality.
Thanks for making concrete bets @aogara :)
Thanks for your comments Ryan :) I think I would be OK if I try and fail; of course I would much prefer to succeed, but I think I am happier knowing I'm doing the best I can than comparing myself to some unattainable standard. That being said, there is some sacrifice, as you mention, particularly in having to learn a new research area and in spending time away, both of which you understand :)
Thanks Chris! Not much: duration and amount of funding. But the projects I applied with were similar, so in a sense I was arguing that independent evaluations of a proposal might provide more signal about its perceived usefulness.
I submitted an application about using causality as a means for improved value learning and interpretability of NNs: https://www.lesswrong.com/posts/5BkEoJFEqQEWy9GcL/an-open-philanthropy-grant-proposal-causal-representation My main reason for putting forward this proposal is that I believe the world models humans operate with are somewhat similar to causal models, with some high-level variables that AI systems might be able to learn. So using causal models might be useful for AI Safety.
I think there are also some external reasons why it makes sense as a...
Hey Mantas! So while I think there is a chance that photonics will play a role in future AI hardware, unfortunately, my expertise is quite far from the hardware itself. Up to now, I have been doing quantum algorithms.
The problem, though, is that I think quantum computing will not play an important role in AI development. It may seem that the quadratic speedup that quantum computing provides on a range of problems is good enough to justify using it. However, if one takes into account hardware requirements such as error correction, you will be losing s...
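To give a rough sense of the scale of that overhead, here is a minimal back-of-the-envelope sketch in Python. The throughput numbers are purely illustrative assumptions on my part, not measurements of any particular hardware: a heavily parallel classical machine doing about 10^15 operations per second versus an error-corrected quantum computer executing about 10^5 logical operations per second.

```python
# Back-of-the-envelope sketch of when a quadratic quantum speedup pays off.
# All throughput numbers below are illustrative assumptions, not measurements.

classical_ops_per_sec = 1e15   # assumed: a large, highly parallel classical machine
quantum_ops_per_sec = 1e5      # assumed: error-corrected logical ops, largely serial

def runtimes(n_classical_steps: float) -> tuple[float, float]:
    """Classical vs quantum wall-clock time for a problem needing N classical steps,
    assuming the quantum algorithm needs only sqrt(N) steps (quadratic speedup)."""
    t_classical = n_classical_steps / classical_ops_per_sec
    t_quantum = n_classical_steps ** 0.5 / quantum_ops_per_sec
    return t_classical, t_quantum

# Crossover: N / c = sqrt(N) / q  =>  N = (c / q) ** 2
crossover_n = (classical_ops_per_sec / quantum_ops_per_sec) ** 2
t_c, t_q = runtimes(crossover_n)
print(f"Crossover at N ≈ {crossover_n:.0e} steps "
      f"(~{t_c / 3600:.0f} hours of classical compute)")
```

Under these made-up numbers, the quadratic speedup only starts to pay off for problems that already take a day or more of classical compute, which is roughly the shape of the argument above.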
Yeah, I'll try to be creative. It is not a company though, it's academia. But that gives you flexibility too, so it's good. Even to do partially remote work.
Though I am not sure I can ask her to move from Europe to America. She values stability quite a lot, and she wants to get a permanent position as a civil servant in Spain, where we are from.
Thanks Yonathan!
Talking about x-risks: what is the best way to communicate the importance of preventing global catastrophic risks to people who have not heard of them before?
I'm not sure about the best format, perhaps a workshop. But I think this is very useful both for introducing people to EA in a non-off-putting way, and for decreasing the social cost of pursuing careers in this area.
Intuitively, I'd say that a lot of people make nationality part of their identity, and people value identity. And while I think nations should be more like organizations seeking the happiness of their citizens, or decision-making machinery, identity often plays a large role.
I'm not sure I understand your first bullet point - obviously I'd love to see Ukraine join the EU. But now they won't anyway. What's the pathway from this war to EU membership?
I think our disagreement lies in what the likely outcome of this is. I believe you think Ukrainians will have to concede on everything. And while Putin seems determined to conquer all of Ukraine, I would say some sort of compromise will be reached before that.
It is certainly true that the human losses averted and the prosperity preserved by making concessions are important. However, in the long term, I'm not sure the arguments above hold. My impression from historians is that the only reason Great Britain did OK by conceding the Sudetenland to Germany was that GB was not prepared for a war with Germany. However, it was clear that Hitler's demands would not stop there. Similarly, why would Putin stop there if he were conceded those things? I'm not sure; I think he would keep demanding and imposing a long-term t...
I'd say somewhere at the beginning that the objective is to reduce consumption of animal products.
I certainly want to be replaced by better AI Safety researchers (or by better workers in any other important area) so that I don't have to make the personal sacrifice of working on it. I still put a lot of effort into being the best, but secretly wish there were someone better to do the job. Funny. Also, a nice excuse to celebrate rejection if you apply for an EA job.
I see, it makes sense. Yet my belief is that these people are willing to say they would do so if it were "free", but it never is, if only because it takes effort to change your own habits. If they really wanted to, why do you think they don't do it: because of the risk of criticism from others, or for some other reason? Notice that the idea of making it simple to eat less meat addresses what I think is the main obstacle: changing routines.
My intuition is that grantmakers often have access to better experts, but you could always reach out to the latter directly at conferences if you know who they are.