FWIW, I believe not every problem has to be centered around "cool" cause areas, and in this case I'd argue that neither animal welfare nor AI Safety would be significantly affected.
I divide my donation strategy into two components:
The first one is a monthly donation to Ayuda Efectiva, the effective giving charity in Spain, which also makes donations tax-deductible. For the time being, they mostly support global health and poverty causes, which is boringly awesome.
Then I make one-off donations to specific opportunities as they appear. These include, for example, a donation to Global Catastrophic Risks to support their work on recommendations for the EU AI Act sandbox (to be first deployed in Spain), some volunteering work for FLI existe...
I think the title is a bit unfortunate, to say the least. I am also skeptical of the article's thesis that population growth is itself the problem.
You understood me correctly. To be specific, I was considering the third case, in which the agent has uncertainty about its preferred state of the world. It may thus refrain from taking irreversible actions that have a small upside in one scenario (protonium water) but a large negative value in the other (deuterium), due to, e.g., decreasing returns, or if it thinks there's a chance to get more information on what the objectives are supposed to mean.
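To illustrate the value-of-information intuition with made-up numbers (a minimal sketch; the probabilities and payoffs are purely hypothetical):

```python
# Hypothetical payoffs: the agent is unsure whether its goal refers to
# protonium water (p = 0.5) or deuterium water (1 - p = 0.5).
p = 0.5
upside_if_protonium = 1      # small gain if the first reading is right
downside_if_deuterium = -10  # large, irreversible loss if it is wrong

# Expected utility of acting now, before resolving the ambiguity:
eu_act_now = p * upside_if_protonium + (1 - p) * downside_if_deuterium

# Expected utility of waiting to learn which reading is intended, then
# acting only when safe (and abstaining, with utility 0, otherwise):
eu_wait = p * upside_if_protonium + (1 - p) * 0

print(eu_act_now)  # -4.5
print(eu_wait)     #  0.5 -> uncertainty favors preserving reversibility
```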
I understand your point that this distinction may look arbitrary, but goals are not necessarily defined at the phy...
Separately and independently, I believe that by the time an AI has fully completed the transition to hard superintelligence, it will have ironed out a bunch of the wrinkles and will be oriented around a particular goal (at least behaviorally, cf. efficiency—though I would also guess that the mental architecture ultimately ends up cleanly-factored (albeit not in a way that creates a single point of failure, goalwise)).
I’d be curious to understand why you believe this happens. Humans (the only general intelligence we have so far) seem to preserve some un...
With respect to the last question, I think it is perhaps a bit unfair. They have clearly stated that they unconditionally condemn racism, and I have a strong prior that they mean it. Why wouldn't they, after all?
But if we were to eliminate the EA community, an AI safety community would quickly replace it, as people often become attached to what they do. And this is even more likely if you add any moral connotation: people working at a charity, for example, tend to build an identity around it.
The HuggingFace RL course might be an alternative in the Deep Learning - RL discussion above: https://github.com/huggingface/deep-rl-class
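For a taste of the kind of exercise such a course starts with, here is a minimal sketch (my own example, not taken from the course; it assumes the gymnasium and stable-baselines3 packages):

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Train a small PPO agent on a classic control task.
env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)

# Use the trained policy for one step.
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
print(action)
```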
Yeah, perhaps I was being too harsh. However, the baseline scenario should be that current trends continue for some time, and those trends point to at least cheap batteries and increasingly cheap H2.
I mostly focused on these two because the current problem with green energy sources is more about energy storage than production; photovoltaics are currently the cheapest source in most places.
I think I quite disagree with this post: batteries are improving a lot, and if we also manage to improve hydrogen production and usage, things should work out well. Finally, nuclear fusion no longer seems so far away. Of course, I agree with the author that this transition will take quite a long time, especially in developing countries, but I expect it to work out well anyway. One key argument of the author is that we are limited in the amounts of the different metals available, but Li is very common on Earth, even if not super cheap, so I am not totally convinced by this. Similar thoughts apply to land usage.
In the Spanish community we often have conversations in English, and I think at least 80% of the members are comfortable with both.
Maybe worth having some 'recap discussions in Spanish' or a few Spanish-only sessions for the remaining 20%. I expect there are a good number of people who are comfortable in English but much more comfortable, much more efficient, and more willing to speak out in their native language.
Point 1 is correct, but there is a difference: when you do research, you often need to live near a research group. Distillation is more open to remote and asynchronous work.
Thanks for the answer. The problem is that this is likely pointing in the wrong direction. Immigration by itself has quite large benefits for immigrants, and almost all studies of the impact of immigration find positive or no effects for locals. In "Good Economics for Hard Times" by Duflo and Banerjee, there is only one case where locals ended up worse off: during the USSR era, Hungarian workers were allowed to work but not live in East Germany, forcing them to spend their money at home. Overall, it is well known that open-border situations would probably boost...
I think it is wrong to say that the Syrian refugee crisis might have cost Germany 0.5T. My source: https://www.igmchicago.org/surveys/refugees-in-germany-2/. To be fair, though, I have not found a later analysis, and I am far from an expert.
My intuition is that grantmakers often have access to better experts, but you could always reach out to the latter directly at conferences if you know who they are.
Mmm, that's not what I meant. There are good and bad ways of doing it. In 2019, someone reached out to me before EA Global to check whether it would be OK to get feedback on an application I had rejected (as part of some team), and I was happy to meet and give feedback. But I think there is no harm in asking.
Also, it's not about networking your way in; it's about learning, for example, why people did or did not like a proposal, or how to improve it. So I think there are good ways of doing this.
A small comment: if feedback is scarce because of a lack of time, this increases the usefulness of going to conferences where you can meet grantmakers and speak to them.
I also think that it would be worth exploring ways to give feedback with as little time cost as possible.
A closely related idea that seems slightly more promising to me: asking other EAs, other grantmakers and other relevant experts for feedback - at conferences or via other means - rather than the actual grantmakers who rejected your application. Obviously the feedback will usually be less relevant, but it could be a way to talk to less busy people who could still offer a valuable perspective and avoid the "I don't want to be ambushed by people who are annoyed they didn't get money, or prospective applicants who are trying to network their way into a more fa...
[I]f feedback is scarce because of a lack of time, this increases the usefulness of going to conferences where you can meet grantmakers and speak to them.
That sounds worse to me. Conferences are rare and hence conference-time is more valuable than non-conference time. Also, I don't want to be ambushed by people who are annoyed they didn't get money, or prospective applicants who are trying to network their way into a more favourable decision.
I don't think we have ever said this, but it is what some people (e.g., Timnit Gebru) have come to believe. That is why, as the EA community grows and becomes more widely known, it is important to get the message of what we believe right.
See also the link by Michael above.
My intuition is that there is also some potential cultural damage, not from the money the community has, but from not communicating well that we also care a lot about many standard problems such as third-world poverty. I feel that too often the cause prioritization step is taken for granted or treated as obvious, which can lead to a culture where "cool AI Safety stuff" is the only thing worth doing.
Thanks for posting! My current belief is that EA has not become purely about longtermism. In fact, it has recently been argued in the community that longtermism is not necessary to justify the kinds of things we currently do, since work on pandemics or AI Safety can also be justified in terms of preventing global catastrophes.
That being said, I'd very much prefer the EA community's bottom line to be about doing "the most good" rather than subscribing to longtermism or any other cool idea we might come up with. These are all subject to change and debate, whether doing the...
Without thinking much about it, I'd say yes. I'm not sure buying a book will get it more coverage in the news, though.
I would not put it so strongly. My personal experience is a bit of a mixed bag: the vast majority of people I have talked to are caring and friendly, but I still occasionally have moments that feel a bit disrespectful. And really, this is the kind of thing that could push new people out of the movement.
Hey James!
I think there are degrees, as everywhere: we can focus our community-building efforts on more elite universities without rejecting, or being dismissive of, people from the community on the basis of their potential impact.
I agree with the post, and the same point has been made before.
However, there is also a risk here: as a community, we have to work hard to avoid being elitist, and we should be welcoming to everyone, even those whose personal circumstances are not ideal for changing the world.
Hey Sjlver! Thanks for your comments and experience. That's my assessment too; I will try. I have also been considering how to create an EA community in the startup. Any pointers? Thanks!
At Google, most employees who came into contact with EA-related ideas did so thanks to Google's donation matching program. Essentially, Google has a system where people can report their donations, and the company will then donate the same amount to the same charity (there's an annual cap, but it's fairly high, around US$10k).
There is a yearly fundraising event called "giving week" to increase awareness of the donation matching. On multiple occasions during this week, we had people from the EA community come and give talks.
When considering starting an EA c...
Thanks for sharing, Jasper! It's good to hear the experience of other people in a similar situation. 🙂 What do you plan to do? Also, good luck with the thesis!
So viewpoint diversity would definitely be valuable. In particular, it matters because the community also pivots around cause neutrality, so I think it would be good to have people with different opinions on which cause areas are best to support.
I recall reading that top VCs are able to outperform the startup investing market, although the causal relationship may go the other way around. That being said, the very fact that superforecasters are able to outperform prediction markets should signal that there are (small groups of) people able to outperform the average, shouldn't it?
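As a toy illustration of what "outperforming the average" means here (the probabilities and outcomes are made up), one can compare Brier scores, where lower is better:

```python
outcomes = [1, 0, 1]        # what actually happened (1 = yes, 0 = no)
crowd    = [0.6, 0.4, 0.5]  # market / crowd-average probabilities
experts  = [0.8, 0.2, 0.7]  # a small subgroup's probabilities

def brier(preds, outs):
    # Mean squared error between forecast probabilities and outcomes.
    return sum((p - o) ** 2 for p, o in zip(preds, outs)) / len(outs)

print(brier(crowd, outcomes))    # 0.19
print(brier(experts, outcomes))  # ~0.057 -> the subgroup beats the average
```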
On the other hand, prediction markets are useful; I'm just wondering how much of a feedback signal there is for altruistic donations, and whether it is sufficient for some level of efficiency.
One advantage of centralized grantmaking, though, is that it can convey more information, thanks to the experience of the grantmakers. In particular, centralized decision-making allows for better comparisons between proposals. This can lead to only the most effective projects being carried out, as would be the case with startups if one restricted oneself to only the top venture capitalists.
EA aims to be cause neutral, but there is actually quite a lot of consensus in the EA movement about what causes are particularly effective right now.
Actually, notice that the consensus might be based more on internal culture, because founder effects are still quite strong. That being said, I think the community puts effort into remaining cause neutral, and that's good.
Indeed! My plans were to move back to Spain after the postdoc, because there is already one professor interested in AI Safety and I could build a small hub here.
Thanks, acylhalide! My impression was that I should work in person more at the beginning; once I know the tools and the intuitions, the work can be done remotely. In fact, I am pretty much doing my Ph.D. remotely at this point. But since it's a postdoc, I think the speed of learning matters.
In any case, let me say that I appreciate you poking into assumptions, it is good and may help me find acceptable solutions :)
Hey Lukas!
If the concrete problems are too watered down compared to the real thing, you also won't solve AI alignment by misleading people into thinking it's easier.
Note that even MIRI sometimes does this:
...
- We could not yet create a beneficial AI system even via brute force. Imagine you have a Jupiter-sized computer and a very simple goal: Make the universe contain as much diamond as possible. The computer has access to the internet and a number of robotic factories and laboratories, and by "diamond" we mean carbon atoms covalently bound to four other carbon atoms.
Yes, I do indeed :)
You can frame it, if you want, as: funders should aim to expand the range of academic opportunities, and engage more with academics.
Hi Steven,
Possible claim 2: "We should stop giving independent researchers and nonprofits money to do AGI-x-risk-mitigating research, because academia is better." You didn't exactly say this, but sorta imply it. I disagree.
I don't agree with possible claim 2. I'm just saying that we should promote academic careers more than independent research, not that we should stop giving independent researchers money. I don't think money is the issue.
Thanks
Sure, acylhalide! Thanks for proposing ideas. I've done a couple of AI Safety camps and one summer internship. I think the issue is that to make progress I need to become an expert in ML as well, which I am not right now. That was my main motivation for this, and it's perhaps why I think it is beneficial to do some kind of in-person postdoc, even if I could work part of the time from home. But long-distance relationships are also costly, so that's the issue.
Hey Simon, thanks for answering!
We won't solve AI safety by just throwing a bunch of (ML) researchers on it.
Perhaps we don't need to buy in ML researchers (although I think we should at least try), but I think it is more likely that we won't solve AI Safety if we don't come up with more concrete problems in the first place.
AGI will (likely) be quite different from current ML systems.
I'm afraid I disagree with this. For example, if this were true, the interpretability work from Chris Olah or the Anthropic team would be automatically doomed; Value Learning from CHAI would al...
I think it is easier to convince someone to work on topic X by arguing it would be very positive than by warning them that everyone could literally die if they don't. If someone comes to me with that kind of argument, I will get defensive really quickly, and they'll have to spend a lot of effort to convince me there is even a slight chance they're right. And even if I have the time to hear them out and give them the benefit of the doubt, I will come away with awkward feelings, not precisely the ones that make me want to put effort into their topic.
...Perh
My question is more about what the capabilities of a superintelligence would be once equipped with a quantum computer
I think it would be an AGI very capable of chemistry :-)
one might even wonder what learnable quantum circuits / neural networks would entail.
Right now they just mean lots of problems :P More concretely, there are some results indicating that quantum NNs (or variational circuits, as they are called) are not likely to be more efficient at learning classical data than classical NNs. Although I agree this is still very much up in the air...
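For concreteness, here is a minimal sketch of what a "learnable quantum circuit" looks like, using the PennyLane library (my choice for illustration; the circuit structure and parameter values are arbitrary):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(weights, x):
    # Encode a classical data point into rotation angles (angle encoding).
    qml.RX(x[0], wires=0)
    qml.RX(x[1], wires=1)
    # Trainable layer: parameterized rotations plus entanglement.
    qml.RY(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    # The expectation value serves as the model's (differentiable) output.
    return qml.expval(qml.PauliZ(0))

weights = np.array([0.1, 0.2], requires_grad=True)
x = np.array([0.5, -0.3])
print(circuit(weights, x))  # gradients w.r.t. weights enable training
```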
From your description, it seems like you might be more likely to end up in the tail of ability for quantum computing, if one of the best quantum computing startups is trying to hire you.
I think this is right.
You don't say that some of the top AI safety orgs are trying to hire you.
I was thinking of pursuing an academic career. So yeah, not really anyone seeking me out; it was more me trying to go to Chicago to learn from Victor Veitch and change careers.
Then you have to consider how useful quantum algorithms are to existential risk.
I think it is qu...
Ah, dang. And how difficult would it be to reject the startup offer, independently and remotely work on concretizing AI safety problems full-time for a couple of months to test your fit, and then, if you don't feel this is clearly the best use of your time, (I imagine) very easily get another job offer in the quantum computing field?
The thing that worries me is working on some specific technical progress, not being able to make sufficient progress, and feeling stuck. But I think this will happen after more than 2 months, perhaps after a ...
While admirable, consider whether this is healthy or sustainable. I think donating less is OK; that's why Giving What We Can suggests 10% as a calibrated point. You can of course donate more, but I would recommend against the implied current situation.