PabloAMC

1010 karma · Joined Jan 2019 · Madrid, Spain

Bio

Hi there! I'm an EA from Madrid. I am currently finishing my Ph.D. in quantum algorithms and would like to focus my career on AI Safety. Send me a message if you think I can help :)

Comments (103)

I already give everything, except what's required for the bare living necessities, away.

While admirable, consider whether this is healthy or sustainable. I think donating less is okay; that's why Giving What We Can suggests 10% as a calibrated point. You can of course donate more, but I would recommend against the implied current situation.

FWIW, I believe not every problem has to be centered on “cool” cause areas, and in this case I’d argue both animal welfare and AI Safety should not be significantly affected.

I divide my donation strategy into two components:

  1. The first one is a monthly donation to Ayuda Efectiva, the effective giving charity in Spain, which also allows for tax deductions. For the time being, they mostly support global health and poverty causes, which is boringly awesome.

  2. Then I make one-off donations to specific opportunities as they appear. These include, for example, a donation to Global Catastrophic Risks to support their work on recommendations for the EU AI Act sandbox (to be first deployed in Spain), some volunteering work for the FLI existential AI risk community, and my donation to this donation election, to make donations within the EA community more democratic :)

For this donation election I have voted for Rethink Priorities, the EA Long-Term Future Fund, and ALLFED. ALLFED's work seems pretty necessary and they are often overlooked, so I am happy to support them. The other two had relatively convincing posts arguing for what they could do with additional funding. In particular, I am inclined to believe Rethink Priorities' work benefits the EA community quite widely, so I am happy to support them and would love for them to keep carrying out the annual survey.

I think the title is a bit unfortunate, at the very least. I am also skeptical of the article's thesis that population growth is itself the problem.

You understood me correctly. To be specific, I was considering the third case, in which the agent has uncertainty about its preferred state of the world. It may thus refrain from taking irreversible actions that have a small upside in one scenario (protonium water) but a large negative value in the other (deuterium), due to, e.g., decreasing returns, or if it thinks there's a chance to get more information on what the objectives are supposed to mean.
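To make the intuition concrete, here is a toy expected-value sketch. The credences and payoffs are my own illustrative numbers, not anything from the original exchange; the point is only that a small upside in one scenario and a large downside in the other makes waiting for more information attractive.

```python
# Toy illustration (illustrative numbers): an agent unsure whether its goal
# refers to protonium water or deuterium water.
p_protonium = 0.5          # credence that the goal means protonium water
p_deuterium = 1 - p_protonium

# Assumed payoffs: the irreversible conversion has a small upside if the guess
# is right and a large downside if it is wrong (e.g. due to decreasing returns).
# Waiting lets the agent act optimally once the ambiguity is resolved.
act_now = p_protonium * 1 + p_deuterium * (-10)      # irreversible action today
wait_and_learn = p_protonium * 1 + p_deuterium * 1   # act after learning the goal

print(f"Expected value of acting now: {act_now:+.1f}")   # -4.5
print(f"Expected value of waiting:    {wait_and_learn:+.1f}")  # +1.0
# The uncertain agent prefers to preserve option value and gather information.
```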

I understand your point that this distinction may look arbitrary, but goals are not necessarily defined at the physical level, but rather over abstractions. For example, is a human with a high level of dopamine happier? What exactly is a human? Can a larger human brain be happier? My belief is that since these objectives are built over (possibly changing) abstractions, it is unclear whether a single agent can iron out its goal. In fact, if “what the representation of the goal was meant to mean” makes reference to what some human wanted to represent, you'll probably never have a clear-cut, unchanging goal.

That said, I believe an important problem in this case is how to train an agent that is able to distinguish between the goal and its representation, and seeks to optimise the former. I find it a bit confusing when I think about it.

Separately and independently, I believe that by the time an AI has fully completed the transition to hard superintelligence, it will have ironed out a bunch of the wrinkles and will be oriented around a particular goal (at least behaviorally, cf. efficiency—though I would also guess that the mental architecture ultimately ends up cleanly-factored (albeit not in a way that creates a single point of failure, goalwise)).

I’d be curious to understand why you believe this happens. Humans (the only general intelligence we have so far) seem to preserve some uncertainty over goal distributions, so it is unclear to me that generality will necessarily clarify goals.

To be a bit more concrete: I find it plausible that an AGI will encounter multiple fine-grained (concrete) goals that map onto the same high-level representation of its goal, whatever it may be. Then you have to refine what the goal representation was meant to mean. After all, a representation of the goal is not necessarily the goal itself. I believe this is what humans face, and why human goals are often a bit of a mess.

With respect to the last question, I think it is perhaps a bit unfair. They have clearly stated that they unconditionally condemn racism, and I have a strong prior that they mean it. Why wouldn’t they, after all?

But if we were to eliminate the EA community, an AI safety community would quickly replace it, as people are often attached to what they do. And this is even more likely if you add any moral connotation: people working at a charity, for example, are drawn to building an identity around it.

The HuggingFace RL course might be an alternative in the Deep Learning - RL discussion above: https://github.com/huggingface/deep-rl-class

Yeah, perhaps I was being too harsh. However, the baseline scenario should be that current trends will continue for some time, and they point to at least cheap batteries and increasingly cheap H2.

I mostly focused on these two because the current problem with green energy sources is more related to storage than to production; photovoltaics are currently the cheapest in most places.
