All of PabloAMC's Comments + Replies

Some unfun lessons I learned as a junior grantmaker

My intuition is that grantmakers often have access to better experts, but you could always reach out to the latter directly at conferences if you know who they are.

Some unfun lessons I learned as a junior grantmaker

No need to apologize! I think your idea might be even better than mine :)

Some unfun lessons I learned as a junior grantmaker

Mmm, that's not what I meant. There are good and bad ways of doing it. In 2019 someone reached out to me before EA Global to check whether it would be ok to get feedback on an application I had rejected (as part of a team). And I was happy to meet and give feedback. But I think there is no harm in asking.

Also, it's not about networking your way in; it's about learning, for example, why people did or did not like a proposal, or how to improve it. So I think there are good ways of doing this.

Some unfun lessons I learned as a junior grantmaker

A small comment: if feedback is scarce because of a lack of time, this increases the usefulness of going to conferences where you can meet grantmakers and speak to them.

I also think that it would be worth exploring ways to give feedback with as little time cost as possible.

A closely related idea that seems slightly more promising to me: asking other EAs, other grantmakers and other relevant experts for feedback - at conferences or via other means - rather than the actual grantmakers who rejected your application. Obviously the feedback will usually be less relevant, but it could be a way to talk to less busy people who could still offer a valuable perspective and avoid the "I don't want to be ambushed by people who are annoyed they didn't get money, or prospective applicants who are trying to network their way into a more fa... (read more)

9Linch1mo
A solution that I'm more excited about is one-to-many channels of feedback where people can try to generalize from the feedback that others receive. I think this post by Nuño [https://forum.effectivealtruism.org/posts/7utb4Fc9aPvM6SAEo/frank-feedback-given-to-very-junior-researchers] is a good example in this genre, as are the EAIF and LTFF payout reports. Perhaps some grantmakers can also prioritize public comms even more than they already do (e.g. public posts on this Forum), but of course this is also very costly.

[I]f feedback is scarce because of a lack of time, this increases the usefulness of going to conferences where you can meet grantmakers and speaking to them.

That sounds worse to me. Conferences are rare and hence conference-time is more valuable than non-conference time. Also, I don't want to be ambushed by people who are annoyed they didn't get money, or prospective applicants who are trying to network their way into a more favourable decision.

EA and the current funding situation

I don't think we have ever said this, but this is what some people (e.g. Timnit Gebru) have come to believe. That is why, as the EA community grows and becomes more widely known, it is important to get the message of what we believe right.

See also the link by Michael above.

EA and the current funding situation

My intuition is that there is also some potential cultural damage, not from the money the community has, but from not communicating well that we also care a lot about many standard problems such as third world poverty. I feel that too often the cause prioritization step is taken for granted or treated as obvious, which can lead to a culture where "cool AI Safety stuff" is the only thing worth doing.

1[comment deleted]2mo

Can you give an example of communication that you feel suggests "only AI safety matters"?

EA is more than longtermism

Thanks for posting! My current belief is that EA has not become purely about longtermism. In fact, it has recently been argued in the community that longtermism is not necessary to justify the kind of things we currently do, since work on pandemics or AI Safety can also be justified in terms of preventing global catastrophes.

That being said, I'd very much prefer the EA community's bottom line to be about doing "the most good" rather than subscribing to longtermism or any other cool idea we might come up with. These are all subject to change and debate, whether doing the... (read more)

Bill Gates book on pandemic prevention

Without thinking much about it, I'd say yes. I'm not sure buying a book will get it more coverage in the news, though.

The Effective Altruism culture

I would not put it as strongly. My personal experience is a bit of a mixed bag: the vast majority of people I have talked to are caring and friendly, but I still occasionally have moments that feel a bit disrespectful. And really, this is the kind of thing that could push new people out of the movement.

The Effective Altruism culture

Hey James!

I think there are degrees, like everywhere: we can focus our community-building efforts on more elite universities without rejecting or being dismissive of people in the community on the basis of their potential impact.

2james.lucassen3mo
Yes, 100% agree. I'm just personally somewhat nervous about community building strategy and the future of EA, so I want to be very careful. I tried to be neutral in my comment because I really don't know how inclusive/exclusive we should be, but I think I might have accidentally framed it in a way that reads implicitly leaning exclusive, probably because I read the original post as implicitly leaning inclusive.
EA: A More Powerful Future Than Expected?

I agree with the post, and the same has been noted previously.

However, there is also a risk from this: as a community, we have to take care to avoid being elitist, and should be welcoming to everyone, even those whose personal circumstances are not ideal for changing the world.

A tough career decision

Hey Sjlver! Thanks for your comments and for sharing your experience. That's my assessment too; I will try. I have also been considering how to create an EA community at the startup. Any pointers? Thanks

6Sjlver3mo
Oh... and for some companies, all you need to do to start a community is get some EA-related stickers that people can put on their laptops ;-) (It's a bit tongue-in-cheek, but I'm only half joking... most companies have things like this. At Google, laptop stickers were trendy, fashionable, and in high demand. I'm sure that after being at Xanadu for a while, you'll find an idea that works well for this particular company)

At Google, most employees who came in touch with EA-related ideas did so thanks to Google's donation matching program. Essentially, Google has a system where people can report their donations, and then the company will donate the same amount to the same charity (there's an annual cap, but it's fairly high, like US$ 10k or so).

There is a yearly fundraising event called "giving week" to increase awareness of the donation matching. On multiple occasions during this week, we had people from the EA community come and give talks.

When considering starting an EA c... (read more)

A tough career decision

Thanks for sharing Jasper! It's good to hear the experience of other people in a similar situation. 🙂 What do you plan to do? Also, good luck with the thesis!

8JasperGeh3mo
Thanks Pablo, good luck to you too! I'll apply to a few interesting remote positions and have some independent projects in mind. I'll see :)
A tough career decision

Thanks a lot Max, I really appreciate it.

Issues with centralised grantmaking

So viewpoint diversity would be valuable. Definitely. In particular, this is valuable given that the community is also built around cause neutrality. So I think it would be good to have people with different opinions on which cause areas are better to support.

Issues with centralised grantmaking

I recall reading that top VCs are able to outperform the startup investing market, although the causal relationship may go the other way around. That being said, the very fact that superforecasters are able to outperform prediction markets should signal that there are (small groups of) people able to outperform the average, shouldn't it?

On the other hand, prediction markets are useful; I'm just wondering how much of a feedback signal there is for altruistic donations, and whether it is sufficient for some level of efficiency.

6Brendon_Wong3mo
Yep, there's definitely return persistence with top VCs, and the last time I checked I recall there was uncertainty around whether that was due to enhanced deal flow or actual better judgement. I think that just taking the average is one decentralized approach, but certainly not representative of decentralized decision making systems and approaches as a whole. Even the Good Judgement Project can be considered a decentralized system to identify good grantmakers. Identifying superforecasters requires having everyone do predictions and then find the best forecasters among them, whereas I do not believe the route to become a funder/grantmaker is that democratized. For example, there's currently no way to measure what various people think of a grant proposal, fund that regardless of what occurs (there can be rules about not funding downside risk stuff, of course), and then look back and see who was actually accurate. There haven't actually been real prediction markets implemented at a large scale (Kalshi aside, which is very new), so it's not clear whether that's true. Denise quotes Tetlock mentioning that objection here [https://forum.effectivealtruism.org/posts/FYuMgcNq2quC8CSyc/against-prediction-markets] . I also think that determining what to fund requires certain values and preferences, not necessarily assessing what's successful. So viewpoint diversity would be valuable. For example, before longtermism became mainstream in EA, it would have been better to allocate some fraction of funding towards that viewpoint, and likewise with other viewpoints that exist today. A test of who makes grants to successful individuals doesn't protect against funding the wrong aims altogether, or certain theories of change that turn out to not be that impactful. Centralized funding isn't representative of the diversity of community views and theories of change by default (I don't see funding orgs allocating some fraction of funding towards novel theories of change as a policy).
Issues with centralised grantmaking

One advantage of centralized grantmaking, though, is that it can convey more information, due to the experience of the grantmakers. In particular, centralized decision-making allows for better comparisons between proposals. This can lead to only the most effective projects being carried out, as would be the case with startups if one restricted oneself to only the top venture capitalists.

5Brendon_Wong3mo
Do you have any evidence for this? There's definitely evidence to suggest that decentralized decision making can outperform centralized decision making; for example, prediction markets and crowdsourcing. I think it's dangerous to automatically assume that all centralized thinking and institutions are better than decentralized thinking and institutions.
Unsurprising things about the EA movement that surprised me

EA aims to be cause neutral, but there is actually quite a lot of consensus in the EA movement about what causes are particularly effective right now.

Actually, notice that the consensus might be based more on internal culture, because founder effects are still quite strong. That being said, I think the community puts effort into remaining cause neutral, and that's good.

5Stefan_Schubert3mo
I don't think a consensus on what cause is most effective is incompatible with cause-neutrality as it's usually conceived (which I called cause-impartiality here [https://forum.effectivealtruism.org/posts/6F6ix64PKEmMuDWJL/understanding-cause-neutrality] ).
Meditations on careers in AI Safety

Indeed! My plans were to move back to Spain after the postdoc, because there is already one professor interested in AI Safety and I could build a small hub here.

Meditations on careers in AI Safety

Thanks acylhalide! My impression is that I should work in person more at the beginning; once I know the tools and the intuitions, this can be done remotely. In fact, I am pretty much doing my Ph.D. remotely at this point. But since it's a postdoc, I think the speed of learning matters.

In any case, let me say that I appreciate you poking at assumptions; it is good and may help me find acceptable solutions :)

6acylhalide3mo
Makes sense. I'd probably look at something like, which of the following two factors is bigger:
P(career change | postdoc) * ΔU(life satisfaction 10-15 years from now | career change)
P((negative) change in relationship status today | postdoc) * ΔU(life satisfaction 10-15 years from now | change in relationship status today)
And like maybe even explicitly write down probabilities if it helps. I can't know your relationship status nor feel qualified to discuss it, but I feel like it might be an important variable. What I do feel is it's valuable to plan for long-term if possible.
The role of academia in AI Safety.

Hey Lukas!

If the concrete problems are too watered down compared to the real thing, you also won't solve AI alignment by misleading people into thinking it's easier.

Note that even MIRI sometimes does this:

  1. We could not yet create a beneficial AI system even via brute force. Imagine you have a Jupiter-sized computer and a very simple goal: Make the universe contain as much diamond as possible. The computer has access to the internet and a number of robotic factories and laboratories, and by “diamond” we mean carbon atoms covalently bound to four other
... (read more)
7Lukas_Gloor3mo
It sounds like our views are close! I agree that this would be immensely valuable if it works. Therefore, I think it's important to try it. I suspect it likely won't succeed because it's hard to usefully simplify problems in a pre-paradigmatic field. I feel like if you can do that, maybe you've already solved the hardest part of the problem. (I think most of my intuitions about the difficulty of usefully simplifying AI alignment relate to it being a pre-paradigmatic field. However, maybe the necessity of "security mindset" for alignment also plays into it.) In my view, progress in pre-paradigmatic fields often comes from a single individual or a tight-knit group with high-bandwidth internal communication. It doesn't come from lots of people working on a list of simplified problems. (But maybe the picture I'm painting is too black-and-white. I agree that there's some use to getting inputs from a broader set of people, and occasionally people who isn't usually very creative can have a great insight, etc.) That's true. What I said sounded like a blanket dismissal of original thinking in academia, but that's not how I meant it. Basically, my picture of the situation is as follows: Few people are capable of making major breakthroughs in pre-paradigmatic fields because that requires a rare kind of creativity and originality (and probably also being a genius). There are people like that in academia, but they have their quirks and they'd mostly already be working on AI alignment if they had the relevant background. For the sort of people I'm thinking about, they are drawn to problems like AI risk or AI alignment. They likely wouldn't need things to be simplified. If they look at a simplified problem, their mind immediately jumps to all the implications of the general principle and they think through the more advanced version of the problem because that's way more interesting and way more relevant. In any case, there are a bunch of people like that in long-termist EA
The role of academia in AI Safety.

Yes, I do indeed :)

You can frame it if you want as: founders should aim to expand the range of academic opportunities, and engage more with academics.

The role of academia in AI Safety.

Hi Steven,

Possible claim 2: "We should stop giving independent researchers and nonprofits money to do AGI-x-risk-mitigating research, because academia is better." You didn't exactly say this, but sorta imply it. I disagree.

I don't agree with possible claim 2. I am just saying that we should promote academic careers more than independent research, not that we should stop giving independent researchers money. I don't think money is the issue.

Thanks

2Steven Byrnes3mo
OK, thanks for clarifying. So my proposal would be: if a person wants to do / found / fund an AGI-x-risk-mitigating research project, they should consider their background, their situation, the specific nature of the research project, etc., and decide on a case-by-case basis whether the best home for that research project is academia (e.g. CHAI) versus industry (e.g. DeepMind, Anthropic) versus nonprofits (e.g. MIRI) versus independent research. And a priori, it could be any of those. Do you agree with that?
Meditations on careers in AI Safety

Sure, acylhalide! Thanks for proposing ideas. I've done a couple of AI Safety camps and one summer internship. I think the issue is that to make progress I need to become an expert in ML as well, beyond my current understanding. That was my main motivation for this. That's perhaps the reason why I think it is beneficial to do some kind of in-person postdoc, even if I could work part of the time from home. But long-distance relationships are also costly, so that's the issue.

6acylhalide3mo
P.S. I'm sure there are also pros of having a research area spread across multiple countries, although they may or may not outweigh the cons.
6acylhalide3mo
This makes sense. I should caveat here that I'm a BTech student myself and hence have very low confidence opinions on all this (and you would be better served asking someone with more knowledge :)) but: I have to wonder how much in-person time is necessary. Obviously work of any sort distributed between multiple people progresses fastest when the people are in the same room - but maybe you can work at a slower pace remotely? Plenty of research areas are progressed by people spread across multiple countries. Do you feel in-person interaction will critically speed up your work only in early stages of getting to know the field, or also once you've advanced and are possibly working at the forefront?
The role of academia in AI Safety.

Hey Simon, thanks for answering!

We won't solve AI safety by just throwing a bunch of (ML) researchers on it.

Perhaps we don't need to buy ML researchers (although I think we should try at least), but I think it is more likely we won't solve AI Safety if we don't get more concrete problems in the first place.

AGI will (likely) be quite different from current ML systems.

I'm afraid I disagree with this. For example, if this were true, interpretability from Chris Olah or the Anthropic team would be automatically doomed; Value Learning from CHAI would al... (read more)

2Simon Skade3mo
Wow, the "quite" wasn't meant that strongly, though I agree that I should have expressed myself a bit clearer/differently. And the work of Chris Olah, etc. isn't useless anyway, but yeah AGI won't run on transformers and not a lot of what we found won't be that useful, but we still get experience in how to figure out the principles, and some principles will likely transfer. And AGI forecasting is hard, but certainly not useless/impossible, but you do have high uncertainties. Breakthroughs happen when one understands the problem deeply. I think agree with the "not when people float around vague ideas" part, though I'm not sure what you mean with that. If you mean "academia of philosophy has a problem", then I agree. If you mean "there is no way Einstein could derive special or general relativity mostly from thought experiments", then I disagree, though you do indeed be skilled to use thought experiments. I don't see any bad kind of "floating around with vague ideas" in the AI safety community, but I'm happy to hear concrete examples from you where you think academia methodology is better! (And I do btw. think that we need that Einstein-like reasoning, which is hard, but otherwise we basically have no chance of solving the problem in time.) I still don't see why academia should be better at finding solutions. It can find solutions on easy problems. That's why so many people in academia are goodharting all the time. Finding easy subproblems of which the solutions allow us to solve AI safety is (very likely) much harder than solving those subproblems. Yes, in history there were some Einsteins in academia that could even solve hard problems, but those are very rare, and getting those brilliant not-goodharting people to work on AI safety is uncontroversially good I would say. But there might be better/easier/faster options than building the academic field of AI safety to find those people and make them work on AI safety. Still, I'm not saying it's a bad idea to promo
3Lukas_Gloor3mo
If the concrete problems are too watered down compared to the real thing, you also won't solve AI alignment by misleading people into thinking it's easier. But we probably agree that insofar as some original-thinking genius reasoners can produce useful shovel-ready research questions for not-so-original-thinking academics (who may or may not be geniuses at other skills) to unbottleneck all the talent there, they should do it. The question seems to be "is it possible?" I think the best judges are the people who are already doing work that the alignment community deems valuable. If all of EA is currently thinking about AI alignment in a way that's so confused that the experts from within can't even recognize talent, then we're in trouble anyway. If EAs who have specialized on this for years are so vastly confused about it, academia will be even more confused. Independently of the above argument that we're in trouble if we can't even recognize talent, I also feel pretty convinced that we can on first-order grounds. It seems pretty obvious to me that work tests or interviews conducted by community experts do an okay job at recognizing talent. They probably don't do a perfect job, but it's still good enough. I think the biggest problem is that few people in EA have the expertise to do it well (and those people tend to be very busy), so grantmakers or career advice teams with talent scouts (such as 80,000 Hours) are bottlecked by expert time that would go into evaluations and assessments.
The role of academia in AI Safety.

I think it is easier to convince someone to work on topic X by arguing it would be very positive than by warning them that everyone could literally die if they don't. If someone comes to me with that kind of argument I will get defensive really quickly, and they will have to spend a lot of effort to convince me there is even a slight chance they are right. And even if I have the time to listen to them all the way through and give them the benefit of the doubt, I will come away with awkward feelings, not exactly the ones that make me want to put effort into their topic.

Perh

... (read more)
3Chris Leong3mo
Well, if you have a low risk preference it is possible to incrementally push things out.
Meditations on careers in AI Safety

My question is more about what the capabilities of a superintelligence would be once equipped with a quantum computer

I think it would be an AGI very capable of chemistry :-)

one might even wonder what learnable quantum circuits / neural networks would entail.

Right now they just mean lots of problems :P More concretely, there are some results that indicate that quantum NNs (or variational circuits, as they are called) are not likely to be more efficient for learning classical data than classical NNs are. Although I agree this is still a bit up in the air... (read more)

Meditations on careers in AI Safety

From your description, it seems like you might be more likely to end up in the tail of ability for quantum computing, if one of the best quantum computing startups is trying to hire you.

I think this is right.

You don't say that some of the top AI safety orgs are trying to hire you.

I was thinking of trying an academic career. So yeah, not really anyone seeking me out; it was more me trying to go to Chicago to learn from Victor Veitch and change careers.

Then you have to consider how useful quantum algorithms are to existential risk.

I think it is qu... (read more)

Meditations on careers in AI Safety

Ah, dang. And how difficult would it be to reject the startup offer, independently and remotely work on concretizing AI safety problems full-time for a couple of months to test your fit, and then, if you don't feel like this is clearly the best use of your time, (I imagine) very easily get another job offer in the quantum computing field?

The thing that worries me is working on some specific technical problem, not being able to make sufficient progress, and feeling stuck. But I think this would happen after more than 2 months, perhaps after a ... (read more)

Meditations on careers in AI Safety

Unfortunately, this is not feasible: I am finishing my Ph.D. and have to decide what I am doing next within the next couple of weeks. In any case, my impression is that posing good questions requires a couple of years of understanding the field, so that the problems are tractable, state of the art, concretely defined...

7MaxRa3mo
Ah, dang. And how difficult would it be to reject the startup offer, independently and remotely work on concretizing AI safety problems full-time for a couple of months to test your fit, and then, if you don't feel like this is clearly the best use of your time, (I imagine) very easily get another job offer in the quantum computing field? (Btw I'm still somewhat confused why AI safety research is supposed to be in much friction with working remotely at least most of the time.)
Meditations on careers in AI Safety

Have you considered doing this for a while if you think it's possibly the most important problem, i.e. for example trying to develop concrete problems that can then be raised to the fields of ML and AI?

Indeed, I think that would be a good objective for the postdoc. It's also true that I think this is the kind of thing we need to do to make progress in the field, and my intuition is that aiming for academic papers is necessary to increase quality.

3MaxRa3mo
Cool, I'd personally be very glad if you would contribute to this. Hmm, I wonder whether a plausible next step could be to work on this independently for a couple months to try how much you like doing the work. Maybe you could do this part-time while staying at your current job?
Meditations on careers in AI Safety

Thanks for making concrete bets @aogara :)

Meditations on careers in AI Safety

Thanks for your comments, Ryan :) I think I would be ok if I try and fail; of course I would much prefer to succeed, but I think I am happier knowing I'm doing the best I can than comparing myself to some unattainable standard. That being said, there is some sacrifice, as you mention, particularly in having to learn a new research area and in spending time away, both of which you understand :)

Meditations on careers in AI Safety

Thanks Chris! Not much: duration and amount of funding. But the projects I applied with were similar, so in a sense I was arguing that independent evaluations of a proposal might provide more signal about its perceived usefulness.

Meditations on careers in AI Safety

I submitted an application about using causality as a means for improved value learning and interpretability of NNs: https://www.lesswrong.com/posts/5BkEoJFEqQEWy9GcL/an-open-philanthropy-grant-proposal-causal-representation My main reason for putting forward this proposal is that I believe the models of the world humans operate with are somewhat similar to causal models, with some high-level variables that AI systems might be able to learn. So using causal models might be useful for AI Safety.

I think there are also some external reasons why it makes sense as a... (read more)

2Chris Leong3mo
What's the difference between being funded by LTFF vs. one of the other two?
Meditations on careers in AI Safety

Hey Mantas! So while I think there is a chance that photonics will play a role in future AI hardware, unfortunately, my expertise is quite far from the hardware itself. Up to now, I have been doing quantum algorithms.

The problem, though, is that I think quantum computing will not play an important role in AI development. It may seem that the quadratic speedup that quantum computing provides in a range of problems is good enough to justify using it. However, if one takes into account hardware requirements such as error correction, you will be losing s... (read more)
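To make the error-correction point concrete, here is a rough back-of-the-envelope sketch of why a quadratic speedup can be eaten by fault-tolerance overheads. The timing numbers are illustrative assumptions, not figures from the comment above.

```python
# Rough, illustrative comparison: when does a quadratic (Grover-like) speedup beat a
# classical machine once fault-tolerance overhead is included? All numbers are assumptions.

def crossover_problem_size(classical_op_time: float, logical_quantum_op_time: float) -> float:
    """Smallest problem size N at which sqrt(N) slow quantum logical ops beat N fast classical ops.

    Classical runtime:  N * classical_op_time
    Quantum runtime:    sqrt(N) * logical_quantum_op_time
    Break-even when sqrt(N) = logical_quantum_op_time / classical_op_time.
    """
    overhead = logical_quantum_op_time / classical_op_time
    return overhead ** 2

# Assumed (hypothetical) timings: a classical operation at ~1 ns, a fault-tolerant logical
# quantum operation at ~1 ms once error-correction cycles are included.
classical_ns = 1e-9
quantum_logical_s = 1e-3

n_star = crossover_problem_size(classical_ns, quantum_logical_s)
print(f"Quadratic speedup only pays off for problem sizes above ~{n_star:.0e} operations")
# -> ~1e12 under these assumptions, i.e. the classical computation must already take
#    about 1000 seconds before the quantum machine even breaks even.
```

Under these assumed numbers, only computations that already take many minutes classically would benefit from a bare quadratic speedup, which is the kind of consideration the comment points to.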

3Mantas Mazeika3mo
My question is more about what the capabilities of a superintelligence would be once equipped with a quantum computer, not whether quantum computing will play into the development of AGI. This question is important for AI safety concerns, and few people are talking about it / qualified to tackle it. Quantum algorithms seem highly relevant to this question. At the risk of revealing my total lack of expertise in quantum computing, one might even wonder what learnable quantum circuits / neural networks would entail. Idk. It just seems wide open. Some questions: * Forecasting is highly information limited. A superintelligence that can't see half the chessboard can still lose. Does quantum computing provide a differential advancement here? * Does alphafold et al render the quantum computing hopes to supercharge simulation of chemical/physical systems irrelevant? Or would a 'quantum version of alphafold' trounce the original? (again, I am no expert here) * Where will exponential speedups play a role in practical problems? Simulation? Of just quantum systems, or does it help with simulating complex systems more generally? Any case where the answer is "yes" is worth thinking about the implications of wrt AI safety.
Meditations on careers in AI Safety

Yeah, I'll try to be creative. It is not a company though, it's academia. But that gives you flexibility too, so it's good, even for doing partially remote work.

Though I am not sure I can ask her to move from Europe to America. She values stability quite a lot, and she wants to get a permanent position as a civil servant in Spain, where we are from.

What EAG sessions would you like on Global Catastrophic Risks?

Talking about X-risks - What is the best way to communicate the importance of preventing global catastrophic risks to people who have not heard of them before?

I'm not sure about the best format, perhaps a workshop. But I think this is very useful both for introducing people to EA in a way that is not off-putting, and also for decreasing the social cost of pursuing careers in this area.

What is the moral values of nations? (China, for example)

Intuitively I'd say that a lot of people make nationality part of their identity, and people value identity. And while I think nations should be more like organizations seeking the happiness of their citizens, or decision-making machines, identity often plays a large role.

I'm not sure I understand your first bullet point - obviously I'd love to see Ukraine join the EU. But now they won't anyway. What's the pathway from this war to EU membership?

I think our disagreement lies in what the likely outcome of this is. I believe you think Ukrainians will have to concede on everything. And while Putin seems determined to conquer all of Ukraine, I would say some sort of compromise will be reached before that.

1DavidZhang4mo
I don't completely disagree with your prediction about the outcome, but it seems highly likely to me that the compromise will be worse for Ukraine (and the West/world) than the concessions I outlined, e.g. a puppet government in Kyiv, the death of Ukrainian democracy. Plus, this way thousands of people died from warfare and we carry the risk of nuclear war.

It is certainly true that the human and economic losses averted by making concessions are important. However, in the long term, I'm not sure the arguments above hold. My impression from historians' accounts is that the only reason Great Britain did ok by conceding the Sudetenland to Germany was that GB was not prepared for a war with Germany. However, what was clear is that Hitler's demands would not stop there. Similarly, why would Putin stop there if those things were conceded? I'm not sure, I think he would keep demanding and imposing a long-term t... (read more)

1DavidZhang4mo
I agree that Putin would probably have an extended list of demands, many of which would not be worth meeting. The question is whether there was a compromise last week that would have been better than the war we are now seeing. I'm not sure I understand your first bullet point - obviously I'd love to see Ukraine join the EU. But now they won't anyway. What's the pathway from this war to EU membership? I agree in an ideal world Ukraine would decide on concessions, but they have now had these regions taken by force. Is this really a better outcome for Ukranians?
Announcing the Reducetarian Fellowship!

I'd say somewhere at the beginning that the objective is reducing the consumption of animal products.

1Sofia_Fogel5mo
Thanks, PabloAMC! That's a good point—I forgot to introduce the organization and our mission! I'll make an edit to attempt to make that clearer :)
I want to be replaced

I certainly want to be replaced by better AI Safety researchers (or better workers in any other important area) so that I don't have to make the personal sacrifice of working on it. I still put a lot of effort into being the best, but secretly wish there were someone better to do the job. Funny. Also, a nice excuse to celebrate rejection if you apply to an EA job.

9Toni_Hoffmann5mo
I don't think it's just a "nice excuse", I think it makes sense to celebrate if you got rejected by an EA org. The work you wanted to do to help the world is being done better than you could have done it (assuming their application system works well enough). And you don't even have to lift a finger. That is not to say that I would predict myself to react in a very positive way immediately after rejection, but it's how I'd want myself to react.
The Liberation Pledge

I see, that makes sense. Yet my belief is that these people are willing to say they would do so if it were "free", but it never is, if only because it requires effort to change your own habits. If they really wanted to, why do you think they don't do it, because of the risk of criticism from others, or something else? Notice that the idea of making it simple to eat less meat addresses what I think is the main obstacle: changing routines.

6nico stubler5mo
unfortunately, i think shockingly few people are willing to make significant personal "sacrifices" for ethical reasons (i put "sacrifice" in quotations because i don't see being vegan as a sacrifice—the important thing, however, is that others still do...). i think there are a lot of reasons that hold people back from "going" vegan... the [perceived] hassle, social cost, free-rider effect, associated identity change, etc. i think the solution is winning systemic change, i.e., policies that change the entire decision-making environment. e.g., as i argue in my forthcoming book (shameless plug), if the sale of meat was banned, all of society would go vegetarian by default (and the collective transition would make it easier for everyone). this systemic change reduces (or eliminates) the hassle, social cost, fear of free riders undermining us, and the need to change identity that often comes with going vegan. the key, in my mind, is creating the social conditions that will allow for far-reaching systemic changes to become viable, and i view the Pledge as one action (among others) that individuals can take to help.