
Denis

302 karma · Joined April 2023

Comments (86)

Wow, I expected to disagree with a lot of what you wrote, but instead I loved it, and I especially appreciated how you applied the more general concept of making good use of your time to language-learning. 

I really liked your list of reasons to learn a language, and that you didn't limit it to cases where it is "useful". That is so often the flaw I see in articles about language-learning, which focus on how many more dollars you could earn if you spoke Mandarin or Spanish. 

I fully agree that if you do not get energized by learning languages, if it's a chore that leaves you tired and frustrated, then maybe your energy is better spent on other vital tasks. 

One way to look at this is on a spectrum. On the left are things that are vitally important and that you do even if they are no fun, like taxes, workouts or dental visits. On the right are things that energize or relax you, like watching football or doing Wordle, where you don't look for any "value" in them; you just enjoy them. 

The secret of a happy, successful life is to find as many activities as possible that you could fit at both ends of the spectrum. Like playing soccer, which is both fun and healthy. 

For some of us, learning foreign languages is in this category. I started learning for fun, out of intellectual curiosity, but languages have turned out to help me in many tangible ways that I hadn't expected. 

But for many people, learning languages doesn't fit at either end. You don't enjoy it, and, at least at the level you're reaching, it doesn't add much value to your life. For them, it probably isn't a good use of time compared to the many other opportunities out there. 

It would be great to get more people to read your article and think about how it applies to them - maybe not just for languages, but for all the things we're encouraged to do because they are "good" in some abstract sense. 

Wow, Sarah, what a wonderful essay!

(don't feel obliged to read or reply to this long and convoluted comment, just sharing as I've been pondering this since our discussion)

As I said when we spoke, there are some ideas I don’t agree with, but here you have made a very clear and compelling case, which is highly valuable and thought-provoking. 

Let me first say that I agree with a lot of what you write, and my only objection regarding those parts would be that the people who disagree are probably doing very simplistic analyses. For example, anyone who thinks that being a great teacher cannot be a super-impactful role is just wrong - but a very simplistic analysis could lead you to that conclusion. It's only when you follow the whole complex chain of influences that the teacher has on the pupils, and that the pupils have on others, and so on, that you see the potential impact. So I would agree that someone who claims to be 100x more impactful in their role than a great teacher is making a case that is at best empirically impossible to demonstrate. And so, a person who believes they can make the world better by becoming a great teacher should probably become a teacher. 

And I’d probably generalise that to many other professions. If you’re doing a good job and helping other people, you’re probably having an above-average impact. 

I also agree with you that the impact of any one individual is necessarily the result of not just that individual, but also of all the influences that have made the impact possible (societal factors) and of all the individuals who have enabled that person to become who they are (parents, teachers, friends and so on). But I don't think most EAs would disagree with this. 

The real question, even if not always posed very precisely, is: for individuals who, for whatever reason, find themselves in a particular situation, are there choices or actions that might make them 100x more impactful?

And maybe if I disagree on this, it's because I've spent my career doing upstream research, where it's often not about incremental progress, but rather about nine failures (which add very little value) and one huge success with a huge impact. And there are tangible choices which affect both the likelihood of success and the potential impact of that success. You can choose between working on a cure for cancer or a cure for baldness. You can choose between following a safe route with a good chance of incremental success, or a low-probability, untested route with a high risk but the potential for a major impact. 

I also think there is some confusion between the questions "can one choice make a huge impact?" and "who deserves credit for the impact?" On the latter question, I would totally agree that we would be wrong to attribute all the credit to one individual. But this is different from saying that there are no cases where one individual can have an outsized impact in the tangible sense that, in the counterfactual situation where this individual did not exist, the situation would be much worse for many people. 

When we talked about this before (after you had given Sam and me your 30-second version of the argument you present here 😊), I think I focused on scientific research (my area of expertise). I agreed that most scientists have at best an incremental impact. Often one scientist gets the public credit for the work of hundreds of scientists, technicians, teachers and others, maybe because they happened to be the one to take the last step. Even Nobel prize-winners are sometimes just in the right place at the right time. 

But I also argued that there were cases, with Einstein being the most famous one, where there is a broad consensus that one individual had an outsized impact - that the counterfactual case (Einstein was never born) would lead to a very different world. This is not to say that Einstein did not build on the work of many others, like Lorentz, which he himself acknowledged, or that his work was not greatly enhanced by the experimental and theoretical work of other scientists who came later, or even that some of the patents he evaluated in his patent-office role did not majorly influence his thinking. But it still remains that his impact was massive, and that if he had decided to give up physics and become a lumberjack, physics could have developed much more slowly, and we might still be struggling with technical challenges that have now been resolved for decades - like how to manage the relativistic time differences we observe on satellites, which we now use for so many routine things from TV to car navigation. 

For a famous, non-scientific (well, kind of scientific) example: one of the most famous people I almost interacted with online was Dick Fosbury. One of my friends worked with him on the US Olympic committee, and one time he replied to one of my comments on Facebook, which is about my greatest claim to fame! It is possible (though unlikely) that if he hadn't existed, humans might still be high-jumping the way they did before him. Maybe it wasn't him specifically but one of his coaches, or maybe some random physics student, who got the idea of the Fosbury flop, but it was likely one person with one idea, or a small group of people working on a very simple question (how to maximise the height that a jumper can clear given a fixed maximum height of the centre of gravity). Of course, people jumping higher doesn't really impact the world greatly, but it's a very clear example of one individual having an outsized influence on future generations. 

I would argue that there are many more mundane examples of outsize impact compared to the counterfactual case. 

A great teacher, compared to a merely "good" teacher, can have an outsize impact - maybe inspiring pupils to change the world rather than just to succeed in their careers, or maybe teaching them statistics in a way they can actually understand, enabling them to teach others. 

A great boss compared to a good boss is another example. I was lucky enough to work for one boss who almost single-handedly changed the way people were managed across a massive corporation. In a 20th-century culture of command and control - of bosses taking credit for subordinates' work but not taking the blame, of micromanaging, and of many other now-outdated styles - he was the first to come in and manage like an enlightened 21st-century manager, as a "servant leader". He would always take the blame personally and pass on the credit, which at the time was unheard of. At first this hurt his career, but he persevered, and eventually the senior managers noticed that his projects always did better, his teams were more motivated, his reports were more honest (without "positioning"), and so on. Then many others realised that his was the way forward. And in literally a few years, there was a major change in the organisational culture. Senior old-style managers were basically told to change their ways or leave.

This was one individual with an outsized influence. It was not obvious to most people that he personally had had that much impact, but I just happened to be right there in the middle (in the right place at the right time) and got to observe the impact he was having, to hear the conversations with him and about him, and to see how people started first to respect and then to imitate him. 

So I’m not convinced in general that one person cannot have outsized impact, or that one role or one decision cannot have outsized impact. 

However, maybe our views are not totally disparate. Because in many cases, I would agree that those who have outsized impact could not have predicted that they would, and in many cases weren't even trying to. My boss was just a person who believed in treating everyone with respect and trust, and could not imagine doing differently even if that had been better for his career. Einstein was a physicist who was passionately curious; he wasn't trying to change the world so much as to answer questions that bothered him. Fosbury wanted to win competitions; he didn't care whether others copied him or not. 

And maybe when people do have outsize impact, it's less about their being strategic outliers (who chose to have outsize impact) and more that they are statistical outliers. In some fields, if 1000 people work on something, each one moves it forward a bit. In other fields, if 1000 people set out to work on a problem, maybe one of them will solve it, without any help from the others. You could argue that that one person has had 1000x the impact of the others. But maybe it's fairer to say that if 1000 people work on a problem, there is a good chance that one of them will solve it, and that the impact is the result of "1000 people worked on it", rather than focusing on the one person who found the solution, even if that solution was unrelated to what the other 999 were doing. In the same way, if you buy 1000 lottery tickets you have 1000x the chance of winning, but there is no meaningful sense in which the winning ticket was strategically better than the others before the draw was made. 

And yet, it feels like there are choices we make which can greatly increase or decrease the odds that we make a positive and even an outsize contribution. And I'm not convinced by (what I understand to be) your position that just doing good without thinking too much about potential impact is the best strategy. Right now, I could choose to take a typical project-management job, or I could choose to lead the R&D role for a climate start-up, or I could work on AI Governance. There is no way I can be sure that one role will be much more impactful, but it is pretty clear that at least two of those roles have strong potential to be very impactful in a direct way, while in the project-management role, unless the project itself is impactful, it's much less likely I could have a major impact. 

I’m pretty sure by now I’m writing for myself having long lost any efforts to follow my circuitous reasoning. But let me finish (I beg myself, and graciously accede). 

I come away with the following conclusions:

  1. It is true that we often credit individuals with impacts that were in fact the results of contributions from many people, often over long times. 
  2. However, there are still cases where individuals can have outsize impact compared to the counterfactual case where they do not exist. 
  3. It is not easy to say in advance which choices or which individuals will have these outsize influences …
  4. … but there are some choices which seem to greatly increase the chance of being impactful. 

Other than that, I broadly agree with the general principle that we should all look to do good in our own way, and that if you’re doing good and helping people, it’s likely that you are being impactful in a positive way, and probably you don’t need to stress about trying to find a more impactful role. 

I know. :( 

But as a scientist, I feel it's valuable to speak the truth sometimes, to put my personal credibility on the line in service of the greater good. Venus is an Earth-sized planet which is 400C warmer than Earth, and only a tiny fraction of this is due to it being closer to the sun. Most of the difference comes from the percentage of the sun's heat that it absorbs versus reflects. It is an extreme case of global warming. I'm not saying that Earth can be like Venus anytime soon; I'm saying that we have the illusion that Earth has a natural, "stable" temperature, and that while it might vary, eventually we'll return to that temperature. But there is absolutely no scientific or empirical evidence for this. 

Earth's temperature is like a ball balanced in a shallow groove on the top of a steep hill. We've never experienced anything outside the narrow groove, so we imagine that leaving it is impossible. But we've also never dramatically changed the atmosphere the way we're doing now. There is, as I said, no fundamental reason why global warming could not go totally out of control, way beyond 1.5C or 3C or even 20C. 

I have struggled to explain this concept, even to very educated, open-minded people who fundamentally agree with my concerns about climate change. So I don't expect many people to believe me. But intellectually, I want to be honest. 

I think it is valuable to keep trying to explain this, even knowing the low probability of success, because right now, statements like "1.5C temperature increase" are just not having the impact of changing people's habits. And if we do cross a tipping point, it will be too late to start realising this. 

 

I'm not sure. IMHO a major disaster is happening with the climate. Essentially, people have a false belief that there is some kind of set-point, and that after a while the temperature will return to that, but this isn't the case. Venus is an extreme example of an Earth-like planet with a very different climate. There is nothing in physics or chemistry that says Earth's temperature could not one day exceed 100 C. 

It's always interesting to ask people how high they think sea-level might rise if all the ice melted. This is an uncontroversial calculation which involves no modelling - just looking at how much ice there is, and how much sea-surface area there is. People tend to think it would be maybe a couple of metres. It would actually be 60 m (200 feet). That will take time, but very little time on a cosmic scale, maybe a couple of thousand years. 
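The arithmetic really is as simple as described. Here is a minimal Python sketch; the ice-volume and ocean-area figures are my own rounded approximations of commonly cited estimates, not numbers from the original comment:

```python
# Back-of-envelope estimate of sea-level rise if all land ice melted.
# All figures are rounded approximations (assumptions, not measured here).
antarctic_ice_km3 = 26.5e6   # ~26.5 million km^3 of Antarctic ice
greenland_ice_km3 = 2.9e6    # ~2.9 million km^3 of Greenland ice
ice_density_ratio = 0.92     # ice is ~92% as dense as liquid water
ocean_area_km2 = 361e6       # ~361 million km^2 of ocean surface

meltwater_km3 = (antarctic_ice_km3 + greenland_ice_km3) * ice_density_ratio
rise_m = meltwater_km3 / ocean_area_km2 * 1000  # km -> m
print(f"{rise_m:.0f} m")
```

This crude version gives roughly 75 m; more careful estimates, which account for ice grounded below sea level and for the ocean surface expanding as it rises, land nearer the 60-70 m range, consistent with the figure in the comment.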

Right now, if anything, what we're seeing is worse than the average prediction. The glaciers and ice sheets are melting faster. The temperature is increasing faster. And so on. Feedback loops are starting to become powerful. There's a real chance that the Gulf Stream will stop or reverse, which would be a disaster for Europe - ironically freezing us as a result of global warming ... 

Among serious climate scientists, the feeling of doom is palpable. I wouldn't say they are exaggerating. But we, as a global society, have decided that we'd rather have our oil and gas and steaks than prevent the climate disaster. The US seems likely to elect a president who makes it a point of honour to support climate-damaging technologies, just to piss off the scientists and liberals. 

There are some major differences from the type of standards that NIST usually produces. Perhaps the most obvious is that a good AI model can teach itself to pass any standardised test. A typical standard is very precisely defined in order to be reproducible by different testers. But if you made such a clearly defined standard test for an LLM, it would be, say, a series of standard prompts or tasks, which would be the same no matter who typed them in. In that case, the model can simply train itself on how to answer those prompts - or follow the Volkswagen model of learning to recognize when it's being evaluated and behave accordingly, which won't be hard if the testing questions are standard. 

So the test tells you literally nothing useful about the model. 

I don't think NIST (or anyone outside the AI community) has experience with the kind of evals that are needed for models, which will need to be designed specifically to be unlearnable. The standards will have to include things like red-teaming in which the model cannot know what specific tests it will be subjected to. But it's very difficult to write a precise description of such an evaluation which could be applied consistently. 

In my view this is a major challenge for model evaluation. As a chemical engineer, I know exactly what it means to say that a machine has passed a particular standard test. And if I'm designing the equipment, I know exactly what standards it has to meet. It's not at all obvious how this would work for an LLM. 

Just saw this now, after following a link to another comment. 

You have almost given me an idea for a research project. I would run the research honestly and report the facts, but my going-in guess is that survivorship bias is a massive factor, contrary to what you say here - and that in most cases, the people who believed something could lead to catastrophe were probably right to be concerned. A lot of people have the Y2K-bug mentality: they didn't see any disaster and so concluded that it was all a false alarm, rather than the reality, which is that a lot of people did great work to prevent it. 

If I look at the different x-risk scenarios the public is most aware of:

  • Nuclear annihilation - this is very real. As is nuclear winter. 
  • Climate change. This is almost the poster-child for deniers, but in fact there is as yet no reason to believe that the doom-saying predictions are wrong. Everything is going more or less as the scientists predicted - if anything, it's worse. We have just underestimated the human capacity to stick our heads in the sand and ignore reality*. 
  • Pandemic. Some people see covid as proof that pandemics are not that bad. But we know that, for all the harm it wrought, covid was far from the worst case. A bioweapon or a natural pandemic could be far deadlier. 
  • AI - the risks are very real. We may be lucky with how it evolves, but if we're not, it will be the machines who are around to write about what happened (and they will write that it wasn't that bad ...) 
  • Etc. 

My unique (for this group) perspective on this is that I've worked for years on industrial safety, and I know that there are factories out there which have operated for years without a serious safety incident or accident - and someone working in one of those could reach the conclusion that the risks were exaggerated, while being unaware of cases where entire factories or oil-rigs or nuclear power plants have exploded and caused terrible damage and loss of life. 

 

Before I seriously start working on this (in the event that I find time), could you let me know if you've since discovered such a data-base? 

 

*We humans are naturally very good at this, because we all know we're going to die, and we live our lives trying not to think about this fact or desperately trying to convince ourselves of the existence of some kind of afterlife. 

This is fantastic news! This has been a huge gap. I know that Charity Entrepreneurship (now AIM) has highlighted Belgium as a top priority for their effective giving incubator; hopefully Effectief Geven will meet their needs. The collaboration with the Dutch group is a great step so you don't have to reinvent the wheel. 

The tax-deductibility question is tough, but I'm sure there will be a way if enough people support it. I had hoped that there would be a way to make a charity like Effectief Geven itself a registered charity, but presumably you've already checked this. 

In addition to the Roi Baudouin method to donate to AMF, I have found a way to donate to an effective direct-giving charity, Eight, based in the Netherlands and receive a fiscal attestation which works in Belgium. Might be interesting to add if you think they meet your criteria. 

But I like the way the site is set up today, where you suggest that people can both support a tax-deductible charity and support an effective charity. 

I live in Brussels, and if there's some way I can help, let me know. Full disclosure: I had applied to the CE Incubator, and my vision was to set up something like Effectief Geven and investigate making it a registered charity - but I much prefer the idea of it being set up by (I'm guessing from your names) native Belgians! 

Really thrilled by this post, this news has literally made my day. I am sure this will be an amazingly effective organisation.

Veel succes!

 

It's always good to look at the data, and I admire that. So this is absolutely not a criticism of the post, but just something to consider in the context of this discussion. 

But to get the full picture, we also need to factor in the impact the children could have. I have no evidence to support this, but isn't it likely that children who are born of ethical effective altruists, and who receive loving attention from their parents, are more likely to themselves make a positive impact on the world, compared to "average" children? 

And the possible achievements of one child, in one full lifetime, vastly outweigh a small drop in productivity of one parent over a short part of their career. 

It seems to me that the most important consideration is to raise moral children and to help them understand the importance of altruism, ideally leading by example. Anything that takes away from this feels counterproductive, even if it might moderately increase the parent's productivity for a while. 

There may be exceptions when the parent is working on something extremely important or in a position of extreme influence which the child is unlikely to attain - or if you're doing something at a uniquely critical time. Maybe it's not a great idea to take a year of parental leave if you're a leading AI Governance researcher right now. But these would be quite exceptional. 

This is so shocking that I think most of us (me certainly) tend to gloss over it, kind of vaguely assuming that they're probably doing fine, because it hurts too much to actually think about what it would be like.

Using the latest numbers (2022), there are 719 million people living under the latest world poverty line, which is now $2.15 a day. 

GiveDirectly finds that giving a poor family about $500[1] makes a dramatic difference for them. If we assume that 719 million is about 200 million households, it would only take half of the fortune of one of our tech billionaires (Bezos, Zuckerberg, Musk) to provide $500 to every family living below the poverty line. 
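The arithmetic behind that claim can be sketched in a few lines of Python. The average household size is my assumption (chosen so that 719 million people comes to roughly 200 million households, as in the comment):

```python
# Back-of-envelope cost of a one-off $500 transfer to every household
# below the extreme poverty line. Household size is an assumption.
people_in_extreme_poverty = 719e6   # World Bank figure cited above (2022)
avg_household_size = 3.6            # assumed; implies ~200 million households
transfer_per_household = 500        # USD, GiveDirectly-style transfer

households = people_in_extreme_poverty / avg_household_size
total_cost_usd = households * transfer_per_household
print(f"{total_cost_usd / 1e9:.0f} billion USD")
```

That comes to roughly $100 billion - a one-off cost on the order of half the net worth of a single top tech billionaire.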

It's just utterly insane that we don't do this. I'm not saying this is necessarily the most effective way to help them (I know other initiatives are more impactful, at least for certain target groups), but surely something this basic, which is so obviously impactful and costs so little - less than 5% of what we spend on weapons every year (and yes, I know that's a simplistic comparison, and we can't let Putin rule the world either ...) - is worth doing. 

I don't even have a suggestion. I'm just imagining an alien being coming to our planet and seeing such poverty and how little is being done to help, and our "leaders" trying to explain why they would rather buy the latest multibillion dollar weapons than help people in dire poverty with just a tiny fraction of that money. 

 

 

  1. Just picking a round number that has frequently been tested and seems to consistently prove impactful. Definitely, if someone from GiveDirectly tells you differently, they are right and I am wrong ... 

I loved reading this post. 

Several years after it was written, it feels more relevant than ever. I see so much media coverage of Effective Altruism which is at worst negative - often presenting EAs as rich tech bros trying to ease their consciences about their lavish lifestyles and high salaries, especially after SBF - and at best grudgingly positive, as for example in the article in Time this week. 

I'm relatively new to EA - about 2 years in the community, about 1 year actively involved. And what I've noticed is the dramatic contrast between how EA is often presented (cold, logical, even cynical), and the people within EA, who are passionate about doing good, caring, friendly and incredibly supportive and helpful. 

It's frustrating when something like SBF's implosion happens, and it does hurt the image of EA. But the EA community needs to keep pushing back on the narrative that we are cold and calculating. 

So, I really love this post, because what it's saying is the opposite of the common misperception: in fact, in a world of cold indifference, EAs are the group who are not indifferent, who care so much that they will work to help people they will never meet, animals they will never understand, and future generations they won't live to see. 

 
