Yes, I agree.
The OP seems to talk about cause-agnosticism (uncertainty about which cause is most pressing) or cause-divergence (focusing on many causes).
A groundbreaking paper by Aidan Toner-Rodgers at MIT recently found that materials scientists assisted by AI systems "discover 44% more materials, resulting in a 39% increase in patent filings and a 17% rise in downstream product innovation."
MIT just put up a notice that they've "conducted an internal, confidential review and concluded that the paper should be withdrawn from public discourse".
Right. I think it could be useful to be quite careful about which terms to use, since, e.g., some people who might actually be fine with some level of monitoring and oversight would be more sceptical of it if it's described as "soft nationalisation".
You could search the literature (e.g. on other industries) for existing terminology.
Part of our linguistic struggle here is that we're attempting to map the entire spectrum of government involvement and slap an overarching label on it.
One approach could be to use terminology that's explicit about there being a spectrum. ...
Some of the listed policy levers seem in themselves insufficient for the government's policy to qualify as soft nationalization. For instance, that seems true of government contracts and some forms of government oversight. You might consider coming up with another term to describe policies that are towards the lower end of government intervention.
In general, you focus on the contrast between soft and total nationalization, but I think it could also be useful to make contrasts with lower levels of government intervention. In my view, there's a lot of ground...
I don't think one can infer that without having the whole distribution across different countries. It may just be that small countries have greater variance. (Though I don't know what principle the author used for excluding certain countries.)
I agree with that.
Also, notice that the top countries are pretty small. That may be because random factors/shocks are more likely to push the average up or down in small countries. Cf:
...Kahneman begins the chapter with an example of data interpretation using cases of kidney cancer. The lowest rates of kidney cancer are in counties that are rural and vote Republican. All sorts of theories jump to mind based on that data. However, a few paragraphs later Kahneman notes that the data also shows that the counties with the highest rates of kidney cancer
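To illustrate the statistical point, here's a minimal sketch of my own (with hypothetical numbers, not data from the post): if every country or county has the same underlying rate, the smallest ones will still dominate both the top and the bottom of the ranking purely through sampling noise.

```python
import random

random.seed(0)

TRUE_RATE = 0.1  # hypothetical: the same underlying rate everywhere

# Simulate 1,000 "countries" of small, medium, and large population.
units = []
for _ in range(1000):
    population = random.choice([50, 500, 5000])
    cases = sum(random.random() < TRUE_RATE for _ in range(population))
    units.append((cases / population, population))

# Rank by observed rate and look at both extremes of the ranking.
units.sort()
bottom, top = units[:20], units[-20:]

def mean_pop(xs):
    return sum(p for _, p in xs) / len(xs)

print(f"Mean population of the 20 lowest observed rates:  {mean_pop(bottom):.0f}")
print(f"Mean population of the 20 highest observed rates: {mean_pop(top):.0f}")
# Both extremes are dominated by the smallest units, even though the true
# rate is identical everywhere -- the "law of small numbers" point above.
```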
@Lucius Caviola and I discuss such issues in Chapter 9 of our recent book. If I understand your argument correctly I think our suggested solution (splitting donations between a highly effective charity and the originally preferred "favourite" charity) amounts to what you call a barbell strategy.
I was going to make a point about a ‘lack of EA leadership’ turning up, apart from Zach Robinson, but when I double-checked the event attendee list, I think I was just wrong about this. Sure, a couple of big names didn’t turn up, and it may depend on which list of ‘EA leaders’ you’re using as a reference, but I want to admit I was directionally wrong here.
Fwiw I think there was such a tendency.
There's already a thread on this afaict.
Thanks, this is great. You could consider publishing it as a regular post (either after or without further modification).
I think it's an important take since many in EA/AI risk circles have expected governments to be less involved:
https://twitter.com/StefanFSchubert/status/1719102746815508796?t=fTtL_f-FvHpiB6XbjUpu4w&s=19
It would be good to see more discussion on this crucial question.
The main thing you could consider adding is more detail; e.g. maybe step-by-step analyses of how governments might get involved. For instance, this is a good question tha...
I can try, though I haven't pinned down the core cruxes behind my default story and others' stories. I think the basic idea is that AI risk and AI capabilities are both really big deals. Arguably the biggest deals around by a wide variety of values. If the standard x-risk story is broadly true (and attention is maintained, experts continue to call it an extinction risk, etc), this isn't difficult for nation-state actors to recognize over time. And states are usually fairly good at recognizing power and threats, so it's hard to imagine they'd just sit at th...
The reasoning is that knowledgeable people's belief in a certain view is evidence for that view.
This is a type of reasoning people use a lot in many different contexts. I think it's a valid and important type of reasoning (even though specific instances of it can of course be mistaken).
Some references:
https://plato.stanford.edu/entries/disagreement/#EquaWeigView
https://www.routledge.com/Why-Its-OK-Not-to-Think-for-Yourself/Matheson/p/book/9781032438252
https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty
Thoughtful post.
If you're perceived as prioritising one EA cause over another, you might get pushback (whether for good reason or not). I think that's more true for some of these suggestions than for others. E.g. I think having some cause-specific groups might be seen as less controversial than having varying ticket prices for the same event depending on the cause area.
I’m struck by how often two theoretical mistakes manage to (mostly) cancel each other out.
If that's so, one might wonder why that happens.
In these cases, it seems that there are three questions; e.g.:
1) Is consequentialism correct?
2) Does consequentialism entail Machiavellianism?
3) Ought we to be Machiavellian?
You claim that people get the answers to the first two questions wrong, but the answer to the third question right, since the two mistakes cancel each other out. In effect, two incorrect premises lead to a correct conclusion.
It's possible that i...
How much of this is lost by compressing to something like: virtue ethics is an effective consequentialist heuristic?
It doesn't just say that virtue ethics is an effective consequentialist heuristic (if it says that), but also has a specific theory about the importance of altruism (a virtue) and how to cultivate it.
There's not been a lot of systematic discussion on which specific virtues consequentialists or effective altruists should cultivate. I'd like to see more of it.
@Lucius Caviola and I have written a paper where we put forward a specific theory of wh...
Another factor is that recruitment to the EA community may be more difficult if it's perceived as very demanding.
I'm also not convinced by the costly-signalling arguments discussed in the post. (This is from a series of posts on this topic.)
I think this discussion is a bit too abstract. It would be helpful to have concrete examples of non-academic EA research that you think should have been published in academic outlets. It would also help if you gave some details of what changes the authors would need to make to get their research past peer reviewers.
Some evidence that people tend to underuse social information, suggesting they're not by default epistemically modest:
...
Social information is immensely valuable. Yet we waste it. The information we get from observing other humans and from communicating with them is a cheap and reliable informational resource. It is considered the backbone of human cultural evolution. Theories and models focused on the evolution of social learning show the great adaptive benefits of evolving cognitive tools to process it. In spite of this, human adults in the experimental literature...
You may want to have a look at the list of topics. Some of the terms above are listed there; e.g. Bayesian epistemology, counterfactual reasoning, and the unilateralist's curse.
On this theme: @Lucius Caviola and myself have written a paper on virtues for real-world utilitarians. See also Lucius's talk Against naive effective altruism.
I gave an argument for why I don't think the cry-wolf effects would be as large as one might think in World A. Afaict your comment doesn't engage with my argument.
I'm not sure what you're trying to say with your comment about World B. If we manage to permanently solve the risks relating to AI, then we've solved the problem. Whether some people will then be accused of having cried wolf seems far less important by comparison.
I also guess cry-wolf effects won't be as large as one might think; e.g. I think people will look more at how strong AI systems appear at a given point than at whether people have previously warned about AI risk.
Yeah, I was going to post that tweet. I'd also like to mention my related thread that if you have a history of crying wolf, then when wolves do start to appear, you’ll likely be turned to as a wolf expert.
Thanks, very interesting.
Regarding the political views, there are two graphs showing different numbers. Does the first include people who didn't respond to the political views question, whereas the second excludes them? If so, it might be good to clarify that. You might also clarify that the first graph's numbers don't sum to 100%. Alternatively, you could just present the data that excludes non-responses, since that's in my view the more interesting data.
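For the alternative presentation, a small sketch with made-up shares (not the survey's actual numbers) of what excluding non-responses involves: the remaining shares get rescaled so they sum to 100%.

```python
# Hypothetical shares, for illustration only: with non-responses included,
# the political-view shares don't sum to 100%.
raw = {"left": 40.0, "centre": 25.0, "right": 10.0, "no response": 25.0}

# Drop non-responses, then rescale the rest so the shares sum to 100%.
respondents = {k: v for k, v in raw.items() if k != "no response"}
total = sum(respondents.values())  # 75.0 here, hence the missing 25 points
normalised = {k: round(100 * v / total, 1) for k, v in respondents.items()}
print(normalised)  # {'left': 53.3, 'centre': 33.3, 'right': 13.3}
```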
Yes, I think that, e.g., his being interviewed by 80K didn't make much of a difference. I think that EA's reputation would inevitably be tied to his to an extent, given how much money they donated and the context in which that occurred. People often overrate how much you can influence perceptions by framing things differently.
"Co-writing with Julia would be better, but I suspect it wouldn't go well. While we do have compatible views, we have very different writing styles, and I understand taking on projects like this is often hard on relationships."
Perhaps there are ways of addressing this. For instance, you could write separate chapters, or parts; or have some kind of dialogue between the two of you. The idea would be that each person owns part of the book. I'm unsure about the details, but maybe you could find a solution.
Yes, this was my thought as well. I'd love a book from you, Jeff, but would really (!!) love one from both of you (+ mini-chapters from the kids?).
I don't know the details of your current work, but it seems worth writing one chapter as a trial run, and if you think it's going well (and maybe it gets good feedback), considering taking 6 months or so off.
Informed speculation might ... confuse people, since there's already plenty of work people call "AI forecasting" that looks similar to what I'm talking about.
Yes, I think using the term "forecasting" for what you do is established usage - it's effectively a technical term. Calling it "informed speculation about AI" in the title would not be helpful, in my view.
Great post, btw.
This is a summary of Temporal Distance Reduces Ingroup Favoritism by Stefan Schubert, @Lucius Caviola, Julian Savulescu, and Nadira S. Faber.
Most people are morally partial. When deciding whose lives to improve, they prioritise their ingroup – their compatriots or their local community – over distant strangers. And they are also partial with respect to time: they prioritise currently living people over people who will live in the future. This is well known from psychological research.
But what has received less attention is how these psychological dimensions...