Root out maximizers within yourself, even 'doing the most good.' Maximizer processes are like cancer: they try to convert the universe into copies of themselves, but this destroys whatever the maximizing was for.
Potentially useful for running a short workshop: the literature on the effectiveness of pedagogical techniques. The highest-quality systematic review I could find pointed to four techniques as showing robust effect sizes across many contexts and instantiations. They are
In the long run, yes. But that's overly simplistic when considering humans, because of all the things we might do, either memetically or technologically, to undermine evolutionary equilibria.
In order for hingeyness to stay uniform, robustness to x-risk would need to scale uniformly with the power needed to cause x-risk.
In the same way that an organism tries to extend the envelope of its homeostasis, an organization has a tendency to isolate itself from falsifiability in its core justifying claims. Beware those whose response to failure is to scale up.
Towards measuring the poverty costs of COVID from economic disruption: https://blogs.worldbank.org/developmenttalk/lives-or-livelihoods-global-estimates-mortality-and-poverty-costs-covid-19
Thank you for the work put into this.
I can imagine a world in which the idea of a peace summit that doesn't involve leaders taking MDMA together is seen as an 'are you even trying' type of thing.
Great points. I feel like there's a rule of thumb somewhere in here, like 'marginal dollars tend to be low-information dollars,' that feels helpful.
This portion of the PBS documentary A Century of Revolution covers the cultural revolution:
https://www.youtube.com/watch?v=PJyoX_vrlns (Around the 1 hour mark)
Recommended. One interesting bit for me: I think foreign dictators often appear clownish because the translations don't capture what they were speaking to, either literally (whether they were working from a good speechwriter) or contextually (we aren't familiar with the cultural context that animates a particular popular political reaction). I think this applies even if you nominally speak the same language as the dictator but don't share their culture.
Appreciate the care taken, especially in the atomistic section. One thing: it seems to assume that the best we can do with such a research agenda is analyze correlates, where what we really want is a causal model.
I really enjoyed this. A related thing is about a possible reason why more debate doesn't happen. I think when rationalist style thinkers debate, especially in public, it feels a bit high stakes. There is pressure to demonstrate good epistemic standards, even though no one can define a good basis set for that. This goes doubly so for anyone who feels like they have a respectable position or are well regarded. There is a lot of downside risk to them engaging in debate and little upside. I think the thing that breaks this is actually pretty simple and i...
> Costs of being vegan are in fact trivial, despite all the complaining that meat-eaters do about it. For almost everyone there is a net health benefit and the food is probably more enjoyable than the amount of enjoyment one would have derived from sticking with one's non-vegan diet, or at the very least certainly not less so. No expenditure of will-power is required once one is accustomed to the new diet. It is simply a matter of changing one's mind-set.
Appreciate some of the points, but this part seems totally disconnected from what people report along several dimensions.
Potential EA career: go into defense R&D specifically for 'stabilizing' weapons tech, i.e., doing research on things that would favor defense over offense. In 3D space, this is very hard.
Defence isn't necessarily stabilising! A working missile defence system degrades your opponent's second strike capabilities.
This is only half formed but I want to say something about a slightly different frame for evaluation, what might be termed 'reward architecture calibration.' I think that while a mapping from this frame to various preference and utility formulations is possible, I like it more than those frames because it suggests concrete areas to start looking. The basic idea is that in principle it seems likely that it will be possible to draw a clear distinction between reward architectures that are well suited to the actual sensory input they receive and rew...
Literally today I was idly speculating that it would be nice to see more things that were reminiscent of the longer letters academics in a particular field would write to each other in the days of such. More willingness to explore at length. Lo and behold this very post appears. Thanks!
WRT content, you mention it in passing, but yeah, this seems related to a tendency towards optimization of causal reality (inductive) or social reality (anti-inductive).
Panpsychism still seems like a flavor of eliminativism to me. What do we gain by saying an electron is conscious too? Novel predictions?
Seems like you're trying to get at what I've seen referred to at one point as 'multifinal means.' The keyword might help find related work.
This is sort of tangential, but related to the idea of making the distinction between inputs and outputs in running certain decision processes. I now view both consequentialist and deontological theories as examples of what I've been calling perverse monisms. A perverse monism is when there is a strong desire to collapse all the complexity in a domain into a single term. This is usually achieved...
So a conceptual slice might be: not only do generals fight the last war, but the ontology of your institutions reflects the necessities of the last war.
It has been noted that when status hierarchies diversify, creating more niches, people are happier than when status hierarchies collapse to a single or a small number of very legible dimensions. This suggests it would be possible to increase net happiness by studying the conditions under which these situations arise and tilting the playing field accordingly. E.g., are social media sites having a negative impact on mental health only because they compress the metrics by which success is measured?
Related: surely someone somewhere is doing critical path analysis of vaccine development. It certainly wouldn't be the case that in the middle of a crisis people just keep on doing what they've always done. Even if it isn't anyone's job to figure out what the actual non parallelizable causal steps are in producing a tested vaccine and trimming the fat, someone would still take it on right?
...surely...
Training children that it is a good idea to keep psychopaths as pets as long as they are cute probably results in them voting actors into positions of authority later in life.
1. Make a survey of EAs' preferences over puppies x kittens. I suppose it's correlated with cause areas and donation strategies.
2. Dogs and cats are too mainstream and fail the neglectedness test. If you're a true EA, you should consider smaller animals, such as mice, or farm animals like chickens or pigs - so teaching people to expand their moral circle.
(BTW, has anyone seriously considered measuring if meeting a farm animal pet decreases someone's willingness to eat meat, etc? I bet it does not, but...)
3. If you're hardcore, you'l...
Exploit selection effects on prediction records to influence policy.
During a crisis, people tend to implement the preferred policies of whoever seems to be accurately predicting each phase of the problem. So when a crisis looms on the horizon, EAs coordinate to each make a different prediction, maximizing the chance that one of them will appear prescient and thereby gain outsized influence.
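A toy model of the dynamic, assuming (purely for illustration) equiprobable scenarios and non-overlapping predictions:

```python
import random

def chance_one_looks_prescient(n_forecasters, n_scenarios, trials=100_000):
    """Monte Carlo estimate: if each forecaster stakes out a distinct
    scenario, how often does at least one of them appear to have called it?"""
    hits = 0
    for _ in range(trials):
        outcome = random.randrange(n_scenarios)
        # forecasters have claimed scenarios 0 .. n_forecasters-1, all different
        if outcome < min(n_forecasters, n_scenarios):
            hits += 1
    return hits / trials

# One lone forecaster vs. a coordinated group of five, ten plausible scenarios:
print(round(chance_one_looks_prescient(1, 10), 2))   # ~0.1
print(round(chance_one_looks_prescient(5, 10), 2))   # ~0.5
```

Under these toy assumptions the group's chance of containing a 'prophet' scales linearly with its size, which is the whole exploit.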
"Type errors in the middle of arguments explain many philosophical gotchas: 10 examples"
"CNS imaging: a review and research agenda" (high decision relevance for moral uncertainty about suffering in humans and non humans)
"Matching problems: a literature review"
"Entropy for intentional content: a formal model" (AI related)
"Graph traversal using negative and positive information, proof of divergent outcomes" (neuroscience relevant potentially)
"One weird trick that made my note taking 10x more useful"
A lot of people are willing to try new things right now. Rapid prototyping of online EA meetups could lead to better ability to do remote collaboration permanently. This helps cut against a key constraint in matching problems, co-location.
Ah, key = popular; I guess I can simplify my vocabulary. I'm being somewhat snarky here, but AFAICT it satisfies the criterion that significant effort has gone into debating it.
Whether or not EA has ossified in its philosophical positions and organizational ontologies.
At $50 per ton to sequester carbon, the average American would need to generate about $1000 per year of positive impact to offset their CO2 use. The idea that the numbers are even close to comparable means priors are way, way off. The signaling commons have been polluted on this front by people impact-LARPing with their short showers, lack of water at restaurants, and other absurdities.
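The back-of-the-envelope behind the $1000/year figure, assuming (roughly, for illustration) 20 tons of CO2 per average American per year:

```python
# Worked arithmetic for the offset figure above.
cost_per_ton = 50     # dollars per ton sequestered, as cited above
tons_per_year = 20    # approximate US per-capita annual CO2 emissions (assumption)
offset_cost = cost_per_ton * tons_per_year
print(offset_cost)    # 1000 dollars per year to fully offset
```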
I think that much of the disconnect comes down to focusing on goals over methods. I think it is better to think of goals as orienting us in the problem-space, while most of the benefits accrue along the way. By the time you make it a substantial fraction of the way to a goal, you'll likely be in a much better position to realize the original goal was slightly off and adjust course. So 'eliminating all infectious disease' could easily be criticized as unrealistic for endless reasons, yet it is very useful for orienting us to be scope sensitiv...
> there seems to be no way to determine what equal weights should look like, without settling on a way to normalize utility functions, e.g., by range normalization or variance normalization. I think the debate about intersubjective utility comparisons comes in at the point where you ask how to normalize utility functions.
Yup, thanks. Also across time, as well as across agents at a particular moment.
Like other links between VNM and utilitarianism, this seems to sweep intersubjective utility comparison under the rug. The agents are likely using very different methods to convert their preferences into the given numbers, rendering their aggregate non-rigorous and subject to instability in iterated games.
Note also that your question has a selection filter: you'd also want to figure out where the best arguments for longer timelines are. In an ideal world these two sets of things live in the same place; in our world this isn't always the case.
You don't, but that's a different proposition with a different set of cruxes since it is based on ex post rather than ex ante.
The chance that the full stack of individual propositions evaluates as true in the relevant direction (work on AI vs work on something else).
First, doing philosophy publicly is hard and therefore rare. It cuts against Ra-shaped incentives. Much appreciation to the efforts that went into this.
>he thinks the world is metaphorically more made of liquids than solids.
Damn, the convo ended just as it was getting to the good part. I really like this sentence and suspect that thinking like this remains a big untapped source of generating sharper cruxes between researchers. Most of our reasoning is secretly analogical with deductive and inductive reasoning back-filled to try to fit it to what our pa...
EA is well positioned for moonshot funding (though to date it has mostly attracted risk-averse donors, AFAICT). It seems like an interesting generator to ask what moonshots look like for these categories.
> stated that from 2000 to 2010, nearly 80,000 patients were involved in clinical trials based on research that was later retracted.
We can't know if this is a good or bad number without context.
Good point. Unfortunately the Economist article referenced for this number is pay-walled for me and I am not sure if it indicates the total number of clinical trial participants during that time.
Your comment got me interested, so I did some quick googling. In the US in 2009 there were 10,974 registered trials with 2.8 million participants, and in the EU the median number of patients studied for a drug to be approved was 1,708 (during the same time window). I couldn't quickly find the average length of a clinical trial.
I expect 80,000 patients would be...
The number of people working on things outside the overton window is sharply limited by being able and willing to risk being unsuccessful.
Fair point. It's mostly been in the context of telling people excited about technical problems to focus more on the technical problem and less on meta-EA and other movement concerns.
I would guess that many feel small not because of abstract philosophy but because they are in the same room as elephants whose behavior they cannot plausibly influence. Their own efforts feel small by comparison. Note that this reasoning would have cut against originally starting GiveWell, though. If EA was worth doing once (splitting away from existing efforts to figure out what is neglected in light of those existing efforts), it's worth doing again. The advice I give to aspiring do-gooders these days is to ignore EA as mostly a distraction. Getting caught up in established EA philosophy makes your decisions overly correlated with existing efforts, including the motivation effects discussed here.
Dating apps have misaligned incentives. A dating app run as a nonprofit could plausibly outcompete on the metric of successful couple formation.
IIRC Interactive Brokers isn't going to let you lever up more than about 2:1, though if you have 'separate' personal and altruistic accounts you can potentially lever your altruistic side higher. e.g. if you have 50k in personal accounts and 50k in altruistic accounts, you can get 100k in margin, allowing you to lever up the altruistic side 3:1.
Lazy people can access mild leverage (1.5:1) through NTSX for low fees. Many brokerages don't grant access to the more extreme 3:1 ETFs.
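The arithmetic in the 'separate accounts' point can be sketched as follows, assuming a broker-wide 2:1 cap on total exposure (illustrative numbers only):

```python
# Sketch of the 'separate accounts' leverage trick described above.
personal = 50_000
altruistic = 50_000
total_equity = personal + altruistic           # 100_000 combined
max_exposure = total_equity * 2                # 200_000 at the 2:1 broker cap

# Hold the personal side unlevered; direct all borrowing to the altruistic side.
personal_exposure = personal                   # 50_000 (1:1)
altruistic_exposure = max_exposure - personal_exposure   # 150_000
print(altruistic_exposure / altruistic)        # 3.0 -> effective 3:1 leverage
```

The same trick generalizes: the less you lever one sub-account, the more headroom under the account-wide cap is left for the other.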
You may be interested in this convo I had about research on pedagogical models. The tl;dw, if you just want the interventions that have replicated with large effect sizes: