All of RomeoStevens's Comments + Replies

You may be interested in this convo I had about research on pedagogical models. The tl;dw, if you just want the interventions that have replicated with large effect sizes:

  1. Deliberate practice
  2. Lots of low-stakes quizzing
  3. Elaboration of context (deliberately structuring things to give students the chance to connect knowledge areas themselves)
  4. Teaching the material to others (forcing organization of the material in a way helpful to the one doing the teaching, and helping them identify holes in their own understanding)
3
Michael Noetel
1y
This is a useful list of interventions, some of which are mentioned in the post (e.g., quizzes; we've summarised the meta-analyses for these here). I think steps 1, 2 and 3 from the summary of the above post are the 'teacher-focused' versions of how to promote deliberate practice (have a focus, get feedback, fix problems). The deliberate practice literature often tells learners how they should structure their own practice (e.g., how musicians should train). Teaching to others is a useful way to frame collaboration in a way that makes it safe to not know all the answers. Thanks for the nudges.

Root out maximizers within yourself. Even 'doing the most good.' Maximizer processes are cancer, trying to convert the universe into copies of themselves. But this destroys anything that the maximizing was for.

6
Coafos
2y
I agree. The motto is "doing good better," not "doing good the best."

Potentially of use in running a short workshop: the literature on the effectiveness of pedagogical techniques. From engaging with that literature, the highest-quality systematic review I could find pointed to four techniques as showing robust effect sizes across many contexts and instantiations. They are:

  1. Deliberate practice
  2. Cuing elaboration of context
  3. Regular low-stakes quizzing
  4. Teaching the material to others

Lots of markets fail to clear for a long time until coordination problems are solved.

I propose that March 26th (six months from Petrov Day in either direction) be converse Petrov Day.

8
NunoSempere
3y
Nemo day, perhaps

In the long run, yes. But that's overly simplistic when considering humans, because of all the things we might do to either memetically or technologically undermine evolutionary equilibria.

2
Milan_Griffes
3y
I don't think we are yet collectively wise enough to engage in memetic and/or tech projects that undermine evolutionary equilibria, fwiw.
4
Milan_Griffes
3y
K strategists still need to reproduce at the replacement rate or above to be viable.

And later, IIRC: "maybe not needing to hear their screams is what being the Comet King means."

In order for hingeyness to stay uniform, robustness to x-risk would need to scale uniformly with the power needed to cause x-risk.

In the same way that an organism tries to extend the envelope of its homeostasis, an organization has a tendency to isolate itself from falsifiability in its core justifying claims. Beware those whose response to failure is to scale up.

6
MichaelStJules
3y
What is this referring to?

Thank you for the work put into this.

I can imagine a world in which the idea of a peace summit that doesn't involve leaders taking MDMA together is seen as an 'are you even trying' type of thing.

Great points. I feel like there's a rule of thumb somewhere in here, like 'marginal dollars tend to be low-information dollars,' that feels helpful.

This portion of the PBS documentary A Century of Revolution covers the Cultural Revolution:

https://www.youtube.com/watch?v=PJyoX_vrlns (Around the 1 hour mark)

Recommended. One interesting bit for me is that I think foreign dictators often appear clownish because the translations don't capture what they were speaking to, either literally, in terms of them being a good speechwriter, or contextually, in terms of not really being familiar with the cultural context that animates a particular popular political reaction. I think this applies even if you nominally speak the same language as the dictator but don't share their culture.

Appreciate the care taken, especially in the atomistic section. One thing is that it seems to assume that the best we can do with such a research agenda is analyze correlates, whereas what we really want is a causal model.

I really enjoyed this. A related thought: a possible reason why more debate doesn't happen. I think when rationalist-style thinkers debate, especially in public, it feels a bit high stakes. There is pressure to demonstrate good epistemic standards, even though no one can define a good basis set for that. This goes doubly so for anyone who feels like they have a respectable position or are well regarded. There is a lot of downside risk to them engaging in debate and little upside. I think the thing that breaks this is actually pretty simple and i... (read more)

> Costs of being vegan are in fact trivial, despite all the complaining that meat-eaters do about it. For almost everyone there is a net health benefit and the food is probably more enjoyable than the amount of enjoyment one would have derived from sticking with one's non-vegan diet, or at the very least certainly not less so. No expenditure of will-power is required once one is accustomed to the new diet. It is simply a matter of changing one's mind-set.

Appreciate some of the points, but this part seems totally disconnected from what people report along several dimensions.

2
Rupert
4y
I admit I probably should check whether I can empirically substantiate it. I'm generalising from the experience of myself, my wife, and a lot of the long-term vegans that I know. But if in the broader population of people who attempt to be vegan it's not reported as true, well, okay: first I should try to find out whether that's the case, and then understand why.

Potential EA career: go into defense R&D specifically for 'stabilizing' weapons tech, i.e. doing research on things that would favor defense over offense. In 3D space, this is very hard.

Defence isn't necessarily stabilising! A working missile defence system degrades your opponent's second-strike capabilities.

This is only half formed but I want to say something about a slightly different frame for evaluation, what might be termed 'reward architecture calibration.' I think that while a mapping from this frame to various preference and utility formulations is possible, I like it more than those frames because it suggests concrete areas to start looking. The basic idea is that in principle it seems likely that it will be possible to draw a clear distinction between reward architectures that are well suited to the actual sensory input they receive and rew... (read more)

Literally today I was idly speculating that it would be nice to see more things that were reminiscent of the longer letters academics in a particular field would write to each other in the days of such. More willingness to explore at length. Lo and behold this very post appears. Thanks!

WRT content, you mention it in passing, but yeah, this seems related to a tendency toward optimization of causal reality (inductive) or social reality (anti-inductive).

3
Dawn Drescher
4y
Thank you! Yeah, that feels fitting to me too. I found these two posts on the term: https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality https://www.lesswrong.com/posts/j2mcSRxhjRyhyLJEs/what-is-social-reality

A lot of social things appear arbitrary when deep down they must be deterministic. But bridging that gap is perhaps both computationally infeasible and doesn't lend itself to particularly powerful abstractions (except for intentionality). At the same time, though, the subject is more inextricably integrated with the environment, so that it makes more sense to model the environment as falling into intentional units (agents) who are reactive.

And then maybe certain bargaining procedures emerged (because they were adaptive) that are now integrated into our psyche as customs and moral intuitions. For these bargaining procedures, I imagine, it'll be important to abstract usefully from specific situations to more general games. Then you can classify a new situation as one that either requires going through the bargaining procedure again or is a near-replication of a situation whose bargaining outcome you already have stored. That would require exactly the indexer types of abilities – abstracting from situations to archetypes and storing the archetypes. (E.g., if you sell books, there's a stored bargaining solution for that where you declare a price, and if it's right, hand over the book and get the money for it, and otherwise keep the book and don't get the money. But if you were the first to create a search engine that indexes the full-text of books, there were no stored bargaining solutions for that and you had to go through the bargaining procedures.)

It also seems to me that there are people who, when in doubt, tend more toward running through the bargaining procedure, while others instead tend more toward observing and learning established bargaining solutions very well and maybe widening their reference classes for games. I

Panpsychism still seems like a flavor of eliminativism to me. What do we gain by saying an electron is conscious too? Novel predictions?

6
MichaelStJules
4y
Relevant: * https://longtermrisk.org/the-eliminativist-approach-to-consciousness/ * https://reducing-suffering.org/is-there-suffering-in-fundamental-physics/ By saying an electron is conscious too (although I doubt an isolated electron on its own should be considered conscious, since there may be no physical process there), we may need to expand our set of moral patients considerably. It's possible an electron is conscious and doesn't experience anything like suffering, pleasure or preferences (see this post), but then we also don't (currently, AFAIK) know how to draw lines between suffering and non-suffering conscious processes.

Seems like you're trying to get at what I've once seen referred to as 'multifinal means.' The keyword might help find related stuff.

This is sort of tangential, but related to the idea of making the distinction between inputs and outputs in running certain decision processes. I now view both consequentialism and deontological theories as examples of what I've been calling perverse monisms. A perverse monism is when there is a strong desire to collapse all the complexity in a domain into a single term. This is usually achieved... (read more)

So a conceptual slice might be that not only do generals fight the last war, but the ontology of your institutions reflects the necessities of the last war.

It has been noted that when status hierarchies diversify, creating more niches, people are happier than when status hierarchies collapse to a single or a small number of very legible dimensions. This suggests that it would be possible to increase net happiness by studying the conditions under which these situations arise and tilting the playing field. E.g., are social media sites having a negative impact on mental health only because they compress the metrics by which success is measured?

Related: surely someone somewhere is doing critical-path analysis of vaccine development. It certainly wouldn't be the case that in the middle of a crisis people just keep on doing what they've always done. Even if it isn't anyone's job to figure out what the actual non-parallelizable causal steps are in producing a tested vaccine and trimming the fat, someone would still take it on, right?

...surely...

7
Davidmanheim
4y
I've actually done this, and talked to others about it. The critical path, in short, is reliable vaccine, facilities for production, and replication for production. But this has nothing to do with your announcing your candidacy for office - congratulations on deciding to run, and good luck with your campaign!

Training children that it is a good idea to keep psychopaths as pets as long as they are cute probably results in them voting actors into positions of authority later in life.

1. Make a survey of EAs' preferences over puppies x kittens. I suppose it's correlated with cause areas and donation strategies.

2. Dogs and cats are too mainstream and fail the neglectedness test. If you're a true EA, you should consider smaller animals, such as mice, or farm animals like chickens or pigs - so teaching people to expand their moral circle.

(BTW, has anyone seriously considered measuring if meeting a farm animal pet decreases someone's willingness to eat meat, etc? I bet it does not, but...)

3. If you're hardcore, you'l... (read more)

Exploit selection effects on prediction records to influence policy.

During a crisis, people tend to implement the preferred policies of whoever seems to be accurately predicting each phase of the problem. When a crisis looms on the horizon, EAs coordinate to all make different predictions, thus maximizing the chance that one of them will appear prescient and thereby obtain outsize influence.
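A toy simulation of the selection effect (all numbers hypothetical; a sketch of the mechanism, not anything anyone has actually run): with ten coordinated forecasters covering ten possible outcomes, someone is guaranteed to look prescient, whereas independent guessers overlap and sometimes all miss.

```python
import random

OUTCOMES = 10       # hypothetical number of ways the next phase could go
FORECASTERS = 10    # hypothetical number of coordinating forecasters
TRIALS = 100_000

def hit_rate(coordinate: bool) -> float:
    """Fraction of trials in which at least one forecaster is right."""
    hits = 0
    for _ in range(TRIALS):
        truth = random.randrange(OUTCOMES)
        if coordinate:
            # Each forecaster stakes out a different outcome: full coverage.
            predictions = set(range(OUTCOMES))
        else:
            # Forecasters guess independently at random, so they overlap.
            predictions = {random.randrange(OUTCOMES) for _ in range(FORECASTERS)}
        hits += truth in predictions
    return hits / TRIALS

print(hit_rate(coordinate=True))   # 1.0: someone always looks prescient
print(hit_rate(coordinate=False))  # ~0.65, i.e. 1 - (9/10)**10
```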

5
Linch
4y
This seems incredibly optimistic.
3
NunoSempere
4y
That is evil, I like it.

"Type errors in the middle of arguments explain many philosophical gotchas: 10 examples"

"CNS imaging: a review and research agenda" (high decision relevance for moral uncertainty about suffering in humans and non humans)

"Matching problems: a literature review"

"Entropy for intentional content: a formal model" (AI related)

"Graph traversal using negative and positive information, proof of divergent outcomes" (neuroscience relevant potentially)

"One weird trick that made my note taking 10x more useful"

3
EdoArad
4y
Do you mind expanding a bit on CNS Imaging, Entropy for Intentional content, and Graph Traversal?

A lot of people are willing to try new things right now. Rapid prototyping of online EA meetups could lead to better ability to do remote collaboration permanently. This helps cut against a key constraint in matching problems: co-location.

6
Peter Wildeford
4y
This is a really good point. I'd love to see more people look into this. One thing I've really liked is Zoom breakout groups... I think if done right they can actually be better than in-person meetups for coordinating discussion among >20 people. Though apparently Zoom's privacy sucks (see also).

Ah, key = popular; I guess I can simplify my vocabulary. I'm being somewhat snarky here, but afaict it satisfies the criterion that significant effort has gone into debating this.

Whether or not EA has ossified in its philosophical positions and organizational ontologies.

8
OllieBase
4y
Could you spell out what this means? I'd guess that most people (myself included) aren't familiar with ossification and organizational ontologies.

Touchscreen styluses for all those public touchscreens.

At a $50-per-ton cost to sequester, the average American would need to generate $1,000 per year of positive impact to offset their CO2 use. The idea that the numbers are even close to comparable means priors are way, way off. The signaling commons have been polluted on this front by people impact-LARPing their short showers, lack of water at restaurants, and other absurdities.
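Spelling out the arithmetic behind the $1,000 figure (the 20-ton number is what the comment's own figures imply, not a sourced statistic):

```python
cost_per_ton = 50            # USD per ton of CO2 sequestered
annual_offset_cost = 1_000   # USD per year, as cited above
implied_emissions = annual_offset_cost / cost_per_ton
print(implied_emissions)     # 20.0 tons of CO2 per person per year
```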

I think that much of the disconnect comes down to focusing on goals over methods. I think it is better to think of goals as orienting us in the problem-space, while most of the benefits accrue along the way. By the time you make it a substantial fraction of the way to a goal, you'll likely be in a much better position to realize the original goal was slightly off and adjust course. So 'eliminating all infectious disease' could easily be criticized as unrealistic for endless reasons, yet it is very useful for orienting us to be scope sensitiv... (read more)

> there seems to be no way to determine what equal weights should look like, without settling on a way to normalize utility functions, e.g., by range normalization or variance normalization. I think the debate about intersubjective utility comparisons comes in at the point where you ask how to normalize utility functions.

Yup, thanks. Also across time, as well as across agents at a particular moment.

Like other links between VNM and Utilitarianism, this seems to sweep intersubjective utility comparison under the rug. The agents are likely using very different methods to convert their preferences to the given numbers, rendering the aggregate of them non-rigorous and subject to instability in iterated games.
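A minimal sketch of the normalization problem, with made-up utility numbers: the same reported preferences pick different 'social' winners depending on whether each agent's numbers are range-normalized or variance-normalized before summing.

```python
import statistics

def range_normalize(utils):
    # Rescale one agent's utilities so min -> 0 and max -> 1.
    lo, hi = min(utils), max(utils)
    return [(u - lo) / (hi - lo) for u in utils]

def variance_normalize(utils):
    # Rescale one agent's utilities to mean 0, standard deviation 1.
    mu, sd = statistics.mean(utils), statistics.pstdev(utils)
    return [(u - mu) / sd for u in utils]

# Made-up raw utilities two agents report over options A-E,
# each on their own arbitrary scale.
agent1 = [20, 2, 0, 0, 0]
agent2 = [2, 12, 21, 10, 1]

for norm in (range_normalize, variance_normalize):
    totals = [a + b for a, b in zip(norm(agent1), norm(agent2))]
    winner = "ABCDE"[max(range(5), key=totals.__getitem__)]
    print(f"{norm.__name__}: winner = {winner}")
# range_normalize: winner = A
# variance_normalize: winner = C
```

The winner flips even though neither agent's underlying ranking changed; only the convention for putting their numbers on a common scale did.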

2
calebo
4y
I can't tell whether you are denying assumption 1 or 2.

Note also that your question has a selection filter: you'd also want to figure out where the best arguments for longer timelines are. In an ideal world these two sets of things tend to live in the same place; in our world this isn't always the case.

You don't, but that's a different proposition with a different set of cruxes since it is based on ex post rather than ex ante.

1
Eli Rose
4y
I'm saying we need to specify more than, "The chance that the full stack of individual propositions evaluates as true in the relevant direction." I'm not sure if we're disagreeing, or ... ?

The chance that the full stack of individual propositions evaluates as true in the relevant direction (work on AI vs work on something else).

1
Eli Rose
4y
Suppose you're in the future and you can tell how it all worked out. How do you know if it was right to work on AI safety or not? There are a few different operationalizations of that. For example, you could ask whether your work obviously directly saved the world, or you could ask whether, if you could go back and do it over again with what you knew now, you would still work in AI safety. The percentage would be different depending on what you mean. I suspect Gordon and Buck might have different operationalizations in mind, and I suspect that's why Buck's number seems crazy high to Gordon.

First, doing philosophy publicly is hard and therefore rare. It cuts against Ra-shaped incentives. Much appreciation to the efforts that went into this.

> he thinks the world is metaphorically more made of liquids than solids.

Damn, the convo ended just as it was getting to the good part. I really like this sentence and suspect that thinking like this remains a big untapped source of generating sharper cruxes between researchers. Most of our reasoning is secretly analogical, with deductive and inductive reasoning back-filled to try to fit it to what our pa... (read more)

EA is well positioned for moonshot funding (though to date it has mostly attracted risk-averse donors, AFAICT). It seems like an interesting generator to ask what moonshots look like for these categories.

2
BrownHairedEevee
4y
Well, for starters, I think any kind of policy work is a moonshot. Lobbying for pro-growth/globalist policies would have a small chance of boosting econ growth by a lot, which would in turn affect a lot of the other SDG targets.

> stated that from 2000 to 2010, nearly 80,000 patients were involved in clinical trials based on research that was later retracted.

We can't know if this is a good or bad number without context.

Good point. Unfortunately the Economist article referenced for this number is paywalled for me, and I am not sure whether it indicates the total number of clinical trial participants during that time.

Your comment got me interested, so I did some quick googling. In the US in 2009 there were 10,974 registered trials with 2.8 million participants, and in the EU the median number of patients studied for a drug to be approved was 1,708 (during the same time window). I couldn't quickly find the average length of a clinical trial.

I expect 80,000 patients would be... (read more)

The number of people working on things outside the Overton window is sharply limited by the need to be able and willing to risk being unsuccessful.

Fair point. It's mostly been in the context of telling people excited about technical problems to focus more on the technical problem and less on meta-EA and other movement concerns.

I would guess that many feel small not because of abstract philosophy but because they are in the same room as elephants whose behavior they cannot plausibly influence. Their own efforts feel small by comparison. Note that this reasoning would have cut against originally starting GiveWell, though. If EA was worth doing once (splitting away from existing efforts to figure out what is neglected in light of those existing efforts), it's worth doing again. The advice I give to aspiring do-gooders these days is to ignore EA as mostly a distraction. Getting caught up in established EA philosophy makes your decisions overly correlated with existing efforts, including the motivation effects discussed here.

2
EdoArad
4y
This is interesting. Do you feel that motivation is a bigger factor for you in this advice as opposed to increasing the variance of efforts for doing good as a way of doing more good? I am not sure in what contexts you give this advice, but I worry that in some cases it might be inappropriate. Say in cases where people's gut feelings and immediate intuitions are clearly guiding them in non-effective altruistic directions. I'd prefer a norm where people interested in doing the most good would initially delegate their decisions to people who have thought long and hard on this topic, and if they want to try something else they should elicit feedback from the community. At least, as long as the EA community also has a norm for being open to new ideas.

Dating apps have misaligned incentives. A dating app run as a non-profit could plausibly outcompete on the metric of successful couple formation.

5
Ozzie Gooen
4y
Just want to second this, I think it's a pretty well-known issue in the industry. Dating apps that do incredibly well at setting up people on dates will get little use (because they typically charge monthly rates, and will get to charge for fewer months if the customer is happy and leaves quickly). It's possibly a very large market failure. I could imagine a hypothetical app that was able to charge a lot more initially (like, $2,000) but did a much better job. Of course, one issue with this is that these apps need a lot of users, so this could be really difficult. I think it could be possible to figure out a solution here, but I imagine the solution may be 2/3rds payment/economic innovation. Perhaps an ideal solution would look something like a mix between personal guidance and online support.

IIRC Interactive Brokers isn't going to let you lever up more than about 2:1, though if you have 'separate' personal and altruistic accounts you can potentially lever your altruistic side higher. e.g. if you have 50k in personal accounts and 50k in altruistic accounts, you can get 100k in margin, allowing you to lever up the altruistic side 3:1.
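A sketch of that arithmetic, using the hypothetical 50k/50k numbers and assuming the ~2:1 cap applies to the combined equity:

```python
personal = 50_000     # personal account equity
altruistic = 50_000   # altruistic account equity
max_leverage = 2.0    # assumed broker-wide cap on position vs. equity

total_position = (personal + altruistic) * max_leverage  # 200k
borrowed = total_position - (personal + altruistic)      # 100k of margin

# Direct all borrowed funds to the altruistic side, leaving the
# personal side unlevered:
altruistic_position = altruistic + borrowed              # 150k
print(altruistic_position / altruistic)                  # 3.0x leverage
```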

Lazy people can access mild leverage (1.5:1) through NTSX for low fees. Many brokerages don't grant access to the more extreme 3:1 ETFs.

5
CarlShulman
4y
Interactive Brokers allows much higher leverage for accounts with portfolio margin enabled, e.g. greater than 6:1. That requires options trading permissions, in turn requiring some combination of options experience and an online (easy) test. I would be more worried about people blowing up their life savings with ill-considered extreme leverage strategies and the broader fallout of that.