timunderwood


Samuel Shadrach's Shortform

Assuming that some people respond to these memetic tools by reducing the number of children they have more than other people do, the next generation of the population will have an increased proportion of people who ignore these memetic tools. And then among that group, those most inclined to have large numbers of children will make up the biggest part of the following generation, and so on.
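A minimal sketch of this selection dynamic, with purely illustrative numbers (the fertility rates and starting proportion below are assumptions, not data):

```python
def share_resistant(p0, f_resistant, f_susceptible, generations):
    """Track the population share of the meme-'resistant' type over time."""
    p = p0
    for _ in range(generations):
        mean_fertility = p * f_resistant + (1 - p) * f_susceptible
        p = p * f_resistant / mean_fertility  # standard replicator update
    return p

# Hypothetical values: resistant types average 2.5 children, susceptible
# types 1.5, and resistant types start at 10% of the population.
for g in (0, 5, 10, 20):
    print(g, round(share_resistant(0.10, 2.5, 1.5, g), 3))
# The resistant share climbs toward 1, which is why the low-fertility
# pattern looks unstable under this assumption.
```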

The current pattern of culturally driven low fertility seems to me very unlikely to be stable. Note: there are people who think it can be stable, and even if I'm right that it is intrinsically unstable, there might be ways to plan out the population decline that make it stable without substantial use of harsh coercive measures.

But really, fewer people being a really, really bad thing is the core of my value structure, and promoting any sort of anti-natalism is something I'd only do if I were convinced there was no other path to get the hoped-for good things.

Samuel Shadrach's Shortform

The really big con is that people are awesome, and 1/70th of the people is way, way less awesome than the current number of people. Far, far fewer people reading fan fiction, falling in love, watching sports, creating weird contests, arguing with each other, etc. is a really, really big loss.

Assuming it could be done, and that it would be an efficient way (in utility loss/gain terms) to improve coordination, I think it would probably go far too slowly to be relevant to the current risks from rapid technological change. It seems semi-tractable, but in the long run I think the population would evolve resistance to any memetic tools used to encourage population decline.

What Makes Outreach to Progressives Hard

I feel like trying to be charitable here is missing the point.

It mostly is Moloch operating inside the brains of people who are unaware that Moloch is a thing, so in a Hansonian sense they end up adopting lots of positions that pretend to be about helping the world but are actually about jockeying for status within their peer groups.

EA people are obviously also doing this, but the community is somewhat consciously trying to create an incentive dynamic where we get good status and belonging feelings from conspicuously burning resources in ways that are designed to do the most good for people distant in either time or space.

What Makes Outreach to Progressives Hard

Possibly the solution is not to try to integrate everything you are interested in.

By analogy, both sex and cheesecake are good, but it is not troubling that for most people there isn't much overlap between the two. EA isn't trying to be a political movement; it is trying to be something else, and I don't think this is a problem.

What Makes Outreach to Progressives Hard

I think the survey is fairly strong evidence that EA has a comparative advantage in recruiting left and center-left people, and that it should lean into that.

The other side, though, is that the numbers show a substantial number of libertarians (around 8 percent), and more 'center left' respondents than 'left' respondents. There are substantial parts of SJ politics that are intensely disliked by most libertarians and by lots of 'center left' people. So while it might be fine from a recruiting and community-stability point of view not to pay much attention to right-wing ideas, maintaining the current situation, where this isn't a politicized space vis-à-vis left vs. center-left arguments, is likely essential for avoiding community breakdown.

Probably the ideal approach is some sort of market segmentation, where the people in Yale or Harvard EA communities use a different recruiting pitch and message, one that emphasizes the way EA fulfills the broader aim of attacking global oppression, inequity, and systemic issues, while people talking to Silicon Valley-inspired earn-to-give tech bros keep the current messages that seem to strongly resonate with them.

More succinctly:  Scott Alexander shouldn't change what he's saying, but a guy trying to convince Yale Law students to join up shouldn't sound exactly like Scott.

Epistemologically, this suggests we should spend more time engaging with the ideas of people who identify as being on the right, since this is very likely to be a bigger blind spot than ideas popular with people who are 'left wing'.

Want to alleviate developing world poverty? Alleviate price risk. (2018)

I feel like this would end up like microloans: interesting, inspiring, and useful for some people, but a dead end from the point of view of solving the systemic issue. The obvious question is: why doesn't this already exist? And the answer, presumably, is that it cannot be done profitably.

Still, it is the sort of thing where, if someone with the skills and resources to do so tries directly to set up specific systems like this, their efforts have a very high probability of being far more useful than anything else they could do.

When can Writing Fiction Change the World?

Thanks for the links, which definitely include things I wish I'd managed to find earlier. Also I loved the special containment procedures framing of the story objects.

I wonder if there is any information on whether very many people's minds actually are changed by The Ones Who Walk Away from Omelas. My experience of reading it was very much like what I claimed is the standard response of people exposed to fiction they already strongly disagree with: not getting convinced. I did think about it a bunch, and I realized that I have this weird non-utilitarian argument inside my head for why it is legitimate to subject someone to that sort of suffering 'for the greater good', whether or not they volunteer. But on the whole I thought the same after reading the story as before.

The EA Meta Fund is now the EA Infrastructure Fund

Okay, I suppose that's vaguely legit. They are in broadly the same space. And also the new name is definitely better.

timunderwood's Shortform

Does anyone know about research on the influence of fiction on changing elite/public behaviors and opinions?

The context of the question is that I'm a self-published novelist, and I've decided to devote the half of my time that goes to less commercial projects to writing books that might be directly useful in EA terms, probably by making certain ideas about AI more widely known. At some point I decided it would be a good idea to learn more about examples of literature actually making an important difference, beyond the examples that immediately came to mind: Uncle Tom's Cabin, Atlas Shrugged, Methods of Rationality, and the way the LGBTQ movement probably gained much of its present acceptance through fictional representation.

I've found some stuff through academia.edu searches (like this journal article describing the results of a survey of readers of climate change fiction), but it seems like there is a good chance that the community might be able to point me in useful directions that I won't quickly find on my own.

Will AGI cause mass technological unemployment?

I think the standard assumption is that for any task you can create an expert system that is cheaper to power and run than it is to feed a human. Though I was talking with someone during EAG Virtual who was worried that humans might be among the most efficient tools if feeding them is the only cost you count, in which case it would be efficient for a malevolent AI to enslave them.

I think the basic issue with the argument is that we are dealing with a case where Tiger Woods can just create a new copy of himself to mow the lawn while another copy is filming a commercial. So the question is whether building processors and then feeding them electricity to run the process is cheaper than paying a human, and the most a human could be worth paying is the amount it costs to build and run compute that replicates the human's performance.

My intuition has always been that humans are unlikely to be at the actual optimum for energy efficiency of compute, but even if we are, I highly doubt that we'd be worth much more in the long run working for the AGI than it costs to feed us.
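To make that comparison concrete, here is a minimal sketch; every number in it is a hypothetical assumption chosen only to show the structure of the argument, not an estimate:

```python
# Toy cost comparison for the 'would an AGI employ humans?' question.
# All figures are hypothetical assumptions for illustration only.

human_subsistence_cost = 10.0  # cost per day to feed/maintain a human worker
compute_build_cost = 500.0     # one-time cost to build equivalent compute
compute_run_cost = 1.0         # cost per day to power that compute
horizon_days = 365 * 10        # period over which the work is needed

# Amortize the build cost over the horizon, then compare daily costs.
machine_daily_cost = compute_run_cost + compute_build_cost / horizon_days

# The most a human could be worth is the machine's all-in cost; if even
# bare subsistence exceeds that, humans are not worth employing.
print(f"machine: {machine_daily_cost:.2f}/day, "
      f"human floor: {human_subsistence_cost:.2f}/day")
print("humans employable:", human_subsistence_cost <= machine_daily_cost)
```

Under these assumed numbers the machine wins by a wide margin, which is the intuition above in arithmetic form: humans stay employable only if subsistence costs undercut the amortized cost of equivalent compute.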

The solution to technological unemployment following AGI is to set everything up so that moving to a world in which there are no jobs is a good thing, not to try to preserve jobs by figuring out a way to compete with tools that can do literally everything better than we can.

A post-employment society, where everyone has a right to their fraction of mankind's resources.
