I'm Renee, an architecture student doing my thesis on possible floating cities. I'm also working on starting up an EA group at my university.

I'd love to speak to anyone doing the same or with experience doing the same!




Revisiting the karma system

A suggestion that might preserve the value of giving higher karma users more voting power, while addressing some of the concerns: give users with karma the option to +1 a post instead of +2/+3, if they wish.

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

Thanks for writing this up! 

I'm not sure about the implications, but I just want to register that deciding after each roll whether to roll again, up to a total of n rolls, is not the same as committing to n rolls at the outset. The latter is equivalent in expected value to rolling every trial at the same time; the former has a much higher expected value. It is still positive, though.
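As a minimal sketch of the committed-rolls case: the per-roll survival probability p and payoff multiplier m below are placeholders, not numbers from Sasha's post, and this assumes the simple all-or-nothing payoff structure usually used in this thought experiment.

```python
# Hypothetical payoff structure: each roll multiplies your value by m
# with probability p, and destroys everything otherwise.
p = 0.9   # assumed per-roll survival probability (placeholder)
m = 10.0  # assumed per-roll payoff multiplier (placeholder)

def ev_committed(n, start=1.0):
    """Expected value of committing upfront to n all-or-nothing rolls:
    you keep the winnings only if you survive all n rolls."""
    return start * (p * m) ** n

# Whenever p * m > 1, the EV grows with n even though the
# survival probability p**n shrinks towards zero.
print(ev_committed(1), ev_committed(10))
```

The point this illustrates is just the last clause of the comment: the expected value of committing stays positive (and grows) even as the chance of surviving all n rolls goes to zero.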

Apply to attend an EA conference!

I wanted to describe my personal experience in case it shifts anyone like me towards applying. I was accepted, received travel support, and went to EAG London last month. 

Initially, I thought the likelihood that I would be accepted and be able to go was very low: I didn't think I was involved enough in EA, and I didn't think it made sense for me to receive travel support given that I live very far from London. I also didn't think that I 'deserved' to go: I reasoned that I shouldn't take a spot from someone who was more engaged in EA or who could provide more value to other attendees. I probably wouldn't have applied if not for a personal connection with someone else who applied.

Nearly every interaction I had at the conference was positive. Many people I spoke to were happy to share about their area even when I had little prior understanding, and I was surprised to find I had ideas and perspectives that were unique, or that might not have surfaced in conversation had I not been there.

As a young person, I have never felt more respected as a full person and equal with meaningful ideas to contribute. EAG is intense - it can be near-constant interaction with a lot of people, focused on the most important problems in the world. But going to EAG made me feel 'part of' EA, and gave me a lot more confidence to make decisions, try things, and reach out to people.

If you're like me and concerned about not being qualified or not having done enough, let the organisers judge, and consider the possibility that EAG might give you the ability to do more later.

The Many Faces of Effective Altruism

Thanks for writing this up!

What use cases do you envision for terms like these?

I appreciate the concern that people might feel deceived when finding out that the movement doesn't look quite like what they were expecting, but I think this might be better addressed by pointing out to new people that EA is a broad group with a variety of interests, values, and attitudes.

I'm concerned that splitting up EA according to aesthetics/subcultures might be harmful, and I think it should be handled with care. The human tendency to look for identity labels and subgroups to belong to is very strong, and subgroup identification can create insularity and group polarization, which are probably things we should avoid. It could also result in people altering beliefs in order to fit an identity framing as Lizka describes in the case of longtermism here.

Any large coalition will have variation across the group, and terms that describe subgroups can be helpful. However, while describing EA in terms of cause areas, or even terms like 'longtermist', gives me a strong idea of what a person or group might be interested in and what might be valuable to them, I'm not sure what information the aesthetic categories give me as descriptors.

There's also a lot of complexity in the connections between groups and ideas in EA, and I think this is an aspect of EA which should be encouraged and emphasized, not flattened into categories. 

A tale of 2.75 orthogonality theses

(Disclaimer: I talked to Sasha before he put up this post.) As a 'random EA person', I did find reading this clarifying.

It's not that I believed that "the orthogonality thesis is the reason why AGI safety is an important cause area", but that I had never thought about the distinction between "no known law relating intelligence and motivations" and "near-zero statistical correlation between intelligence and motivations".

If I'd otherwise been prompted to think about it, I'd probably have arrived at the former, but I think the latter was rattling around inside my system 1 because the term "orthogonality" brings to mind orthogonal vectors.

Against immortality?

I've sometimes wondered whether 'immortality' is the right framing, at least for the current moment. As AllAmericanBreakfast points out, I think anti-ageing research is unlikely to produce life extensions in the 100x to 1000x range all at once.

In any case, even if we manage to halt ageing entirely, ceteris paribus there will still be deaths from accidents and other causes. A while ago I tried a Fermi calculation on this; I think I used this data (United States, 2017). The death rate for people aged 15-34 is ~0.1%/year; at that constant rate of death, the median lifespan would be ~700 years (using X ~ Exp(0.001)).
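The Fermi calculation can be checked in a couple of lines. The 0.001/year rate comes from the comment; the exponential model is the stated assumption that the death rate stays flat with age.

```python
import math

rate = 0.001  # ~0.1%/year all-cause death rate for ages 15-34 (from the comment)

# With a constant hazard, lifespan is exponentially distributed: X ~ Exp(rate).
# The median of an exponential distribution is ln(2) / rate.
median_years = math.log(2) / rate

# The mean of an exponential distribution is 1 / rate.
mean_years = 1 / rate

print(round(median_years), round(mean_years))  # roughly 693 and 1000
```

So the ~700-year median in the comment is ln(2)/0.001 ≈ 693 years, with a mean of 1000 years under the same model.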

This is probably an underestimate of lifespan: accidental deaths should decrease (through safety improvements, of which self-driving cars might be significant given how many people die in road accidents), curing ageing might have positive effects on younger people as well, other healthcare improvements should occur, and people might be more careful if they're theoretically immortal(?). However, I think this framing poses a slightly different question:

Do we prefer that more people:

  • Live shorter lives and die of heart disease/cancer/respiratory disease*, or
  • Live (possibly much) longer lives and die of accidents/suicide/homicide

I don't know how I feel about these. In the theoretical case of going immediately from the current state to immortality, I'd be worried about Chesterton's-fence-y bad results - not that someone put ageing into place deliberately, but I'd expect surprising and possibly unpleasant side effects of changing something so significant**.

*I inferred from the data linked above that heart disease and cancer are somewhat ageing-related, though I'm not sure this is true.

**The existence of the immortal jellyfish Turritopsis dohrnii implies that a form of immortality was evolvable, which in turn might imply there's some reason evolution didn't favour more immortal organisms, or organisms that tended slightly more towards immortality.

EA can be hard: links for that

Thanks for your list and please do!
