Hey everyone, I am extremely excited to write my first ever post on the EA forum!

I figure, why not come out swinging and go meta right off the bat with a post asking which of my post ideas I should write about? After all, the meta, the better (said with a British accent), right?

I only joined the movement three months ago with the University of Southern California group, but have been thinking along EA lines for many years. I have collected a couple dozen post ideas as I’ve been learning about EA in most of my free time over the past few months. I’d love any feedback on which would be most interesting and useful to community members! I’d also appreciate references to work already written on these topics.

  1. If AI inevitably or almost inevitably dominates the future of the universe, then the hard problem of consciousness and how to ensure conscious, happiness-seeking AI may be the most important cause area

  2. Are intelligence, motivation, consciousness, and creativity all orthogonal, especially at the upper limit of each? If not, what does this mean for AI?

  3. An analysis of Impact Investing as an EA cause area

  4. Analysis of the identity of consciousness, i.e. is consciousness instantaneous (as in Buddhism, non-self or emptiness), continuous over a lifetime (similar to the notion of a soul, with your consciousness starting at birth and ending at death), or universal (the exact same consciousness is in every conscious being simultaneously); AND what does this mean for practical ethics and the long-term future of the universe

  5. Could the future state of a democratic, believability-weighted public goods market, essentially a futarchy system, be the system which Yudkowsky’s coherent extrapolated volition, an AI alignment mechanism, uses to have AI predict humanity’s ultimate preferences?

  6. Why I’m more concerned about human alignment than AI alignment; why rapidly accelerating technology will make terrorism an insurmountable existential threat within a relatively short timeframe

  7. Dramatic and easy IQ boost for EAs; evidence suggests creatine boosts IQ by 15 points in vegans. And the importance of vegan supplements generally

  8. Social psychology forces which may cause EAs to be hyper-focused on AI

  9. Massively scalable project-based community building idea

  10. Takeaways from EAGx Boston

  11. Why existential hope (positive longtermism) may be much more effective at reducing existential risk than trying to reduce existential risk directly

  12. Speeding up human moral development may be the most effective animal welfare intervention

  13. A series on effective entrepreneurship

  14. My approach to organizational design

  15. Marketing survey on what EA messaging has been most persuasive to community members

  16. Why I think broad longtermism is massively underrated

  17. What if we had a perpetual donor contest and entrepreneurial ecosystem rather than just a donor lottery?

  18. The joys of Blinkist (book summary app) for rapid broad learning

  19. Is there a GiveWell for longtermism? There should be.

  20. My current possible trajectories and request for feedback/career advice

  21. How I came to longtermism on my own, and what I think EA longtermism may be getting wrong

  22. Initial thoughts on creating a broad longtermism fellowship

  23. EA dating site idea and prototype

  24. Ultimate Pleasure Machine Dilemma: If you had the opportunity to press a button that turns the entire universe into a perpetual motion pleasure machine, which eternally forces the entire universe into a state of maximum happiness (however you define that), would you press it? (This one was inspired by USC EA Strad Slater)

Feel free to just comment the number or numbers you think are most promising, or to argue why you think so. I really appreciate your feedback. Thanks, everyone!

Comments
9. Massively scalable project-based community building idea

If your idea for this is good, this might be the highest-value post you could write from this list.

20 and 21 (before you get too familiar with EA thinking and possibly forget your origin story) also seem high value.

If 17 is a novel, practical idea, it's probably also worth writing about.

8 and 16 interest me.

Thanks William! This feedback is super valuable. Yes, I think the massively scalable community-building project would be novel, and it actually ties in with the donor contest as well. Glad to know this would be useful! And good thought, I think writing about my own story will be easiest as well. And I will definitely write about broad longtermism; it is one of my main areas of interest.

6. Why I’m more concerned about human alignment than AI alignment; why rapidly accelerating technology will make terrorism an insurmountable existential threat within a relatively short timeframe

I was thinking about the human alignment portion of this earlier today--how bad actors with future powerful (non-AGI) AI systems at their disposal could cause a tremendous amount of damage. I haven't thought through just how severe this damage might get and would be interested in reading your thoughts on this. What are the most significant risks from unaligned humans empowered by future technology?

Yes! I think the main threats are hard to predict, but mostly involve terrorism with advanced technology, for example weaponized black holes, intentional grey goo, super-coordinated nuclear attacks, and probably many, many other hyper-advanced technologies we can’t even conceive of yet. I think if technology continues to accelerate it could get pretty bad pretty fast, and even if we’re wrong about AI somehow, human malevolence will be a massive challenge.

Hey Jordan! Great to see another USC person here. The best writing advice I've gotten (that I have yet to implement) is to identify a theory of change for each potential piece--something to keep in mind!

6 sounds interesting, if you can make a strong case for it. Aligning humans isn't an easy task (as most parents, employers, governments, and activists know very well), so I'm curious to hear if you have tractable proposals.

7 sounds important given that a decent number of EAs are vegan, and I'm quite surprised I haven't heard of this before. 15 IQ points is a whole standard deviation, so I'd love to see the evidence for that.

8 might be interesting. I suspect most people are already aware of groupthink, but it could be good to be aware of other relevant phenomena that might not be as widely-known (if there are any).

From what I can tell, 11 proposes a somewhat major reconsideration of how we should approach improving the long-term future. If you have a good argument, I'm always in favor of more people challenging the EA community's current approach. I'm interested in 21 for the same reason.

(In my experience, the answer to 19 is no, probably because there isn't a clear, easy-to-calculate metric to use for longtermist projects in the way that GiveWell uses cost-effectiveness estimates.)

Out of all of these, I think you could whip up a draft post for 7 pretty quickly, and I'd be interested to read it!

Dang, yeah, I did a quick search on creatine and the IQ number right before writing this post, but now it’s looking like that source was not credible. I’d have to research more to see if I can find a reliable measure of creatine’s effect on cognition; it seems it at least has a significant impact on memory. Anecdotally, I noticed quite a difference when I took a number of supplements while vegan, and I know there’s some research on cognitive function and the various nutrients vegans tend to lack. Will do a short post on it sometime!

I think human alignment is incredibly difficult, but too important to ignore. I have thought about it for a very long time, so I do have some very ambitious ideas that could feasibly start small and scale up.

Yes! I have been very surprised since joining by how narrowly longtermism is focused. I think if the community is right about AGI arriving within a few decades with a fast takeoff, then broad longtermism may be less appealing, but if there is any doubt about this, then we are massively underinvested in broad longtermism and putting all our eggs in one basket, so to speak. Will definitely write more about this!

Right, it definitely wouldn’t be exactly analogous to GiveWell, but I think it is nonetheless important to have SOME way of comparing longtermist projects so we know what a good investment looks like.

Thanks again for all the feedback Aman! Really appreciate it (and everything else you do for the USC group!!) and really excited to write more on some of these topics :)
