PH

Patrick Hoang

Electrical Engineering Student @ Texas A&M University
141 karma · Pursuing an undergraduate degree · College Station, TX, USA

Bio

Howdy!

I am Patrick Hoang, a student at Texas A&M University.

How others can help me

Others can probably help me with community building at Texas A&M University.

How I can help others

I am planning to start an Effective Altruism group at Texas A&M. This is my plan:

Summer 2024: Non-Trivial Fellowship

Early Fall 2024 Q1: Find existing organizations at Texas A&M and understand Texas A&M culture/values

Late Fall 2024 Q2: Find people who might be interested in EA and start networking

Early Spring 2025 Q3: Get some people to do the EA Introductory Fellowship

Late Spring 2025 Q4: Start an MVP, such as a 6-8 week reading group. 

Summer 2025: Do final preparations for advertising the group

Fall 2025: Launch!

Comments
16

There could be another reason EAs and rationalists specifically value life a lot more. Suppose there's at least a 1% chance of AI going well and we live in a utopia and achieve immortality and can live for 1 billion years at a really high level. Then the expected value of life is 10,000,000 life-years. It could be much greater too, see Deep Utopia.
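The back-of-the-envelope arithmetic behind that figure can be sketched as follows (the probability and lifespan are the comment's own assumptions, not established estimates):

```python
# Expected value of life under a small chance of a post-AGI utopia.
# Both inputs are illustrative assumptions from the comment above.
p_utopia = 0.01                 # assumed chance AI goes well
utopia_years = 1_000_000_000    # assumed lifespan in that scenario

expected_life_years = p_utopia * utopia_years
print(expected_life_years)      # 10000000.0 life-years
```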

Anecdotally, I agree with the secularization hypothesis. Does this imply people should be more religious?

While I like it, stories often sensationalize these issues with framings like "AI by 2027, we're all gonna die" without providing good actionable steps. It feels similar to the way some environmentalists framed the climate crisis: "We're all gonna die by 2030 because of climate change! Protest in the streets!"

I know stories are very effective at communicating the urgency of AGI, and the end of the video has some resources, such as pointing viewers to 80k. Nonetheless, I feel some dread, a sense of "oh gosh, there's nothing I can do," and that is likely compounded by YouTube's younger audience (for example, college students who will graduate after 2027).


Therefore, I suggest that later videos give actionable steps or focus areas for anyone who wants to reduce risks from AI. Not only would this relieve the doomerism, it would give people relevant advice for working on AI.

While I do agree with your premise on arithmetic, the more valuable tools are arithmetic-adjacent: game theory, Bayesian reasoning, probability, expected value, decision modeling, and so on. This is closer to algebra and high school math, but still pretty accessible. See this post.

The main reason people struggle to apply arithmetic to world modeling is that transfer learning is really difficult, and EAs/rationalists are much better at it than the average person. I notice this in my EA group: engineering students who aced differential equations and random variables still struggle with Bayesian reasoning, even though they learned Bayes' theorem.
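The gap between knowing the formula and applying it shows up clearly in the classic base-rate problem, sketched below (the test accuracy and prevalence numbers are illustrative assumptions, not data from my group):

```python
# Classic base-rate problem: a 99%-accurate test for a condition
# with 1% prevalence. All numbers are illustrative assumptions.
prior = 0.01        # P(condition)
sensitivity = 0.99  # P(positive | condition)
false_pos = 0.01    # P(positive | no condition)

# Law of total probability: P(positive)
p_positive = sensitivity * prior + false_pos * (1 - prior)

# Bayes' theorem: P(condition | positive)
posterior = sensitivity * prior / p_positive
print(round(posterior, 2))  # 0.5 — far below the ~99% many people intuit
```

Students who can state Bayes' theorem often still guess ~99% here; recognizing that this word problem *is* a Bayes problem is the transfer step that fails.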

60% disagree

I feel like many of these risks could cut either way, toward annihilation or toward immortality. For example, changing fundamental physics or triggering vacuum decay could unlock infinite energy, which could lead to an infinitely prosperous (and protected) civilization.

Essentially, just as there are galactic existential risks, there are galactic existential security events. One potential idea would be extracting dark energy from space so civilization can self-replicate in the intergalactic void and keep expanding forever.

Even if the goal is communication, normalizing strongly attention-grabbing titles could lead to more clickbait-y EA content. For example, we could get: "10 Reasons Why [INSERT_PERSON] Wants to Destroy EA."

Of course, we still need some prioritization system to determine which posts are worth reading (typically via number of upvotes).

I enjoyed reading this post!

One thing I would like to add: getting a job is fundamentally a sales process. This 80k article really highlighted that for me. Sales and interpersonal communication also play a huge role in the currently neglected EA skills (management, communication, founding, generalist work). I'm currently writing a forum post on this, so hopefully I can get it out soon.

I was among the three that defected. I warned y'all!

I defected! Everyone, if you want to lose, choose DEFECT

50% ➔ 57% disagree

I think the most likely outcome is not necessarily extinction (I estimate <10% risk due to AI) but rather unfulfilled potential. This could look like humans simply losing control over the future and becoming mere spectators, with AI not being morally significant in some way.

I feel like this is too short notice for EAG conferences. Three weeks between receiving your decision and flying to the Bay Area is not a lot of time to make arrangements. Maybe that is because I am a student.
