Bio


I don't know why people keep downvoting my posts. I agree that they could be better, but I don't think my posts' karma accurately reflects their worth. I am biased, however.

My name is Wes Reisen. I’m currently figuring out career stuff/life plans. I’m relatively new to EA (I started interacting with it on July 12, 2023).

How others can help me

Sometimes, I might ask for help with research. Also, if you know of any good internship or volunteer roles, know any good job boards, or are hiring, please reach out! (Public résumé: https://www.canva.com/design/DAF2mR33vuE/4-qki-V4l87yNrkkI2oXdA/edit?utm_content=DAF2mR33vuE&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton) (One piece of vital and relevant[1] information about me is somewhat private[2] and is available upon request.)

(I'm not currently searching for work)

  1. ^

    That is, it would likely change [how/if] you would [hire me OR consider hiring me].

  2. ^

    I don't want this info on the internet publicly.

How I can help others

My email is wesreisen2@gmail.com . I respond to almost all emails, and read practically all of them, since I really don’t get many (excluding subscription emails). (A normal amount, maybe less.)

I have the general goal of doing the most good. I'm currently a consequentialist and assign numerical value to different things, similar to (maybe exactly like, depending on the definition) utilitarianism, although I could be convinced otherwise, since [knowing what "doing good" is] is secondary to the primary goal of actually doing good.

I’ve gotten much better at brainstorming.

My general strategy is to use a flowchart to model things such that every possible outcome is accounted for, and to define any given problem numerically so I can see which variables to change.

For example, here’s what my process would look like for deciding whether the US should launch nukes at North Korea at any given time (from the perspective of 🇺🇸):

E(value of launching nukes) − E(value of not launching)
= p(🇰🇵 responds by sending nukes | 🇺🇸 sent nukes) × E(value of the world if a nuclear exchange happens)
+ p(🇰🇵 doesn’t respond by sending nukes | 🇺🇸 sent nukes) × E(value of the world if 🇺🇸 sent nukes to 🇰🇵 but 🇰🇵 doesn’t respond with nukes)
− E(value of the world if nukes aren’t sent, from the perspective of 🇺🇸 at the time of deciding).

{Here I split things up into potential outcomes; the value of doing thing A instead of thing B is E(A) − E(B).}

Each term can then be decomposed further:

- E(value of the world if a nuclear exchange happens) = E(value in the five-month period after it happens) + E(value over all time after those five months) — splitting by time.
- E(value of the world if 🇺🇸 sent nukes but 🇰🇵 doesn’t respond) = E(the effect on people who were in 🇰🇵 at the time) + E(the effect on everyone who wasn’t in 🇰🇵 at the time) — splitting by people (where “people” means “anything that has inherent value in the eyes of 🇺🇸”).
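The decomposition above can be sketched as a short calculation. All probabilities and world-values below are invented placeholders purely to show the arithmetic, not estimates of anything real:

```python
# Sketch of the expected-value comparison above.
# Every number here is a made-up placeholder, not an estimate.

p_retaliation = 0.9    # p(NK responds with nukes | US launched) -- placeholder
v_exchange = -1000.0   # E(world value | full nuclear exchange) -- placeholder
v_one_sided = -200.0   # E(world value | US launches, no retaliation) -- placeholder
v_no_launch = 100.0    # E(world value | no nukes launched) -- placeholder

# E(launch) = weighted average over the two possible NK responses
ev_launch = (p_retaliation * v_exchange
             + (1 - p_retaliation) * v_one_sided)

# Value of doing A instead of B is E(A) - E(B)
net_value = ev_launch - v_no_launch
print(net_value)
```

With these placeholder numbers the net value is negative, i.e. the model says not to launch; the point of the exercise is seeing which variables (here, `v_no_launch`) one could try to change.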
 

Here, a conclusion could be to try to increase E(value of the world if nukes aren’t sent (from the perspective of 🇺🇸 at the time of deciding)) in order to prevent a nuclear exchange, since that’s pretty aligned with many moral perspectives. (The same goes if we replace 🇺🇸 & 🇰🇵 with 🇷🇺 & 🇺🇸, or 🇰🇵 & 🇺🇸, etc., in this example.)

Another similar strategy I’ve been using recently is building models of the world that account for every scenario, so that the model captures every impact my decision could have, or at least a close approximation.

In this example, one model might be that any given country has a set of choices regarding nukes, namely:

- a probability distribution over when, if ever, a nuke launches from one region of space and lands in another;
- the ability to communicate with other relevant decision-making nations, or otherwise manipulate the information they have (often in a way that better reflects reality);
- the ability to manipulate their own information;
- the ability to manipulate their own and others’ number and power of nukes;
- the ability to manipulate their own and others’ preferences about what each nation should decide; and
- the ability to manipulate their own and others’ options.

They are also allowed to randomize any of these decisions (the equivalent of having plenty of dice in the decision room). Some options are limited, so each nation only has a certain set of them: Canada can’t send one million nukes to the moon by 2023, for example, largely because 2023 has already happened.

Each nation also has a preference over which combination of [decisions that each nation makes] should happen, such that, given a choice between two such combinations, it would consistently choose one of them, irrespective of factors other than its preferences and information.
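The "consistent preference over joint decisions" idea can be sketched as a toy payoff table. The nations, option names, and utility numbers below are all invented for illustration:

```python
# Toy sketch of consistent preferences over joint decisions.
# Nations, options, and utilities are invented for illustration only.

from itertools import product

# Invented utilities over joint outcomes, indexed (US choice, NK choice):
utility = {
    "US": {("hold", "hold"): 10, ("hold", "launch"): -50,
           ("launch", "hold"): -20, ("launch", "launch"): -100},
    "NK": {("hold", "hold"): 10, ("hold", "launch"): -30,
           ("launch", "hold"): -60, ("launch", "launch"): -100},
}

def prefers(nation, outcome_a, outcome_b):
    """Consistent pairwise choice: prefer the higher-utility combination."""
    return utility[nation][outcome_a] >= utility[nation][outcome_b]

# Faced with any pair of joint outcomes, each nation can always rank them:
for a, b in product(utility["US"], repeat=2):
    assert prefers("US", a, b) or prefers("US", b, a)
```

Encoding preferences as a utility table like this is exactly what makes the game-theory toolkit (dominant strategies, equilibria) applicable, which is the point of the next paragraph.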

Note that this model is specifically tailored so that lessons from game theory can be applied here. This might be a model that, say, [an ambassador to a nation with nuclear weapons] might use, but a model tailored to a reporter might mainly include the probabilities of things that affect consumers of the 🗞️ news (e.g., peace talks, nukes used, jobs created by any given program, and how any given action changes how each scenario affects them; for instance, if it were announced that 🇺🇸 went through with the “Star Wars” idea so that 🇺🇸 wouldn’t get hit if a nuke were sent its way, that would affect readers and would be a major news story).

A doctor might use the model of “everything that affects a certain part of the body has an impact; going through the list of parts of the body, we can develop a list of ways any given issue affects each facet of the body.”

The doctor can then go through the list of body parts and imagine what might cause each one to stop functioning normally, thus anticipating many of the potential negative health impacts that might exist.


I’m also pretty good with numbers (I feel comfortable thinking about e^(1/0)), so if you have any questions about math, I’m happy to help!

I can also help with work and productivity tips (e.g., to stay awake, do something active: pause your all-nighter, play one round of Call of Duty, and then get back to work. It’ll keep you up and productive all night long.)

I can also provide research assistance, summarize information, check things for errors (Not very well though😅), and more. Feel free to ask if I can help with _______, and hopefully/probably, I’ll say yes!

I also have ChatGPT plus, in case you want to ask it a question.

I also have connections with Kim Lewis, a leading microbiologist focused on antibiotic💊 discovery, as well as someone connected to 🇺🇸 congressman David Scott's campaigning (presumably his campaign manager; I'm not sure, for reasons I'd explain if you want to know, but I'd rather not put them here), as well as a delegate to the 2024 Democratic National Convention (DNC) 🇺🇸, and I know a guy who knows an aide to the Kamala Harris 2024 POTUS campaign. (Although I could totally change my political views if given information I didn't otherwise have, which is often the case; e.g., if Trump has a really good foreign policy and I learn about it, I will change my mind.) If you'd gain from any of that, feel free to reach out, and I'll see how I can help.

I’m also quite passionate about ageism/the rights of youth (though 8️⃣0️⃣0️⃣0️⃣0️⃣ Hours points out that there are more pressing and tractable matters), so if you have any questions about that, I can help with that too.

Schedule a meeting with me here:
calendly.com/wesreisen2/30min 📆

Comments

Another way is to try to make human error guide someone in a direction similar to logical decisions. For example, there is a major taboo against drug use in many areas, which plausibly decreases unnecessary drug use.

More generally, a common strategy is to limit how much human error changes someone’s decisions, on average.

A common strategy used to limit the effects of human error is to better account for it in models and whatnot, often by coming up with a value system that would make sense for any given set of decisions where some of them are due to human error. For example, in economics, one might say that a person ascribes additional inherent value to things that are on sale.
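The economics example can be made concrete with a toy value function: instead of treating sale-driven purchases as errors, the model folds the bias in as an extra value term. The function name and numbers are invented for illustration:

```python
# Toy version of the example above: model the "human error" of overvaluing
# sale items as extra inherent value. All numbers are invented.

def modeled_value(base_value, on_sale, sale_bonus=2.0):
    """Value a person acts as if an item has; sale items get a bonus term."""
    return base_value + (sale_bonus if on_sale else 0.0)

# Under this model, a $5-valued item on sale beats a $6-valued item at
# full price, matching the observed (seemingly irrational) choice:
assert modeled_value(5.0, on_sale=True) > modeled_value(6.0, on_sale=False)
```

The design choice is that the model now predicts the biased behavior directly, rather than predicting the "rational" choice and treating deviations as noise.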

A world leader’s goals are probably adjustable one way or another. Where a world leader is committed to values that depend on something else (e.g., whatever is seen as “patriotic”, or whatever their religion says (this only applies to some religions)), changing those things changes their values. That might be very difficult for some value systems, but luckily there are plenty of good logical arguments against [a commitment to the values of something that can easily change] (https://youtu.be/wRHBwxC8b8I), which could be a better strategy for changing someone’s mind when their commitment itself is difficult to change directly.

If anyone here knows any info that can help with this (e.g., Does any world leader have a commitment to their current values instead of their overall values?), please let me know in a comment, email, etc.

I imagine this would be implemented in a similar fashion to other UN programs when they started, but before that, we should work out the key things that would change how, or whether, the program should happen.

(Joke) This post lacks an epistemic status, and (not a joke) I’d say I don’t know which of these two positions I belong in. I haven’t had much success relative to others who are definitely EAs, and I’m pretty naive. People in effective altruism have actually, unironically, told me that I should sort of step aside until later on, when I can contribute better, but I have so many ideas, and I imagine the worst-case scenario of me posting them in the EA Anywhere Slack channel is just people getting a bit annoyed at a bad idea. Also, some of my stuff has been at least a little good, and that has been recognized. Do you have any thoughts?

Quick note (taken while I am tired, so medium “parse-ability”): this program should be able to adjust to new ideas such that [an idea for how this program can be improved] can be implemented as soon as possible, perhaps without having to hold an event. This is tricky for some ideas (e.g., how the event could be more fun). It would cause ideas to be implemented sooner, and there’d also be less of a cost to starting the program sooner, since you wouldn’t be “missing” the most important ideas. One idea that MIGHT satisfy this: part of the UN’s normal chat space (Slack, Discord, or whatever they use, if anything) could include a philosophy section on what philosophy to go by and why, so the discussion can continue 24/7 and ideas for improvement can get implemented the next day (or sooner).

Thanks! I’ll give it a read (or, more realistically, a listen if there’s an audiobook version.)
