All of jkmh's Comments + Replies

Is working on AI safety as dangerous as ignoring it?

This answer clarified in my mind what I was poorly trying to grasp at with my analogy. Thank you. I think the answer to my original question is a certain "no" at this point.

jkmh's Shortform

I sometimes get frustrated when I hear someone trying to "read between the lines" of everything another person says or does. I get even more frustrated if I'm the one involved in this type of situation. It seems that non-rhetorical exploratory questions (e.g. "what problem is solved by person X doing action Y?") are often taken as rhetorical and accusatory (e.g. "person X is not solving a problem by doing action Y.")

I suppose a lot of it comes down to presentation and communication skills. If you communicate very well, people won't try as hard to read betw...

How do you decide how much to donate in “user fees” for free services?
Answer by jkmh · Aug 06, 2021 · 1 point

I occasionally donate to user-funded services, but it is very ad hoc and not a lot of thought goes into deciding which ones. I think I donated to Wikipedia a few years ago, and I donated to a local public radio station once. It usually happens after I use a service for a while and suddenly think "hmm, I want the people who make that to know I appreciate their service."

I don't think it's ever been anything more than $20. And again, no rigorous decision-making process; something about $20 just seems right as an "appreciation donation." The dollar amount might go higher if I ever encountered a user-funded service that I believed needed more from me to stay afloat.

Cameron_Meyer_Shorb · 2mo · 1 point: Thanks so much for sharing your perspective! That's basically what I've been doing so far. But I've started feeling the urge often enough that each appreciation donation makes me worried about my overall approach to appreciation donations, which seriously distracts from the warm fuzzies I was trying to buy in the first place.
How do you decide how much to donate in “user fees” for free services?

To add on to your thoughts about argument 2: even if taking breaks with podcaster X is crucial to your personal productivity, you should still ask yourself whether podcaster X needs your money to continue podcasting. And even if you decide they don't need your money to continue, but you really want those fuzzies from donating to them, then remember to purchase fuzzies and utilons separately.

Nathan_Barnard's Shortform

What do you mean by correct?

When you say "this generalizes to other forms of consequentialism that don't have a utility function baked in", what does "this" refer to? Is it the statement: "there may be no utility function that accurately describes the true value of things" ?

Do the "forms of consequentialism that don't have a utility function baked in" ever intend to have a fully accurate utility function?

What should CEEALAR be called?
Answer by jkmh · Jun 15, 2021 · 6 points

I imagine most people reading your question don't want to list out a bunch of bad ideas. But I think that might be what's needed at this point, because the more names we enumerate (and eliminate), the clearer it becomes whether or not CEEALAR is actually the best option. Or maybe seeing a bunch of bad ideas will spark a good one in someone's mind. Here:

Centre for Effective Learning, Centre for Learning Good, Blackpool Pastures, Hotel Hedon, Hotel for Effective Research, Maxgood Productions, Maxwell Hotel, EA Retreat House.

Yeah this is difficult.

Arepo · 6mo · 2 points: I would encourage people to list 'bad' ideas if that's what they're thinking of. It's good brainstorming strategy to not self-censor, and we ended up with the current alphabet soup because of a lack of inspiration last time!
evelynciara · 6mo · 2 points: I like "EA Retreat House"
Where I can practice judging opportunities to lend or donate or invest, for $25 per decision?
Answer by jkmh · Jun 13, 2021 · 5 points

I don't have much to add, other than letting you know you're not alone in looking for this. I started doing a similar thing a few weeks ago. A couple more "sandboxes" you could add are GlobalGiving and Kickstarter. It's a bit difficult to find the projects that an EA might be looking for on Kickstarter, but the "evaluation exercise" that you're trying to do could be good practice even there (e.g. trying to determine the potential positive impact of this app that aims to build a habit of not touching your face).

Cienna · 6mo · 1 point: Thank you - exploring GlobalGiving now. I'd love to hear about your experience practicing, if you're willing to share!

It's possible that OPS could be useful to EA, but as stated in the post, the validity is not established. It's hard for me to see how OPS has more predictive ability for mental illness (and subsequent treatment) than any other model of personality. The key feature that makes OPS unique seems to be that it tracks changing personality throughout the day - but what is it about that feature that makes you believe that it could be a better model with more predictive power? Just more granularity?

What are the key first steps that an EA could take? Are y...

Archer · 1y · 3 points: Hi jkmh, thanks for all your questions; it gives me the opportunity to lay out my thinking without having to put it in the structure of an essay. I hope you'll forgive me for answering in bullet points rather than prose.

* Awareness: Firstly, I just want to make sure the EA community is aware that the OPS exists (in case I drop dead or something).
* Feedback: Secondly, I'm hoping to get some feedback to see if others share my opinion that the OPS shows enough promise to suggest the model may have utility above and beyond existing systems, particularly the Five Factor Model (based on their own subjective impressions, as that's all we really have to go on at this point).
* Build a team: If the feedback is positive, a next step could be to set up a research team to conduct the necessary research to test the validity of the model. If I can build that team with non-EA people to avoid wasting EA career hours, that would probably be ideal.
* Funding: Likewise, it would be a last resort to look to the EA community for any funding that would be needed.
* Contacts: Contacts would be useful. I lack any connections with researchers or institutions in the field. I can obviously try to contact these people independently, but if there is a mutual contact within the EA community, that would probably aid my cause. But again, at this stage, I am just looking for people's thoughts on the model.
* Learn the system: Learn how the system works. One might find the system useful in their daily life. Of course, in using the model, one might be making some assumptions about the model's validity. Fair warning: it can take some time before it clicks. There is a limited amount of material to learn from. Though it is something that you can, to some extent, learn passively: once you understand the basic components of the system, you can cross-reference them with observations of yourself and others as you go about daily life. I...
Objections to Value-Alignment between Effective Altruists

This is an interesting perspective. It makes me wonder if/how there could be decently defined sub-groups that EAs can easily identify, e.g. "long-termists interested in things like AI" vs. "short-termists who place significantly more weight on current living things" - OR - "human-centered" vs. "those who place significant weight on non-human lives."

Like within Christianity, specific values/interpretations can/should be diverse, which leads to sub-groups. But there is sort of a "meta-value" that all sub-grou...

Studies on behavior of people receiving help: Shame & Reciprocity
Answer by jkmh · Jun 25, 2020 · 2 points

Reading and following the reference links in the Wikipedia article on "Reciprocity" might be a good start.

I had trouble finding much else Googling things like "science of guilt".

Are you wondering if the possible negative effects of shame/guilt could cause more harm than help in certain scenarios?

I also wonder if help coming from "institutions" helps lower any feeling of guilt for recipients, because it's less personal? Receiving help from "Organization X"...

Moritz9 · 1y · 1 point: Yeah, I couldn't find much either. "Are you wondering if the possible negative effects of shame/guilt could cause more harm than help in certain scenarios?" Not necessarily more harm, but measured over a lifetime, perhaps. In the personal development of individuals, feelings of unworthiness, guilt, or shame play huge roles and lead to unfulfilled lives and less self-initiative. Receiving help from organizations definitely seems easier to accept, I agree. However, organizations also create problems like dependency or entitlement, which reduce self-sufficient behavior.
Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering?

I like how you've defined the terms and created sort of a scale. However, the difference between pain and suffering is somewhat unclear to me. Is it that suffering is awareness of pain (which maybe makes it even more painful)? Or is the scale really just pain, expected pain, and unexpected pain?

Though I originally agreed that unexpected suffering is the worst of the 4 (or 3), I ran across a study that found pain was worse when expected.

Of course, it might b...

EA is risk-constrained

What academic disciplines are being developed to make the career-switch less risky? I'm also interested in how insurance/pension funds could even begin to be developed.

Brendon_Wong · 1y · 9 points: I think this could be set up by launching a 501(c)(3) as a donor-advised fund and fiscal sponsor, and then setting up funds inside the entity that support specific purposes. For example, a fund that pays UBI for people working on high-impact entrepreneurship. I welcome anyone to get in touch with me if they're interested in collaborating on and/or funding such a proposal (estimated setup cost of the entity and necessary legal work: $15,000–$25,000). Edit: Was inspired to write an EA Forum post on this!
EdoArad · 1y · 2 points: I'd also like to know how someone could go about that kind of insurance/pension fund :)
EdoArad · 1y · 4 points: I was thinking of the Global Priorities Institute as the clearest example of trying to normalize longtermism and global priorities research in academia as a discipline. AI safety is also getting more mainstream over time, some of it as academic work in that field. EA tackles problems which are more neglected. Some of it is still somewhat high-status (evidence-based development is the main thing that pops to my head). So perhaps that kind of risk is almost unavoidable and can only be mitigated for the next generation (and by then, it might be less neglected).