Great list, Kyle! Thanks for sharing. :)
I wasn't aware of The Life You Can Save's Helping Women & Girls Fund until I read your post. It's wonderful to know something like this exists.
Hi Rakafet,
Welcome to the EA Forum!
I never knew the Abstinence Violation Effect had a name - I think that's something I'll have to add to my lexicon. :)
While reading through your post, I had a bit of trouble understanding your arguments and the evidence behind why you think this intervention is particularly important and neglected.
If I'm understanding correctly, your argument is:
More funding should be directed towards providing vegan food to soldiers, who experience difficulties maintaining a vegan diet. By providing this support, we could reduce the likelihood of soldiers falling victim to the Abstinence Violation Effect and abandoning their vegan diet altogether, which could affect ~4,000 animals over the span of a given soldier's life.
Would you say that's accurate?
I recently came across this great introductory talk from the Center for Humane Technology, discussing the less catastrophic, but still significant risks of generative large language models (LLMs). This might be a valuable resource to share with those unfamiliar with the staggering pace of AI capabilities research.
A key insight for me: Generative LLMs have the capacity to interpret an astonishing variety of languages. Whether those languages are traditional (e.g. written or spoken English) or abstract (e.g. images, electrical signals in the brain, wifi traffic, etc.) doesn't necessarily matter. What matters is that the events in that language can be quantified and measured.
While this opens up the door to numerous fascinating applications (e.g. translating animal vocalizations to human language, enabling blind individuals to see), it also raises some serious concerns regarding privacy of thought, mass surveillance, and further erosion of truth, among others.
This is fantastic! Props to the Type 3 Audio and EA Forum teams.
Quick question regarding accessibility:
I'm aware EA Forum posts can both:
Is this information included in these audio narrations?
Only accounts with at least 100 Karma on forum.effectivealtruism.org or lesswrong.com are allowed to vote.
I'm a little confused by this. What's the motivation for using a karma threshold to decide who does and doesn't get to vote?
Hi Harrison,
Love this idea - thanks for taking the time to write this.
I think idea sharing, discussion, and coordination among community organizers could be so much better than they are today. Discussion boards and messaging platforms are limited in that you have to sift through an entire discussion before you can compare and contrast the perspectives shared. These kinds of discussions also tend to be ad hoc, and the value they generate diminishes over time as fewer and fewer people end up finding and engaging with the information.
In general, I think there's a huge opportunity to leverage next-generation web technology to map arguments, cause areas, and other forms of knowledge such that it can be transferred more easily between people and organizations across space and time. This is what organizations like OpenGlobalMind are striving to do.
As Babel mentioned though, I think the biggest challenge is adoption. Community organizers tend to operate in different timezones and contexts. I think you would need at least a couple of high-profile people in the community on board in order to establish this as a norm, unless this technology were to be integrated directly into the forum (which, frankly, would be fantastic!).
Finally, are you aware of DebateGraph? It might also pique your interest.
Hey David, you might already be aware, but Vaidehi Agarwalla has recently spearheaded a project to migrate numerous EA Slacks, Discords, etc into the EA Anywhere Slack.
https://docs.google.com/document/d/1YluxFbKmZipIXbzJDPWBqrTPtFMI_3pzWcidzZHT3OQ/edit?usp=sharing
Edit: I see you're one of the editors! I'll keep this comment up for others to reference.
Hi EAlly,
It seems like there are numerous questions to unpack here. If I'm understanding you correctly, you're generally curious about how others with a background in IT have sought to increase their impact through an EA lens. Is that right?
If so, I think your questions might be better answered by searching for, reaching out to, and scheduling informational interviews with people working at the intersection of EA and IT. I previously came across a helpful framework for doing this sort of thing here: [Webinar] The 2-Hour Job Search - YouTube
From one generalist IT person to another, would it be helpful to hop on a call to discuss your uncertainties? https://calend.ly/quinnpmchugh/meet
While I may not have a lot to offer in terms of career guidance, I can certainly relate to your position. My background is in mechanical engineering, but I currently do a mix of IT, operations, project management, and software engineering work. Professionally, I am interested in moving into project management full-time, but am also very interested in leveraging my IT skills to improve the movement's overall coordination and intellectual diversity through projects like EA Explorer.