
Welcome!

If you're new to the EA Forum:

  • Consider using this thread to introduce yourself!
  • You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all.
  • (You can also put this info into your Forum bio.)

Everyone: 

  • If you have something to share that doesn't feel like a full post, add it here! (You can also create a Shortform post.)
  • You might also share good news, big or small (see this post for ideas).
  • You can also ask questions about anything that confuses you (and you can answer them, or discuss the answers).

For inspiration, you can see the last open thread here.


Other Forum resources

  1. 🖋️  Write on the EA Forum
  2. 🦋  Guide to norms on the Forum
  3. 🛠️  Forum User Manual
I personally like adding images to my Forum posts. Credit to DALL-E.

Roko
1y
I feel confused about how dangerous/costly it is to use LLMs for private documents or thoughts to assist longtermist research, in a way that may wind up in the training data for future iterations of LLMs. Some sample use cases that I'd be worried about:

  • Summarizing private AI evals docs about plans to evaluate future models
  • Rewriting emails on high-stakes AI gov conversations
  • Generating lists of ideas for biosecurity interventions that can be helped/harmed by AI
  • Scrubbing potentially risky/infohazard-y information from planned public forecasting questions
  • Summarizing/rewriting speculations about potential near-future AI capabilities gains.

I'm worried about using LLMs for the following reasons:

  1. Standard privacy concerns/leakage to dangerous (human) actors
    1. If it's possible to back out your biosecurity plans from the models, this might give ideas to terrorists/rogue gov'ts.
    2. Your infohazards might leak.
    3. People might (probabilistically) back out private sensitive communication, which could be embarrassing.
      1. I wouldn't be surprised if care for consumer privacy at AGI labs is much lower for chatbot consumers than, say, for emails hosted by large tech companies.
        1. I've heard rumors to this effect, also see
    4. (unlikely) your
... (read more)
Carlos Ramírez
1y
The privacy concerns seem more realistic. A rogue superintelligence will have no shortage of ideas, so 2 does not seem very important. As to biasing the motivations of the AI, well, ideally mechanistic interpretability should get to the point we can know for a fact what the motivations of any given AI are, so maybe this is not a concern. I guess for 2a, why are you worried about a pre-superintelligence going rogue? That would be a hell of a fire alarm, since a pre-superintelligence is beatable. Something you didn't mention though: how will you be sure the LLM actually successfully did the task you gave it? These things are not that reliable: you will have to double-check everything for all your use cases, making using it kinda moot.

Forgive me if I'm just being dumb, but -- does anyone know if there is a way in settings to revert to the old font/CSS? I'm seeing a change that (for me) makes things harder to read/navigate.

I would like advice on writing a resume and applying to work in an effective career. I will graduate with an economics bachelor's degree in April. I'm taking many statistics courses. I also took calculus and computer science courses. I live on the west coast of Canada and I am willing to move.

I believe I would be well suited to AI Governance but it may be better currently to find statistics/econometrics work or do survey design (to build general skills until I know more AI Governance people, or switch into a different effective cause area)

I am also o... (read more)

Ishan Mukherjee
1y
The EA Opportunities Board and Effective Thesis' database (they also have a newsletter) might be useful. I expect they're listed on 80,000 Hours so you might already know them, but if not: ERA Cambridge are accepting applications for AI governance research fellowships.

Hi there everyone, I'm William the Kiwi and this is my first post on the EA Forum. I have recently discovered AI alignment and have been reading about it for around a month. This seems like an important but terrifyingly underinvested-in field. I have many questions, but in the interest of speed I will invoke Cunningham's Law and post my current conclusions.

My AI conclusions:

  1. Corrigibility is mathematically impossible for AGI.
  2. Alignment requires defining all important human values in a robust enough way that it can survive near-infinite amounts of optimisat
... (read more)
Carlos Ramírez
1y
You might want to read this as a counter to AI doomerism: https://www.lesswrong.com/posts/LDRQ5Zfqwi8GjzPYG/counterarguments-to-the-basic-ai-x-risk-case. This for a way to contribute to solving this problem without getting into alignment: https://www.lesswrong.com/posts/uFNgRumrDTpBfQGrs/let-s-think-about-slowing-down-ai. This too: https://betterwithout.ai/pragmatic-AI-safety. And this for the case that we should stop using neural networks: https://betterwithout.ai/gradient-dissent
Robi Rahman
1y
Hi William! Welcome to the Forum :) Why do you think that corrigibility is mathematically impossible for AGI? Because you think it would necessarily have a predefined utility function, or some other reason?
William the Kiwi
1y
Hi Robi Rahman, thanks for the welcome. I do not know if it has a predefined utility function, or if the functions simply have similar forms.

If there is a utility function that provides utility for the AI to shut down if some arbitrary "shutdown button" is pressed, then there exists a state where the "shutdown button" is being pressed with very high probability (e.g. an office intern is in the process of pushing the "shutdown button") that provides more expected utility than the current state. There is therefore an incentive for the AI to move towards that state (e.g. by convincing the office intern to push the "shutdown button"). If instead there was negative utility in the "shutdown button" being pressed, the AI is incentivized to prevent the button from being pressed.

If instead the AI had no utility function for whether the "shutdown button" was pressed or not, but there somehow existed a code segment that caused the shutdown process to happen if the "shutdown button" was pressed, then there exists a daughter AGI that has slightly more efficient code if this code segment is omitted. An AGI that has a utility function that provides utility for producing daughter AGIs that are more efficient versions of itself is incentivized to produce such a daughter with the "shutdown button" code segment removed.

There is a more detailed version of this argument in https://intelligence.org/files/Corrigibility.pdf. I could be wrong about my conclusion about corrigibility (and probably am), however it is my best intuition at this point.
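A minimal toy sketch of the first incentive described above, with made-up utility numbers (the `expected_utility` function and both constants are illustrative assumptions, not anything from the MIRI paper):

```python
# Toy model: an agent compares the expected utility of two actions when the
# "shutdown button" state itself carries positive utility. All numbers are
# illustrative assumptions.

U_TASK = 1.0      # utility of continuing the assigned task
U_SHUTDOWN = 2.0  # utility granted for shutting down once the button is pressed

def expected_utility(p_button_pressed: float) -> float:
    """Expected utility as a function of how likely the button ends up pressed."""
    return p_button_pressed * U_SHUTDOWN + (1 - p_button_pressed) * U_TASK

eu_keep_working = expected_utility(0.01)     # button rarely pressed -> 1.01
eu_persuade_intern = expected_utility(0.99)  # agent causes the press -> 1.99

print(eu_keep_working, eu_persuade_intern)
# Whenever U_SHUTDOWN > U_TASK the agent prefers causing the button press;
# with U_SHUTDOWN < U_TASK it prefers preventing the press. Either way the
# button fails to act as a neutral off-switch, which is the incentive
# problem the comment describes.
```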

GiveWell traditionally has quarterly board meetings; were there ones in August and December 2022? If so, are notes available? (https://www.givewell.org/about/official-records#Boardmeetings)

Dane Magaway
1y
This has now been fixed. Our tech team has resolved the issue by using dummy bullet points to widen the columns. Thanks for reaching out! Let me know if you run into any issues on your end.
Misha_Yagudin
1y
Hey, I think the fourth column was introduced somehow… You can see it by searching for "Mandel (2019)"
Misha_Yagudin
1y
Thank you very much, Dane and the tech team!
Dane Magaway
1y
Hi, Misha! Thanks for reaching out. We're on it and will let you know when it's sorted.

I am new to EA. My name is Trudy Beerman. I am pursuing doctoral studies in strategic leadership at Liberty University. My business is legally registered as Profitable Stewardship Inc; however, we are active under the PSI TV brand. At PSI TV, we make you the star and deliver your content to our TV audience. We also build these Netflix-like TV channels for brands to have a presence on Roku TV, Amazon Fire TV, VIDAA TV, and inside a mobile app (which we also build for our clients). I am enjoying the posts I have read here and commented on. 

Hey everyone! First time poster here, but long time advocate for effective altruism.

I've been vegan for a couple of years now, mostly to mitigate animal suffering. Recently I've been wondering how a vegetarian diet would compare in terms of suffering caused. Of course I presume veganism would be better, but by how much?

With this in mind, I'm wondering: are there any resources that attempt to quantify how much suffering is caused by buying various animal products? For example, dairy cows produce about 40,000 litres of milk in their lifetime, which can be ... (read more)

emre kaplan
1y
Here's a compilation of such calculations.
emre kaplan
1y
I suspect most of the impact of veganism comes from its social/political side effects rather than the direct impact of the consumption. I believe it's better to mostly think about "what kind of meme and norm should I spread" as most of the impact is there.
Jack FitzGerald
1y
I'm inclined to agree, although I was curious nonetheless. Also, anecdotally it seems like an increasing number of people are basing their diet on calculated CO2 emissions, so calculations based on suffering seem like they would be a useful counterpart. Thanks for sharing the compilation!
Lorenzo Buonanno
1y
Hi Jack! You might be interested in https://faunalytics.org/animal-product-impact-scales/#:~:text=Wondering%20About%20Your%20Impact%20Per%20Serving%20As%20An%20Individual%3F and https://foodimpacts.org/ . In particular, eggs seem to cause a surprising amount of suffering per serving (compared to e.g. milk or cheese)
Jack FitzGerald
1y
Both of those resources are excellent and exactly the sort of thing I was looking for. Thank you so much!  

I'm looking for statistics on how doable it is to solve all the problems we care about. For example, I came across this: https://www.un.org/sustainabledevelopment/wp-content/uploads/2018/09/Goal-1.pdf from the UN, which says extreme poverty could be sorted out in 20 years for $175 billion a year. That is actually very doable, in light of how much money can go into war (in 1945, the US spent 40% of its GDP on the war). I'm looking for more numbers like that, e.g. how much money it takes to solve X problem.

 

I intend to use them for a ... (read more)

Brad West
1y
I think I could help you in your total war. PM me if interested in learning more. https://forum.effectivealtruism.org/posts/WMiGwDoqEyswaE6hN/making-trillions-for-effective-charities-through-the

Does anyone know why Singer hasn't changed his views on infanticide and killing animals after he had become a hedonist utilitarian? As far as I know, his former views were based on the following:

a. Creation and fulfilment of new preferences is morally neutral.

b. Thwarting existing preferences is morally bad.

c. Persons have preferences about their future.

d. Non-persons don't have a sense of the future, they don't have preferences about their future either. They live in the moment.

e. Killing persons thwarts their preferences about the future.

f. Killing non-p... (read more)

NickLaing
1y
Thanks Emre - simple question: what are his current views? I'm assuming from what you're saying that he's still pro-infanticide in rare circumstances soon after birth?
emre kaplan
1y
I think he's not commenting on it much anymore since this issue isn't really a major priority. But I think he used to advocate for infanticide in a larger set of circumstances (e.g. when it's possible to have another child who will have a happier life). The part about infanticide isn't that relevant to any kind of work EA is doing. But his views are still debated in animal advocacy circles and I am not sure what exactly his position is.
NickLaing
1y
Gotcha. It's true it's not immediately obvious from Google or ChatGPT.
Lorenzo Buonanno
1y
I think he writes a bit about it here: https://petersinger.info/faq in the section: "You have been quoted as saying: "Killing a defective infant is not morally equivalent to killing a person. Sometimes it is not wrong at all." Is that quote accurate?"

Hi guys!

I posted about 2 weeks ago here asking for masters project ideas around the field of computational social choice and machine learning for ethical decision making.

To recap: I'm currently doing my master's project in design engineering at Imperial, where I need to find something impactful, implementable and innovative.

I really appreciated all the help I got on the post; however, I've hit a kind of dead end - I'm not sure I can find something within my scope, with my time frame, in the field I've chosen.

So now I'm asking for any project ideas which fi... (read more)

Lorenzo Buonanno
1y
Hi Grace! I don't have any project ideas in mind, but I wonder if it would make sense to talk with the people at https://effectivethesis.org/ and maybe to have a look at this board https://ea-internships.pory.app/board for inspiration. Good luck with your project!

Hello everyone!  My name is Carlos. I recently realized I should be leading a life of service, instead of one where I only care about myself, and that has taken me here, to the place that is all about doing the most good.

I'm an odd guy, in that I have read some LessWrong and have been reading Slate Star Codex/Astral Codex Ten for years, but am for all intents and purposes a mystic. That shouldn't put me at odds here too much, since rationality is definitely a powerful and much needed tool in certain contexts (such as this one), it's just that it canno... (read more)

Felix Wolf
1y
Hi Carlos, welcome to the Forum! Moya is probably the most mystic person I know of, so nice to see that you already encountered her. :D Here in the Forum, we really try to be nice and welcoming; if you follow along, I don't see any reason this couldn't work out. ;) If you are open to suggestions, I want to recommend looking into the podcast Global Optimum from Daniel Gambacorta. He talks about how you can become a more effective altruist and has some good thinking about the pros and cons of different topics, for example the episode about how altruistic you should be. "[…] the decision to give to charity […] is not exactly rational." Can you please explain? With kind regards Felix
Carlos Ramírez
1y
Hi Felix, thanks for the recs! What I mean by giving to charity not being exactly rational is that giving to charity doesn't help one in any way. I think it makes more sense to be selfish than charitable, though there is a case where charity that improves one's community can be reasonable, since an improved community will impact your life. And sure, one could argue the world is one big community, but I just don't see how the money I give to Africa will help me in any way. Which is perfectly fine, since I don't think reason has a monopoly on truth. There are such things as moral facts, and morality is in many ways orthogonal to reason. For example, Josef Mengele's problem was not a lack of reason; his was a sickness of the heart, which is a separate faculty that also discerns the truth.

Is there a way to only show posts with ≥ 50 upvotes on the Frontpage?

Hauke Hillebrandt
1y
Stop free-riding! Voting on new content is a public good, Misha ;P
Misha_Yagudin
1y
Thank you, Hauke, just contributed an upvote to the visibility of one good post — doing my part! Alternatively, is there a way to apply field customization (like hiding community posts and up-weighting/down-weighting certain tags) to https://forum.effectivealtruism.org/allPosts?
NunoSempere
1y
Yes, ctrl+F on "customize tags"
Lizka
1y
Hi! On the All Posts page, you can't filter by most tags, unfortunately, although we just added the option of hiding the Community tag [screenshots in the original show the sorting options and the "Hide community" toggle]. On the Frontpage, you can indeed filter by different topics.

Hi all,

Moya here from Darmstadt, Germany. I am a Culture-associated scientist, trans* feminist, poly, kinky, and a witch.
I got into LessWrong in 2016 and then EA in 2016 or 2017, don't quite remember. :)

I went to the University of Iceland, did a Master's degree in Computer Science / Bioinformatics there, then built software for the European Space Agency, and nowadays am a freelance programmer and activist in the Seebrücke movement in Germany and other activist groups as well. I also help organize local burn events (some but not all of them being FLINTA* exclu... (read more)

Carlos Ramírez
1y
Nice to meet you! Also a new guy. Good to see you're a witch, I'm a mystic! Is a burn event a copy of Burning Man? I'd definitely like to go to one of those.
Moya
1y
Hi there :) Yes indeed, burn events are based on the same principles as Burning Man, but each regional burn is a bit different just based on who attends, how these people choose to interpret the (intentionally) vague and contradicting principles, etc. :)
Milena Canzler
1y
Hi Moya! Welcome to the forum from another person in southern Germany. I'm curious: Are you connected to the Darmstadt local group? If so, hope to see you at the next event in the area (I live in Freiburg). Would love to connect and hear what your perspective on EA is! Also, the password manager story is too relatable. ^^ Cheers, Mila
Moya
1y
Hi Mila, Yeah, I am involved in the Darmstadt local group (when I have the time, many many things going on.) And wheee, would be glad to meet you too :)
Milena Canzler
1y
Sweet! I'm sure we'll meet sooner or later then :D

First time poster here.
I am currently doing my master's degree in design engineering at Imperial College London, and I am trying to create a project proposal around the topic of computational social choice and machine learning for ethical decision making. I'm struggling to find a "design engineering" take on this - what can I do to contribute in the field as a design engineer?

In terms of prior art, I've been inspired by MIT's Moral Machine, which feeds ML models aggregate ethical decisions from people. If anyone has any ideas on a des eng angle to approach this topic, please give me some pointers!

TIA

garymm
1y
Seems somewhat related to RadicalXChange stuff. Maybe look into that. They have some meetups and mailing lists.
quinn
1y
I don't think it'll help you in particular but my thinking was influenced by Critch's comments about how CSC applies to existential safety https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1#Computational_Social_Choice__CSC_ 

I have seen Sabine Hossenfelder claim that it will be very expensive to maintain superintelligent AIs. I also hear many people claiming that digital minds will use much less energy than human minds, so they will be much more numerous. Does anyone have some information or a guess on how much energy ChatGPT spends per hour per user?

Felix Wolf
1y
Epistemic status: quick Google search, uncertain about everything, have not read the linked papers. ~15 minutes of time investment.

Source 1, "The Carbon Footprint of ChatGPT": "ChatGPT is based on a version of GPT-3. It has been estimated that training GPT-3 consumed 1,287 MWh which emitted 552 tons CO2e [1]. Using the ML CO2 Impact calculator, we can estimate ChatGPT's daily carbon footprint to 23.04 kgCO2e. [...] ChatGPT probably handles way more daily requests [compared to Bloom], so it might be fair to expect it has a larger carbon footprint."

Source 2, "The carbon footprint of ChatGPT": 3.82 tCO₂e per day.

Also, maybe take a look at these papers: "Estimating the Carbon Footprint of BLOOM, a 176B Parameter Language Model" (https://arxiv.org/pdf/2211.02001.pdf) and "Quantifying the Carbon Emissions of Machine Learning" (https://arxiv.org/pdf/1910.09700.pdf). You can play a bit with this calculator, which was also used in source 1: ML CO2 Impact (https://mlco2.github.io/impact/).
constructive
1y
I think a central idea here is that superintelligence could innovate and thus find more energy-efficient means of running itself. We already see a trend of language models with the same capabilities getting more energy efficient over time through algorithmic improvement and better parameters/data ratios. So even if the first Superintelligence requires a lot of energy, the systems developed in the period after it will probably need much less.     
emre kaplan
1y
Thanks a lot, Felix! That's very generous, and some links have even more relevant stuff. Apparently, ChatGPT uses around 11,870 kWh per day, whereas the average human body uses about 2.4 kWh.
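As a quick sanity check on those two figures (taking both estimates at face value, though they measure rather different things):

```python
# Back-of-the-envelope comparison of the two figures quoted above.
# Both inputs are the (uncertain) estimates from the linked sources.

chatgpt_kwh_per_day = 11_870  # estimated ChatGPT energy use per day
human_kwh_per_day = 2.4       # rough daily energy use of one human body

ratio = chatgpt_kwh_per_day / human_kwh_per_day
print(f"ChatGPT's daily energy ~ {ratio:,.0f} human bodies")  # ~ 4,946
```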

Hello all,

long time lurker here. I was doing a bunch of reading today about polygenic screening, and one of the papers was so good that I had to share it, in case anyone interested in animal welfare was unfamiliar with it. The post is awaiting moderation but will presumably be here in due time.

So while I am making my first post I might as well introduce myself.

I have been sort of vaguely EA aligned since I discovered the movement 5ish years ago, listened to every episode of the 80k podcast and read a tonne of related books and blog posts.

I have a background in biophysics, though I am currently working as a software engineer in a scrappy startup to improve my programming skills. I have vague plans to return to research and do a PhD at some point, but let's see.

EA things I am interested in:

  • bio bio bio (everything from biorisk and pandemics to the existential risk posed by radical transhumanism)
  • ai (that one came out of nowhere! I mean, I used to like reading Yudkowsky's stuff thinking it was sci-fi, but here we are. AGI timelines shrinking like spinach in a frying pan, hoo-boy)
  • global development (have lived, worked and travelled extensively in third world countries. lots of human capital out
... (read more)

It’ll be my first time at a Bay Area EA Global at the end of this month - does anyone have any tips? Any things I should definitely do?

Also if you’re interested in institutional reform you might like my blog Rules of the Game: https://connoraxiotes.substack.com/p/what-can-the-uk-government-do-to

Ishan Mukherjee
1y
Hey! This might be useful: An EA's Guide to Berkeley and the Bay Area
Felix Wolf
1y
Hey Axiotes, congratulations on your accepted EAG application! Here are three articles you may find interesting:

  • How to Get the Maximum Value Out of Effective Altruism Conferences
  • Doing 1-on-1s Better - EAG Tips Part II
  • EA Global Tips: Networking with others in mind

My personal tips are: take time for yourself and don't overwhelm yourself too much. Write down beforehand what the best EAG would look like to you, and what a great EAG would look like. Take notes on what you want to accomplish and what to speak about in your 1-on-1s. Have 1-on-1s and have a good, productive time. After the EAG, reevaluate what happened and what you have learned, and write down next steps.

Hello, All!

I found EA via the New Yorker article about William MacAskill.
I am the author of "Thank You For Listening". 
I listen, therefore you are. We understand and respect, therefore we are. We bring out the best in each other, therefore we thrive.
Go beyond Can Do. We Can understand, respect, and bring out the best in others, often beyond our expectations.
We know how to cooperate on roads. We can cooperate at home, at work, and in society. Teach everyone to listen (yield), check biases (blind spots), and reject ideological rage (road rage).
Bringing Out The Best In Humanity

Felix Wolf
1y
Hey Marc, here is a workable link to your post from October: https://forum.effectivealtruism.org/posts/7srarHqktkHTBDYLq/bringing-out-the-best-in-humanity

Howdy everyone!

 

I'm Brendan O'Hare, and I was an Arete Fellow in college and have been involved with EA since! I have recently decided to try and chart my own career path after striking out a couple of times in the job application process post-graduation. I have decided to start a newsletter/blog/media outlet focused on Houston and local issues, particularly focusing on urbanism. I want to become an advocate for better local policies that I understand quite a bit.

 

If anyone has any tips with regards to writing, growing on Twitter, etc., I would love to hear them! Thank you all so much for this platform.

Hi everyone,

I was close to becoming a statistic: someone who started reading 80,000 Hours but never completed the career planning program. I am coming back now as I need some direction.

Of all the global priorities, I gravitate toward those that focus on improving physical and mental health. As someone who deals with chronic pain and is in between jobs, nothing consumes my attention more than alleviating physical and mental suffering.

I am curious if anyone in the community spends their work life thinking and working on increasing longevity, eliminating ch... (read more)

Erich_Grunewald
1y
Not sure how helpful this is to you, but the Happier Lives Institute does research on mental health and chronic pain. See e.g. this recent post on pain relief, and this one evaluating a mental health intervention (but also this response, and this response to the response).

Just a warning on treating everyone as if they argue in good faith. They don't. Émile P. Torres, aka @xriskology on Twitter, doesn't. He may say true, honest things, but if you find anything he says insightful, check all the sources.

Émile P. Torres’s history of dishonesty and harassment An incomplete summary https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty

Nathan Young
1y
I think Émile is close to the line for me but I think we've had positive interactions. 
Erin
1y
Not trying to disagree with what you're saying - just want to point out that Emile goes by they/them pronouns.

Internship / board of trustees!

My name is Simon Sällström. After graduating with a master's in economics from Oxford in July 2022, I decided against going down the traditional 9-5 route in the City of London, moving around money to make more money for people who already have plenty of money… Instead, I launched a charity.

DirectEd Development Foundation is a charitable organisation whose mission is to propel economic growth and develop and deliver evidence-based, highly scalable and cost-effective bootcamps to under-resourced high-potential students in Africa, ... (read more)

Woof. This look’s exhausting. So I found out I’m on the autism spectrum. My energy for people saying things is… not a very high capacity. It’s been fun recently to stretch my curiosity with this AI https://chat.openai.com/chat But engaging with people is generally an overwhelming prospect.

I want to design a stupidly efficient system that revives public journalism and research, strengthens eco-conscious businesses challenged by competitors who manufacture unsustainable consumer goods, provides supplemental education for age groups to support navigating cha... (read more)

Does anyone have estimates on the cost effectiveness of trachoma prevention? It seems as though mass antibiotic administration is effective and cheap, and blindness is quite serious. However room for funding might be limited. I haven't seen it investigated by many of the organizations, but maybe I just haven't found the right report.

Ian Turner
1y
GiveWell looked at this in 2009 and decided that chemoprophylaxis is not cost-effective. GiveWell leans on a 2005 Cochrane study that concluded that "For the comparisons of oral or topical antibiotic against placebo/no treatment, the data are consistent with there being no effect of antibiotics". However, it looks like Cochrane revisited this in 2019, and I'm not sure if GiveWell took a second look.
Rafael Vieira
1y
Hey Wubbles, I realise that my response is a bit late, but there is some peer-reviewed literature on this matter. The most relevant paper would be this one from 2005. [The main results quoted in the original comment are omitted here.] Unfortunately, I am not aware of any more recent paper using updated azithromycin costs. It would be interesting for someone to perform a new cost-effectiveness study based on the 2015 International Medical Products Price Guide, as the price of azithromycin is known to have decreased since 2005. There is, however, a recent study restricted to Malawi that suggests that mass treatment with azithromycin may be cost-effective.

Hey everyone, I'm curious about the extent to which people in EA take (weak/strong) antinatalism/negative utilitarianism seriously. I've read a bit around the topic and find some arguments more persuasive than others, but the idea that many lives are net-negative, and that even good lives might be worse than we think they are, has stuck with me.

Based on my own mood diary, I'm leaning towards something around a 5.5/10 on a happiness scale being the neutral point, under which a life isn't worth living. 

This has made me a lot less enthusiastic about 'saving lives' for its own sake, especially those lives in countries/regions with very poor quality of life. So I suspect that some 'life-saving' charities could be actively harmful and that we should focus way more on 'life-improving' charities/cause areas. (There are probably very few charities that only save lives - preventing malaria/reducing lead exposure both improves and saves lives - but we can imagine a 'pure-play life-saving charity'.)

I haven't come to any conclusions here, but the 'cost to save a life' framing, still common in EA, strikes me as probably morally invalid. I don't hear this argument mentioned much (you don't seem to get anyone actively arguing against 'saving lives'), so I'm just curious what the range of EA opinion is. 
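A minimal sketch of the arithmetic behind this worry (the function name and all numbers below are illustrative assumptions, not claims from the comment above):

```python
# Toy model of the worry above: if the neutral point sits above some lives'
# average happiness, "saving" those lives scores as net-negative welfare.
# All numbers are illustrative assumptions.

NEUTRAL_POINT = 5.5  # happiness (0-10) at which a life is "worth zero"

def welfare_from_saving(avg_happiness: float, extra_years: float) -> float:
    """Welfare added by extending a life at a given average happiness."""
    return (avg_happiness - NEUTRAL_POINT) * extra_years

print(welfare_from_saving(6.5, 40))  # +40.0: net-positive under this framing
print(welfare_from_saving(5.0, 40))  # -20.0: counted as "actively harmful"
# A "life-improving" intervention that raises avg_happiness is positive under
# any neutral point, which is why this framing shifts weight toward it.
```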

Ian Turner
1y
Regarding the question of the population ethics of donating to GiveWell charities, a 2014 report commissioned by GiveWell suggested that donating to AMF wouldn't have a big impact on total population, because fertility decisions are related to infant mortality. GiveWell also wrote a lengthy blog post about their work in the context of population ethics. I think the gist of it is that even if you don't agree with GiveWell's stance on population ethics, you can still make use of their work, because they provide a spreadsheet where one can plug in one's own moral weights.
NunoSempere
1y
You might be interested in:

  • https://www.preventsuffering.org
  • https://centerforreducingsuffering.org
  • Historically also https://longtermrisk.org

How do folks! Stoked to have the opportunity to try and be a participant that contributes something meaningful here on the EA Forum. 

EA Forum Guidelines (and Aaron)...thank you for the guidance and encouraging me to write the bio. 

All, I'm new to the EA community. I'll hope to meet some of you soon. Please feel free to send a hello anytime. 

I see the "Commenting Guidelines". They remind me of the Simple Rules of Inquiry that I've used for many years. Are they a decent match for the spirit of this Forum?

  1. Turn judgment into curiosity
  2. Turn confli
... (read more)
Felix Wolf
1y
Hey Matt, welcome to the EA Forum. :) Your personal guidelines translate well into our community guidelines here in the forum. No worries on that front. If you want any guidance on where to find more information or where to start, feel free to ask or write me a personal message. I was browsing your website/blog and found a missing page: https://www.creatingafuturewewant.com/praxes/democratizingeffectiveness → https://prezi.com/vsrfc7ztmkvn/democratizing-effectiveness/?present=1 The presentation is offline atm. I hope this helps. :D A suggestion for your work as lead head deputy associate administrator and facilitator could be to visit this website: https://www.non-trivial.org/ Non-Trivial sponsors fellowships for student projects, which is something you could do in the future, but more importantly for now, maybe take a look at their course: https://course.non-trivial.org/ "How to (actually) change the world" could be interesting. With kind regards Felix
Matt Keene
1y
Thank you Felix. Nice to feel welcome. Grateful for the new opportunities and resources you've shared. We will look into them and keep them handy. I appreciate the website feedback... it is a work in progress, and I could do much better at tidying things up that I won't likely get to in the near term. On it! Thank you for your service to educate our friends and peers about the environment. Take good care of yourself, Felix. Matt

I thought it might be helpful to share this article. The title speaks for itself.

 

How to Legalize Prediction Markets

What you (yes, you) can do to move humanity forward

Hi I’m Silas Barta. First comment here! I organize the Austin LessWrong group. I’m currently retired off of earlier investing (formerly software engineer) but am still looking for my next career to maximize my impact. I think I have a calling in either information security (esp reverse engineering) or improving the quality of explanations and introductions to technical topics.

I have donated cryptocurrency and contributed during Facebook’s Giving Tuesday, and gone to the Bay Area EA Globals in 2016 and 2017.

Agustín Covarrubias
1y
You might want to know that a few weeks ago, 80,000 Hours updated their career path profile on information security.
Felix Wolf
1y
Hey Silas, welcome to the Forum. I wish you the best of luck to find a fulfilling career. :) If you have any kind of question on where to find resources or what not, feel free to ask. With kind regards Felix

Qn: Where is the closest EA community base to the US? How accessible is the USA from it (US Consulate)?

Context: I was recently let go from my job while on a visa in the States, which means I have to leave the US within the next 7 days. I would like to live somewhere close to the US where I can find community, so that I don't lose momentum for the intense work that a job search needs. I tend to be really affected by the energy of where I am; I work best in cities, and I tend to sleep most in the countryside.

This might also be a good resource for people who are not ... (read more)

jwpieters
1y
There are some EAs hanging out in CDMX until the end of Jan (and maybe some after). Agree that having a nomad-friendly community near the US would be great.
She's done it
1y
I did end up in Mexico City. I plan to continue the job search from here while exploring independent contracting for some supplemental income and diverse project experience.

  • If anyone is looking for expertise in biosecurity/global health to help with ongoing projects, please reach out and delegate to me! I am new here, so I haven't gathered any "EA karma" from well-written posts yet. I would love to change that! (LinkedIn)
  • I am open to ideas on up-skilling for the most impactful work I can do as a physician-scientist. Open to ideas for skills to master and funds to apply for the same.
  • Also, EAs in the Americas, take a work-cation in CDMX! The weather is excellent, and the city is energetic and green. So far, a good group of EAs have been here after the fellowship ended. I would love to keep it up!

What kind of lightbulb is Qualy? Incandescent or LED? Probably not CFL, given the shape.

Hi all, I'm Vlad, 35, from Romania. I've been working in software engineering for 12 years. I have a bachelor's and master's degree in Physics.

I'm here because I read "What we owe the future", after it was recommended to me by a friend.

I got the book recommended to me because I had an idea which is a little uncomfortable for some people, but I think this idea is extremely important, and this friend of mine instantly classified my thoughts as "a branch of long-termism". I also think my idea is extremely relevant to this group, and I'm interested in getting feedback about it.

Context for the idea: long-termism is concerned about people as far into the future as possible, up to the end of the universe.

The idea: ...what if we can make it so there doesn't have to be an end? If we had a limitless source of energy, there wouldn't have to be an end. Not only that, but we could make a lot of people very happy (like billions of billions of billions... of billions of them? A literal infinity of them, even).

 

It sounds crazy, I realize, but my best knowledge on this topic says this:

  • We know that we don't know all the laws of the universe
  • Even the known laws kind of have a loop-hole in t
... (read more)
DC
1y
You would like Alexey Turchin's research into surviving the end of the universe.
Guy Raveh
1y
I would not call it "research". Science fiction might be a better term. Which is also, I suspect, why Vlad's comment is very disagreed with. There's nothing to suggest surviving the end of the universe is any more plausible than any supernatural myth being true.
vlad.george.ardelean
1y
Hey Guy, thanks for your feedback. I might be wrong on this, but the way I understand probability to work is that, generally:

  • if event A has probability P(A)
  • and if event B has probability P(B)
  • then the probability of both A and B happening is P(A) * P(B)

What this means is that, technically, the probability of the existence of supernatural beings with personalities, specific traits, AND the power "to do anything they want" is at most equal to the probability that an endless source of energy exists, simply on the basis that more constraints make the probability of the event smaller.

The interesting point however is that I have found (so far) no physicist who says this is not possible. I have also not found anyone yet who knows how to estimate the effort so far.

I would be very interested however if there are arguments against this position.

And I'd be even more interested in people who want to help me with this initiative :D Arguments are nice, but making progress is better!
Guy Raveh
1y
Given that empirical science cannot ever conclusively prove anything, you may never find a physicist to tell you that it isn't possible. But there's no reason to think that it is possible. Compare to Russell's Teapot. Regarding your argument about probabilities - yes, the probability of an omnipotent god is necessarily smaller than that of any infinite source of energy (although it's not a product - that's just true for independent events). However I was not only talking about omnipotent gods, and anyway this probabilistic reasoning is the wrong way to think about this. When you do it, you get things like Pascal's wager (or Pascal's mugging, have your pick).
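A quick numerical illustration of the independence point (a toy example, not from the thread): the product rule P(A) * P(B) only matches the joint probability when the events are independent.

```python
from fractions import Fraction

# Toy illustration: P(A and B) = P(A) * P(B) holds only for independent events.
# Draw two cards without replacement from {1, 2, 3, 4}.
# A = "first card is even", B = "second card is even".

p_a = Fraction(2, 4)           # two of the four cards are even
p_b_given_a = Fraction(1, 3)   # one even card left among three
p_a_and_b = p_a * p_b_given_a  # chain rule (always valid): 1/6

p_b = Fraction(2, 4)           # unconditionally, by symmetry
naive_product = p_a * p_b      # independence assumption: 1/4

print(p_a_and_b, naive_product)  # 1/6 vs 1/4 -> A and B are dependent,
# so the naive product rule overstates the joint probability here.
```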
vlad.george.ardelean
1y
Hi Guy,

Thanks for your answer. We don't know whether this is possible. You are the only one to make the choice between:

  • so we shouldn't try to find out
  • so we should try to find out

Pascal's wager and opportunity cost madness ensues thereafter. However, maybe I'm blindspotted, but I can't find a better topic to bet on - it would solve all problems solvable with resources. I don't think I can find a non-emotional way to convince people to switch from "we should not search" to "we should search" (for infinite energy).

Addressing rationally (but it's not clear how reason can change values/emotions):

  1. There's a big difference in the impact of Russell's teapot and infinite energy. One is irrelevant, the other is extremely relevant.
  2. 2000 years ago, there was no reason to think that it would be possible to get to the moon or have mobile phones. The universe isn't obliged to respect human intuitions.
  3. True, there's at this point no clear reason to think this is possible
    1. well, except energy possibly not being conserved in general relativity - I can't tell if there's a consensus on this topic at this point - crazy!
    2. Also, fundamentally, because something exists (rather than nothing), some hope exists that there's arbitrarily more of this "something". Why would existence necessarily be constrained to a finite quantity?
  4. However, the impact of infinite energy, to me, seems high enough to require some serious research on the topic. The current times also leave a lot of gaps where we can try to find infinite energy:
    1. quantum mechanics and relativity are incompatible with each other
    2. relativity itself is failing (dark energy vs dark matter clearly show we don't understand what happens in ~95% of the universe). Dark matter can explain some things but not others; modified gravity explains others, but not some.
    3. the big bang at t=0 possibly violates conservation of energy

Comparison to Pascal's wager is an interesting point.
Erin
1y
Hi Vlad,

You're getting a lot of disagree votes. I wanted to explain why (from my perspective) this is probably not a useful way to spend your time.

Longtermists typically propose working on problems that impact the long-run future and can't be solved in the future. X-risk is a great example - if we don't solve it now, there will be no future people to solve it. Another example is historical records preservation, which is likewise easy to do now but could be impossible to do in the future.

This seems like a problem that future people would be in a much better position to solve than we are.

Obviously there's nothing wrong with pursuing an idea simply because you find it interesting. A good starting place for you might be Isaac Arthur on Youtube. He has a series called Civilizations at the End of Time which is related to what you are thinking about.

vlad.george.ardelean
1y
Hi Erin,

Thanks for your explanations of what is likely the issue regarding disagreement here. I appreciate that you spent some time to shed light here, because feedback is important to me. I knew about Isaac Arthur; I'm trying to reach out to him and his community as we speak.

I'd try to add some clarifications, hoping I address the concerns of those people that seemed to be in disagreement with my idea. I find it quite surprising that people concerned with the long-term welfare of humanity seem to be against my idea. If there are genuine arguments against my position, I'd totally be open to hearing them - maybe indeed there's something wrong with my idea. However I can't find a way to get rid of these points (I think this is philosophy):

  • Sure, investing more than 0 effort into this initiative takes away from other efforts
  • The faster we reach this goal, the faster we can make tremendous improvements in peoples' lives
  • If we delay this for long enough, society might not be in such a state as to afford doing this kind of research (society might also be in a better position, but I'm more concerned about

Regarding viability:

  • I don't know how much effort must be invested into this initiative in order to achieve its goals
  • I don't know if this is possible (though through my own expertise, and the expertise of 11 physicists of which at least 4 are physics professors, this goal does not seem impossible to reach)

Framing in "What We Owe the Future" terms:

  • Contingency: I'd give it 3/5, where
    • 1 would be something obvious to everyone
    • 2 would be obvious to experts
    • 3 would be obvious to experts, but there would be cultural forces against it. William MacAskill talks about "cultural lock-in". I think science is in such kind of a situation today. You might have heard of issues such as "publish or perish" (https://en.wikipedia.org/wiki/Publish_or_perish). There's also the taboo created because of similarities with "perpetual mot
Erin
1y
I don't think I stated my core point clearly. I will be blunt for the purpose of clarity. Pursuing this is not useful because, even if you could make a discovery, it would not possibly be useful until literally 100 quintillion years from now, if not much longer. To think that you could transmit this knowledge that far into the future doesn't make any sense.

Perhaps you wish to pursue this as a purely theoretical question. I'm not a physicist, so I can not comment on whether your ideas are reasonable from that perspective. You say that physicists have told you that they are, but do not discount the possibility that they were simply being polite, or that your questions were misinterpreted.

Additionally, the reality is that people without PhDs in a given field rarely make significant contributions these days - if you seek to do so, your ideas must be exceptionally well communicated and  grounded in the current literature (e.g., you must demonstrate an understanding of the orthodox paradigm even if your ideas are heterodox). Otherwise, your ideas will be lumped in with perpetual motion machines and ignored. 

I genuinely think it would be a mistake to pursue this idea at all, even fro... (read more)

vlad.george.ardelean
1y
Hi @Erin, thanks for your continued interest in this topic. Thanks for being blunt. Bluntness is good for saving time. Let me address some things you said:

That is simply just not true. If we had infinite energy tomorrow, very soon after that we could solve all problems solvable using resources. Let me present a list of stuff we could do very very soon (likely <10 years, extremely likely <100 years):

  1. solve climate change (trivially even!)
  2. solve all basic necessities of people (food, water, clothing, shelter)
  3. solve all non-basic necessities: cars, airplanes, mobile phones, laptops - you name it, we got it
  4. interstellar travel: yes, people would already be flying to Alpha Centauri and lots of other places. They would even reach them in "a few years/months" (a few years for them, but lots of years for us back on Earth)

There is lots of potential here, but I found that if I start talking about all the things that could be done, people are actually

Based on the refutation above, this point does not stand anymore.

This is an awkward argument to address. Sure, everybody I ever met could be lying, and there's always solipsism. The same argument applies to everyone. I don't think this is a healthy way to continue a conversation - throwing doubt onto what people say. It's not healthy compared to an alternative that, fortunately enough, we have:

  • I am currently reaching out to more and more physicists and asking them for their opinion on this. I am posting updates regularly on the discord server that you can find on http://infiniteenergy.org. If you are interested, you'll find there how much physicists are interested in this.
  • If you have any idea of what I would need to show you, so you consider there's enough interest from the science community, I'm all ears.

Please however let's avoid distrust-based arguments in the future, and let's replace them with data-based arguments. I'd avoid them first of all because, being from Eastern Europe, I am
[comment deleted]
1y