Benjamin M.

84 karma · Joined Sep 2023 · Pursuing an undergraduate degree

Bio

Here to talk about phytomining for now.

Posts: 1

Comments: 8

Topic contributions: 1

If you're willing to consider literature, The Glass Bead Game by Hermann Hesse is the book that led me to EA ways of thinking, and also, in my opinion, the best book I have ever read.

This looks very interesting!

One note: Friends Peace Teams has also been producing ceramic water filters, formerly in Indonesia and more recently, I believe, in the Philippines. Unfortunately this isn't well documented on their website (I only found out about it through a talk I attended). At that talk, one of their members implied that they had a better production method, based on training local people to make the filters using local materials and then having them train others; I'm not really sure how this differs from what other locally-produced water filter manufacturers do, but they implied that it did.

Link:

https://friendspeaceteams.org/wp-content/uploads/2018/06/Spring-2010.pdf

Alma "Kins" Aparece was the person who gave most of the talk and, if I remember correctly, helped facilitate the water filter making.

They very much don't fulfill the idea of a charity focusing on one intervention (or maybe a few interventions), however; they do a wide variety of programs, most of which are focused on mediation and interpersonal training rather than clean water/other more tangible goods.

I'm not the first person to post this, but, if you're an American, calling your senator or representative is probably a good idea. Here's roughly how calls have gone when I do them:

  • Find the DC office numbers of your two senators and one representative and dial them 
  • If it's during weekday hours, a staffer will probably answer the phone. If it's in the evening, on a holiday, or on a weekend, you'll probably be talking to an answering machine
  • A staffer or an answering machine message will ask you to say who you are and what you're calling for
  • I usually give my first name and zip code, and say that I'm calling to express my opinion on an issue or a piece of legislation. If you're talking to an answering machine, you usually have to press a button to indicate what you're calling about (constituent services, expressing opinions, and some other options)
  • Then they'll ask you to say your comment
  • I begin by briefly saying what PEPFAR is (a U.S. program that funds efforts to fight AIDS around the world) 
  • I mention that it needs reauthorization to continue being funded
  • I then give my main reason for supporting it (it's a very effective way that the US saves lives cheaply around the world)
  • I might also give some side reasons, especially if there's a person saying things like mm-hmm on the other end of the line (it's boosted America's reputation abroad, the program isn't actually related to abortion, yada yada yada)
  • I say what I have to say in under a minute
  • I don't have a personal or professional connection to the issue, but, if I did (e.g. as a doctor, nurse, AIDS patient, or immigrant from a country that receives PEPFAR funds), I'd probably try to mention it
  • Some advice online says it's good to make it more of a conversation and less of a speech, but that's never worked well for me
  • You can adjust it a little for Democrats vs. Republicans as long as what you're saying is true; with Republicans, I usually emphasize boosting America's reputation abroad and maybe throw in a George W. Bush mention. But I'd definitely caution against saying things you don't believe in order to signal membership in one party or the other
  • If there's somebody on the line, they'll say thanks and let you know that they'll pass along your concern. 
  • Occasionally they'll ask a follow-up question (the only one I got about PEPFAR was whether this was a matter needing an individual response, but I've received actual questions about details when calling about other issues before).
  • Then I tell them thanks for listening and hang up. 
  • This takes at most 2 minutes per call in my experience. I try to call my senators and representative about something (lately it's usually PEPFAR) most weeks; I wouldn't recommend calling more than once a week.

All this comes from a mix of reading some online articles, my own experience, talking with people who have been calling about other causes, and a bit of speculation.

Conclusion: calling senators and representatives is easy and a good way to support PEPFAR reauthorization.

Edit: found an earlier comment at https://forum.effectivealtruism.org/posts/ebGwTM2FAQcp8aMNH/francis-s-quick-takes#7cQuwprYi9AiAjjK4 that talks more about effectiveness.

This is a comment because it's not actually a justification for EA elitism. 

There are some okay-ish ways to quantify where students interested in effective altruism might end up. If we assume that, for a student to be interested in effective altruism, they need to have independently pursued some kind of extracurricular activity involving a skill of the kind that effective altruism might discuss, we can look at where the top competitors in those kinds of extracurriculars come from.

One thing to beware is confounding factors. People who would be good for EA might be too busy to participate in these activities (either because they have busy class schedules or are involved in research or because they work outside of school). People might also be doing activities because they are superficially impressive, which probably isn't a good sign for thinking in a very EA way.

Here are some brief summaries of where top competitors in different American extracurriculars come from:

Ethics Bowl (https://en.wikipedia.org/wiki/Intercollegiate_Ethics_Bowl): no clear pattern among the universities

Bioethics Bowl (https://en.wikipedia.org/wiki/Bioethics_Bowl): similar to the above

National Debate Tournament (https://en.wikipedia.org/wiki/List_of_National_Debate_Tournament_winners): often, but by no means exclusively, prestigious US schools; it also seems to lean toward private schools a bit, though I'm just eyeballing it

US Universities Debating Championship (https://en.wikipedia.org/wiki/US_Universities_Debating_Championship): mostly Ivy League or similarly prestigious schools

Putnam Exam (https://en.wikipedia.org/wiki/William_Lowell_Putnam_Mathematical_Competition): strongly dominated by MIT

College Model UN (https://bestdelegate.com/2022-2023-north-american-college-model-u-n-final-rankings-world-division/): no clear pattern besides DC-based schools tending to do well

I'm sure other people can add more to this list.

If you think that Putnam results are a strong predictor of Effective Altruism, that could justify more elitism. Personally, I doubt that.

Thanks for pointing this out; I'll note that Partners in Health is also an option, and GiveWell seems to like them but doesn't think they beat the GiveWell charity bar, at least as of when that page was written (https://www.givewell.org/international/charities/PIH#:~:text=Partners%20in%20Health%20provides%20comprehensive,network%20of%20community%20health%20workers.). I'd be interested in seeing anything about whether Partners in Health is a better option than GiveDirectly.

I'm not an expert on most of the evidence in this post, but I'm extremely suspicious of the claim that GPT-4 represents AI that is "~ human level at language", unless you mean something by this that is very different from what most people would expect.

Technically, GPT-4 is superhuman at language because whatever task you are giving it is in English, and the median human's English proficiency is roughly nil. But a more commonsense interpretation of this statement is that a prompt-engineered AI and a trained human can do the task roughly as well.

What you link to shows the results of how GPT-4 performs on a bunch of different exams. This doesn't really show how language is used in the real world, especially since the exams very closely match past exams that were in the training data. It's good at some of them, but also extremely bad at others (AP English Literature and Codeforces in particular), which is an issue if you're making a claim that it's roughly human level.

Furthermore, language isn't just putting words together in the right order and with the right inflection. It also includes semantic information (what the sentences actually mean) and pragmatic information (whether the language conveys what it's trying to convey, beyond just the literal meaning). I'm not sure whether pragmatics in particular is relevant for AI risk, but the fact that, anecdotally, even GPT-4 is pretty bad at pragmatics rules out a literal interpretation of your statement.

In my opinion, the best evidence for GPT-4 not being human level at language is that, in the real world, GPT-4 is much cheaper than a human but consistently unable to outcompete humans. News organizations have a strong incentive to overhype GPT-caused automation, but the examples that they've found are mostly of people saying that either GPT-4 or GPT-3 (it's not always clear which) did their job much worse than them, but good enough for clients. Take https://www.washingtonpost.com/technology/2023/06/02/ai-taking-jobs/ as a typical story. 

Exams aren't exactly the real world, but the popular example of GPT-4 doing well on exams is https://www.slowboring.com/p/chatgpt-goes-to-harvard. This both ignores citations (a very important part of college writing, one that GPT-3 couldn't handle at all and at which GPT-4 still performs significantly below what I would expect from a human) and relies on the false belief that Harvard is a hard school to do well at (grade inflation!).

I still agree with two big takeaways of your post, that an AI pause would be good and that we don't necessarily need AGI for a good future, but that's more because they're robust to a lot of different beliefs about AI than because I agree with the evidence provided. Again, a lot of the evidence is stuff I don't feel particularly knowledgeable about; I picked this claim because I've had to think about it before and because it just feels false from my experience using GPT-4.

I agree with large chunks of this post, but I'm weakly confident (75ish%) that the claim about how newspapers work is wrong. Most newspapers I'm familiar with give their reporters specified beats (topics they focus on), at least to some extent, although there are also reporters without specific beats. So if there's an important tech story that needs to be covered, like AI x-risk, some of that coverage will come at the expense of other tech stories, and some will be taken from people who write on pretty much anything. That might still mean more coverage of present-day AI risk, because it's hard to talk about one without talking about the other, and because there's a lot of other room in tech to take stories from, but I don't think it's as simple as it appears.

However, I'm basing this mostly off info from newspapers that wouldn't write important AI x-risk stories. Maybe they behave differently? Some other people here probably know more than I do.

I cut out a bit I had on how much it could scale in order to keep the post shorter, but there's still some of that in the practical section. In general, phytomining could potentially supply the vast majority of demand for some of the less common elements (such as thallium), and Krol-Sinclair and Hale say the same is true for cobalt. It could also produce a pretty large chunk of nickel; I don't have an exact number, but I'm pretty confident it could be a double-digit percentage of the world's consumption. So yes, it scales, but figuring out how much it scales would be a good target for future research.