All of velutvulpes's Comments + Replies

Shelly Kagan - readings for Ethics and the Future seminar (spring 2021)

If anyone were able to find PDFs for all of the papers and share the links here, that would be much appreciated.

Shelly Kagan - readings for Ethics and the Future seminar (spring 2021)

I wasn’t aware it was first published on your blog. Thanks for nudging Prof Shelly Kagan to share their syllabus!

2018-2019 Long-Term Future Fund Grantees: How did they do?

Is there a useful way to financially incentivise this sort of independent evaluation? It seems like a potentially good use of fund money.

I'd be pretty excited about financially incentivizing people to do more such evaluations. I'm not sure how to set the incentives optimally, though – I really want to avoid any incentives that make it more likely that people say what we want to hear (or that lead others to think that this is what happened, even when it didn't), but I also care a lot about such evaluations being high-quality and having sufficient depth, so I don't want to hand out money for just any kind of evaluation.

Perhaps one way is to pay $2,000 for any evaluation or review that receives >120 Karma on the EA Forum (periodically adjusted for Karma inflation), regardless of what it finds? Of course, this is somewhat gameable, but perhaps it's good enough.

EARadio - more EA podcasts!

Done! Thanks for working on this! Do the other links still work fine?

Patrick (4mo, +1): Thanks! Yes, they do.
Should EA Buy Distribution Rights for Foundational Books?

I've set up a system for buying books for people on request. If people are interested in using it, they can read more and express interest here: eabooksdirect.super.site

How much do you (actually) work?

I track my time using hourstack.com and try to be quite strict about only tracking 'sit-down work time'. I can normally do around 3.5–4 hours of work a day; I normally start at 10am and finish around 5pm.

This matches my experience at college, where I found I could normally do around 4 hours of studying before feeling tired out.

It's easier for me to 'clock more hours' when I have more meetings. But I try to avoid meetings.

I find that I can get most of my things done within this time, and I would consider myself quite a productive person.

CEA update: Q1 2021

Thanks for explaining your view! I don’t really have super strong views here, so I don’t want to labour the point, but I just thought I’d share my intuition for where I’m coming from. For me it makes sense to have thresholds at those places because they actually carve up the buckets of reactions better than the linear scale suggests.

For example, some people feel weird rating something really low and so they “express dislike” by rating it 6/10. So to me the lowest scorers and the 6/10ers probably have more similar experiences than their linear... (read more)

MaxDalton (6mo, +2): Thanks for explaining! The guess about how people use the scale seems pretty plausible to me.
CEA update: Q1 2021

Thanks! I guess I think NPS is useful precisely because of those threshold effects, but I agree it's not clear that it handles the discrimination between a 6 and a 1 well. Histograms seem great!

Hmm, I still think the threshold effects are kinda weird, and so NPS shouldn't be the main measure. (I know you're just asking for it as supplementary info, and I think we'd maybe both prefer mean + histogram.)

There's a prima facie case against it, which goes something like: the threshold effects say that you care totally about the 6/7 and 8/9 boundaries, and not at all about the 5/6, 7/8, and 9/10 boundaries. That's weird!

I could imagine a view that's like "it's really important to have enthusiastic promoters because they help spread the word about your product" or something, but the... (read more)

CEA update: Q1 2021

Would you be able to provide a Net Promoter Score analysis of your Likelihood to Recommend metrics? I find NPS yields different, interesting information from an averaged LTR, and it should be very straightforward to compute.
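(For readers unfamiliar with the metric: this isn't CEA's analysis code, just a minimal sketch of the standard NPS calculation, assuming the usual 0–10 scale where 9–10 counts as a promoter and 0–6 as a detractor.)

```python
def nps(ratings):
    """Net Promoter Score from 0-10 likelihood-to-recommend ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) are ignored.
    NPS = % promoters minus % detractors, so it ranges from -100 to +100.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)


# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(nps([10, 10, 10, 9, 9, 8, 8, 7, 6, 3]))  # 50% - 20% = 30.0
```

Note how only the 6/7 and 8/9 boundaries affect the score, which is the threshold effect discussed in the replies below.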

MaxDalton (6mo, +2): For groups support calls, one staff member's NPS was 83% and another's was 55%. (They were talking to different user groups, which probably explains some of the discrepancy.)
MaxDalton (6mo, +5): EA Global: Reconnect NPS was 20%.

Sure! I've asked the relevant people to respond with the NPS figures if it's quick/easy for them to do so, but they might prioritize other things.

Btw, I disagree about how useful NPS is. I think it's quite a weird metric (with very strong threshold effects between 6/7 and 8/9, and no discrimination between a 6 and a 1). That's why we switched to the mean. I do think that looking at a histogram is often useful though; in most cases the mean doesn't give you a strong sense of the distribution.

CEA update: Q1 2021

Hey Brian. I'd have to ask the individuals who wrote up their docs, but the plan is definitely to eventually share more of these types of group writeups widely. They weren't written with a broad audience in mind, but I feel like several leaders would be keen to share their writeups more publicly after cleaning them up a bit. I'll nudge people on this and ask if they're keen.

BrianTan (6mo, +2): Got it, thanks! If they could be compiled and put on the EA Hub Resources, such as on this page [https://resources.eahub.org/tips/articles/priorities/], that would probably be the best place to compile them?
How much does performance differ between people?

Minor typo: "it’s often to reasonable to act on the assumption" probably should be "it’s often reasonable to act on the assumption"

Max_Daniel (7mo, +2): Thanks! Fixed in post and doc.
Some quick notes on "effective altruism"

A small and simple change that CEA could make would be to un-bold the 'Effective' in their 'Effective Altruism' logo, which is used on https://www.effectivealtruism.org/ and EAG t-shirts.

I find the bolding comes across as unnecessarily smug emphasis on the 'Effective' in 'Effective Altruism'.

[link] Centre for the Governance of AI 2020 Annual Report

I think you might have accidentally linked to the 2019 report. The 2020 report seems to be here https://www.fhi.ox.ac.uk/govai/govai-2020-annual-report/

MarkusAnderljung (9mo, +3): Thanks for the catch :) Should be updated now.
What’s the low resolution version of effective altruism?

(Rough note) This seems to have strands of: 'rich people focused', 'rich people are more moral', and 'E2G focus'.

What’s the low resolution version of effective altruism?

Nice! Could you do a version which is 70% lower resolution? 😁

kokotajlod (10mo, +7): Thanks! How about these: "Effective altruists believe you'll do 1000x more good if you prioritize impact"; "Effective altruists believe you'll do 1000x more good if you actually try to do the most good you can"; "Effective altruists believe you'll do 1000x more good if you shut up and calculate"; "Effective altruists believe you'll do 1000x more good if you take cost-effectiveness calculations seriously". I think the third one is my favorite, haha, but the second one is what I think would actually be best.
What’s the low resolution version of effective altruism?

It might be that SF has more people who are kinda into EA such that they donate 10% to GiveWell, diluting out the people who are representative of more extreme self-sacrifice.

What’s the low resolution version of effective altruism?

Interesting about the idea that EA lets people off the moral hook easily: 'I'm rich so I just donate, and I've done my moral duty and get to virtue signal'.

It's interesting how that applies to people who are wealthy, work a conventional job, and donate 10% to charities, but doesn't seem like a valid criticism of those who donate far more, like 50%+. That normally seems to be met with the response "wow, that's impressive self-sacrifice!". The same goes for those who might drastically shift their career.

There's a lot to unpack in that tweet. I think something is going on like:

  • fighting about who is really the most virtuous
  • being upset people aren't more focused on the things you think are important
  • being upset that people claim status by doing things you can't or won't do
  • being jealous people are doing good doing things you aren't/can't/won't do
  • virtue signaling
  • righteous indignation
  • spillover of culture war stuff going on in SF

None of it looks like a real criticism of EA, but rather of lots of other things EA just happens to be adjacent to.

Doesn't mean it doesn... (read more)

What’s the low resolution version of effective altruism?

'Charity for nerds' doesn't sound like an awful low-res version compared to others suggested, like 'moral hand-washing for rich people'.

'Charity for nerds' has nice properties like:

  • it's okay if you're not into EA (maybe you're just not nerdy enough), compared to 'EA thinks you're evil if you don't agree with EA'
  • selects for nerdy people, who are willing to think hard about their work
Owen_Cotton-Barratt (10mo, +4): I agree with this. I think "do-gooding for nerds" might be preferable to "charity for nerds", but "charity for nerds" is probably closer to current perceptions.
Ayman (9mo, +1): Thanks!
What’s the low resolution version of effective altruism?

Tyler Cowen's low resolution version: "COWEN: A lot of giving is not very rational. Whether that’s good or bad, it’s a fact. And if you try to make it too rational in a particular way, a very culturally specific way, you’ll simply end up with less giving. And then also, a lot of the particular targets of effective altruism, I’m not sure, are bad ideas. So somewhere like Harvard, it has a huge endowment, it’s super non- or even anti-egalitarian. But it’s nonetheless a self-replicating cluster of creativity. And if you’re a rich person, Harvard was your alma... (read more)

Strong Longtermism, Irrefutability, and Moral Progress

Thanks for the reply and taking the time to explain your view to me :)

I'm curious: my friend has been trying to estimate the likelihood of nuclear war before 2100. It seems like this is a question that is hard to get data on, or to run tests on. I'd be interested to know what you'd recommend they do.

Is there a way I can tell them to approach the question such that it relies less on 'subjective estimates' and more on 'estimates derived from actual data'?

Or is it that you think they should drop the research question and do something else with their time, since any approach to the question would rely on subjective probability estimates that are basically useless?

ben_chugg (10mo, 0): Well, far be it from me to tell others how to spend their time, but I guess it depends on what the goal is. If the goal is to literally put a precise number (or range) on the probability of nuclear war before 2100, then yes, I think that's a fruitless and impossible endeavour. History is not an iid sequence of events. If there is such a war, it will be the result of complex geopolitical factors based on human beliefs, desires, and knowledge at the time. We cannot pretend to know what these will be. Even if you were to gather all the available evidence we have on nuclear near misses, and generate some sort of probability based on this, the answer would look something like: "Assuming that in 2100 the world looks the same as it did during the time of past nuclear near misses, and near misses are distributionally similar to actual nuclear strikes, and [a bunch of other assumptions], then the probability of a nuclear war before 2100 is x". We can debate the merits of such a model, but I think it's clear that it would be of limited use. None of this is to say that we shouldn't be working on the nuclear threat, of course. There are good arguments for why this is a big problem that have nothing to do with probability and subjective credences.
Strong Longtermism, Irrefutability, and Moral Progress

Thanks for taking the time to write this :)

In your post you say "Of course, it is impossible to know whether $1bn of well-targeted grants could reduce the probability of existential risk, let alone by such a precise amount. The “probability” in this case thus refers to someone’s (entirely subjective) probability estimate — “credence” — a number with no basis in reality and based on some ad-hoc amalgamation of beliefs."

I just wanted to understand better: do you think it's ever reasonable to make subjective probability estimates (have 'credences') over things... (read more)

ben_chugg (10mo, +1): Hey James! Answering this in its entirety would take a few more essays, but my short answer is: when there are no data available, I think subjective probability estimates are basically useless, and do not help in generating knowledge. I emphasize the condition when there are no data available because data is what allows us to discriminate between different models. And when data is available, well, estimates become less subjective. Now, I should say that I don't really care what's "reasonable" for someone to do - I definitely don't want to dictate how someone should think about problems. (As an aside, this is a pet peeve of mine when it comes to Bayesianism - it tells you how you must think in order to be a rational person. As if rationality were some law of nature to be obeyed.) In fact, I want people thinking about problems in many different ways. I want Eliezer Yudkowsky applying Bayes' rule and updating in strict accordance with the rules of probability, you being inspired by your fourth grade teacher, and me ingesting four grams of shrooms with a blindfold on in order to generate as many ideas as possible. But how do we discriminate between these ideas? We subject them to ruthless criticism and see which ones stand up to scrutiny. Assigning numbers to them doesn't tell us anything (again, when there's no underlying data). In the piece I'm making a slightly different argument to the above, however. I'm criticizing the tendency for these subjective estimates to be compared with estimates derived from actual data. Whether or not someone agrees with me that Bayesianism is misguided, I would hope that they still recognize the problem in comparing numbers of the form "my best guess about x" with "here's an average effect estimate with confidence intervals over 5 well-designed RCTs".
What areas are the most promising to start new EA meta charities - A survey of 40 EAs

Thanks for this write-up! I'm excited about CE looking into this area. I was wondering whether you were able to share information about the breakdown of which organisations the 40 EAs you surveyed came from and/or which chapters were interviewed, or whether that data is anonymous?

Joey (10mo, +3): Sadly, I'm not able to share that data. I can say it tended to be bigger organizations and bigger chapters.
Open and Welcome Thread: December 2020

Welcome, Roger! 😊 Congrats on moving towards a vegetarian diet, even though you previously thought you wouldn't have attempted it 👏

Guerrilla Foundation Response to EA Forum Discussion

A quick guess of something that might be underpinning a worldview difference here is a differing conception of what counts as "harm". In the original post, the author suggests that a wealthy donor should try and pay reparations to reverse or prevent further harm in the specific sector in which the wealth was generated.

But I think most EAs have an unusual (but philosophically defensible) conception of harm which not only includes direct harm but also indirect harm caused by a failure to act.

So for an EA, if a wealthy donor is faced with a choice between

  1. pa
... (read more)
Jeremy (10mo, +1): Seems plausible.
EA Forum Prize: Winners for October 2020

Is the prize paid out to the recipient, or is the prize a donation to a charity of the recipient’s choosing?

Aaron Gertler (10mo, +3): Prizes are paid out to recipients. Sometimes, they ask us to instead donate the money to a charity on their behalf, which we are also willing to do.
Careers Questions Open Thread

Hey Will! Would you be able to say anything more about why you didn't like the 2 years of college that you did? What sort of college degrees are you looking into right now? :)

Will Kirkpatrick (1y, +2): I was one of those kids who was told they were smart and didn't have to do much in high school. As a result I got hit pretty hard in the face by the requirement of actually trying in college. Combine this with the fact that I didn't do well away from a support network and you have a pretty bad downward spiral. I eventually recovered, but boy was it a rough couple of years! Right now I'm looking at either technical work or more general-purpose studying: the difference between those is kind of along the engineering/computer science versus economics/business divide. I'm currently thinking that, because I already have a background in engineering-type work, maybe getting an economics/business degree to round myself out would be a good choice.
How to best address Repetitive Strain Injury (RSI)?

I've found using voice dictation on my phone and iPad pretty good; often now I just send emails and messages using my phone instead of my computer.

I find the Google speech recognition on the Google keyboard for Android pretty good, as well as the Apple speech recognition on iOS devices.

Please Take the 2020 EA Survey

Thanks for organising this! I think the survey is very valuable! I was wondering if you could say more on why you "will not be making an anonymised data set available to the community"? That initially seems to me like an interesting and useful thing for community members to have, and I was wondering whether it was just a lack of resources or the difficulty involved that meant you weren't doing this anymore.

Thanks!

Roughly speaking, there seem to be two main benefits and two main costs to making an anonymised dataset public. The main costs: i) time and ii) people being turned off of the EA Survey due to believing that their data will be available and identifiable. The main benefits: iii) the community being able to access information (which isn't included in our public reports) and iv) transparency and validation from people being able to replicate our results.

Unfortunately, the dataset is so heavily anonymised in order to try to reduce cost (ii) (while simult... (read more)

Thomas Kwa (1y, +2): To add to that, if there are concerns about data being de-anonymized, there are statistical techniques [https://www.census.gov/about/policies/privacy/statistical_safeguards.html] to mitigate it.
Evidence, cluelessness, and the long term - Hilary Greaves

On October 25th, 2020, Hilary Greaves gave a talk on ‘Cluelessness in effective altruism’  at the EA Student Summit 2020. I found the talk so valuable that I wanted to transcribe it.

I made the transcript with the help of http://trint.com/, an AI speech-to-text platform which I highly recommend. Thank you to Julia Karbing for help with editing.

BrianTan (1y, +3): Thanks for linking trint.com - I hadn't heard of it before. Have you tried otter.ai though? I think it could be as good as Trint, and Otter is cheaper compared to Trint. They even have a free version that works quite well.
EARadio - more EA podcasts!

I wanted to share EARadio on the Forum again. Although this project has been going for a long time, I think a lot of people probably aren't aware of its existence.

I know a lot of my EA friends often want to watch EA Global lectures but never get round to actually doing so. I think EARadio provides a great service in allowing people to consume this content in an easy and accessible way.

CristinaSchmidtIbáñez (1y, +3): +1 here. I wasn't aware of this at all. Thank you for posting this again! Same situation as Rowan's. I'll finally get through some talks I've been putting off for a while :)
Rowan_Stanley (1y, +4): Oh, awesome! Thanks for [re]posting. I'm basically the kind of person you describe: hadn't heard of the project, have been wanting to get through EAG lectures but haven't made the time, like to listen to podcasts while exercising, doing housework, etc. This will be a great addition to my feed :)
vaidehi_agarwalla (1y, +4): Thanks for sharing, I wasn't aware of this! Looks really great :)
EdoArad (1y, +5): Thanks for sharing it again! There is a lot of great content there :)
Making More Sequences

I also really value sequences! I’m working on an (extremely janky) web app to read sequences of content, as a way to learn web development.

I hope to eventually make it into a nice app that people can use to easily make their own sequences of EA content from around the web, and for people to discover and read this content.

You can check it out here (doesn't work great on mobile yet, unfort) : https://react-sequences.web.app/

I’m keen to work on it more once I stop having RSI 😅, so if people do have comments and feedback, I’d love to hear them.

I Want To Do Good - an EA puppet mini-musical!

I just saw this now and loved it, super excited for more content in the future!

Correlations Between Cause Prioritization and the Big Five Personality Traits

I believe you can edit the size of images on old posts by dragging their bottom border down while in edit mode.

David_Moss (1y, +2): Thanks!
Prospecting for Gold (Owen Cotton-Barratt)

I've now changed that section to:

"On the right is a factorisation which is mathematically trivial and looks like it just makes things more complicated. I've taken the expression on the left and added in a load of things which cancel each other out. But I hope I can justify this decomposition by virtue of it being easier to interpret and measure. So I'm going to present the case for why I think it is."

Do let me know if you'd prefer something different to that :)
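(For readers who haven't seen the talk: as I understand it, the factorisation in question is the familiar scale/tractability/neglectedness decomposition, roughly the following, where the inserted terms cancel out to give back the left-hand side.)

```latex
\frac{\text{good done}}{\text{extra resources}}
  = \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{scale / importance}}
  \times
  \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
  \times
  \underbrace{\frac{\text{\% increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}
```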

Prospecting for Gold (Owen Cotton-Barratt)

This is a heavily edited transcript of the popular talk "Prospecting for Gold". We created this edited version because we found it hard to follow the transcripts provided by CEA and thought there could be some value in condensing, clarifying, and cleaning up the transcript.

You can also read a transcript of Amanda Askell's talk 'The Moral Value of Information' here: https://forum.effectivealtruism.org/posts/Kb66mhLuHiTByP6mk/the-moral-value-of-information-edited-transcript-1

Not a cookbook, but you might find http://ethical.diet/ interesting. It shows 'How many hours did animals have to live on factory farms to produce various food products?'

Defining Effective Altruism

Is there a way to read the finalised (instead of penultimate) article without purchasing the book? Perhaps, Will, you have a PDF copy you own?

Center for Global Development: The UK as an Effective Altruist

The title of the CGD article is "The UK as an Effective Altruist"

Aaron Gertler (1y, +4): Well, that explains the original title! I still think a title change would be helpful; I had to read this title a few times to make sure I hadn't missed a word.
Linch (1y, +8): I concur with Aaron that the title was insufficiently descriptive for this forum, even if the ultimate failing was due to CGD rather than Dale. "Center for Global Development: The UK as an Effective Altruist" would be reasonable.
New member--essential reading and unwritten rules?

Welcome to the community! And congratulations on your achievements so far!

It could be worth learning study skills so that you can do better in your degree and/or get your coursework done in less time, freeing up your time to learn other things, explore EA, or just have fun.

I was surprised when coming to university how much people’s study skills differed, and I don’t think it’s unreasonable to say that you can free up weeks (months?) of your time and save yourself a lot of stress through good study skills.

I’d recommend the Coursera course called learning how t

... (read more)
Lumpyproletariat (1y, +3): Thank you for the warm welcome and the advice--I just made an account on Coursera's website and am enrolled in the course you recommended. (On the presumption that the certificate isn't worth the ink I'd have to print it with, I opted not to pay for the course--if I exist mortal error plz do tell.) I've already read what 80,000 Hours had to say about being successful--applying it, now that will be the truer test.
Differential technological development

Indeed. Although there is an upper limit still, since there surely is some limit to how much value we can extract from a resource and there are only a finite number of atoms in the universe.
