All of Robi Rahman's Comments + Replies

Beef cattle are not that carbon-intensive. If you're concerned about the climate, the main problem with cattle is their methane emissions.

If you eat them, your emissions, combined with other people's emissions, are going to cause a huge amount of both human and non-human suffering.

If I eat beef, my emissions combined with other people's emissions do some amount of harm. If I don't eat beef, other people's emissions do approximately the same amount of harm as there would have been if I had eaten it. The marginal harm from my food-based carbon emissions is really small compared to the marginal harm from my food-based contribution to animal suffering.

7
Lorenzo Buonanno
13d
It was marked community by the author; we can remove the tag if he wants us to. I agree it's more a request for funding than something about the EA community. AFAIK users can't remove the community tag from posts because of worries about misuse.

You make some great points. If you think humanity is so immoral that a lifeless universe is better than one populated by humans, then yes, it would indeed be bad to colonize Mars, from that perspective.

I would be pretty horrified at humans taking fish aquaculture with us to Mars, in a manner as inhumane as current fish farming. However, I opened the Deep Space Food Challenge link, and it's more like what I expected: the winning entries are all plants or cellular manufacturing. (The Impact Canada page you linked to is broken.)

If we don't invent any morally ... (read more)

5
BrianK
13d
Thanks, you too! Perhaps you are right re: wild animal suffering. I’ll add that insect farming is relevant too: https://www.deepspacefoodchallenge.org/phase1winners.

Interesting argument. However, I don't think this point about poverty is right.

The problem is that [optimistic longtermism is] based on the assumption that life is an inherently good thing, and looking at the state of our world, I don’t think that’s something we can count on. Right now, it’s estimated that nearly a billion people live in extreme poverty, subsisting on less than $2.15 per day.

Poverty is arguably a relic of preindustrial society in a state of nature, and is being eliminated as technological progress raises standards of living. If we were to ... (read more)

Thanks for your engagement.

That’s an interesting point with respect to poverty. Intuitively I don’t see any reason why there won’t be famine and war and poverty in the galaxies, as there is and presumably will continue to be on Earth, but I’ll think on it more. I really doubt folks out there will live in peace, provided they remain human. One could articulate all sorts of hellscapes by looking at what it is like for many to live on Earth.

Humans by nature are immoral. For example, most members want to eat animals, and even if they know that it is wrong to e... (read more)

Shrimpify Mentoring? Shrimping What We Can? Future of Shrimp Institute?

Oh, and we can't forget about 1FTS: One for the Shrimp.

5
SofiaBalderson
22d
One for the Shrimp is one of my favourites!! 
3
AnonymousTurtle
24d
The Shrimp You Can Save

Actually, all EA orgs should just rename to "The Shrimps You Can Save"

I'm very disappointed that Rethink Priorities has chosen to rebrand as Rethink Shrimp. I really think we should have gone with Reshrimp Priorities. That said, I will accept the outcome, whatever is deemed to be most effective, and in any case redouble my efforts to forecast timelines to the shrimp singularity.

1
Constance Li
23d
Yeah.. I think I agree. We have thought and re-thought enough about shrimps and it is now the time to ACT! Will change it in the post.

I don't see Shapley values mentioned anywhere in your post. I think you've made a mistake in attributing the values of things multiple people have worked on, and these would help you fix that mistake.

5
Sam_Coggins
1mo
Wouldn't estimating Shapley values still miss a core insight of the post - that 'do-gooding' efforts are ultimately co-dependent, not simply additive?

EXAMPLE: We can estimate the Shapley values for the relative contributions of different pieces of wood, matches, and newspaper to a fire. These estimated Shapley values might indicate that the biggest piece of wood contributed the most to the fire, but miss three critical details:
1. The contribution of matches and newspaper was 'small' but essential. This didn't come up in our estimated Shapley values because our dataset didn't include instances where there were no matches or no newspaper.
2. Kindling was also an essential contributor but was not included in our calculations.
3. The accessibility of fire inputs had its own interacting inputs, e.g. a trusting social and economic system that enabled us to access the fire inputs.
We also make the high-risk assumption that the fire would be used and experienced beneficially.

INTERPRETED IMPLICATION: estimated Shapley values still miss, at least in part, that outcomes from our efforts are co-dependent. We therefore still mislead ourselves by attempting to frame EA as an independent exercise? (I'm not confident on this and would be keen to take on critiques)
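For readers who want to see the mechanics being debated above, here is a minimal sketch (my own illustration, not part of either comment) of computing Shapley values for a toy three-input "fire" game. The input names, the all-or-nothing characteristic function, and the payoff of 10 are assumptions chosen purely for illustration.

```python
from itertools import permutations

# Toy characteristic function for the "fire" example above (illustrative
# assumption: a fire needs wood AND matches AND newspaper; value 10 = a fire).
def value(coalition):
    return 10.0 if {"wood", "matches", "newspaper"} <= set(coalition) else 0.0

players = ["wood", "matches", "newspaper"]

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for p in order:
            before = value(coalition)
            coalition.append(p)
            totals[p] += value(coalition) - before
    return {p: t / len(orders) for p, t in totals.items()}

print(shapley_values(players, value))
# -> {'wood': 3.33..., 'matches': 3.33..., 'newspaper': 3.33...}
# In this all-or-nothing game the 'small but essential' matches get equal
# credit; the commenter's worry applies when the data never contains
# coalitions missing an input, so its marginal contribution is never observed.
```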

I don't really see anything in the article to support the headline claim, and the anonymous sources don't actually work at NIST, do they?

6
SiebeRozendal
2mo
I don't know, threatening to resign is a pretty concrete thing and I don't find "revolt" such an exaggeration. You can doubt the sources and wish for more concrete evidence (a letter?), but I'd still put >50% that it's broadly correct. EDIT: okay, there's a clear ambiguity about how many people are threatening to resign; if it's only 1 or 2, the headline is clearly misleading.
6
Phib
2mo
Agreed, the evidence is solely, "according to at least two sources with direct knowledge of the situation, who asked to remain anonymous."

Rather than farmers investing more profits from growing plants into animal farming, I think the main avenue of harm is that animal feed is an input to meat production, so if the supply of feed increases, production of meat would increase.

Under preference utilitarianism, it doesn't necessarily matter whether AIs are conscious.

I'm guessing preference utilitarians would typically say that only the preferences of conscious entities matter. I doubt any of them would care about satisfying an electron's "preference" to be near protons rather than ionized.

2
Matthew_Barnett
3mo
Perhaps. I don't know what most preference utilitarians believe. Are you familiar with Brian Tomasik? (He's written about suffering of fundamental particles, and also defended preference utilitarianism.)

So you think your influence on future voting behavior is more impactful than your effect on the election you vote in?

2
RedStateBlueState
3mo
That and/or acausal decision theory is at play for this current election

Gina and I eventually decided that the data collection process was too time-consuming, and we stopped partway through.

Josh You and I wrote a Python script that searches Google for a list of keywords, saves the text of the web pages in the search results, and shows them to GPT, asking it questions about them from a prompt. This would quickly automate the rest of your data collection if you already have the pledge signers in a list. Email me if you want a copy.
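For anyone curious what such a pipeline could look like, here's a rough sketch (not the actual script we wrote; the library choices, model name, and helper functions are assumptions for illustration):

```python
# Sketch of a search -> scrape -> ask-GPT pipeline (illustrative only).
import requests
from bs4 import BeautifulSoup
from googlesearch import search   # pip install googlesearch-python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def page_text(url: str) -> str:
    """Download a page and strip it down to visible text."""
    html = requests.get(url, timeout=30).text
    return BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

def ask_about(name: str, question: str, num_results: int = 3) -> str:
    """Search for a pledge signer, then ask the model a question about them."""
    texts = []
    for url in search(f'"{name}" Giving Pledge', num_results=num_results):
        try:
            texts.append(page_text(url)[:5000])  # truncate to keep the prompt small
        except requests.RequestException:
            continue
    prompt = f"{question}\n\nSources:\n" + "\n---\n".join(texts)
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# Example usage:
# print(ask_about("Warren Buffett", "How much has this person pledged to donate?"))
```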

8
Lorenzo Buonanno
3mo
You can open the web console on the linked page https://givingpledge.org/pledgerlist and type $$(".pledger-list-item").map(e => e.textContent);  (or you can just copy-paste the list directly) [    "Bill Ackman and Neri Oxman",    "Tegan and Brian Acton",    "Margaret and Sylvan Adams",    "Anil Agarwal",    "Leonard H. Ainsworth",    "Paul G. Allen (d. 2018)",    "HRH Prince Alwaleed Bin Talal Bin Abdulaziz AlSaud",    "Brian Armstrong",    "Sue Ann Arnall",    "Laura and John Arnold",    "Marcel Arsenault and Cynda Collins Arsenault",    "Lord Ashcroft KCMG PC",    "Jon and Helaine Ayers",    "Stewart and Sandy Bainum",    "Sarah and Rich Barton",    "Lynne and Marc Benioff",    "Nicolas Berggruen",    "Manoj Bhargava",    "Aneel and Allison Bhusri",    "Sheikh Dr. Mohammed Bin Musallam Bin Ham Al-Ameri",    "Steve Bing (d. 2020)",    "Sara Blakely",    "Arthur M. Blank",    "Nathan and Elizabeth Blecharczyk",    "Michael R. Bloomberg",    "David G. Booth",    "Richard and Joan Branson",    "Eli (d. 2021) and Edythe Broad",    "Charles R. Bronfman",    "Edgar M. Bronfman (d. 2013)",    "Warren Buffett",    "Charles Butt",    "Garrett Camp",    "Steve and Jean Case",    "John Caudwell",    "Brian Chesky",    "Ron and Gayle Conway",    "Scott Cook and Signe Ostby",    "Lee and Toby Cooperman",    "Joe and Kelly Craft",    "Joyce and Bill Cummings",    "Ravenel B. Curry III",    "Benoit Dageville and Marie-Florence Dageville",    "Ray and Barbara Dalio",    "Jack and Laura Dangermond",    "John Paul DeJoria",    "Ben Delo",    "Mohammed Dewji",    "Barry Diller and Diane von Furstenberg",    "Ann and John Doerr",    "Dagmar Dolby",    "DONG Fangjun",    "Glenn and Eva Dubin",    "Anne Grete Eidsvig and Kjell Inge Røkke",    "Ric and Brenda Elias",    "Larry Ellison",    "Henry Engelhardt, CBE and Diane Briere de L'Isle-Engelhardt, OBE",    "Candy and Charlie Ergen",    "Judy Faulkner",    "Charles F. Feeney (d. 2023)",    "Andrew and Nicola Forrest",    "Ted Forstman

The social value of voting in elections is a topic where I've seen a lot of good arguments on both sides, and it remains unresolved, with substantial implications for how I should behave. I would really love to see a debate between Holden Karnofsky, Eric Neyman, and Toby Ord against Chris Freiman and Jacob Falkovich.

Context for people who don't follow the authors:

"Why Swing-State Voting is not Effective Altruism" by Jason Brennan and Chris Freiman: https://onlinelibrary.wiley.com/doi/abs/10.1111/jopp.12273

Eric Neyman on voting:

https://ericneyman.word... (read more)

4
RedStateBlueState
3mo
I will say that I think most of this stuff is really just dancing around the fundamental issue, which is that the expected value of your single vote really isn't the best way of thinking about it. Your vote "influences" other people's votes, either through acausal decision theory or because of norms that build up (elections are repeated games, after all!).

I don't think this is empirically true. US speed limits are typically set lower than the safest driving speeds for the roads, so micromurders from speeding are often negative in areas without pedestrians.

6
Jeff Kaufman
4mo
I'm not convinced the social cost is low, and I'm not convinced for shoplifting either; hence the 'arguably'. I think insurance fraud, though, is often quite a lot like shoplifting? You're getting something for free from a large company, they have budgeted based on a non-zero amount of it, the costs are spread across all their customers, risk of death to anyone is very low, etc.

I agree; however, isn't there still the danger that, as scientific research is augmented by AI, nanotechnology will become more practical? The steelmanned case for nanotech x-risk would probably argue that various things that are intractable for us to do now have no theoretical reason why they couldn't be done if we were slightly better at adjacent techniques.

they were trying to do was place two carbon atoms onto a carbon surface, and they failed, as they didn't have the means to reliably image diamond surfaces

Has this limitation been ameliorated by advancements in imaging? I used to work in materials science and don't anymore, but my understanding is that scientists have very recently refined needles to one-atom width at the point, which should improve the resolution of scanning tunneling microscopy. Someone correct me if I'm wrong.

8
titotal
4mo
I asked Professor Moriarty about the imaging thing; his response was: So it wasn't about the imaging tech in general, but specifically the difficulty of working with diamond. It's possible you could do something with another material, but diamondoid was chosen for a reason, as it is much stronger and more stable than other materials, which I think could be a requirement for atomically precise atom placement. I haven't seen any uptick in citations for Drexler papers recently, so I don't think any advancements have been made on this front.

a prosecutor showing smiling photos of a couple on vacation to argue that he couldn’t have possibly murdered her

I think you meant a defense attorney, not a prosecutor.

  1. Kat is responding to other questions in this thread, but not ones about the "Sharing Information on Ben Pace" section.
  2. It's not clear that the anecdotes are from someone outside of Nonlinear whose bad experience with Ben Pace was something other than Ben publishing the original post about Nonlinear.
  3. It's not clear whether Kat wants people to think that it's about some unmotivated third party, or if it's supposed to be obvious that it's Kat writing her own experience in third person. She did write in the post that you shouldn't update on it, but maybe she wants it t
... (read more)

It's not clear the anecdotes in that section are real and not made-up. Kat is dodging questions about it, so for all we know, it could be the case that everyone referenced in that section was a Nonlinear employee who feels bad due to Ben's post. Some people elsewhere in this thread theorized that it's Kat describing herself, and strangely but conspicuously, she hasn't denied it.

2
David M
4mo
Edit: I misread what you were saying. I thought you were saying 'Kat has dodged questions about whether it was true', and 'It's not clear the anecdotes are being presented as real'. Actually, Kat said it was true.
0
Rafael Harth
4mo
If it is, in fact, based on someone from Nonlinear, then I'd agree that the section is bad. At that point, it would no longer be a valid example of "look, you can do this to anyone".

David probably meant "overall character of Nonlinear management" there. And in that case you might not interview the managers themselves, although you'd probably want to interview other employees to see if they were treated like Alice and Chloe.

Can you just confirm that it's something someone else told you, and not referring to yourself in third person?

Phrasings like 
"if $58,000 of all inclusive world travel plus $1000 a month stipend is a $70,000 salary"
for what is evidently a fully paid, luxurious work & travel experience... tanks the quality of the comment.

Huh? No, that is a succinct and accurate description of a disputed interpretation, and I think Nonlinear's interpretation is wrong there. They keep saying in their defense that they paid Alice (the equivalent of) $72,000 when they didn't - it's really not the same thing at all if 80% of it is comped flights, food, and hotels. At least for me, the amount of cash that would be an equivalent value to Alice's compensation package is something like $30-40,000.

I’m less interested in “debating whether a person in a villa in a tropical paradise got a vegan burger delivered fast enough” or “whether it’s appropriate for your boss to ask you to pick up their ADHD medication from a Mexican pharmacy” or “if $58,000 of all inclusive world travel plus $1000 a month stipend is a $70,000 salary”? Than in interrogating whether EA wouldn’t be better off with more “boring” organisations

Though the degree of un-professionalism displayed by all parties involved in this saga is startling, I actually think EA has a great mix of "b... (read more)

I don't follow. Can you explain how Will Aldred's comment was preposterously naive?

I think it's not actually accurate to say that

The vast majority of what they gave is disputing the evidence

as it's constantly interspersed with stuff like how great it is to work in a hot tub.

  • [Alice] chose to pay herself an annualized ~$72,000 per year - more than anyone else at the org, and far more than the ~minimum wage she earned in previous jobs. 
  • This is more than most people make at OpenPhil, according to Glassdoor.

This seems unlikely - these numbers on Glassdoor are way lower than I'd expect for most of these job titles. Can anyone from OP corroborate?

The Glassdoor numbers are outdated. We share salary information in our job postings; you can see examples here ($84K/year plus a $12k 401k contribution for an Operations Assistant) and here (a variety of roles, almost all of which start at $100k or more per year — search "compensation:" to see details).

-8
Kat Woods
4mo

I am confident many of these salaries are inaccurate. I don't know the operation-jobs pay-scales, since I've interfaced more with the grantmakers and research associates, but I would be very surprised if these are the current numbers.

When will we learn? I feel that we haven't taken seriously the lessons from SBF given what happened at OpenAI and the split in the community concerning support for Altman and his crazy projects.

Huh? What's the lesson from FTX that would have improved the OpenAI situation?

-1
Vaipan
5mo
Don't trust loose-cannon-y individuals? Don't revere a single individual and trust him with deciding the fate of such an important org?

What are some EA/LW/etc coworking spaces that could accommodate ~10 people for ~5 days? I'm aware of Constellation and Lighthaven (Berkeley, CA), HAIST and MAIA (Cambridge, MA), Wytham Abbey and Trajan House (Oxford, UK), CEEALAR (Blackpool, UK), LEAH and LISA (London, UK), Epistea and Fixed Point (Prague, Czechia). Are there any others?

3
James Herbert
5mo
EA Netherlands' co-working office in Amsterdam

What are the stringent and permissive criteria for judging that someone has heard of EA?

8
David_Moss
6mo
The full process is described in our earlier post, and included a variety of other checks as well. But, in brief, the "stringent" and "permissive" criteria refer to respondents' open comment explanations of what they understand "effective altruism" to mean, and to whether they either displayed clear familiarity with effective altruism, such that it would be very unlikely someone would give that response if they were not genuinely familiar with it (e.g. by referring to using evidence and reason to maximise the amount of good done with your donations or career, or by referring to specific EA figures, books, orgs, events, etc.), or whether it was merely probable based on their comment that they had heard of effective altruism (e.g. because the responses were more vague or less specific).

They keep saying they're working on a response. It's probably around 500 pages by now.

If you're trying to maximize computational efficiency, instead of building a Dyson sphere, shouldn't you drop the sun into a black hole and harvest the Hawking radiation?

2
Vasco Grilo
6mo
Hi Robi, For reference, Anders Sandberg discussed that on The 80,000 Hours Podcast (emphasis mine): William, I am guessing you would like Anders' episodes! You can find them searching for "Anders Sandberg" here.

Upvoted your post because you made some good points, but I think your analogy between human cloning and AI training is totally wrong.

Take for example, human reproductive cloning. This is so morally abhorrent that it is not being practised anywhere in the world. There is no black market in need of a global police state to shut it down. AGI research could become equally uncool once the danger, and loss of sovereignty, it represents is sufficiently well appreciated.

There is no black market in human cloning, and no police state trying to stop it, because n... (read more)

2
Greg_Colbourn
6mo
Thanks. I also address the "get rich" point though! People can't get rich from AGI because they lose control of it (/the world ends) before they get rich. AGI is not that useful either, because it's uncontrollable and has negative externalities that will come back and swamp any hoped-for benefits, even for the producer (i.e. x-risk).

Can you name some of the red flags to watch for? I'd also be interested in hearing who some of the bad actors are (perhaps in a DM if you don't want them to know they've been spotted).

This doesn't answer your question, but: I've heard several people opine that "fiscal sponsorship" is a really bad name for what it entails. I work at Epoch, which is a fiscal sponsee of Rethink Priorities (and yes, RP uses the word "sponsee" for us and all their sponsees). My understanding is that we (Epoch) pay some kind of fee to RP (annual? maybe a percentage of our budget? idk), and in return, RP's HR people handle our HR stuff and some of their ops people spend some time doing ops work for us. This is almost the complete opposite of being "fiscally sp... (read more)

2
merilalama
6mo
Hey Robi! Yeah, I agree fiscal sponsorship can be a misleading term, since "sponsor" suggests someone who provides money. In the case of fiscal sponsorship, what the sponsor provides is tax-exempt status. I'd be somewhat reticent to use another term because this one is widely used in the nonprofit world. From Wikipedia: I do think the EA community could use a bit more clarity around what fiscal sponsorship is, though. Maybe we at RP will write some posts about this soon. I should also note that fiscal sponsors often don't provide operational support, as RP does for Epoch and other fiscally sponsored projects, so that's not what the term "fiscal sponsorship" primarily refers to. Outside of EA, I think it's more commonly just a way for non-profit projects to accept tax-exempt donations.
2
Harry Luk
7mo
Thank you for your comment. Yes, just want to confirm the costs involved: CE's handbook page 350 estimates costs as well: A couple of reasons why the cost could be worth it are:

We just finished hiring a data analyst for October. It's possible that we'll hire another candidate in the future, but the position is not currently taking applications.

I don't think this speaks badly to their skill level and certainly not their potential, just that they start out in a really unfair circumstance, with a head filled with a bunch of bullshit that just needs to be thrown out as cleanly as possible, and Mearsheimer is a great way to do that. 

I'm out of the loop; what's the bullshit from high school civics class that needs to be thrown out of my head, and why is Mearsheimer unbalanced but also a good starting point?

Do you have examples of laws EA orgs might ask their employees to break?

Edited to clarify that my experiences were all with the same organization.

Some personal examples:

I worked for an EA-adjacent organization and was repeatedly asked, and witnessed co-workers being asked, to use campaign donation data to solicit people for political and charitable donations. This is illegal[1]. My employer openly stated they knew it was illegal, but said that it was fine because "everyone does it and we need the money". I was also asked, and witnessed other people being told, to falsify financial reports to funders to make it look like we had... (read more)

- Voluntary human challenge trials
- Run a real money prediction market for US citizens
- Random compliance stuff that startups don't always bother with: GDPR, purchased mailing lists, D&I training in California, ...

Here are some illegal (or gray-legal) things that I'd consider effectively altruistic, though I predict no "EA" org will ever do them:
- Produce medicine without a patent
- Pill-mill prescription-as-a-service for certain medications
- Embryo selection or human genome editing for intelligence
- Forge college degrees
- Sell organs
- Sex work earn-to-give
- Helping illegal immigration

4
Joseph Lemien
7mo
Aside from animal liberation-style direct action activities, the thing that most readily comes to mind is labor/employment law. Hypothetical example: an organization holds a team retreat in Mexico, during which employees who are not citizens of Mexico, and who do not have the legal right to work there, do some work on their laptops. This seems very minor to me, and the risk of local tax authorities coming after people doing a few hours of work on laptops while hanging out in their Airbnb seems minuscule. But it is something that employees travelling internationally for a team retreat should probably be aware of. Similar issues would arise with a nomadic team that moves around from country to country. However, I don't view this as a major concern within the EA community. I'm responding more to the idea of "are there any laws EA orgs might ask their employees to break" than to "which of these are concerns worth bothering about."

I am very happy to clarify topics around nuclear, coming from the energy industry myself.

What part of the energy industry do you work in?

For example, with $2k, I expect I could hire a pub in central London for an evening (or maybe a whole day), with perhaps around 100 people attending. So that's $20 per person, or 1% of the cost of EAG. Would they get as much benefit from attending my event as attending EAG? No, but I'd bet they'd get more than 1% of the benefit. 

Actually, I'm not sure this is right. An evening has around 1/10 of the networking duration of a weekend, and number of connections are proportional to time spent networking and to number of participants squared. If this is 1/... (read more)

and number of connections are proportional to time spent networking and to number of participants squared

This seems wrong: 1-1s are gated by the fact that there are only so many 30-minute slots in a day. Doubling the number of attendees might allow someone to be slightly more selective in who they network with, but it doesn't let them do 4x as many meetings.
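A toy comparison (my own illustrative numbers, not from either comment) of how the two scaling models diverge once meeting slots become the binding constraint:

```python
# Two models for the number of one-on-one meetings at an event.
def quadratic_model(n):
    # "connections proportional to participants squared": every pair could meet
    return n * (n - 1) // 2

def slot_capped_model(n, slots=20):
    # each attendee holds at most `slots` meetings, two attendees per meeting
    return min(n * (n - 1) // 2, n * slots // 2)

for n in (100, 200):
    print(n, quadratic_model(n), slot_capped_model(n))
# 100 ->  4950 possible pairs vs 1000 actual meetings
# 200 -> 19900 possible pairs vs 2000 actual meetings: doubling attendees
#        doubles (not quadruples) meetings once slots bind
```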

Furthermore, a lion becomes more dangerous as it becomes more intelligent and capable, even if its terminal goal is not "maximize number of wildebeests eaten".

I think you're committing a typical mind fallacy if you think most people would benefit from reading HPMOR as much as you did.

There are lots of these! I saw several orgs that provide that service while I was working on EAGxNYC. I can look them up later and get back to you (but we're really busy with the conference coming up so it might take a while).

1
warrenjordan
6mo
Please share! Looking for resources as well.
1
Spencer Ericson
8mo
Amazing. Maybe I'll see you there!

I don't expect people would move into such an area for a tiny chance at receiving a payment of this size

This isn't something I expect either, and I think you may be slightly misunderstanding the mechanism by which moral hazard leads to bad outcomes.

When moral hazard hurts regular people who have their money in the banking system, it's not because a bank executive specifically tried to bankrupt their corporation to collect bailout funds from the government. Rather, it is the toxic incentive structure caused by privatized payoffs and socialized losses. These... (read more)

3
Jason
9mo
I don't think we disagree much if any -- my next point was that the people in these areas had decided to live there prior to and independently of GiveDirectly's action. To the extent they were engaging in a cost-benefit analysis, the current residents had already decided it was worth the flooding risk. At least in Florida, my understanding is that many of the more at-risk properties would not have been built at all (or at least re-built) but for the government subsidized insurance covering the bulk of losses with very high probability. Between the size of the GD payments, and the small fraction of flooded people who receive them, an analogous effect here seems unlikely to me.

It doesn't include people who have accumulated many years of experience in their fields.

2
calebp
9mo
I think if someone has accumulated many years they have also accumulated several years.
2
Zach Stein-Perlman
9mo
Idk, maybe. (But LTFF funds John Wentworth, Alex Turner, Vanessa Kosoy, Rob Long, and so forth, not to mention orgs that employ experienced people...)

I expect, with around 75% confidence, that rapid and unregulated growth and development of AI partners will become a huge blow to society, on a scale comparable to the blow from unregulated social media.

Isn't social media approximately not a problem at all, at least on the scale of other EA causes? There are some disputed findings that it may cause increased anxiety, depression, or suicide among some demographic groups (e.g. Jonathan Haidt claims it is responsible for mental illness in teenage girls and there is an ongoing scientific debate on this) but ev... (read more)

1
Roman Leventov
9mo
Maybe I'm Haidt- and Humane Tech-pilled, but to me, the widespread addiction of new generations to present-form social media is a massive problem which could contribute substantially to how the AI transition eventually plays out, because social media directly affects social cohesion, i.e., the ability of society to work out responses to big questions concerning AI (such as: should we build AGI at all? Should we try to build conscious AIs that are moral subjects? What should the post-scarcity economy look like?), and, indeed, the level of interest and engagement of people in these questions at all.

The "meh" attitude of the EA community towards the issues surrounding social media, digital addiction, and AI romance is still surprising to me; I still don't understand the underlying factors or deeply held disagreements which elicit such different responses to these issues in me (for example) and most EAs. Note that this is not because I'm a "conservative who doesn't understand new things": for example, I think much more favourably of AR and VR, I mostly agree with Chalmers' "Reality Plus", etc.

I agree with this, but by this token, most issues which EAs concern themselves with are nowhere near the scale of S-risks and other potential problems to do with future digital minds. Also, these problems only become relevant if we decide to build conscious AIs and there is no widespread legal and cultural opposition to that, which is a big "if".
3
Derek Shiller
9mo
I worry about the effect that AI friends and partners could have on values. It seems plausible that most people could come to have a good AI friend in the coming decades. Our AI friends might always be there for us. They might get us. They might be funny and insightful and eloquent. How would it play out if their opinions are crafted by tech companies, or the government, or even are reflections of what we want our friends to think? Maybe AI will develop fast enough and be powerful enough that it won't matter what individuals think or value, but I see reasons for concern potentially much greater than the individual harms of social media.

If the flooding is predictable, are we causing moral hazard by subsidizing farming in flood-prone areas?

6
Jason
9mo
As long as only a small percentage of people in flood-prone areas receive payments (and which villages receive them isn't too predictable), I wouldn't expect any meaningful moral-hazard effect. I don't expect people would move into such an area for a tiny chance at receiving a payment of this size at some point in the future. And the people who were already there before GiveDirectly started the program weren't motivated by the pilot program.

Scott's analogy is correct, in that the problem with the criticism is that the thing someone failed to predict was on a different topic. It's not reasonable to conclude that a climate scientist is bad at predicting the climate because they are bad at predicting mass shootings. If it were a thousand climate scientists predicting the climate a hundred years from now, and they all died in an earthquake yesterday, it's not reasonable to conclude that their climate models were wrong because they failed to predict something outside the scope of their models.

Hey Andreas! The conference capacity is around 500, we've admitted 404 people so far, and CEA have told me that usually 95% of accepted applicants register and 95% of registered attendees show up to the conference. Therefore we have 500 - 404*0.95*0.95 ≈ 135 spots left, so ideally we'd like to admit another 150 people.

Our acceptance rate is currently 80%, with another 10% waitlisted and 9% rejected.
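A quick check of the arithmetic above (figures copied from the comment):

```python
# Conference capacity arithmetic from the comment above.
capacity, admitted = 500, 404
register_rate = show_rate = 0.95
expected_attendees = admitted * register_rate * show_rate     # ~364.6
spots_left = capacity - expected_attendees                    # ~135.4
additional_admits = spots_left / (register_rate * show_rate)  # ~150 more admits fill them
print(round(expected_attendees), round(spots_left), round(additional_admits))
```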

1
Andreas P
9mo
Alright Robi, that's excellent! There's probably an increased rate of applications towards the last few weeks, so it seems to be on a successful trend.