All of CalebW's Comments + Replies

From the discord: "Manifold can provide medium-term loans to users with larger invested balances to donate to charity now provided they agree to not exit their markets in a disorderly fashion or engage in any other financial shenanigans (interpreted very broadly). Feel free to DM for more details on your particular case."

I DM'd yesterday; today I received a mana loan for my invested amount, for immediate donation, due for repayment Jan 2, 202, with a requirement to not sell out of large positions before May.

There's now a Google form: https://forms.gle/XjegTMHf7oZVdLZF7

A stray observation from reading Scott Alexander's post on his 2023 forecasting competition:

Scott singles out some forecasters who had particularly strong performance both this year and last year (he notes that being near the very top in one year seems noisy, with a significant role for luck), or otherwise seem likely to have strong signals of genuine predictive outperformance. These are:
- Samotsvety
- Metaculus
- possibly Peter Wildeford 
- possibly Ezra Karger (Research Director at FRI).

I note that the first 3 above all have higher AI catastrophic/exti... (read more)

Post links to Google Docs as quick takes if writing a full post feels like too high a bar?

I haven't thought about this a lot, but I don't see big tech companies working with existing frontier AI players as necessarily a bad thing for race dynamics (compared to the counterfactual). It seems better than them funding or poaching talent to create a viable competitor that may not care as much about risk - I'd guess the question is how likely they'd be to succeed at that (given that Amazon is not exactly at the frontier now)?

Agree this seems bad. Without commenting on whether this would still be bad, here's one possible series of events/framing that strikes me as less bad:
- Org: We're hiring a temporary contractor and opening this up to international applicants
- Applicant: Gets the contract
- Applicant: Can I use your office as a working space during periods I'm in the states?
- Org: Sure

This maybe then just seems like the sort of thing the org and applicant would want to have good legal advice on (I presume the applicant would in fact look for a B1/B2 visa that allows business during their trip rather than just tourism)

For completeness, here's what OpenAI says in its "Governance of superintelligence" post:

Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea coul

... (read more)
1
blueberry
11mo
It's interesting how OpenAI basically concedes that it's a fruitless effort further down in the very same post: It's not hard to imagine compute eventually becoming cheap and fast enough to train GPT4+ models on high-end consumer computers. How does one limit homebrewed training runs without limiting capabilities that are also used for non-training purposes?
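For a rough sense of the gap being pointed at here, a back-of-envelope sketch (all figures are loose public estimates and assumptions, not sourced numbers):

```python
# Rough arithmetic: how far is one high-end consumer GPU from a
# frontier-scale training run? All numbers are loose assumptions.

CONSUMER_GPU_FLOPS = 2e14    # assumed ~200 TFLOP/s in low precision
UTILIZATION = 0.4            # assumed fraction of peak actually sustained
FRONTIER_RUN_FLOP = 2e25     # commonly cited rough estimate for a GPT-4-scale run

seconds = FRONTIER_RUN_FLOP / (CONSUMER_GPU_FLOPS * UTILIZATION)
years = seconds / (3600 * 24 * 365)
print(f"~{years:,.0f} years on a single consumer card")  # on the order of thousands of years
```

Under these assumptions the gap is still several orders of magnitude, so the concern is less about today's consumer hardware and more about how quickly hardware and algorithmic progress would erode any fixed compute threshold.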

If there was someone well-trusted by the community (in or outside of it) you trusted not to doxx you, you might ask if they'd be willing to endorse a non-specific version of events as accurate.  I do accept there's an irony in suggesting this given your bad experience with something similar previously!

1
Throwaway012723
1y
Thank you for the suggestion. I think there are very few people in my life with whom I have mutual unconditional trust. I'm not sure if it's a term commonly used but I consider it akin to unconditional love.
Answer by CalebW, Feb 02, 2023

This may or may not be relevant to your situation, but I'd be more willing to accept non-specific claims at face value if a trusted third party was vouching for that interpretation.

Tl;dr - my (potentially flawed or misguided) attempt at a comment that provides my impression of Catherine as a particularly trustworthy and helpful person, with appropriate caveats and sensitivity to Throwaway's allegation.

Note: I haven't written this sort of comment before, and appreciate that it would be easy for this sort of comment to contribute to a chilling effect on important allegations of wrongdoing coming to light, so I would welcome feedback on this comment or any norms that would have been useful for me to adhere to in making it or deciding... (read more)

I appreciate the caveats in this comment.

I hope to caveat my forthcoming post with a similar level of thoughtfulness.

Any updates around the likelihood/timing  of a discussion course? :) 

[Update 26 Jul '22: the website should be operational again. Sorry again to those inconvenienced!]

Hello, 
I've recently taken over monitoring the donation swaps. There have historically been a handful of offers listed each month, but it looks like the system broke at some point over the past few weeks - thanks to Oscar below for emailing to bring this to our attention. I'm sorry for the inconvenience to anyone who has been trying to use the service, and I will hopefully provide a further update in the not-too-distant future!

1
OscarD
2y
Great, thanks Caleb!
1
Lorenzo Buonanno
2y
Very happy to hear the project is still active! Thank you so much for picking this up!

Thanks for organising :)

When do you expect decisions on applications will be made by?

Thanks for writing this - it seems worthwhile to be strategic about potential "value drift", and this list is definitely useful in that regard.

I have the tentative hypothesis that a framing with slightly more self-loyalty would be preferable.

In the vein of Denise_Melchin's comment on Joey's post, I believe most people who appear to have value "drifted" will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously; as per The Elephant in ... (read more)

1
DM
6y
Thanks for your comment! I agree with everything you have said and like the framing you suggest. This is what I tried to address, though you have expressed it more clearly than I could! As some others have pointed out as well, it might make sense to differentiate between 'value drift' (i.e. change of internal motivation) and 'lifestyle drift' (i.e. change of external factors that make implementation of values more difficult). I acknowledge that, as Denise's comment points out, the term 'value drift' is not ideal in the way that Joey and I used it. However, it seems reasonable to me to be concerned about, and to attempt to avoid, both value and lifestyle drift, and in many cases it will be hard to draw a line between the two (as changes in lifestyle likely precipitate changes in values, and the other way around).

In the same vein as this comment and its replies: I'm disposed to framing the three as expansions of the "moral circle". See, for example: https://www.effectivealtruism.org/articles/three-heuristics-for-finding-cause-x/

I'm weakly confident that EA thought leaders who would seriously consider the implications of ideas like quantum immortality generally take a less mystical, more reductionist view of quantum mechanics, consciousness and personal identity, along the lines of the following:

It seems that the numbers in the top priority paragraph don't match up with the chart

1
Tee
7y
09/05/17 Update: Graph 1 (top priority) has been updated again
0
Peter Wildeford
7y
This is true and will be fixed. Sorry.

I'll throw in Bostrom's 'Crucial Considerations and Wise Philanthropy', on "considerations that radically change the expected value of pursuing some high-level subgoal".

A thought: EA funds could be well-suited for inclusion in wills, given that they're somewhat robust to changes in the charity effectiveness landscape

Second, we should generally focus safety research today on fast takeoff scenarios. Since there will be much less safety work in total in these scenarios, extra work is likely to have a much larger marginal effect.

Does this assumption depend on how pessimistic/optimistic one is about our chances of achieving alignment in different takeoff scenarios, i.e. where on a curve something like this we would expect to be for a given takeoff scenario?

2
Owen Cotton-Barratt
7y
I think you get an adjustment from that, but that it should be modest. None of the arguments we have so far about how difficult we should expect the problem to be seem very robust, so I think it's appropriate to have a somewhat broad prior over possible difficulties.

I think the picture you link to is plausible if the horizontal axis is interpreted as a log scale. But this changes the calculation of marginal impact quite a lot, so that you probably get more marginal impact towards the left than in the middle of the curve. (I think it's conceivable to end up with well-founded beliefs that look like that curve on a linear scale, but that this requires (a) very good understanding of what the problem actually is, & (b) justified confidence that you have the correct understanding.)
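To illustrate the log-scale point with toy numbers (a minimal sketch of my own, assuming a logistic success curve; nothing here is specified in the comment above):

```python
import math

def p_success(effort, midpoint=100.0, k=2.0):
    """Toy model: probability of solving alignment as a logistic curve
    in log10(total safety effort); midpoint is the (assumed) effort level
    at which the chance reaches 50%."""
    x = math.log10(effort)
    return 1.0 / (1.0 + math.exp(-k * (x - math.log10(midpoint))))

def marginal(effort, delta=1e-3):
    """Approximate dP/d(effort): the value of one extra unit of safety work."""
    return (p_success(effort + delta) - p_success(effort)) / delta

for effort in [1, 10, 100, 1000]:
    print(f"effort={effort:>5}  P={p_success(effort):.2f}  marginal={marginal(effort):.5f}")
```

With these toy numbers, the marginal value of an extra unit of work is highest at the low-effort end and falls off toward the middle and right of the curve, which is the "more marginal impact towards the left" behaviour described above once the horizontal axis is read as a log scale.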

Thanks Paul and Carl for getting this off the ground!

I unfortunately haven't been able to arrange to contribute tax-deductibly in time (I am outside of the US), but for anyone considering running future lotteries:

I think this is a great idea, and intend to contribute my annual donations - currently in the high 4-figures - through donation lotteries such as this if they are available in the future.

Does anyone else think that a column structure along the lines of:

Name | Contact | Your Country | Charities that are tax-deductible in your country | Charities you want to donate to | Countries where these charities are tax-deductible

would be more comprehensible?

I had to do more than a quick glance to understand the current structure, which worries me a little bit, but it might just be me.
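As an entirely hypothetical illustration of the proposed column layout (names and choices invented for the example; the MIRI/Australia pairing just echoes the reply below):

```python
# One hypothetical row under the proposed column structure (all values invented).
example_row = {
    "Name": "Jane Example",
    "Contact": "jane@example.com",
    "Your Country": "Australia",
    "Charities that are tax-deductible in your country": ["Against Malaria Foundation"],
    "Charities you want to donate to": ["MIRI"],
    "Countries where these charities are tax-deductible": ["United States"],
}
print(example_row)
```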

1
AndyMorgan
7y
Yeah, I agree with this. Also, as a side note, I'd like to donate to MIRI in future but I'm currently based in Australia.

Michelle Hutchinson mentioned that Nick Beckstead plans to email her donation advice. Is it possible for others to receive this advice?

4
CarlShulman
7y
Beckstead gave his recommendation for individual donors on this GiveWell blog post.
0
CalebW
7y
This series of talks on the Effective Altruism movement at EA Global 2016:
- The Effective Altruism Ecosystem
- Embracing the Intellectual Challenge of Effective Altruism
- Improving the Effective Altruism Network

I think the message of SlateStarCodex's "Tuesday Shouldn't Change The Narrative" is particularly relevant to EAs - any large updates to one's beliefs about the world should have come before the election.

9
JesseClifton
7y
Agreed that large updates about things like the prevalence of regressive attitudes and the fragility of democracy should have been made before the election. But Trump's election itself has changed many EA-relevant parameters - international cooperation, x-risk, probability of animal welfare legislation, environmental policy, etc. So there may be room for substantial updates on the fact that Trump and a Republican Congress will be governing. That said, it's not immediately obvious to me how the marginal value of any EA effort has changed, and I worry about major updates being made out of a kneejerk reaction to the horribleness of someone like Trump being elected.

Has there been consideration of electoral reform with mind to proportionality as a worthwhile EA cause?

1
Fluttershy
9y
Thanks! I've never looked into the Brain Preservation Foundation, but since RomeoStevens' essay, which is linked to in the post you linked to above, mentions it as being potentially a better target of funding than SENS, I'll have to look into it sometime.

I feel like Joey's comment here is broadly applicable enough to warrant bringing it top level:

"I think part of the reason [meta-charity is] not publicized as much as say donating directly to GW charities is for marketing/PR reasons. e.g. Many people who are new to EA might be confused or turned off by the idea of a 100% overhead charity."

In addition to Charity Science, Giving What We Can also has this meta-charity logic ingrained: https://givingwhatwecan.org/impact

0
WilliamKiely
9y
Thanks.

I certainly agree with the general point that one must consider the experiential value of the life saved. However, I'm skeptical of presuming a log relationship between consumption and happiness, both for the reason you identified (definitional problems at low incomes) and because of issues around self-reporting as a measure of happiness, the Easterlin Paradox, and tentative data suggesting that much of the happiness from consumption may be about feeling richer than other people.
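For reference, here is a minimal sketch of what the log assumption at issue implies (my own illustration, not a model from the original post): under h(c) = a + b·ln(c), marginal happiness per dollar is b/c, so a dollar is worth roughly k times more to someone on 1/k the consumption.

```python
import math

def happiness(consumption, a=0.0, b=1.0):
    """Toy log-utility model: h = a + b * ln(consumption)."""
    return a + b * math.log(consumption)

low, high = 500, 50_000  # assumed annual consumption levels, e.g. in USD

# Doubling consumption adds the same happiness at any income level (ln 2)...
print(happiness(2 * low) - happiness(low))    # ~0.693
print(happiness(2 * high) - happiness(high))  # ~0.693

# ...and an extra dollar is worth ~100x more at $500/yr than at $50,000/yr.
print((happiness(low + 1) - happiness(low)) /
      (happiness(high + 1) - happiness(high)))  # ~100
```

If the concerns above (self-report problems, the Easterlin Paradox, relative-income effects) break the log fit at low incomes, these ratios - and with them the estimated value of income gains to the very poor - could shift substantially.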