All of defun 🔸's Comments + Replies

It seems like they haven't accepted any non-profit since 2022, around the time Garry Tan became YC's CEO. Garry has been very vocal against EA (especially AI Safety) on Twitter.

Would you consider applying to YC?

I imagine it only makes sense for certain types of non-profits, like OWID.

2
benrmatthews
YC does have a nonprofit program: https://www.ycombinator.com/nonprofits/
List of nonprofits YC has invested in, including 80,000 Hours: https://www.ycombinator.com/companies?nonprofit=true
4
Joey🔸
Yes, but after founding-to-give or EF-type programs.

"Dwarkesh's fundraiser to fight factory farming has now raised over $1M!" - https://x.com/Lewis_Bollard/status/1954962845994819719

Lewis Bollard on Dwarkesh's podcast: 

1
Vinoy
Fantastic podcast episode as usual. I learnt a lot.   
7
jackva
Really great resource! Lewis's vivid descriptions of how neglected this space is, and how that neglect left a lot of really high-impact, low-hanging fruit to pick, seemed like a super useful general EA messaging resource to me.
4
huw
All of the headlines are trying to run with the narrative that this is due to Trump pressure, but I can't see a clear mechanism for this. Does anyone have a good read on why he's changed his mind? (Recent events feel like: Buffett moving his money to his kids' foundations & retiring from Berkshire Hathaway, divorce)

I'd love to see Joey Savoie on Dwarkesh’s podcast. Can someone make it happen?

Joey with Spencer Greenberg: https://podcast.clearerthinking.org/episode/154/joey-savoie-should-you-become-a-charity-entrepreneur/

I think the pledge hits a sweet spot. It's not legally binding, so it's not really a lifelong decision, but being a public commitment helps push people to stick to their altruistic values.

https://www.givingwhatwecan.org/faq/is-a-giving-pledge-legally-binding 

Holden Karnofsky has joined Anthropic (LinkedIn profile). I haven't been able to find more information.

5
NickLaing
Let's hope he understands the power of the dark side.
7
Chris Leong
"Member of Technical Staff" - That's surprising. I assumed he was more interested in the policy angle.

Anthropic's Twitter account was hacked. It's "just" a social media account, but it raises some concerns.

Update: the post has just been deleted. They keep the updates on their status page: https://status.anthropic.com/

3
peterbarnett
Update from Anthropic: https://twitter.com/AnthropicAI/status/1869139895400399183  

Great initiative! 🙌🙌 

I've been hoping for something like this to exist. https://forum.effectivealtruism.org/posts/CK7pGbkzdojkFumX9/meta-charity-focused-on-earning-to-give

What do EtGers need?

I've been donating 20% of my income for a couple of years, and I'm planning to increase it to 30–40%. I'd love to meet like-minded people: ambitious EAs who are EtG.

Rahi Impact ... we expect to launch in May 2025.

Have you considered launching ASAP (e.g., next month)?

I could be wrong, but Rahi Impact seems quite similar to a tech startup, and one of the best pieces of advice for most startups is to launch as soon as possible.

Thank you so much for the context 💛

My raw thoughts (apologies for the low quality):

  • I think the target audience should be high earners who are already donating >=10% of their income. (Getting people from 0% to 10% would not be in the scope of the charity; I think GWWC is already doing a great job there.)
  • The two main goals:
    • Motivate people to increase their donations (from 10% to 20% is probably much easier than from 0% to 10%)
    • Help people significantly increase their earnings through networking, coaching, financial advice, tax optimization, etc.
  • One of the most v
... (read more)

Founding to Give is great but it's only for (potential) entrepreneurs.

Anthropic has just launched "computer use": "developers can direct Claude to use computers the way people do".

https://www.anthropic.com/news/3-5-models-and-computer-use

Ilya's Safe Superintelligence Inc. has raised $1B.

4
NickLaing
Maybe a silly question, but does "one shot" for safe AGI mean they aren't going to release models along the way and will only try to reach the superintelligence bar? I would have thought investors wouldn't have been into that... Or are they basically just like other AI companies and will release commercial products along the way but with a compelling pitch?
17
huw

I guess one thing worth noting here is that they raised from a16z, whose leaders are notoriously critical of AI safety. Not sure how they square that circle, but I doubt it involves their investors having changed their perspectives on that issue.

9
[anonymous]
Just in case anyone is reading this, I too would like a billion dollars.

Thanks for doing this, Ben!

Regarding the Founding to Give program:

  • Did you get many applicants?
  • What are their backgrounds?
  • What percentage of the selected candidates are technical?
  • What kind of profiles would you have liked to see more of?
3
Ben Williamson
I'll leave out the specific data on this, but we were pleased with the number and quality of applicants from our first recruitment round earlier this year. I'd say in general we've got a mix of more 'CEO' and 'CTO' type candidates - ones with significant experience in building startups and fundraising, and those with significant technical experience and skill. Possibly a bit of a skew to the former, so we're especially excited for applicants from a more technical side this time around.

By becoming a nonprofit entrepreneur you will build robust career capital and reach an impact equivalent to $338,000-414,000 donated* to the best charities in the world every year (e.g., GiveWell top charities)! If you do exceptionally well, we estimate that your impact can grow to $1M in annual counterfactual donations. This makes nonprofit entrepreneurship one of the most impactful jobs you could take!
*This calculation represents our most accurate estimate as of June 2023. It is based on assessing the average impact of charities we have launched and cons

... (read more)
6
Joey🔸
That number is across all cause areas; animal welfare, family planning, and global health funders have slightly higher EVs than other cause areas where we have recommended projects.

John Schulman (OpenAI co-founder) has left OpenAI to work on AI alignment at Anthropic.

https://x.com/johnschulman2/status/1820610863499509855

Thanks for the post!

“Something must be done. This is something. Therefore this must be done.”

1. Have you seen grants that you are confident are not a good use of EA's money?

2. If so, do you think that if the grantmakers had asked for your input, you would have changed their minds about making the grant?

3. Do you think Open Philanthropy (and other grantmakers) should have external grant reviewers?

7
richard_ngo
I remain in favor of people doing work on evals, and in favor of funding talented people to work on evals. The main intervention I'd like to make here is to inform how those people work on evals, so that it's more productive. I think that should happen not on the level of grants but on the level of how they choose to conduct the research.

Meta has just released Llama 3.1 405B. It's open-source and in many benchmarks it beats GPT-4o and Claude 3.5 Sonnet:

Zuck's letter "Open Source AI Is the Path Forward".

Wait, all the LLMs get 90+ on ARC? I thought LLMs were supposed to do badly on ARC.

How would it differ from the 80,000 Hours job board (filtering by AI Safety)?

Thanks again for the comment.

You think that the primary value of the paper is in its help with forecasting, right?

In that case, do you think it would be fair to ask expert forecasters if this paper is useful or not?

9
aog
I think this kind of research will help inform people about the economic impacts of AI, but I don't think the primary benefits will be for forecasters per se. Instead, I'd expect policymakers, academics, journalists, investors, and other groups of people who value academic prestige and working within established disciplines to be the main groups that would learn from research like this.

I don't think most expert AI forecasters would really value this paper. They're generally already highly informed about AI progress, and might have read relatively niche research on the topic, like Ajeya Cotra and Tom Davidson's work at OpenPhil. The methodology in this paper might seem obvious to them ("of course firms will automate when it's cost effective!"), and its conclusions wouldn't be strong or comprehensive enough to change their views.

It's more plausible that future work building on this paper would inform forecasters. As you mentioned above, this work is only about computer vision systems, so it would be useful to see the methodology applied to LLMs and other kinds of AI. This paper has a relatively limited dataset, so it'd be good to see this methodology applied to more empirical evidence. Right now, I think most AI forecasters rely on either macro-level models like Davidson's or simple intuitions like "we'll get explosive growth when we have automated remote workers." This line of research could eventually lead to a much more detailed economic model of AI automation, which I could imagine becoming a key source of information for forecasters.

But expert forecasters are only one group of people whose expectations about the future matter. I'd expect this research to be more valuable for other kinds of people whose opinions about AI development also matter, such as:

  • Economists (Korinek, Trammell, Brynjolfsson, Chad Jones, Daniel Rock)
  • Policymakers (researchers at policy think tanks and staffers in political institutions who spend a large share of their time thin…

Thanks for the comment @aogara <3. I agree this paper seems very good from an academic point of view.

My main question: how does this research help in preventing existential risks from AI?

 

Other questions:

  • What are the practical implications of this paper?
  • What insights does this model provide regarding text-based task automation using LLMs?
  • Looking into one of the main computer vision tasks: self-driving cars. What insights does their model provide? (Tesla is probably ~3 years away from self-driving cars and this won't require any hardware update, so
... (read more)
4
aog
Mainly I think this paper will help inform people about the potential economic implications of AI development. These implications are important for people to understand because they contribute to AI x-risks. For example, explosive economic growth could lead to many new scientific innovations in a short period of time, with incredible upside but also serious risks, and perhaps warranting more centralized control over AI during that critical period. Another example would be automation: if most economic productivity comes from AI systems rather than human labor or other forms of capital, this will dramatically change the global balance of power and contribute to many existential risks. 

Hi calebp.

If you have time to read the papers, let me know if you think they are actually useful.

Thanks a lot for giving more context. I really appreciate it.

These were not “AI Safety” grants

These grants come from Open Philanthropy's focus area "Potential Risks from Advanced AI". I think it's fair to say they are "AI Safety" grants.

Importantly, the awarded grants were to be disbursed over several years for an academic institution, so much of the work which was funded may not have started or been published. Critiquing old or unrelated papers doesn't accurately reflect the grant's impact.

Fair point. I agree old papers might not accurately reflect the gr... (read more)

Sorry, I should have attached this in my previous message.

where does it say that he is a guest author?

Here.

Thanks. My impression is that they are using 'Guest author' on their blog post to differentiate between people who work for Epoch and those who are external. As far as I can tell, that usage implies nothing about the contribution of the authors to the paper.

This paper is from Epoch. Thompson is a "Guest author".

I think this paper and this article are interesting but I'd like to know why you think they are "pretty awesome from an x-risk perspective".


Epoch AI has received much less funding from Open Philanthropy ($9.1M), yet they are producing world-class work that is widely read, used, and shared.

7
PeterSlattery
This seems misleading. Some of the authors are from Epoch, but there are authors from two other universities on the paper.  Also, where does it say that he is a guest author? Neil is a research advisor for Epoch and my understanding is that he provides valuable input on a lot of their work.   
4
Zach Stein-Perlman
Thanks. I notice they have few publications.

Agree. OP's hits-based giving approach might justify the 2020 grant, but not the 2022 and 2023 grants.

Thanks for your thorough comment, Owen.

And do the amounts ($1M and $0.5M) seem reasonable to you?

As a point of reference, Epoch AI is hiring a "Project Lead, Mathematics Reasoning Benchmark". This person will receive ~$100k for a 6-month contract.

3
Owen Cotton-Barratt
There are different reference classes we might use for "reasonable" here. I believe that paying the salary just of the researchers involved to do the key work will usually be a good amount less (but maybe not if you're having to compete with AI lab salaries?). But I think that that's not very available on the open market (i.e. for funders, who aren't putting in the management time), unless someone good happens to want to research this anyway. In the reference class of academic grants, this looks relatively normal.

It's a bit hard from the outside to be second-guessing the funders' decisions, since I don't know what information they had available. The decisions would look better the more there was a good prototype or other reason to feel confident that they'd produce a strong benchmark. It might be that it would be optimal to investigate getting less thorough work done for less money, but it's not obvious to me.

I guess this is all a roundabout way of saying "naively it seems on the high side to me, but I can totally imagine learning information such that it would seem very reasonable".

In the case of OpenDevin it seems like the grant is directly funding an open-source project that advances capabilities.

I'd like more transparency on this.

Very good point. Yeah, it seems like a 1/10 life has to be net negative. But I'm not sure a 4/10 life is net negative.

The difference in subjective well-being is not as high as we might intuitively think.

(anecdotally: my grandparents were born in poverty and they say they had happy childhoods)

The average resident of a low-income country rated their satisfaction as 4.3 using a subjective 1-10 scale, while the average was 6.7 among residents of G8 countries

Doing a naive calculation: 6.7 / 4.3 = 1.56 (+56%).

The difference in the cost of saving a life between a rich and a poor country is 10x-1000x.

It would probably be good to take this into account, but I don't think it would change the outcomes that much.
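
A minimal back-of-the-envelope sketch of that claim (the 100x cost ratio below is an assumed illustrative value within the 10x-1000x range above, not a sourced figure):

```python
# Back-of-the-envelope: does adjusting for subjective well-being change the picture?
# Assumption (illustrative only): saving a life in a poor country costs 100x less
# than in a rich country, i.e. a value inside the 10x-1000x range mentioned above.
wellbeing_poor = 4.3   # average life satisfaction, low-income countries (1-10 scale)
wellbeing_rich = 6.7   # average life satisfaction, G8 countries (1-10 scale)
cost_ratio = 100       # assumed cost advantage of poor-country interventions

# Well-being gained per dollar, relative to a rich-country intervention
adjusted_advantage = (wellbeing_poor / wellbeing_rich) * cost_ratio
print(f"Adjusted advantage: ~{adjusted_advantage:.0f}x")  # ~64x
# The well-being adjustment shrinks the gap, but the cost difference still dominates.
```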

5
John Salter
"Doing a naive calculation: 6.7 / 4.3 = 1.56 (+56%)." Perhaps I rate things differently to most survey respondents, but for me anything less than 5/10 is net suffering and not worth living for its own sake. Consider the difference between "saving" 10 people who will live 1/10 lives (maybe, people being tortured in a north Korean jail) and one person who will love a 10/10 life

What is missing in terms of a GPU?

Something unknown.

I think given a big enough GPU, yes, it seems plausible to me. Our minds are memory stores and performing calculations.

Do you think it's plausible that a GPU rendering graphics is conscious? Or do you think that a GPU can only be conscious when it runs a model that mimics human behavior?

I think bacteria are unlikely to be conscious due to a lack of processing power.

Potential counterargument: microbial intelligence.

That's true for many CEOs (like Elon Musk), but Sam Altman did not over-hype any of the big OpenAI launches (ChatGPT, GPT-3.5, GPT-4, GPT-4o, DALL-E, etc.).

It's possible that he's doing it for the first time now, but I think it's unlikely.

But let's ignore Sam's claims. Why do you think LLM progress is slowing down?

I think it's likely we'll be able to use matter to make other conscious minds

Can you expand on this? Do you think that a model loaded onto a GPU could be conscious?

And do you think bacteria might be conscious?

3
Nathan Young
I think given a big enough GPU, yes, it seems plausible to me. Our minds are memory stores and performing calculations. What is missing in terms of a GPU? I think bacteria are unlikely to be conscious due to a lack of processing power.

I assume that ML skills are less in-supply however?

I think there's enough demand for both.

I'm currently sitting at a desk at a SWE unpaid internship LOL.

Nice!

I don't think I currently have the skills to start getting paid for SWE work sadly.

Gotcha. Probably combining your studies with internships is the best option for now.

An LLM capable of automating "mid-sized SWE jobs" would probably be able to accelerate AI research and would be capable of cyberattacks. My guess: AI labs would not release such a powerful model; they would just use it internally to reach ASI.

LLM progress is slowing down

I'm hearing this claim everywhere. I'm curious to know why you think so, given that OpenAI hasn't released GPT-5.

Sam has said multiple times that GPT-5 is going to be much better than GPT-4. It could be just hype, but that would hurt his reputation as soon as GPT-5 is released.

In any case, we'll probably know soon.

8
skluug
I think you should update approximately not at all from Sam Altman saying GPT-5 is going to be much better. Every CEO says every new version of their product is much better--building hype is central to their job.

Con: Programming will be automated before my other career path choices.

Are you confident about this claim?

1
sammyboiz🔸
Thanks for your responses, they are very insightful. As AI operations scale up, it feels like AI/ML engineers will become more valuable and mid-sized SWE jobs will be swallowed by LLMs and those building them. I'm very curious about your opinion on this. 

Con: Not as exciting as doing something in AI or AI safety

There's a lot of software engineering work around AI. https://x.com/gdb/status/1729893902814192096

1
sammyboiz🔸
This is something I have not considered, thank you. I assume that ML skills are less in-supply however?

Another option: try to get a SWE internship now. Then, depending on how it goes, you might want to consider dropping out.

Some of my best SWE colleagues dropped out because they had full-time jobs. It probably accelerated their careers by 1 or 2 years.

1
sammyboiz🔸
I'm currently sitting at a desk at a SWE unpaid internship LOL. I don't think I currently have the skills to start getting paid for SWE work sadly.

I think the burden of proof lies with those advocating for AI welfare as an EA priority.

So far, I haven't read compelling arguments to change my default.

2
Nathan Young
What are your thoughts on this:
2
NickLaing
I agree, and I hope we get some strong arguments from those in favor. I would imagine there is already a bunch of stuff written, given the recent kerfuffle over Open Phil defunding it.

Have you talked with someone from Ought/Elicit? It seems like they should be able to give you useful feedback.

2
jacquesthibs
Yes, I’ve talked to them a few times in the last 2 years!

does it follow that you should spend a lot more on near-term cause areas now?

I think so.

I was quite focused on building career capital, and now I'm focused on reducing near-term animal suffering, partly because of this reasoning.

Thanks! I changed the title. (I had copied the quote from the tweet without double-checking)

Thanks for the post!

What do you think about Open Philanthropy's grants in AI Alignment? (eg. https://www.openphilanthropy.org/grants/funding-for-ai-alignment-projects-working-with-deep-learning-systems/). Do you think the EV is positive?

And what do you think about 80,000 Hours recommending that people join big AI labs?
