Quick takes

Centre for Effective Altruism and Ambitious Impact (formerly Charity Entrepreneurship) are probably named the wrong way around in terms of what they actually do, and IMO this feeds into the EA branding problem.

Why do I think this? "Effective" Altruism implies a value judgement that requires strong evidence to back up, like launching charities aiming to beat GiveWell benchmarks and raising large amounts of money from donors who expect to see evidence of significant returns in the next 3 years or shut down.

* IMO this framing is very friendly to a broad business and government audience.

"Ambitious Impact" implies more speculative, harder-to-measure activities in pursuit of even higher impact returns. My understanding is that Open Philanthropy split from GiveWell because of the realisation that more marginal funding was required for "Do-Gooding R&D" with a lower existing evidence base.

Why do we need "Do-Gooding R&D"? So we can find better ways to help others in the future. To use the example of a pharmaceutical company: why don't they reduce the prices of all their currently functional drugs to help more people? So they can fund their expensive hit-based R&D efforts. There are obviously trade-offs, but it's short-sighted to pretend the low-hanging fruit won't eventually be picked.

So what? IMO AIM has outcompeted CEA on a number of fronts (their training is better, their content (if not their marketing) is better, and they are agile and improve over time). Probably 80% of the useful and practical things I've learned about how to do effective altruism, I've learned from them.

The AIM folks I've spoken to are frustrated that their results, based on exploiting cost-effective, high-evidence-base interventions, are used to launder the reputation of OP-funded, low-evidence-base "Do-Gooding R&D."

I think before you get to work on "Do-Gooding R&D", you should probably learn the current state of Do-Gooding best practices. If we think about EA br
I’m part of a working group at CEA that’s started scoping out improvements for effectivealtruism.org. Our main goals are:

1. Improve understanding of what EA is (clarify and simplify messaging, better address common misconceptions, showcase more tangible examples of impact, people, and projects)
2. Improve perception of EA (show more of the altruistic, other-directed parts of EA alongside the effective, pragmatic, results-driven parts; feature more testimonials and impact stories from a broader range of people; make it feel more human and up-to-date)
3. Increase high-value actions (improve navigation, increase newsletter and VP signups, make it easier to find actionable info)

For the first couple of weeks, I’ll be testing how the current site performs against these goals, then move on to the redesign, which I’ll user-test against the same goals.

If you’ve visited the current site and have opinions, I’d love to hear them. Some prompts that might help:

* Do you remember what your first impression was?
* Have you ever struggled to find specific info on the site?
* Is there anything that annoys you?
* What do you think could be confusing to someone who hasn't heard about EA before?
* What’s been most helpful to you? What do you like?

If you prefer to write your thoughts anonymously you can do so here, although I’d encourage you to comment on this quick take so others can agree or disagree vote (and I can get a sense of how much the feedback resonates).
Anthropic has just launched "computer use": "developers can direct Claude to use computers the way people do." https://www.anthropic.com/news/3-5-models-and-computer-use
[Idea to reduce investment in large training runs]

OpenAI is losing lots of money every year. They need continuous injections of investor cash to keep doing large training runs. Investors will only invest in OpenAI if they expect to make a profit, and they only expect to make a profit if OpenAI is able to charge more for its models than the cost of compute.

Two possible ways OpenAI can charge more than the cost of compute:

* Uniquely good models. This one's obvious.
* Switching costs. Even if OpenAI's models are just OK, if your AI application is already programmed against OpenAI's API, you might not want to bother rewriting it.

Conclusion: if you want to reduce investment in large training runs, one way would be to reduce switching costs for LLM users. Specifically, you could write a bunch of really slick open-source libraries (one for every major programming language) that abstract away the details of OpenAI's API and make it super easy to drop in a competing product from Anthropic, Meta, etc. Ideally there would even be a method to abstract away various LLM-specific quirks related to prompts, confabulation, etc. This pushes LLM companies closer to a world where they're competing purely on price, which reduces profits and makes them less attractive to investors.

The plan could backfire by accelerating commercial adoption of AI a little bit. My guess is that this effect wouldn't be terribly large.

There is already one library like this: litellm. Adoption seems a bit lower than you might expect: it has ~13K stars on GitHub, whereas Django (a venerable Python web framework that, among other things, lets you abstract away your choice of database) has ~80K.

So concrete actions might take the form of:

* Publicize litellm. Give talks about it, tweet about it, mention it on StackOverflow, etc. Since it uses the OpenAI format, in theory it should be easy for existing OpenAI users to drop it in (see the sketch below).
* Make improvements to litellm so it is even more agnostic to LLM-specific q
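To make the switching-cost point concrete, here's a minimal sketch of the kind of drop-in abstraction this would give you, based on litellm's OpenAI-style completion() interface. The model identifiers are illustrative assumptions; check litellm's docs for current names.

```python
# Minimal sketch: swapping LLM providers behind one interface via litellm.
# Model identifiers below are illustrative; consult litellm's docs for current ones.
from litellm import completion

messages = [{"role": "user", "content": "Say hello in one sentence."}]

# The call shape stays the same across providers; only the model string changes.
openai_response = completion(model="gpt-4o-mini", messages=messages)
anthropic_response = completion(model="claude-3-haiku-20240307", messages=messages)

# Responses follow the OpenAI schema, so downstream code needs no rewriting.
print(openai_response.choices[0].message.content)
print(anthropic_response.choices[0].message.content)
```

If switching providers really is a one-line change like this, the lock-in premium a lab can charge shrinks accordingly.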
I've previously written a little bit about recognition in relation to maintenance/prevention, and this passage from Everybody Matters: The Extraordinary Power of Caring for Your People Like Family stood out to me as a nice reminder:

Overall, Everybody Matters is the kind of book that could have been an article. I wouldn't recommend spending the time to read it if you are already superficially familiar with the fact that an organization can choose to treat people well (although maybe that would be revelatory for some people). It was on my to-read list due to its mention in the TED Talk Why good leaders make you feel safe.