
Brad West

Founder & CEO @ Profit for Good Initiative
1615 karma · Profit4good.org/

Bio

Looking to advance businesses that have charities in the vast-majority shareholder position. Check out my TEDx talk for why I believe Profit for Good businesses could be a profound force for good in the world.


Comments (247)

Of course, one subset of Christians and other religious believers holds that the substance of their religious beliefs follows from (or at least accords with) rationality. This contrasts with the position you seem to be indicating, which I believe is called fideism: the view that some religious beliefs cannot be reached by rational thinking. I would be interested in seeing what portion of EAs hold their religious beliefs explicitly in violation of what they believe to be rational, but I suspect it would be few.

In any case, I believe truthseeking is generally a good way to live even for religious people who hold certain beliefs in spite of what they take to be good reason. Ostensibly, they would simply not apply it to that one set of their beliefs.

Thank you for this insightful post. While I resonate with the emphasis on the necessity of truthseeking, it's important also to highlight the positive aspects that often get overshadowed. Truthseeking is not only about exposing flaws and maintaining a critical perspective; it's also about fostering open-mindedness, generating new ideas, and empirically testing disagreements. These elements require significantly more effort and resources than criticism, which often leads to an oversupply of the latter and can stifle innovation if not balanced with constructive efforts.

Generating new ideas and empirically testing them involves substantial effort and investment, including developing hypotheses, designing experiments, and analyzing results. Despite these challenges, this expansive aspect of truthseeking is crucial for progress and understanding. Encouraging open-mindedness and fostering a culture of curiosity and innovation are essential. This aligns with your point about the importance of embracing unconventional, “weird” ideas, which often lie outside the consensus and require a willingness to explore and challenge the status quo.

Your post reflects a general EA attitude that emphasizes the negative aspects of epistemic virtue while often ignoring the positive. A holistic approach that includes both the critical and constructive dimensions of truthseeking can lead to a more comprehensive understanding of reality and drive meaningful progress. Balancing criticism with creativity and empirical testing, especially for unconventional ideas, can create a more dynamic and effective truthseeking community.

It may not have been totally clear from the post, which I will edit in a minute, but the intended reading order is:

  1. "What is Profit for Good", which is included in this post
  2. "The Motivation for Expanding Profit for Good"
  3. "From Charity Choice to Competitive Advantage"

Yield and Spread is a Profit for Good business that provides financial advice, particularly to help further effective giving. All the profit the business generates goes to effective charities. I thought it would make sense to give them a shout-out here.

https://www.yieldandspread.org/

Fair enough.

I still suspect that you may be underestimating marginal AI Safety funding opportunities.

This strikes me as remarkably counterintuitive, given the enormous disparity between AI capabilities spending and AI safety spending. I was also under the impression that AI capabilities work is not as funding-constrained.

To be clear, I am in favor of promoting offsetting in both contexts, although the benefits of veganism, in reducing demand for factory farming, increasing demand for pro-social vegan products, and sending an important moral signal, make it difficult to calculate an appropriate sum. Further, I think a deontological or virtue-ethics concern with killing or eating the flesh of sentient beings also naturally arises.

In the case here, though, your choices cash out in terms of your effect on existential and suffering risks from AGI. I think an appropriate offset for the funding effect can reverse, or more than reverse, that effect without moral complication.

I honestly don't have much experience other than using GPT-4, which I have found to be very helpful.

For me, ChatGPT greatly increases my own productivity and my team's, whereas the very modest sum from my subscription seems very unlikely to seriously further the acceleration of AI.

I suspect that the productivity of EAs generally is very valuable, and if EAs benefit from the tool, it is likely not a good idea for them to stop using it.

Given that there is so much less money going to AI safety than to AI capabilities, I would think a more sensible request would be that those using ChatGPT, and thus funding OpenAI, fund promising AI safety efforts. This would likely more than offset the harm caused by your funding and enable you to keep using a valuable tool. And if the benefits for you are not worth the cost of the subscription plus the offset, then perhaps the benefit is not, in that case, worth the harm. I would suggest that people who know more about this stuff than me recommend an AI safety fund for offsetting ChatGPT use.
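To illustrate the offsetting logic with a purely hypothetical back-of-the-envelope sketch (the symbols C, f, and k below are my own assumptions, not figures from this thread): suppose a subscription sends $C per month to OpenAI, a fraction f of that marginally funds capabilities work, and a marginal dollar to a well-chosen AI safety fund produces k times as much expected benefit as a marginal capabilities dollar produces expected harm. Then a monthly donation D offsets the funding effect whenever

D \geq \frac{C \cdot f}{k}

For example, with C = 20, f = 1, and k = 10, a $2 monthly donation would more than reverse the effect. The hard part, of course, is estimating k, which is exactly what people closer to AI safety funding would need to weigh in on.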
