Marcus Abramovitch 🔸

Comments

Vote power should scale with karma

To a point. Maybe a bit less than it does currently, but in general it seems to work well.

I think people are, generally speaking, drawing too simplistic a distinction between "capabilities" and "alignment". I assume most people on the forum use ChatGPT/Claude or other LLM apps and don't think they pose, in their current form, much of a safety concern.

I am far more concerned about "geniuses in a data center", which Dario/Sam seem to be pushing for, than I am about more economically useful AI.

I furthermore think that Matthew, and to a lesser extent Tamay and Ege, have engaged significantly more with AI risk arguments than most people.

Disclosure: I'm one of the investors in Mechanize.

Can you spell out the clear plan? Feel free to DM me also.

Happy to see this. Seeing how much was cut, I agree that GWWC was trying to do too many things. The one I questioned at first was "translations", but I think you guys have a good point regarding Google Translate and other organizations that should take over non-English languages.

I understand this. Good analogy.

I suppose what it comes down to is that I actually DO think it is morally better for the person earning $10m/year to donate $9.9m/year than $9m/year, about $900k/year better.

I want to achieve two things (which I expect you will agree with).

  1. I want to "capture" the good done by anyone and everyone willing to contribute and I want them welcomed, accepted and appreciated by the EA community. This means that if a person who could earn $10m/year in finance and is "only" willing to contribute $1m/year (10%) to effective causes, I don't want them turned away.
  2. I want to encourage, inspire, motivate and push people to do better than they currently are (insofar as it's possible). I think that includes an Anthropic employee earning $500k/year doing mech interp, a quant trader earning $10m/year, a new grad deciding what to do with their career and a 65-year old who just heard of EA.

I think it's also reasonable for people to set limits for how much they are willing to do. 

Upvoted. I think this is a great argument. Timelines are a way overrated thing to be incessantly talking about and are often a distraction from what can actually be done.

Sure. But the average person working in AI is not at Jane Street level like you, and yes, OpenAI/Anthropic comp is extremely high.

I would also say that people still have a moral obligation. People don't choose to be smart enough to do ML work.

I don't want to argue about anyone's specific case, but I don't think it's universally true at all, or even true the majority of the time, that those working in AI could make more elsewhere. It sounds nice to say, but I think people are often earning more in AI jobs than they would elsewhere.
