acylhalide

944 karma · Joined Sep 2021

Bio

Studying BTech+MTech at IIT Delhi. Check out my comment (and post) history for topics of interest. Please prefer recent comments over older ones, as my views have been updating frequently.

Discovered EA in September 2021; was involved in cryptocurrency before that. Graduating in 2023.

profile: samueldashadrach.github.io/EA/

Last updated: June 2022

Comments (403)

Re 1: agreed

Re 3: I'm more like "yeah, externalities are part of the problem, but they're not the only problem and may not even be the main problem (assuming there is a 'main' problem)". Hence saying it's only externalities dilutes the claim.

Re 4: Yeah, to some extent I agree. Although there is a strong tendency to anthropomorphize either too much or not enough when it comes to AI risk, which is a source of potentially irreducible weirdness.

For instance, we by default tend to view superhuman programs that model human social dynamics as qualitatively different from programs that don't (and just do, say, superhuman weather prediction instead). With the former it is easier to switch on the part of our brain that is designed to empathise with humans (and animals, etc.) rather than the part of our brain that does math and physics and computer science. And then we start relating to the hypothetical AI as a human-like entity.

+1!

I also consider this in part (arguably) a defect of utilitarian frameworks of ethics in general.

Ways to justify valuing your friends as more than just instrumental means to maximizing the impartial good:

  • state that your friends have more moral weight than strangers (violate impartiality)
  • rule utilitarianism: believe that forming long-term relationships with your current friends (at the expense of replacements who may be more impactful) actually leads, in some indirect way, to more net impact

I feel neither of these fully captures the demands and expectations we actually place on friends (descriptive) or the ones we should place (normative).

I agree that roles involving scarce resources - such as full-time jobs funded by EA - will (and perhaps should) tend towards picking the most impactful people (near term) rather than picking friends. But there is a place for valuing existing friends as ends in themselves, orthogonal to this whole process and without the use of EA funds - for instance meeting at a private get-together or just chilling one-on-one. EA Global sits at an awkward position between the two: it involves non-trivial but not huge spending, yet it is also billed as a community event that people understandably have emotions attached to.

I wonder how these two processes (valuing people on their potential impact versus valuing existing friends) interact.

I mention friendship as a relationship between two people because it is easier to discuss (maybe) and has insights that generalise to, say, a community leader who has formed emotional bonds with many members, and vice versa, members who have formed bonds with the leader. The latter too involves boundary negotiations, expectations placed on both sides, and so on. Among these is the implicit idea that the emotional bonds will not be discarded (betrayed?) the moment the leaders find more impactful replacements or different ways to do impact evaluations (such as valuing x-risk work, and by extension x-risk people, more relative to global health than they did a few years ago).

This is a fair argument.

Although on net I'm not sure it outweighs all the other points I mentioned :)

I would love to hear your feedback on my comment, which is against framing AI risk as primarily an externalities problem.

IMO this isn't a 100% accurate description of the claims about why AI risk is hard. (And I would generally be against using inaccurate claims to attract people to legitimate areas.)

 

For starters, arms race is probably better terminology than externalities. When I think "externalities" I think: this technology creates X amount of private good and Y amount of public harm, and selfish actors have incentives to create it. And while this description can be forcibly fitted to AI x-risk, it is important to remember that x-risk is a harm from a selfish perspective too. Even if you don't care about the survival of other people, you care about your own survival, and therefore an AI that causes your death or permanently disempowers you is a harm to you too. An actor who honestly believes their AI has a high probability of causing x-risk will not unilaterally deploy it anyway. Unlike, say, a fossil fuel company that will produce fossil fuels fully knowing the CO2 footprint it is creating, or a cigarette manufacturer that will produce cigarettes while having accurate statistics on the number of future cancer patients it is helping create.

In an arms race, however, it can make sense for someone to pursue technologies that carry significant x-risk, if there is also (in this case, utopian) upside they wish to capture before an opponent does.

Externalities is also not a good framing because it assumes that state regulation is sufficient to solve the problem and prevent private actors from consuming public goods. It assumes the problem is easy given government support, and that the hard part is primarily public protest and lobbying the government. However, it is currently uncertain how much of a role state regulation will play in AI risk. The framing also forces the problem into a conflict-theoretic frame rather than one of epistemic disagreement, and the two kinds of problems require very different solutions.

 

However, arms race is also not the best terminology. For instance, Haydn Belfield has a post that cautions against prematurely claiming that the US and China, or OpenAI and DeepMind, or whoever, are in an active arms race towards AGI. Link to post. Most progress is currently being pursued by unilateral actors who are (somewhat) open to reason and willing to signal in favour of motivations besides profit or military superiority. OpenAI, for instance, has explicitly set up a non-profit structure to govern it, for this reason.

Arms race is also not the best terminology because many AI x-risk researchers believe that AI x-risk is high even if you take the arms race out of the equation. Even if humanity agreed on one coordinated effort to deploy AGI, and did it slowly over a period of, say, a few decades, there are researchers who claim AI x-risk will be hard to mitigate and that AI systems will by default be very hard to control. Yudkowsky (and a few others) go as far as saying we will only get one real shot at the problem in practice, in which we will risk extinction.

 

The core claims of AI x-risk are legitimately weird and I don't think it is easy to make them non-weird. Given that, I would caution against diluting the claims and prefer debating them head-on instead.

It might help if you make a concrete list of grants you're personally confused about.

It would also help if you (or someone) could identify the funding bars for each of the EA cause areas. For instance, global health would measure $ / QALY, x-risk would measure $ / 10^-6 x-risk reduced, and so on. Each cause area makes decisions mostly independently of other cause areas AFAIK, once money has been moved from the general fund to a specific cause area. Within each cause area there tends to be a funding bar AFAIK, against which grants get compared.
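To make the "funding bar" idea concrete, here is a toy sketch (in Python) of how a grant could be compared against a per-cause bar. The bar values, grant sizes and impact estimates below are made-up placeholders for illustration, not actual figures from any fund:

```python
# Toy sketch of comparing grants against a per-cause funding bar.
# All numbers are hypothetical placeholders, not actual EA figures.

funding_bars = {
    "global_health": 50.0,    # max $ per QALY the fund is willing to pay (made up)
    "x_risk": 200_000.0,      # max $ per 10^-6 of x-risk reduced (made up)
}

def clears_bar(cause: str, cost_usd: float, units_of_good: float) -> bool:
    """A grant clears the bar if its cost per unit of good is at or below the cause's bar."""
    return cost_usd / units_of_good <= funding_bars[cause]

# A $100k global health grant expected to buy 2,500 QALYs: $40/QALY, clears a $50 bar.
print(clears_bar("global_health", 100_000, 2_500))   # True
# A $1M x-risk grant expected to reduce x-risk by 2 * 10^-6: $500k per unit, misses a $200k bar.
print(clears_bar("x_risk", 1_000_000, 2))            # False
```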

Regarding movement-building grants and early-career spending, I know nothing about them personally, and the following is speculation. But some heuristics that come to mind would be as follows. Someone who knows how these decisions are actually made, please feel free to correct me.

If you are willing to spend $100k / year on salary for a great full-time employee for 10 years, you should probably be willing to spend up to $500k on community building to find a similar employee whose output (measured in QALYs, x-risk reduced, etc.) will be 50% higher on the same salary.

If you expect your community building effort to find more than one such person, you can multiply by the number of such people you expect to find.

The above assumes you have a fixed amount of money and a fixed number of employees you can support. If you have an excess of money and insufficiently many potential employees who would actually produce useful output (for instance because they don't buy EA values or don't understand AI alignment), then it might make sense to spend even more than $500k.

If you believe you're working in a field with heavy-tailed productivity (some people are 10x or 100x more productive than others), then it makes sense to spend even more than the $500k to find these people, as opposed to supporting a lot of employees of average productivity at $100k / year.
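As a rough back-of-the-envelope sketch of the arithmetic behind the first heuristic (using only the hypothetical numbers from above, not claims about any actual budget):

```python
# Back-of-the-envelope sketch of the community-building heuristic above.
# Numbers are hypothetical placeholders from the example, not actual figures.

salary_per_year = 100_000      # $ / year for a great full-time employee
years = 10                     # length of the employment you're budgeting for
productivity_multiplier = 1.5  # candidate is 50% more productive on the same salary

baseline_cost = salary_per_year * years                              # $1M total salary
extra_output_value = (productivity_multiplier - 1) * baseline_cost   # $500k of extra output

# Spending up to this much on community building to find the better hire
# leaves you no worse off (in output per dollar) than a baseline hire.
print(f"Max worth spending on the search: ${extra_output_value:,.0f}")

# If the community-building effort is expected to surface several such people,
# the justifiable budget scales with that number.
expected_hires_found = 3
print(f"With {expected_hires_found} expected hires: ${extra_output_value * expected_hires_found:,.0f}")
```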

I think Twitter is a suboptimal place to do this; the whole platform has been optimised for the wrong things, and I've decided not to use Twitter as a result. The behavioural changes it causes are subtle but real. For instance, over time it lowers your bar for posting things, makes you post and comment more frequently, makes you more prone to checking notifications and getting distracted from other tasks, and so on. And it becomes easier to lose track of the main task of "actually making progress on hard problem X" in favour of "I'm bored and want social interaction, let me use discussion of problem X to get that."

Just wanted to say that I enjoyed this comment. I wonder if it's worth your time to convert it into a full-fledged post! I have seen a bunch of similar comments by you on community dynamics as they relate to size - but those comments contained additional points not in this one.

And this prize seems correspondingly a lot more accessible (and lower effort to enter) on the face of it.

On the face of it, sure, but I don't think this should fool people into thinking it is easy. The authors of the prize have been heavily informed by reports from, and conversations with, smart people who have spent years thinking about AI risk. IMO you need at least a month or two to understand the jargon, norms and core ideas of the subculture well enough to write something that will be understood. After this, you need to come up with original ideas or concepts or frames, which may of course build upon existing ones. And on top of that, a lot of alignment research does not look like, or follow all the norms of, science as it is usually practiced. This makes the task both easier and harder in different ways.

I agree it's easier than winning a Nobel or Breakthrough Prize though.

The Wikipedia page you've linked is explicitly literary prizes though, not scientific ones.

For scientific prizes, I just googled and got the Clay Millennium Prize ($1M), the Nobel Prize ($1.2M) and the Breakthrough Prize ($3M).

(There's also XPRIZE, which goes up to $100M, although they've previously backed out of a payment.)

I feel like it doesn't matter as much whether the work is just writing and armchair thinking versus practical engineering or empirical research, because that distinction doesn't reflect the actual difficulty or effort required to win the prize. Nobel Prizes have been awarded for purely theoretical work, but I'm not sure about the Breakthrough Prize. Also, this AI timelines prize by FTX doesn't disallow doing empirical research.
