All of tamgent's Comments + Replies

Why isn't anyone talking much about the Israel-Gaza situation on the EA Forum? I know it's a big time for AI, but I just read that the number of Palestinian deaths, the vast majority of whom are innocent people (65% are women and children), is approaching the number of civilians killed in Ukraine since the Russian invasion 21 months ago, and that's just in the last 3-4 weeks.

2
trevor1
6mo
That area is controlled by militaries, who might retaliate against people who find clever ways to smuggle aid into the conflict zone. So trying to help people there instead of elsewhere is the wrong move. EA was probably wrong to prioritize bednets over a malaria vaccine, even though lots of children would have died horribly if a malaria vaccine was invented 5 years later instead of them getting bednets now. It might seem harsh, but saving fewer lives instead of more is even harsher for the people themselves, even if it's accidental.
2
Akash Kulgod
6mo
Hm, what would you expect/hope people to discuss about it here? As far as I remember, people didn't talk much about the Ukraine-Russia war either. Probably because there's not much that most EAs (or people in general) can do about it (not tractable), and it's not something that's under-discussed elsewhere (not neglected).

The Israel-Gaza situation doesn't strike me as very neglected or tractable. The eyes of much of the world are on that situation, and it's not clear to me that EA actors have much to add to the broader conversation. It's also not clear to me why we would expect that actions that EA actors could take would be expected to have a significant impact on the situation.

  • It's true that the Russian invasion also garnered heavy public attention. However, I'd suggest that it touched on existing EA knowledge bases (e.g., great power conflict and nuclear security) more t
... (read more)

Interesting that you don't think the post acknowledged your second collection of points. I thought it mostly did. 
1. The post did say it was not suggesting shutting down existing initiatives. So where people disagree on (for example) which evals to do, they can just do the ones they think are important, and then both kinds get done. I think the post was identifying a third set of things we can do together, and this was not specific evals but more about a big narrative alliance when influencing large/important audiences. The post also suggested some other... (read more)

3
GideonF
1y
Yeah, I basically agree with this.
1. On evals, I think it is good for us to be doing as many evals as possible, firstly because both sorts of evaluations are important, but also because the more (even self-imposed) regulatory hurdles there are to jump through, the better. Slow it down and bring the companies under control.
2. Indeed, the call is for broader political coalition building. Not everyone, not all the time, not on everything. But on substantially more than we currently are.
3. Yes.
4. There are a number of counterarguments to this post, but I didn't include them because a) I probably can't give the strongest counterarguments to my own beliefs, b) this post was already very long, and I had to cut out sections on Actor-Network Theory and Agency and something else I can't remember, and c) I felt it might muddle the case I'm trying to make here if it was interspersed with counterarguments.
One quick point on counterarguments: I think a counterargument would need to be strong enough to show not just that the extreme end result is bad (a lot more coalition building would be bad), but probably that the post is directionally bad (some more coalition building would be bad).

Nice paper on the technical ways you could monitor compute usage, but governance-wise, I think we're extremely behind on anything making an approach like this remotely plausible (unless I'm missing something, which I may well be).

If we put aside question (b) from the abstract, getting international compliance, and just focus on (a), national governments regulating this for their own citizens: this likely requires some kind of regulatory authority with the remit and the authority to do so. This includes information-gathering powers, which require compan... (read more)

On the competition vs caution approach, I think people often assume government is a homogeneous entity, when instead there are very different parts of government with very different remits, and some remits are naturally aligned with a caution approach and others with a competition approach.

I don't think it's obvious that Google alone is the engine of competition here. It's hard to expect any company to simply do nothing if its core revenue generator is threatened (I'm not justifying them here); they're likely to try to compete rather than give up immediately and work on other ways to monetize. It's interesting to note that it just happened to be the case that Google's core revenue generator (search) is a possible application area of LLMs, the fastest progressing/most promising area of AI research right now. I don't think OpenAI p... (read more)

2
Evan R. Murphy
1y
You're right - I wasn't very happy with my word choice calling Google the 'engine of competition' in this situation. The engine was already in place and involves the various actors working on AGI and the incentives to do so. But these recent developments with Google doubling down on AI to protect their search/ad revenue are revving up that engine.

Maybe posts themselves should have a separate agree/disagree vote.

I am imagining a hoverable [i] info button, not putting it in the terms, as people often don't bother to even open the terms because they know they'll be long and legalistic.

There could be a little, more accessible information summary next to the terms of use that explains the implications, e.g. as you have here.


I would also be interested in knowing who/which org was "owning" the relationship with FTX...

Not to assign blame, but to figure out what the right institutional responsibility/oversight should have been, and what needs to be put in place should a similar situation emerge in future. 

Surely it's the people working for the FTX Foundation who were the connection between FTX and EA.

Are people downvoting because they believe this is not relevant enough to the FTX scandal? I understand it is only tangentially relevant (i.e. FTX abused its customers' money; they did not start a Ponzi scheme). Or maybe because it is insensitive or wrong to share critical pieces about the wider area at a time like this, in case people's emotions about the event get overgeneralised to related wider debates? If people disagreed with my view that the video has good arguments or is educational, they would have disagree-voted instead. My intention in sharing it was tha... (read more)

Some people are saying this is no surprise, as all of crypto was a Ponzi scheme from the start.


Earlier this year, when it went semi-viral, I watched 'The Line Goes Up', which I found pretty educational (as an outsider). Despite the title, it's about more than NFTs, and covers crypto/blockchain/DLT/so-called 'web3' stuff. It is a critical/skeptical take on the whole space with lots of good arguments (in my view).


Was going to ask if you had integrity failure or failure by capture, but I think what I had in mind for these already overlaps to a large extent with what you have under rigor failure.

1
trevor1
2y
I think these are largely but not entirely covered by the suggestions in Linch's comment, which provide some pretty helpful, evocative examples. I do agree that capture or rupturing can happen in a broad variety of ways, though.

It seems to me Jack believes that they are impactful and is wondering why they are therefore absent from EA literature. I could be wrong here; he could instead be unsure how impactful it is and assuming that if EA hasn't indexed it, it's not impactful (fwiw I think this general inference pattern is pretty wrong). He additionally seems to be wondering whether he should work there, and taking into account views people from this community might have when making his decision.

I also don't get this. I can't help thinking about the Inner Ring essay by C.S. Lewis. I hope that's not what's happening.

I am a software engineer who transitioned to tech/AI policy/governance. I strongly agree with the overall message (or at least title) of this article: that AI governance needs technical people/work, especially for the ability to enforce regulation.

However, in the 'types of technical work' you lay out, I see some gaping governance questions/gaps. You outline various tools that could be built to improve the capability of actors in the governance space, but there are many such actors, and tools by their nature are dual-use - where is the piece on wh... (read more)

2
Mau
2y
Thanks for the comment! I agree these are important considerations and that there's plenty my post doesn't cover. (Part of that is because I assumed the target audience of this post--technical readers of this forum--would have limited interest in governance issues and would already be inclined to think about the impacts of their work. Though maybe I'm being too optimistic with the latter assumption.) Were there any specific misuse risks involving the tools discussed in the post that stood out to you as being especially important to consider?

Every time I've used VR (including the latest ones), I have felt sick and dizzy afterwards. I don't think this issue is unique to me. I find it hard to imagine that most people would want to spend significant daily time in something that has such an effect, and nothing in this post addressed this issue. Your prediction feels wildly wrong to me.

2
mako yass
2y
It's not unique to you, but I don't know how common it is either. It hasn't been trivial to find statistics on how many people get how much nausea and how long it takes them to find their legs. It's something I want to look at. What were you doing in VR, btw? Were you using teleportation controls? There's a hope that good enough hardware will be able to just totally avoid the nausea triggers. I've noticed that, at least for me, immersion and nausea are coincident. The more aware I am that I'm sitting in a room using VR, the less nausea. Every piece of entertainment software wants me to forget I'm using VR. We might only start to see solid trials of zero-immersion apps once VR is a practical interface device. Trying to figure out whether passthrough will make it better or worse. Unsure. People don't get nausea from wearing sunglasses, do they? Would it feel any different to that? (maybe?)

Great development. Does this mean GovAI will start providing input to more government consultations on AI and algorithms? The UK gov recently published a call for input on its AI regulation strategy - is GovAI planning to respond to it? On the regulation area - there are a lot of different areas of regulation (financial, content, communication infra, data protection, competition and consumer law), and the UK gov is taking a decentralised approach, relying on individual regulators' areas of expertise rather than creating a central body. How will GovAI stay on top of these different subject matter areas?

2
MarkusAnderljung
2y
We've already started to do more of this. Since May, we've responded to 3 RFIs and similar (you can find them here: https://www.governance.ai/research): the NIST AI Risk Management Framework; the US National AI Research Resource interim report; and the UK Compute Review. We're likely to respond to the AI regulation policy paper, though we've already provided input to this process via Jonas Schuett and me being on loan to the Brexit Opportunities Unit to think about these topics for a few months this spring. I think we'll struggle to build expertise in all of these areas, but we're likely to add more of it over time and build networks that allow us to provide input in these other areas should we find doing so promising.

Just to add to the UK regulator stuff in this space: the DRCF has a workstream on algorithm auditing. Here is a paper with a short section on standards. Obviously it's early days, and it's focused on current AI systems, but it's a start: https://www.gov.uk/government/publications/findings-from-the-drcf-algorithmic-processing-workstream-spring-2022/auditing-algorithms-the-existing-landscape-role-of-regulators-and-future-outlook

Well, I disagree, but there's no need to agree - diverse approaches to a hard problem sound good to me.

AI doesn't exist in a vacuum, and TAI won't either. AI has messed up, is messing up, and will mess up in bigger ways as it gets more advanced. Security will never be a 100% solved problem, and aiming for zero breaches across all AI systems is unrealistic. I think we're more likely to have better AI security with standards - do you disagree with that? I'm not a security expert, but here are some relevant considerations from one, applied to TAI. See in particular the section "Assurance Requires Formal Proofs, Which Are Provably Impossible". Given the probably imposs... (read more)

2
Chris Leong
2y
Standards can help with security b/c that's more of a standard problem, but I suspect it'll be a distraction for aligning AGI.

I can respond to your message right now via a myriad of potential software programs because of the establishment of a technical standard, HTTP. Additionally, all major web browsers run and interpret JavaScript, in large part due to SSOs like the IETF and W3C. By contrast, on mobile we have two languages for the duopoly, and a myriad of issues I won't go into, but suffice to say there has been a failure of SSOs in the space to replicate what happened with web browsing and the early internet. It may be that TAI presents novel and harder challenges, but in some of the h... (read more)

2
Chris Leong
2y
I guess the success of those standards for the web doesn't feel very relevant to the problem of aligning AI. For a start, the design of the protocols has led to countless security flaws, which hardly seems robust? In addition, the technology has often evolved by messing up and then being patched later.

Thank you kindly for the summary! I was just thinking today, when the paper was making the rounds, that I'd really like a summary of it whilst I wait to make the time to read it in full. So this is really helpful for me.

I work in this area, and can attest to the difficulty of getting resources towards capability-building for detecting trends towards future risks, as opposed to simply firefighting the ones we've been neglecting. However, I think the near-term vs long-term distinction is often unhelpful and limited, and I prefer to try to think about things i... (read more)

Sorry, more like a finite budget and proportions, not probabilities.

2
Zach Stein-Perlman
2y
Sure, of course. I just don’t think that looks like adopting a particular perspective.

Agreed that in aggregate it's good for a collection of people to pursue many different strategies, but would you personally/individually weight all of these equally? If so, maybe you're just uncertain? My guess is that you don't weight all of these equally. Maybe another framing is to put probabilities on each and then dedicate the appropriate proportion of resources accordingly. This is a very top-down approach, though, and in reality people will do what they will! As an individual, it seems hard to me to span more than two adjacent beliefs on any axis. And when I look at my own work and beliefs, that checks out.

2
Zach Stein-Perlman
2y
* Of course they're not equal in either expected value relative to the status quo or the appropriate level of resources to spend.
* I don't think you can "put probabilities on each" -- probabilities of what?

Could you elaborate on what you mean by 'as ad tech gets stronger'? Is that just because all tech gets stronger with time, or is it in response to current shifts, like Privacy Sandbox?

8
Jeff Kaufman
2y
Yes, I'm also confused. Ad tech is on a path to getting weaker, with browsers making it harder and harder to connect people's behavior across sites. Privacy Sandbox, and the privacy-preserving ads APIs that other browsers are creating, are much weaker than what they're replacing.

Yeah, I also had a strong sense of this from reading this post. It reminded me of this short piece by C. S. Lewis called The Inner Ring, which I highly recommend. Here is a sentence from it that sums it up pretty well, I think:

In the whole of your life as you now remember it, has the desire to be on the right side of that invisible line ever prompted you to any act or word on which, in the cold small hours of a wakeful night, you can look back with satisfaction?

I found this to be an interesting way to think about it that I hadn't considered before - thanks for taking the time to write it up.

On the paragraph about the philosophical side - totally agree; this is why worldview diversification makes so much sense (to me). The necessity of certain assumptions leads to a divergence in the kinds of work being done, and that is a very good thing, because maybe (almost certainly) we are wrong in various ways, and we want to stay alive and open to new things that might be important. Perhaps on the margin an individual's most rational action could sometimes be to defer more, but as a whole, a movement like EA would be more resilient with less deference.

Disclaimer: I personall... (read more)

This is not about the EA community, but something that comes to mind which I enjoyed is the essay The Tyranny of Structurelessness, written in the 70s.

I think the issue is that some of these motivations might cause us to just not actually make as much positive difference as we might think we're making. Goodharting ourselves.

1
timunderwood
2y
Ummmm, so we say we want to do good, but we actually want to make friends and get laid, so we figure out ways to 'do good' that lead to lots of hanging out with interesting people, and chances to demonstrate how cool we are to them. Often these ways of 'doing good' don't actually benefit anyone who isn't part of the community. This is at least the worry, which I think is a separate problem from Goodharting. I.e. when CEA provides money to fly someone from the US to an EAGx conference in Europe, I don't think there is any metric that is trying to be maximized, but rather just a vague sense of 'this might... something something... the person becomes effective and then lots of impact'. Now it could interact with Goodharting in a case where, for example, community organizers get funds and status primarily based on the number of people attending events, when what actually matters is finding the right people and having the right sorts of events.

Have you spoken to the Czech group about their early days? I'd recommend it, and can put you in touch with some folks there if you like.

Agreed. One book that made it really clear for me was The Alignment Problem by Brian Christian. I think that book does a really good job of showing how it's all part of the same overarching problem area.

I'm not Haydn, but I think behavioural science is a useful area for thinking about AI governance, in particular the design of human-computer interfaces. One example with current, widely deployed AI systems is recommender engines (this is not an HCI example). I'm trying to understand the tendencies of recommenders towards biases like concentration, or contamination problems, and how they impact user behaviour and choice. Additionally, how what they optimise for does/does not capture their values, whether that's because of a misalignment of values between the u... (read more)

7
HaydnBelfield
2y
Hi both, yes, behavioural science isn't a topic I'm super familiar with, but it seems very important! I think most of the focus so far has been on shifting norms/behaviour at top AI labs, for example nudging Publication and Release Norms for Responsible AI. Recommender systems are a great example of a broader concern. Another is lethal autonomous weapons, where a big focus is "meaningful human control". Automation bias is an issue even up to the nuclear level - the concern is that people will trust ML systems more blindly, and won't disbelieve them as people did in several Cold War close calls (e.g. Petrov not believing his computer's warning of an attack). See Autonomy and machine learning at the interface of nuclear weapons, computers and people. Jess Whittlestone's PhD was in Behavioural Science; now she's Head of AI Policy at the Centre for Long-Term Resilience.

So, from the perspective of the recruiting party, these reasons make sense. From the perspective of a critical outsider, these very same reasons can look bad (and are genuine reasons to mistrust the group that is recruiting):
- easier to manipulate their trajectory
- easier to exploit their labour
- free selection: build on top of/continue the rich-get-richer effects of 'talented' people
- let's apply a supervised-learning approach to high-impact people acquisition; the training data biases won't affect it

3
Chris Leong
2y
Well, haters are gonna hate. Maybe that's too blasé, but as long as we are talking about university groups rather than high schools, the PR risks don't feel too substantial.

I've wondered in the past whether it's like dropout in a neural network. (I've never looked into this and know nothing about it)

2
Linch
2y
Ooh that's a really interesting hypothesis!

Yeah, I just couldn't understand his comment until I realised that he'd misunderstood the OP as saying it should be a big movement, rather than that it should be a movement with diverse views that doesn't deter great people for having different views. So I was looking for an explanation, and that's what my brain came up with.

2
Linch
2y
Thank you, that makes sense!

Your comment now makes more sense given that you misunderstood the OP. Consider adding an edit mentioning what your misunderstanding was at the top of your comment; I think it'd help with interpreting it.

So you agree 3 is clearly false. I thought that you thought it was near enough true to not worry about the possibility of being very wrong on a number of things. Good to have cleared that up.

I imagine then our central disagreement lies more in what it looks like once you collapse all that uncertainty on your unidimensional EV scale. Maybe you think it looks le... (read more)

Yeah, maybe. Sorry if you found it unhelpful; I could have been clearer. I find your decomposition interesting. I was most strongly gesturing at the third.

2
Linch
2y
I guess my personal read here is that I don't think Thomas implied that we had perfect predictive prowess, nor did his argument rely upon this assumption. 

Correct me if I'm wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer that we know with certainty when asked 'is X or Y more impactful'? 

I got this impression from what I understood your main point to be, something like: 

There is a tail of talented people who will make the most impact, and any diversion of resources towards less talented people will have lower expected value.

I think there are several assumptions in both of these points that I want to unp... (read more)

2
Thomas Kwa
2y
First off, note that my comment was based on a misunderstanding of "big tent" as "big movement", not "broad spectrum of views". As Linch pointed out, there are three different questions here (and there's a 4th important one):
1. Whether impact can be collapsed to a single dimension when doing moral calculus.
2. Whether morality is objective.
3. Whether we have the predictive prowess to know with certainty ahead of time which actions are more impactful.
4. Whether we can identify groups of people to invest in, given the uncertainty we have.
Under my moral views, (1) is basically true. I think morality is not (2) objective. (3) is clearly false. But the important point is that (3) is not necessary to put actions on a unidimensional scale, because we should be maximizing our expected utility with respect to our current best guess. This is consistent with worldview diversification, because it can be justified by unidimensional consequentialism in two ways: maximizing EV under high uncertainty and diminishing returns, and acausal trade / veil of ignorance arguments. Of course, we should be calibrated as to the confidence we have in the best guess of our current cause areas and approaches.
I would state my main point as something like "Many of the points in the OP are easy to cheer for, but do not contain the necessary arguments for why they're good, given that they have large costs". I do believe that there's a tail of talented+dedicated people who will make much more impact than others, but I don't think the second half follows, just that any reallocation of resources requires weighing costs and benefits. Here are some things I think we agree on:
* Money has low opportunity cost, so funding community-building at a sufficiently EA-aligned synagogue seems great if we can find one.
* Before deciding that top community-builders should work at a synagogue, we should make sure it's the highest-EV thing they could be doing (taking into account uncertainty and VOI). No

Correct me if I'm wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer that we know with certainty when asked 'is X or Y more impactful'?

I think this is unhelpfully conflating at least three pretty different concepts. 

  • Whether impact can be collapsed to a single dimension when doing moral calculus.
  • Whether morality is objective
  • Whether we have the predictive prowess to know with certainty ahead of time which actions are more impactful

(I also felt that the applause lights argument largely didn't hold up and came across as unnecessarily dismissive; I think the comment would have held up better without it.)

I guess some scientific topics have pretty good evidence and are hard to believe are extremely wrong (e.g. physics), given how much that is based on them works so well today; then there are other scientific/medical areas that look scientific/medical without having the same robust evidence base. I'd like to read a small overview meta-analysis with some history of each field that claims (and is widely believed) to be scientific/medical, with discussion of some of its core ideas, and an evaluation of how sure we are that it is good and real in the way that a lot of physics is. I don't want to name particular other scientific/medical areas to contrast, but I do have at least one prominently in my mind.

BC is of the past, CB is of the future! We are definitely progressing, right, right alphabet? 

I thought it was Endless Arguments

Mmm, I sense a short life thus far. I posit that the shorter the life thus far, the more likely you are to feel this way. How high impact! Think of all the impact we can make on the impactable ones!

Some things I like about this post:
- I like the topic; I am interested in failure, and a place where failure and mistake-making are discussed openly feels more growthy.
- I liked that you gave lots of examples.

Some things I didn't like about this post:
- Sometimes I couldn't see the full connections you were making, or I could but had to leap to them based on my own preconceptions; maybe they could be explained more? For example, a benefit was a stronger community, but you didn't explain the mechanism by which that leads to a stronger community. I ... (read more)

Thanks for sharing your motivations! Personally, I would have liked to read your original post, even if it was more one-sided, and to have got the other side elsewhere. Being helped with heuristics for making decisions is not really what I was looking for in this post - it feels paternalistic and contrived to me, and I'd enjoy you advocating earnestly for more of something you think is good.
