All of SiebeRozendal's Comments + Replies

Thanks

Maybe quite a few people don't like random ideas being shared on the Forum?

Ah, I wasn't aware that that wasn't the conventional definition. Thanks for the correction.

Still, I think it's important to somehow manage both sets of people and we can probably do better, though my idea is quite random.

Well, yes, but I was thinking about what to do with sociopaths that are already in the community. If your policy is "we kick out every sociopath we identify", no sociopath is going to identify themselves to you. I'm not advocating for attracting new sociopaths.

Mind you, I'm assuming here that there are plenty of sociopaths who aren't that bad and want to do good, but suffer from the disability of not being able to care emotionally for others. I think it would be good if we could at least keep them out of powerful positions.

This was a pretty uninformed thought about how to deal with sociopaths, but it does feel like a problem worth someone thinking more deeply about.

4
Jason
12d
Maybe some of this is coming from a definitional difference -- sociopathy as a "disability of not being able to care emotionally for others" is different from it being akin to, if not an obsolete synonym for, antisocial personality disorder. I don't think calling people who lack affective empathy, without more, sociopaths is likely to be helpful.

Here's another question I have:

  • is SBF a sociopath, and should the community have a specific strategy for dealing with sociopaths?

(I think yes. Something like 1% of the population is sociopathic, and I think EA's utilitarianism attracts sociopaths at a higher rate than the population baseline. Many sociopaths don't inherently want to do evil, especially not those attracted to EA. If sociopaths could somehow receive integrity guidance and be excluded from powerful positions, this would limit risk from other sociopaths.)

2
Jason
14d
If you've concluded that someone is a "sociopath," wouldn't it be better to show them the door? [in quotes because there is no commonly accepted definition of this term as far as I know] I know that doesn't protect the broader society from their risk, but it's not clear to me that sociopath risk reduction makes sense as an EA cause area generally. (Ensuring that the EA community does not enable sociopathic behavior is distinct from that.)

Random idea:

Maybe we should - after this question of whether or not to investigate has been discussed in more detail - organize a community-wide vote on whether there should be an investigation or not?

4
RobBensinger
9d
Knowing what people think is useful, especially if it's a non-anonymous poll aimed at sparking conversations, questions, etc. (One thing that might help here is to include a field for people to leave a brief explanation of their vote, if the polling software allows for it.) Anonymous polls are a bit trickier, since random people on the Internet can easily brigade such a poll. And I wouldn't want to assume that something's a good idea just because most EAs agree with it; I'd rather focus on the arguments for and against.
5
Manuel Allgaier
9d
It's easy to vote for something you don't have to pay for. If we do anything like this, an additional fundraiser to pay for it might be appropriate.
3
JWS
13d
People, the downvote button is not a disagree button. That's not really what it should be used for.

I have not been very closely connected to the EA community the last couple of years, but based on communications, I was expecting:

  • an independent and broad investigation
  • reflections by key players who "approved" of and collaborated with SBF on EA endeavors, such as Will MacAskill, Nick Beckstead, and 80K.

For example, Will posted in his Quick Takes 9 months ago:

I had originally planned to get out a backwards-looking post early in the year, and I had been holding off on talking about other things until that was published. That post has been repeatedly del

... (read more)

It now turns out that this has changed into podcasts, which is better than nothing, but doesn't leave room for conversation or accountability.

Formatting error; this is something Siebe is saying, not part of the Will quotation.

I would like to know what the disagree votes* mean here.

*At the time of this comment, it's 7 Agree - 7 Disagree

3
Vasco Grilo
15d
Hi Siebe, I think some disagree votes may be motivated by interpreting Rob's critique as being directed at EA (as in the title) instead of at specific people or organisations (see EA should taboo "EA should"), although Rob does mention specific individuals (Will) and organisations (EV).

I hope you are correct! As an outsider, I find it very hard to judge without standardized, non-gameable benchmarks for agents.

I really like this post, but I think the concept of buckets is a mistake. It implies that a cause has a discrete impact and "scores zero" on the other 2 dimensions, while in reality some causes might do well on 2 dimensions (or at least score non-zero on them).

I also think that, over time, the community has moved more towards doing vs. donating, which has brought in a lot of practical constraints. For individuals, these could be:

  • "what am I good at?"
  • "what motivates me?"
  • "what will my family think of me?"

And also for the community:

  • "which causes can we convince outsiders to
... (read more)

If anyone has good suggestions of what I could email to relevant MEPs (just Zvi's post?) that would be net-positive (e.g. low risk of bad regulation), I'd be happy to hear them.

1
James Herbert
1mo
Ping Joep at PauseAI? He's a big fan of emailing representatives and has some advice. Here's a recording of a talk he gave hosted by ERO in Amsterdam the other night - I think it contains some pointers towards the end. 
1
Maynk02
1mo
Here, "they" refers to folks from OpenAI who tried to come forward and do something about Sam's manipulative behavior or lies or whatever was happening. Anyone who may potentially provide the leaks or shed some light. Here, I am unsure about the nature of the events.  I hope it is clear now.

Thanks for re-sharing! Unfortunately, these make it quite unclear how much they've given to EA. (I assume it's a large chunk of 'GCR Capacity Building'.)

No they didn't, and it looks like we aren't going to see the investigation unless somebody leaks it. But it looks to me like it had something to do with his pattern of manipulative behavior, and allegedly he lied to other board members that McCauley wanted Toner fired (this was stated in the NY Times article on Murati, I think), which sounds like the proximate cause to me.

But if such behavior came up during the investigation, I'm confused how the investigators could NOT conclude there was good reason for his firing (maybe they're not so independent?) or w... (read more)

1
Maynk02
1mo
I am not sure if leaks are a reliable source in these cases. For one, these instances don't have material evidence. Somebody (or a bunch of somebodies) can only try to come forward to take action. But I am afraid that's what they tried to do. It was like the first necessary crisis (the sooner, the better) for later events to unfold. I am unsure about their nature. Partially based on the new board's current update on choosing the new members.
7
Nick K.
1mo
From gwern's summary over on LessWrong, it sounds like the actual report only stated that the firing was "not mandated", which could be interpreted as "not justified" or "not required". Is it clear from the legal context that the former is implied?

Thanks for making the list, Remmelt!

Not sure how important this one is, but Air Canada recently had to comply with a refund policy made up by its own chatbot.

1
Remmelt
1mo
Thanks! Also a good example of lots of complaints being prepared now by individuals

Also worth reading:

WilmerHale found the prior Board implemented its decision on an abridged timeframe, without advance notice to key stakeholders, and without a full inquiry or an opportunity for Mr. Altman to address the prior Board’s concerns. WilmerHale found that the prior Board acted within its

... (read more)

Impressions:

  • None of these seem to have the relevant AI governance expertise
  • Some nonprofit expertise
  • Mostly corporate expertise
  • I wonder what's happening with the OpenPhil board seat
7
Will Howard
1mo
I'm pretty sure that's gone now. I.e. the initial $30m-for-a-board-seat arrangement wasn't actually legally binding with respect to future members of the board; it was just maintained by who the current members would allow. So now that there are no EA-aligned board members, there is no pressure or obligation to add any. I could be wrong about this, but I'm reasonably confident.

Yeah agree, though the disagreement is also specific to views on AI x-risk, which I view as very different from reputation

The National Institute of Standards and Technology (NIST) is facing an internal crisis as staff members and scientists have threatened to resign over the anticipated appointment of Paul Christiano to a crucial, though non-political, position at the agency’s newly-formed US AI Safety Institute (AISI), according to at least two sources with direct knowledge of the situation, who asked to remain anonymous.

I don't know, threatening to resign is a pretty concrete thing and I don't find "revolt" such an exaggeration. You can doubt the sources and wish for mor... (read more)

Okay, got it!

The grant was also to buy a board seat, which makes it very different from a normal grant.

The 80K job promotion is indeed odd.

I think there was plenty of skepticism towards OpenAI, but maybe less so at the top

I agree with your points in general, but I'm confused about the context. Are you implying here that EA empowered and trusted Sam Altman?:

I get the sense that EAs are, as a whole, too ready to assume that other EAs have low susceptibility to corruption from these sorts of influences.

4
Jason
1mo
That [edit: sentence] wasn't specific to Altman; as the top of the post notes, these propositions were influenced by both the FTX debacle and Altman. The lack of accountability and oversight at certain EA organizations also contributed to that [edit: sentence], which was intended to assert that understating corruption risk from insiders may be a blind spot for EA more generally. That being said, I think there are some data points which could suggest that EAs and EA-related entities should have been more skeptical of Altman and OpenAI than they were. As this post notes, there was a large grant to OpenAI in 2017, and as of 2021 their non-safety-related jobs were still being promoted on the 80K job board even after some worrisome signs of Altman's turn in focus. I've also seen several AI-focused posters opine that EA's involvement with OpenAI has been one of the movement's significant failings, which could suggest an error in trusting Altman's motives.
1
Ian Turner
1mo
That is explicitly true, no? Open Philanthropy was an early OpenAI donor.

I want to share a concern that hasn't been raised yet: this seems like a huge conflict of interest.

From the Power for Democracies website:

Power for Democracies was founded in 2023 by Markus N. Beeko (the former Secretary General of Amnesty International in Germany) together with Stefan Shaw and Stephan Schwahlen, the founders of the philanthropy advisory legacies.now and co-founders of effektiv-spenden.org. Power for Democracies is funded by small family foundations and individuals from Germany and Switzerland who wish to make an effective contribution i

... (read more)

Arguably, it is effective altruists who are the unusual ones here. The standard EA theory employed to justify extreme levels of caution around AI is strong longtermism.

This suggests people's expected x-risk levels are really small ('extreme levels of caution'), which isn't what people believe.

I think "if you believe the probability that a technology will make humanity go extinct with a probability of 1% or more, be very very cautious" would be endorsed by a large majority of the general population & intellectual 'elite'. It's not at all a fringe moral position.

8
Matthew_Barnett
2mo
I'm not sure we disagree. A lot seems to depend on what is meant by "very very cautious". If it means shutting down AI as a field, I'm pretty skeptical. If it means regulating AI, then I agree, but I also think Sam Altman advocates regulation too. I agree the general population would probably endorse the statement "if a technology will make humanity go extinct with a probability of 1% or more, be very very cautious" if given to them in a survey of some kind, but I think this statement is vague, and somewhat misleading as a frame for how people would think about AI if they were given more facts about the situation. Firstly, we're not merely talking about any technology here; we're talking about a technology that has the potential to both disempower humans, but also make their lives dramatically better. Almost every technology has risks as well as benefits. Probably the most common method people use when deciding whether to adopt a technology themselves is to check whether the risks outweigh the benefits. Just looking at the risks alone gives a misleading picture. The relevant statistic is the risk to benefit ratio, and here it's really not obvious that most people would endorse shutting down AI if they were aware of all the facts. Yes, the risks are high, but so are the benefits.  If elites were made aware of both the risks and the benefits from AI development, most of them seem likely to want to proceed cautiously, rather than not proceed at all, or pause AI for many years, as many EAs have suggested. To test this claim empirically, we can just look at what governments are already doing with regards to AI risk policy, after having been advised by experts; and as far as I can tell, all of the relevant governments are substantially interested in both innovation and safety regulation. Secondly, there's a persistent and often large gap between what people say through their words (e.g. when answering surveys) and what they actually want as measured by their behavior

Although I agree with pretty much everything he writes, I feel like a crucial piece of the FTX case is missing: it's not only the failure of some individuals to exercise reasonable humility and abide by common-sense virtues. It's also a failure by the community, its infrastructure, and its processes to identify and correct this problem.

(The section on SBF starts with "When EAs Have Too Much Confidence".)

I don't feel I have much to say about that tbh, though I did talk about auditing financials here https://forum.effectivealtruism.org/posts/eRyC6FtN7QEkDEwMD/should-we-audit-dustin-moskovitz?commentId=qEzHRDMqfR5fJngoo

If we have another major donor with a more mysterious financial background than mine, we should totally pressure them to undergo an audit!

That said, I'm not convinced the next scandal will look anything like that, and the real problem to me was the lack of smoking guns. It's very hard to remove someone from power without that, as we've recentl... (read more)

What would be the proper response of the EA/AI safety community, given that Altman is increasingly diverging from good governance/showing his true colors? Should there be any strategic changes?

So, what do we think Altman's mental state/true belief is? (Wish this could be a poll)

  1. Values safety, but values personal status & power more
  2. Values safety, but believes he needs to be in control of everything & has a messiah complex
  3. Doesn't really care about safety, it was all empty talk
  4. Something else

I'm also very curious what the internal debate on this is - if I were working on safety inside OpenAI, I'd be very upset.

5
Matthew_Barnett
2mo
There's an IMO fairly simple and plausible explanation for why Sam Altman would want to accelerate AI that doesn't require positing massive cognitive biases or dark motives. The explanation is simply: according to his moral views, accelerating AI is a good thing to do. [ETA: also, presumably, Sam Altman thinks that some level of safety work is good. He just prefers a lower level of safety work/deceleration than a typical EA might recommend.] It wouldn't be unusual for him to have such a moral view. If one's moral view puts substantial weight on the lives and preferences of currently existing humans, then plausible models of the tradeoff between safety and capabilities say that acceleration can easily be favored. This idea was illustrated by Nick Bostrom in 2003 and more recently by Chad Jones. Arguably, it is effective altruists who are the unusual ones here. The standard EA theory employed to justify extreme levels of caution around AI is strong longtermism. But most people, probably including Sam Altman, are not strong longtermists.

I'm not convinced that he has "true beliefs" in the sense you or I mean it, fwiw. A fairly likely hypothesis is that he just "believes" things that are instrumentally convenient for him.

1 and 2 seem very similar to me. I think it's something like that.

The way I envision him (obviously I don't know and might be wrong):

  • Genuinely cares about safety and doing good.
  • Also really likes the thought of having power and doing earth-shaking stuff with powerful AI.
  • Looks at AI risk arguments with a lens of motivated cognition influenced by the bullet point above.
  • Mostly thinks things will go well, but this comes primarily from the instinctive feel of a high-energy CEO; CEOs are predominantly personality-selected for optimistic attitudes. If he were to
... (read more)
1
Jelle Donders
2mo
Hard to say, but his behavior (and the accounts from other people) seems most consistent with 1.

Altman, like most people with power, doesn’t have a totally coherent vision for why him gaining power is beneficial for humanity, but can come up with some vague values or poorly-thought-out logic when pressed. He values safety, to some extent, but is skeptical of attempts to cut back on progress in the name of safety.

Good luck!

Quick thoughts:

  • I think another URL (e.g. "forum.animaladvocacy.com") would be more accessible?
  • A pinned post on the AA Forum explaining the initiative and what FAST is might be helpful

That is, unless you specifically want to keep it for FAST members.

2
David van Beveren
2mo
Hi Siebe, Both great suggestions, will look into them— dankjewel!

Not that we can do much about it, but I find the idea of Trump being president at a time when we're getting closer and closer to AGI pretty terrifying.

A second Trump term is going to have a lot more craziness and far fewer checks on his power, and I expect it would have significant effects on the global trajectory of AI.

4
JWS
2mo
Some initial insight on what this might look like practically is that Trump has promised to repeal Biden's executive order on AI (YMMV on how seriously you take Trump's promises)

I know! I wanted to tag Jan Willem van Putten but didn't know how to do that (on mobile)

2
David M
3mo
@Jan-Willem 

I was surprised to see that the Finance position is a volunteer role. It seems out of line with the responsibilities?

1
tobytrem
3mo
In case this question was aimed at me, I'm just link-posting this because I thought people might be interested: I can't answer questions about the role (the above is ~all I know)

why aren't more EA-aligned organizations and initiatives (other than MIRI) presenting global, strictly enforced bans on advanced AI training as a solution to AI-generated x-risk? Why isn't there more discussion of acceptance (of the traditional human condition) as an antidote to the risks of AGI, rather than relying solely on alignment research and safety practices to provide a safe path forward for AI (I'm not convinced such a path exists)?

I think these are far more relevant questions than the theoretical long-termist question you ask.

People can be in ... (read more)

1
Hayven Frienby
3mo
I will admit that my comments on indefinite delay were intended to be the core of my question, with “forever” being a way to get people to think “if we never figure it out, is it so bad?” As for the suffering costs of indefinite delay, I think most of those are pretty well-known (more deaths due to diseases, more animal suffering/death due to lack of cellular agriculture [but we don’t need AGI for this], higher x-risk from pandemics and climate effects), with the odd black swan possibility still out there. I think it’s important to consider the counterfactual conditions as well—that is, “other than extinction, what are the suffering costs of NOT indefinite delay?” More esoteric risks aside (Basilisks, virtual hells, etc.), disinformation, loss of social connection, loss of trust in human institutions, economic crisis and mass unemployment, and a permanent curtailing of human potential by AI (making us permanently a “pet” species, totally dependent on the AGI) seem like the most pressing short-term (</= 0.01-100 years) s-risks of not-indefinite-delay. The amount of energy AI consumes can also exacerbate fossil fuel exhaustion and climate change, which carry strong s-risks (and distant x-risk) as well; this is at least a strong argument for delaying AI until we figure out fusion, high-yield solar, etc. As for that third question, it was left out because I felt it would make the discussion too broad (the theory plus this practicality seemed like too much). “Can we actually enforce indefinite delay?” and “what if indefinite delay doesn’t reduce our x-risk?” are questions that keep me up at night, and I’ll admit that I don’t know much about the details of arguments centered on compute overhang (I need to do more reading on that specifically). I am convinced that the current path will likely lead to extinction, based on existing works on sudden capabilities increase with AGI combined with its fundamental non-connection with objective or human values. I’ll end with thi

Heavy use of kava is associated with liver damage, but it seems much less toxic than alcohol. (I use it in my insomnia stack)

I just want to share that I think you did an excellent job explaining the arguments on the recent Politico Tech podcast, in a way that I think comes across as very grounded and reasonable, which makes me more optimistic that MIRI can make this shift. I also hope that you can nudge Eliezer more towards this style of communication, which I think would make his audience more receptive. (I thought the tone of the TIME piece didn't seem professional enough). This seems especially important if Eliezer will also focus on communications and policy instead of research.

Really interesting initiative to develop ethanol analogs. If successful, replacing ethanol with a less harmful substance could really have a big effect on global health. The CSO of the company (GABA Labs) is Prof. David Nutt, a prominent figure in drug science.

I like that the regulatory pathway might be different from most recreational drugs, which would be very hard to get de-scheduled.

I'm pretty skeptical that GABAergic substances are really going to cut it, because I expect them to have pretty different effects to alcohol. We already have those (L-thean... (read more)

2
Elina Christian
3mo
Hi, I agree ETOH is extremely harmful. However, there are existing medications which act on GABA, many of which are both highly addictive and therefore highly regulated themselves. Barbiturates are a (now outdated) drug class which acts on GABA; others include benzodiazepines and more modern sleep drugs like Zolpidem. All have significant side effects. This website strikes me as very selective in how scientific it is - for example, "At higher levels (blood ethanol >400mg%, as would occur after drinking a litre of vodka) then these two effects of ethanol – the increase in GABA inhibition and the blockade of glutamate excitation – can combine to produce a lethal level of sedation and respiratory depression. In terms of health impacts, alcohol (strictly speaking, ethanol) is in a class of its own, and very different from GABA." ETOH is not that different from GABA, as you can also overdose and cause respiratory depression and death from GABA inhibition. I would like to see some more peer-reviewed studies around this new drink, and a comparison to placebo (if you're giving people this drink and saying it will enhance "conviviality and relaxation" then it probably will). As with pretty much anything health related, there's no quick fix. Things which depress the CNS are addictive, and not that dissimilar from one another. I can see the marketing opportunity for this in the "health food" arena, which makes me more skeptical of this site. I imagine, if released, it may have a similar fate to cannabinoid molecules being included in all sorts of products - allowed because they are ineffective, or vapes - with a different risk profile to the original substance.
9
John Salter
3mo
People massively underestimate the damage alcohol causes per use because of how normalised it is.

Top Anglophone universities are already quite small, and I find it hard to believe that the migration numbers are significant

I wonder whether the focus on top universities in itself carries Anglophone bias. Most other countries don't have a large disparity in talent-attraction between different universities. Instead, the talent is concentrated within universities, e.g. in Honours programmes and by grades.

5
Rebecca
4mo
I think it's hard for many other countries to reach a similar level of talent differential, as many of the top students from those countries are at the top Anglophone universities

In terms of policy recommendations, these differences don't seem to matter.

Maybe I'm nitpicking, but I see this point often and I think it's a little too self-serving. There are definitely policy ideas in each sphere that trade off against the other. E.g. many AI x-risk policy analysts (used to) want few players in order to reduce race dynamics, while such concentration of power would be bad from the perspective of present-day harms. Or keeping significant chip production out of developing countries.

More generally, if governments really took x-risk seriously, they would be willing to sacrifice significant civil liberties, which wouldn't be acceptable at low x-risk estimates.

1
Daniel_Friedrich
4mo
That's a good note. But it seems to me a little like pointing out there's friction between a free market policy and a pro-immigration policy because a) Some pro-immigration policies would be anti-free market (e.g. anti-discrimination law) b) Americans who support one tend to oppose the other. While that's true, philosophically, the positions support each other and most pro-free market policies are presumably neutral or positive for immigration. Similarly, you can endorse the principles that guide AI ethics while endorsing less popular solutions because of additional x-risk considerations. If there are disagreements, they aren't about moral principles, but empirical claims (x-risk clearly wouldn't be an outcome AI ethics proponents support). And the empirical claims themselves ("AI causes harm now" and "AI might cause harm in the future") support each other & were correlated in my sample. My guess is that they actually correlate in academia as well. It seems to me the negative effects of the concentration of power can be eliminated by other policies (e.g. Digital Markets Act, Digital Services Act, tax reforms)

It's called an existential catastrophe: https://www.fhi.ox.ac.uk/Existential-risk-and-existential-hope.pdf or if you mean 1 step down, it could be a "global catastrophe".

or colloquially "doom" (though I don't think this term has the right serious connotations)

5
Oliver Sourbut
4mo
Yeah. I also sometimes use 'extinction-level' if I expect my interlocutor not to already have a clear notion of 'existential'.

Something like the recent Nonlinear post, but focused on Sam, would likely have far, far higher EV.

I felt really uncomfortable reading this

I agree with everything but the last point. Director or CEO simply refers to the name of a position, doesn't it?

4
Joseph Lemien
4mo
Yes, it refers to a position. So if this is actually someone's job title, then there kind of isn't anything wrong with it. And I sympathize with people who found or start their own organization. If I am 22 and I've never had a job before but I create a startup, I am the CEO.  So by the denotation there is nothing wrong with it. The connotation makes it a bit tricky, because (generally speaking) the title of CEO (or director, or senior manager, or similar titles) refers to people with a lot of professional experience. I perceive a certain level of ... self-aggrandizement? inflating one's reputation? status-seeking? I'm not quite sure how to articulate the somewhat icky feeling I have about people giving themselves impressive-sounding titles.

Ray Dalio is giving out free $50 donation vouchers: tisbest.org/rg/ray-dalio/

Still worked just a few minutes ago

9
Pablo
4mo
No longer working.
2
John Salter
4mo
Worked 20 minutes ago. Process took me ~5 minutes total.
7
MHR
4mo
Worked for me just now, gave $50 to The Humane League :) 

GiveWell is available (search Clear Fund)!

8
MathiasKB
4mo
Just did it, still works. You can donate to what looks like any registered US charity, so plenty of highly effective options whether you care about poverty or animal welfare.

I wanted to check whether this project could be made redundant by the expected arrival of TB vaccine(s) later this decade, but they had only 50% efficacy in Phase 2 trials, so it seems treatment will indeed be needed for quite a while.

2
Habiba Banu
4mo
Yes I asked some TB experts about precisely this a little while ago and I totally agree with your take: eventually there will hopefully be even better preventative measures like vaccines but they really do seem like a while off right now. So right now the WHO is keen to push on expanding access to TB preventative treatment.

A pretty poor piece of journalism in my opinion. It gets a number of facts wrong. For example:

  • Adam D'Angelo doesn't have "deep ties" to EA
  • Jaan Tallinn's comments aren't "against EA", just saying that these governance mechanisms weren't enough (I doubt many EA AI Safety advocates have claimed such a thing)
  • The claim that the board didn't consult with lawyers or a communications firm, based on this tweet, which refers to this WSJ article, which doesn't mention either of those. It could be true of course, but they weren't justified in claiming it

This looks ever more unlikely. I guess I didn't properly account for:

  • the counterfactual to OpenAI collapse was much of it moving to Microsoft, which the board would've wanted to prevent
  • the board apparently not producing any evidence of wrongdoing (I find this very surprising)

Nevertheless, I think speculating on internal politics can be a valuable exercise - being able to model the actions & power of strong bargainers (including bad faith ones) seems a valuable skill for EA.
