
William the Kiwi

42 karma · Joined Mar 2023 · Working (0-5 years)

Bio

Hi, I'm William, and I'm new to the Effective Altruism community.

William comes from a country in the Pacific called New Zealand. He was educated at the University of Otago, where he received a first-class honours degree in chemistry. He is currently travelling through Europe to learn more about different cultures and ideas.

How others can help me

William is interested in learning more about Artificial Intelligence and the magnitude to which it poses an existential risk to humanity.

How I can help others

William is new to Effective Altruism but is willing to learn ways in which he can aid humanity.

Comments (17)

I would agree with Remmelt here. While upskilling people is helpful, if those people then go on to increase the rate of capability gains at AI companies, they reduce the time the world has available to find solutions for alignment and AI regulation.

While, as a rule, I don't object to industries increasing their capabilities, I do object when those capabilities knowingly lead to human extinction.

I would agree that this is a good summary:

Improving the quality/quantity of output from safety teams within AI labs has a (much) bigger impact on perceived safety of the lab than it does on actual safety of the lab. This is therefore the dominant term in the impact of the team's work. Right now it's negative.

If the perceived safety of a system is higher than its actual safety, it will lead to underinvestment in future safety, which increases the probability that the system fails.
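To make the mechanism concrete, here is a toy model (the numbers and the linear mitigation assumption are purely illustrative, not an empirical claim):

```python
# Toy model: stakeholders fund safety in proportion to *perceived* risk.
# If safety-washing makes the perceived risk lower than the actual risk,
# the system ends up with a higher residual probability of failure.
# All parameter values below are illustrative assumptions.

def failure_probability(actual_risk: float, perceived_risk: float,
                        mitigation_per_unit_investment: float = 0.5) -> float:
    """Residual failure probability after risk-proportional investment."""
    investment = perceived_risk  # stakeholders fund what seems to be needed
    mitigated = mitigation_per_unit_investment * investment
    return max(actual_risk - mitigated, 0.0)

actual = 0.4  # assumed true failure risk

print(round(failure_probability(actual, perceived_risk=0.4), 2))  # calibrated perception -> 0.2
print(round(failure_probability(actual, perceived_risk=0.2), 2))  # safety-washed perception -> 0.3
```

The same actual risk leads to a worse outcome purely because perception was distorted.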

Of the four reasons you listed, reason 4 (safety-washing) seems the most important. Safety-washing, alongside the related ethics-washing and green-washing, is an effective technique that industries use to improve people's perception of them. Lizka wrote a post on this. These techniques are used by many industries, particularly those that produce significant externalities, such as the oil industry. They are used because they work, because they give people an out. It is easier to think about the shiny flowers in an ad than about the reality of an industry killing people.

Safety-washing of AI is harmful as it gives people an out, a chance to repeat the line "well at least they are allegedly doing some safety stuff", which is a convenient distraction from the fact that AI labs are knowingly developing a technology that can cause human extinction. This distraction causes otherwise safety-conscious people to invest in or work in an industry that they would reconsider if they had access to all the information. By pointing out this distraction, we can help people make more informed decisions.

"This distinction between ‘capabilities’ research and ‘safety’ research is extremely fuzzy, and we have a somewhat poor track record of predicting which areas of research will be beneficial for safety work in the future. This suggests that work that advances some (and perhaps many) kinds of capabilities faster may be useful for reducing risks."

This seems like an absurd claim. Are 80k actually making it?

EDIT: the claim is made by Benjamin Hilton, one of 80k's analysts and the person the OP is replying to.

Hi Mo, thanks for the feedback.

  1. Good thought, I've cross-posted it to my account there.
  2. This post was spurred by a conversation I had about the upper limit of AI intelligence and the fact that it is likely very far above all humans combined. This is meant as, like you said, pretty unobjectionable support for my then-assumed conclusion. The conversation was heavily influenced by Cotra's Bioanchors report.
  3. I was estimating the brain's computational ability very roughly. I guessed that more detailed estimates had already been done, but that it would take time to read through them and understand their premises. I'll read through the document when I have some time. (A sketch of the kind of rough estimate I mean is below, after this list.)
  4. These two look interesting to read.
  5. Anders Sandberg is an interesting person. I speculated that someone had done calculations similar to mine; I'm not surprised he is one of them.
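For reference, my rough estimate looked something like the sketch below. Every parameter is a round number I'm assuming for the exercise; more careful analyses (of the kind Bioanchors draws on) vary across several orders of magnitude:

```python
# Back-of-envelope estimate of the brain's computation rate.
# Every number below is a rough, assumed figure (order of magnitude only).

neurons = 8.6e10               # ~86 billion neurons
synapses_per_neuron = 1e3      # commonly quoted as ~1e3-1e4; low end used here
avg_firing_rate_hz = 1.0       # average rates are often put at ~0.1-10 Hz
flop_per_synaptic_event = 1.0  # treat one synaptic transmission as ~1 FLOP

flops = neurons * synapses_per_neuron * avg_firing_rate_hz * flop_per_synaptic_event
print(f"~{flops:.0e} FLOP/s")  # prints ~9e+13 FLOP/s under these low-end assumptions
```

Shifting each parameter to its high end pushes the total up by several orders of magnitude, which is why published estimates disagree so widely.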

Yeah, I found him to be a fascinating person when I talked to him at EAGx Warsaw.

I'm initially sceptical of getting 40% of the mass-energy out of, well, anything. Perhaps I would benefit from reading more on black holes. 

However, I would in principle agree that if black holes are feasible power sources, this would increase the theoretical maximum computation rate.
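If I'm remembering the general relativity result correctly, the ~40% figure comes from thin-disc accretion onto a maximally rotating Kerr black hole, where the extractable fraction is set by the binding energy at the innermost stable circular orbit:

$$\eta = 1 - \frac{E_{\mathrm{ISCO}}}{mc^{2}} = 1 - \frac{1}{\sqrt{3}} \approx 0.42,$$

compared with $1 - \sqrt{8/9} \approx 0.057$ for a non-rotating (Schwarzschild) black hole. So ~40% would be the theoretical ceiling for an ideal, maximally spinning hole, not a typical figure.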

GPT-4 is clearly above the median human on a range of exams. Do we have examples of GPT-4's performance relative to the median human in non-exam-like conditions?
