beth

I am a PhD student in theoretical computer science.

Animal welfare is the best cause area.

"Technical AI Safety" is not an effective cause. And even if it were, MIRI wouldn't be a good intervention.

I blog at bethzero.com. Part of my writing there is about why I think "AI Safety" isn't doing anything of value. I'm open to writing prompts. In lieu of funding, my output there is limited by the number of fucks I give.

Useful book recommendations:

James C. Scott, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. Recommended reading for anyone who wants to use rational thought to do good, with a bunch of case studies of schemes that failed miserably as well as a theory of what went wrong.

Aph Ko and Syl Ko, Aphro-ism: Essays on Pop Culture, Feminism, and Black Veganism from Two Sisters. If directly useful thought is like coding up new features, then building theory is like clearing technical debt. The Ko sisters build some quality theory on veganism and intersectional feminism. Not being used to such texts, I found it a hard book to understand. I've probably listened through it 5 times by now.

Cathy O'Neil, Weapons of Math Destruction. Or any other book on the ethics of algorithms, really; there are a number of them out there. If you have a STEM degree, you likely weren't taught about the very real ethical problems you'll happen upon in your career, which means you may not recognize them as such. The value of changing this situation should be self-evident.

Fun book recommendations:

Jiří Matoušek, Thirty-three Miniatures: Mathematical and Algorithmic Applications of Linear Algebra. This is a math textbook, but it is honestly one of the most enjoyable books I've ever read. Matoušek is a phenomenal writer and this is his one text that can be read for actual leisure. Mostly doesn't require more knowledge than you'd learn in a basic linear algebra class.

Andrew Rowe, Sufficiently Advanced Magic. LitRPG that can be accurately judged by its cover. I enjoy the series.

Francis Spufford, Red Plenty. Fictionalized portrayal of life in Soviet Russia. Follows a bunch of people, among them Leonid Kantorovich (a brilliant mathematician and the inventor of linear programming on that side of the Iron Curtain).

beth's Comments

What should Founders Pledge research?

Fighting human rights violations around the globe.

How much EA analysis of AI safety as a cause area exists?

I believe your assessment is correct, and I fear that EA hasn't done due diligence on AI Safety, especially given how much effort and money are being spent on it.

I think there is a severe lack of writing on the side of "AI Safety is ineffective". A lot of basic arguments haven't been written down, including some quite low-hanging fruit.

Four practices where EAs ought to course-correct

As per my initial comment, I'd compare it to pre-WWII Netherlands banning government registration of religion. It could have saved tens of thousands of people from deportation and murder.

Four practices where EAs ought to course-correct
For a more extreme hypothesis, Ariel Conn at FLI has voiced the omnipresent Western fear of resurgent ethnic cleansing, citing the ease of facial recognition of people's race - but has that ever been the main obstacle to genocide? Moreover, the idea of thoughtless machines dutifully carrying out a campaign of mass murder takes a rather lopsided view of the history of ethnic cleansing and genocide, where the real death and suffering is not mitigated by the presence of humans in the loop more often than it is caused or exacerbated by human passions, grievances, limitations, and incompetency.

I am not a historian, but under the Nazi regime the Netherlands had among the highest percentages of Jews killed in all of Western Europe. I remember historians blaming this on the Dutch having thorough records of who the Jews were and where they lived. Access to information is definitely a big factor in how successful a genocidal regime can be.

The worry is not so much about killer robots enacting a mass murder campaign. The worry is that humans will use facial recognition algorithms to help state-sanctioned ethnic cleansing. This is not a speculative worry. There are a lot of papers on Uyghur facial recognition.

EA Forum 2.0 Initial Announcement

I don't have any specific instances in mind.

Regarding your accounting of cases, that was roughly my recollection as well. But while the posts might not address the second concern directly, I don't think that the two concerns are separable. The actual mechanisms and results might largely overlap.

Regarding the second concern you mention specifically, I would not expect those complaints to be written down by any users. Most people on any forum are lurkers, or at the very least they will lurk a bit to get a feel for what the community is like and what it values before participating. This makes people with oft-downvoted opinions self-select out of the community before ever letting us know that this is happening.

The hovering is helpful, thank you.

EA Forum 2.0 Initial Announcement

Are there any plans to evaluate the current karma system? Both the OP and multiple comments expressed worries about the announced scoring system, and in the present day we regularly see people complain about voting behaviour. It would be worth knowing whether the concerns from a year ago turned out to be correct.

Related to this, I have a feature request. Would it be possible to break down scores in a more transparent way, for example by number of upvotes and downvotes? The current system gives very little insight to authors about how much people like their posts and comments. The lesson to learn from getting both many upvotes and many downvotes is very different from the lesson to learn if nobody bothered to read and vote on your content.
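To illustrate why the breakdown matters, here is a toy sketch (the data model is hypothetical, not the forum's actual schema): two posts with identical net karma can have received very different reactions.

```python
# Hypothetical sketch of the requested feature: showing upvotes and
# downvotes separately instead of a single net karma score.
from dataclasses import dataclass


@dataclass
class VoteBreakdown:
    upvotes: int
    downvotes: int

    @property
    def net_score(self) -> int:
        # What the forum currently shows.
        return self.upvotes - self.downvotes

    def summary(self) -> str:
        # What authors would see instead: +N / -M.
        return f"+{self.upvotes} / -{self.downvotes} (net {self.net_score})"


# Two posts with the same net score but very different receptions:
controversial = VoteBreakdown(upvotes=50, downvotes=40)
ignored = VoteBreakdown(upvotes=10, downvotes=0)

assert controversial.net_score == ignored.net_score == 10
print(controversial.summary())  # +50 / -40 (net 10)
print(ignored.summary())        # +10 / -0 (net 10)
```

Under the current display, both posts show "10"; the breakdown is what tells the author which lesson to draw.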

[Link] "The AI Timelines Scam"

Thank you so much for posting this. It is nice to see others in our community willing to call it like it is.

I was talking with a colleague the other day about an AI organization that claims:

  • AGI is probably coming in the next 20 years.
  • Many of the reasons we have for believing this are secret.
  • They're secret because if we told people about those reasons, they'd learn things that would let them make an AGI even sooner than they would otherwise.

To be fair to MIRI (who I'm guessing are the organization in question), this lie is industry standard even among places that don't participate in the "strong AI" scam. Not just in how any data-based algorithm engineering is 80% data cleaning while everyone pretends the power lies in having clever algorithms, but also in how startups use human labor to pretend they have advanced AI, or how short self-driving car timelines are a major part of Uber's value proposition.

The emperor has no clothes. Everyone in the field likes to think they are already aware of this fact, but it remains helpful to point it out explicitly at every opportunity.

Defining Meta Existential Risk

This is mostly a problem with an example you use; I'm not sure whether it points to an underlying issue with your premise:

You link to the exponential growth of transistor density. But that growth really is restricted to just that: transistor density. Growing your number of transistors doesn't necessarily grow your capability to compute the things you care about, both from a theoretical perspective (potential fundamental limits in the theory of computation) and from a practical perspective (our general inability to write code that makes use of much circuitry at the same time, the need for dark silicon, and Wirth's law). Other numbers, like FLOP/s, don't necessarily mean what you'd think either.

Moore's law does not posit exponential growth in amount of "compute". It is not clear that the exponential growth of transistor density translates to exponential growth of any quantity you'd actually care about. I think it is rather speculative to assume it does and even more so to assume it will continue to.
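A toy illustration of my own (not from the post): even if transistor counts keep doubling every two years, Amdahl's law already caps the speedup you get from spreading a fixed workload across more parallel hardware, so exponential hardware growth need not mean exponential growth in useful compute.

```python
# Toy example: Moore's-law transistor growth vs. the speedup ceiling
# imposed by Amdahl's law for a workload that is 95% parallelizable.
def amdahl_speedup(parallel_fraction: float, n_units: float) -> float:
    """Speedup from running the parallel fraction on n_units in parallel."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_units)


for years in (0, 10, 20, 40):
    transistors = 2 ** (years / 2)  # doubling every two years
    speedup = amdahl_speedup(0.95, transistors)
    print(f"{years:2d}y: {transistors:14.0f}x transistors -> {speedup:6.2f}x speedup")

# The speedup saturates near 1 / (1 - 0.95) = 20x,
# while the transistor count grows without bound.
```

Parallel scaling is only one of the practical obstacles mentioned above, but it already shows why "exponentially more transistors" and "exponentially more compute you care about" are different claims.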

I find this forum increasingly difficult to navigate

These are some issues that actively frustrate me to the point of driving me away from this site.

  • Loading times for most pages are unbearably slow. So are most animations (like the menu from clicking your username top right).
  • Many features break badly when Javascript is turned off.
  • Text field for bio is super small and cannot be rescaled.
  • Super upvotes have their use but the super downvote just encourages harsh voting behaviour.
  • The contrast on the collapse comment button is minimal, same for a number of other places.
  • Basic features take too much effort to navigate to. Going to all posts either means two clicks (hamburger menu, then All Posts) or clicking a link that cannot always be seen without scrolling (which is a mess because the page height changes when recent comments have finished loading).