sableye

Independent Researcher @ Supervised Program for Alignment Research (SPAR)

Bio

my name is jay. i’m a relapsing ex-philosopher specialising in philosophy of mind & cognitive/evolutionary psychology.

  • these days i’m especially interested in: conscious/sentient AI (can AI feel things?), affective computing (how to simulate emotion in artificial systems?), & explainable AI (how to understand complex AI systems?), as well as AI governance.
  • i’m also currently resuming art after a decade-long hiatus. at the moment i’m focusing on figure drawing.
  • i average 40-50k minutes/year on spotify, & ≥80% of my music diet is new material.
  • one of the four books i’m currently reading: taipei by tao lin, described by literary scholars as “an aesthetic experiment in autistic jouissance”. the other three on rotation: house of leaves by mark z. danielewski, a severed head by iris murdoch, & the ministry for the future by kim stanley robinson.
  • i'm usually based in tbilisi, georgia, a city which has quickly captured my heart. for the month of july, i will be in yerevan, armenia.

How others can help me

i'm always interested in opportunities (courses, internships, work) at the intersection of philosophy & AI. this includes work on AI ethics & governance.

How I can help others

  • i’m trained in both science & philosophy. as a seasoned interdisciplinarian, i’m a knowledgeable & rigorous interlocutor who can help you workshop & refine your ideas. past collaborators have praised my editorial insights, so i’m confident i can also help produce written work that is clear &, importantly, not academically dry but actually pleasurable to read.
  • i’m good at project management. from 2021-22, i served as chief organiser of a 3-day international conference (‘inclusion beyond face value: metaphilosophy of/with social justice’). currently, i apply these skills in the research efforts i’m leading, as well as in co-organising the EELISA-funded spring school ethos+tekhnè, which focuses on interdisciplinary perspectives on generative AI.
  • i have a penchant for creating “micro-communities” based around sharing music & ritualistic food appreciation, & have founded numerous reading & writing groups.
  • other useful skills i will just mention, in no particular order: designing elegant & memorable presentation materials (slides, handouts), drawing & painting, building mechanical keyboards, interior design, mending clothes & minor alterations.

Comments

What claim is being made here?

Re: track record, I'm a coauthor on a position paper that we've been gradually rolling out to reviewers who are well-established in this topic.

Finally, please find information about the aims of the survey in the comment below & at this webpage.

What?

Inspired by the PhilPapers survey, we are conducting a survey on experts’ beliefs about key topics pertaining to AI consciousness & moral status. These include:

🧠 Consciousness/sentience

⚖️ Moral status/moral agency

💥 Suffering risk (“S-risk”) related to AI consciousness (e.g. AI suffering)

⚠️ Existential risk (“X-risk”) related to AI consciousness (e.g. resource competition with conscious AI)

Why?

Such a survey promises to enrich our understanding of key safety risks related to conscious AI in several ways.

📊 Most importantly, the results of this survey will provide a general picture of experts’ views about the probability, promises, & perils of AI consciousness.

  • This summary can be used to gauge expert opinion, & make it easier for policymakers, journalists, & lay people to see where the experts stand, where they disagree, & where there is uncertainty.
  • Furthermore, the survey results can also be of use to experts themselves, who may harbour misconceptions about what most other experts believe, or, owing to their specialisation, may not be abreast of advances in other areas of AI research.
  • Overall, the survey enhances the accessibility of AI research, ultimately contributing to a more AI-literate (& hence, better prepared) populace.

⚔️ Analysing the types of answers given by respondents might help to identify fault lines between industry, academia, & policy.

📈 Repeating the survey on an annual basis can assist in monitoring trends (e.g. updates in belief in response to technological advances/breakthroughs, differences in attitudes between industry & academia, emergent policy levers, etc.).

Hey! I'm not sure I see the prima facie case for #1. What makes you think that building non-conscious AI would be more resource-intensive/expensive than building conscious AI? Current AIs are most likely non-conscious.

As for #2, I have heard such arguments before in other contexts (relating to the meat industry), but I found them preposterous on the face of it.