Adam Binks
Pursuing a doctoral degree (e.g. PhD)
Working (0-5 years experience)

I'm building tools for forecasting and thinking (PhD CS / HCI at St Andrews)

I also work at Clearer Thinking

Web: http://adambinks.me/

Tweeting, sometimes about EA: https://twitter.com/adambinks_

Comments

Announcing the Clearer Thinking Regrants program

Thanks Ankush! For this first round, we're keeping things intentionally short, but if your project progresses to later rounds there will be plenty of opportunities to share more details.

it is a PDF that I would love to have evaluated and shared with the world, and with anyone who wants to hear about a longtermism project

Posting your ideas here on the EA Forum could be a great way to get feedback from other people interested in longtermism!

Announcing the Clearer Thinking Regrants program

Thanks Stuart, I'll DM you to work out the details here!

AI Twitter accounts to follow?

Maybe a helpful thing to think about is: what's your goal?

E.g. maybe:

  • You want to stay on top of new papers in AI capabilities
  • You want to feel connected to the AI safety research community
  • You want to build a network of people in AI research / AI safety research, so that in future you could ask people for advice about a career decision
  • You want to feel more motivated for your own self study in machine learning
  • You want to workshop your own ideas around AI, and get rapid feedback from researchers and thinkers

I think for some goals, Twitter is unusually helpful (e.g., workshopping early-stage ideas, building a network). For many other goals, I think there's a higher-fidelity, less addictive path: for example, you can stay on top of new AI safety research papers by reading the Alignment Newsletter.

Against “longtermist” as an identity

and the answer is “randomista development, animal welfare, extreme pandemic mitigation and AI alignment”


Some people came up with a set of answers, enough of us agree with this set and they’ve been the same answers for long enough that they’re an important part of EA identities

I think some EAs would consider work on other areas like space governance and improving institutional decision-making highly impactful. And some might say that randomista development and animal welfare are less impactful than work on x-risks, even though the community has focussed on them for a long time.

Introducing Asterisk

This is exciting! If you've gotten this far in your planning, I'd love to hear more about how the journal will be promoted and how you expect readers to find you. Do you have any examples of "user stories" - stories about the kind of reader you'd hope to attract, how they'd find the journal, and what it might lead them to do subsequently?

Bad Omens in Current Community Building

It's also a nice nudge for people to read the books (I remember reading Doing Good Better in a couple of weeks because a friend/organiser had lent it to me and I didn't want to keep him waiting).

Fermi estimation of the impact you might have working on AI safety

Great to see tools like this that make assumptions clear - I think it's not only useful as a calculator but also as a concrete operationalisation of your model of AI risk, which is a good starting point for discussion. Thanks for creating it!

My GWWC donations: Switching from long- to near-termist opportunities?

Hi Tom! I think this idea of giving based on the signalling value is an interesting one.

One idea - I wonder if you could capture a lot of the signalling value while only moving a small part of your donation budget to non-xrisk causes?

How that would work: when you're talking to people about your GWWC donations, if you think they'd be more receptive to global health/animal ideas you can tell them about your giving to those charities. And then (if you think they'd be receptive) you can go on to say that ultimately you think the most pressing problems are xrisks, and therefore you allocate most of your donations to building humanity's capacity to prevent them.

In other words, is the signalling value scale-insensitive (compared to the real-world impact of your donations)?

Longtermist EA needs more Phase 2 work

Quick meta note to say I really enjoyed the length of this post - exploring one idea in enough detail to spark thoughts while staying highly readable. Thank you!

Free-spending EA might be a big problem for optics and epistemics

You might be aware of this, but for others reading: there's a calculator to help you work out the value of your time.

I think it's worth doing once (and repeating when your circumstances change, e.g. a new job), then using that as a general heuristic for time-money tradeoffs, rather than deliberating every time.
