All of JulianHazell's Comments + Replies

Critiques of EA that I want to read

EA is neglecting trying to influence non-EA organizations, and this is becoming more detrimental to impact over time.

 

+1 to this — it's something I've been thinking about quite a bit lately, and I'm happy you mentioned it.

I'm not convinced the EA community will be able to effectively solve the problems we're keen on tackling if we mainly rely on a (relatively) small group of people who are unusually receptive to counterintuitive ideas, especially highly technical problems like AI safety. Rather, we'll need a large coalition of people who can make prog... (read more)

On Deference and Yudkowsky's AI Risk Estimates

On the contrary, my best guess is that the “dying with dignity”-style dooming is harming the community’s ability to tackle AI risk as effectively as it otherwise could.

Announcing What The Future Owes Us

Let's fulfil Mill's wishes by buying some coal mines.

How should I use my professional development budget?

Ah. Duh. My bad!

Right now I'm an MSc student at the Oxford Internet Institute studying part-time for a degree in social science of the internet, with a focus on economics.

I also work on content & research at Giving What We Can, which mostly involves simplifying and translating core EA ideas into something that a general audience would like to read/watch.

This summer I will be self-studying AI governance and then joining GovAI as a summer research fellow. Provided this path seems promising for me, I'm hoping to work at the intersection of policy and resear... (read more)

I'm interviewing Nova Das Sarma about AI safety and information security. What should I ask her?

How important is compute for AI development relative to other inputs? How certain are you of this?

I'm interviewing Nova Das Sarma about AI safety and information security. What should I ask her?

There have been estimates that there are around 100 AI researchers & engineers focused on AI alignment. This seems quite small given the scale of the problem. What are some of the bottlenecks for scaling up, and what is being done to alleviate this?

I'm interviewing Nova Das Sarma about AI safety and information security. What should I ask her?

What opportunities, if any, do individual donors (or people who might not have suitable backgrounds for safety/governance careers) have to positively shape the development of AI?

Software Developers: Should you apply to work at CEA?

I’m neither a software engineer nor on the job market, but I found myself reading to the end because of how much fun this post is. Well done!

Might also add my totally-completely-absolutely-unbiased opinion that working at CEA or other orgs under the CEA umbrella is amazing.

Introductory video on safeguarding the long-term future

Thanks for the feedback! I agree, conveying the potential value of future generations to a general audience can be really tricky. We're currently working on improving our feedback solicitation process, precisely so we can get input from the wide range of people you flagged — from highly engaged EAs to members of the general public.

I do think there's a tricky balance to strike between going too high-level and too granular when creating longtermist content for a wide audience, but it's something I think is extremely valuable to figure out, and I'd like for us to continually get better at it.

1 · timunderwood · 3mo
The main thing, I think, is to keep trying lots of different things (probably even if something is working really well relative to expectations). The big fact about trying to get traction with a popular audience is that you simply cannot tell ahead of time what is good.

Why you should contribute to Giving What We Can as a writer or content creator (opportunities inside!)

Thank you!

My current thinking is that I'll keep the application open until I'm satisfied that I've found a strong team of approximately 2-4 core writers. I'm not quite sure how long that will take, however.

I could also see us wanting to scale in the future, in which case we could onboard on a rolling basis.

Why you should contribute to Giving What We Can as a writer or content creator (opportunities inside!)

This is interesting, Adam. Thanks for sharing. I think you should consider posting this as a standalone piece on the forum, because I can imagine there will be a wide variety of opinions regarding the speed at which EA should grow. What I will say, though, is that I really like the idea of doing profiles on specific people — e.g., "How this software engineer approaches charity" — in order to relate to a wider audience. I think this is exactly the kind of content we'd like to work with our members to produce, so thanks for sharing the idea!

1 · Adam Steinberg · 5mo
Thanks, Julian! It's now posted -- see link above.

Which EA orgs provide feedback on test tasks?

N=1, but when I applied to Longview Philanthropy, I received some feedback upon request after my work trial.

2 · Khorton · 5mo
Thanks, Julian!

Can money buy happiness? A review of new data

One thing I would like to add: I think it's plausible the results would look quite different if Killingsworth's study had included responses from folks living in low-income countries. For example, I wouldn't be surprised if money has a much stronger effect on happiness for people earning ~$500 per year, as things like medicine, food, shelter, and sanitation probably bring significantly more happiness than the kinds of things bought by people earning $400,000+ per year.

Also, even if it does make a small difference (which I fi... (read more)

Can money buy happiness? A review of new data

Thanks for the kind words, Linch! Yes, I agree with the motivated reasoning point. I found myself pretty attached to the $75,000 anecdote (me being a fanboy of Kahneman probably contributed to this) even though it didn't feel quite right. I'm glad this new paper allowed me to update while still aligning with one of my core beliefs.

I think Killingsworth's study captures the same idea that motivates me to take the GWWC pledge, while being a bit more nuanced than Kahneman & Deaton's study. I really do hope this new research can enrich the EA community's view on the relationship between money and happiness.