MichaelDello

545 karma

Comments: 147

Thanks for the detailed response! I've included a few reflections on the work in the conclusion section. Fair point on the internal costs - I was thinking about this as a cost but not as an impact multiplier from funding. With some more work it could be used as justification for the existence of ECA and why consumers pay their salary. ~$200k seems right for staff time plus overhead.

Yeah, "over half" was quite surprising to me too. I wonder how much of this is because organisations may only lodge a rule change request if they have a decent sense that it is likely to succeed. If individuals and smaller or outside organisations took my advice and started lodging more rule change requests, that ratio would surely change.

Thank you for nudging me to expand on the specifics of what I did. I think I'll write something more detailed at some point, but for now I'll just brain dump some dot points on what we did in those 6 months - hopefully that's helpful for now.

  • ECA strategic planning to identify the energy consumer issues that are most important, tractable, and within ECA's wheelhouse (see 3-year plan as the output)
  • Deep literature review (many rounds of this at each stage - I tried to just become an expert in the topic - I used AI pretty heavily as a learning companion)
  • Internal brainstorming with seniors to identify the problem and design a solution
  • Stakeholder mapping to identify who we need to involve, consult, mobilise, etc. (e.g., using IAP2 frameworks, plotting stakeholders on interest vs power axes)
  • Building a spreadsheet based on stakeholder mapping with details of key organisations including contacts
  • Identifying the value propositions of our rule change request - i.e. 4 main/unique values that different stakeholders would receive if the rule change were successful
  • Grouping stakeholders into one of 4 value propositions
  • Several rounds of seeking and incorporating feedback with key decision makers and stakeholders (sending dot point summaries, drafts, presenting to, meeting with, etc.)
  • Multiple rounds of internal drafting and review
  • Final checks (placed a high bar on being accurate and not having typos, etc.)
  • Lodged rule change
  • Offered briefings to interested parties
  • Media releases leading to some articles in industry press
  • Developed a fact sheet to help stakeholders understand the issue and lower the bar to making a submission
  • Social media posts to create awareness and (primarily) to encourage submissions
  • Running 4 workshops/briefings (one for each value proposition) to mobilise stakeholders to make a submission and collect feedback
  • Responding to government consultation papers (some more literature review, feedback, brainstorming for solutions to specific issues raised)
  • Commissioning external expert analysis as needed

I was the project leader for all of this (except the strategic planning which happened before I joined) but didn't necessarily do all of it myself.

Thanks for sharing and great work, I'm inspired! I'm starting a new role at a large company in a few weeks after working at smaller organisations/academia for a while, and I'm excited to explore what's possible once I settle in.

I did a limited version of this 10 years ago at my first full-time job at a large Australian company. A few colleagues came to a giving game I co-organised with a local EA chapter. I spoke to the company's philanthropic giving lead - I didn't make any headway, and found out that the company's corporate giving was based predominantly on supporting the communities they operated in (I was a bit naive).

I'm really excited about this! I'll be watching it closely, because starting something similar here in Australia could be interesting. 

My experience working in policy has been that it can either be surprisingly tractable or surprisingly intractable. Achieving change in energy policy in Australia has been surprisingly easy, and achieving change in farmed animal policy in Australia has been surprisingly hard. 

I'm not sure yet which of the two would be most analogous to wild animal welfare. Farmed animal policy has strong entrenched interests, but perhaps wild animal welfare doesn't, because many people don't care strongly about the issue one way or the other. It could be easy to get some quick wins.

$10,000 to Good Ancestors Project - all my post-tax income above $59,000 for the last financial year.

Between a new job and having finally paid off all my student debt, I'm excited about the next year.

A lot of people have been talking about data centres in space over the last few weeks. Andrew McCalip built a model to see what it would take for space compute to get cheaper than terrestrial compute.

This quote stood out:

we should be actively goading more billionaires into spending on irrational, high-variance projects that might actually advance civilization. I feel genuine secondhand embarrassment watching people torch their fortunes on yachts and status cosplay. No one cares about your Loro Piana. If you've built an empire, the best possible use of it is to burn its capital like a torch and light up a corner of the future. Fund the ugly middle. Pay for the iteration loops. Build the cathedrals. This is how we advance civilization.

I like the sentiment, but I'm not sure space data centres are a net positive for humanity.

That said - what are some candidates for billionaire pet projects that reduce suffering? A billionaire getting fixated on making cellular agriculture dirt cheap seems promising to me.

100% agree

Far-future effects are the most important determinant of what we ought to do

Assuming this captures x-risk considerations, the scale of the future is significantly bigger than that of the present day.

Thanks for writing about this! I've thought about this as well, but there are a couple of reasons I haven't done it yet. Primarily, I've been thinking more lately about making sure my time is appropriately valued. I'm still fairly early-to-mid career, and as much as it shouldn't matter, taking a salary reduction now probably means reduced earning potential in the future. This obviously matters less if you don't plan on working anywhere other than highly impactful non-profits in the near future, or if you're later in your career, but I think it's worth thinking about even if you're financially secure.

I think the sort of people who frequent this forum tend to be a bit too keen to work for not much because they think the work is so important - kind of like an 'impact discount' - and I think historically effective non-profits have been a bit too keen to offer impact-discounted salaries. This seems to have been less of an issue over the last three years in my experience, maybe because EA-aligned orgs are getting better funded.

I think taking impact discounts probably harms our impact in the longer term. It's not just about the money per se, but about the perception of value. Unfortunately, people tend to see your hourly rate as a reflection of how much value you provide. Young people should probably care about this a bit more, though granted it's a tough job market, and it's probably a much lower priority than just getting a job.

It also, of course, depends on whether you think your current employer is the most effective place to be sending your counterfactual dollars. It's plausible that one might work at a highly impactful non-profit, but they think that another non-profit they could donate to is twice as effective (or whatever the ratio is based on their marginal tax rate, but it's probably not more than double).
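A rough sketch of that tax-rate bound (my own illustration, not something from the original comment): if forgoing $1 of pre-tax salary delivers the full $1 to your employer, but taking the salary leaves only $(1 - t) to donate after tax at marginal rate t, then the other charity needs to be about 1/(1 - t) times as effective to come out ahead.

```python
# Hypothetical illustration of the break-even effectiveness ratio.
# Assumption: forgoing $1 of pre-tax salary gives your employer the full $1,
# while taking the salary and donating leaves only $(1 - t) after tax
# (this ignores tax deductibility of donations, which would shrink the gap).

def break_even_ratio(marginal_tax_rate: float) -> float:
    """Effectiveness multiple another charity needs before donating
    post-tax income beats an equivalent pre-tax salary reduction."""
    return 1.0 / (1.0 - marginal_tax_rate)

for t in (0.325, 0.37, 0.45):  # illustrative marginal tax rates
    print(f"marginal rate {t:.1%}: break-even ratio {break_even_ratio(t):.2f}x")
```

At a 45% marginal rate the ratio is about 1.8x, consistent with the intuition that the required multiple is "probably not more than double" for any marginal rate below 50%.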

Luke, you've been so strong at the helm of GWWC for so long that I'm often guilty of thinking of you and GWWC as synonymous (that's a compliment, I swear!). Well done on the amazing work you've done, and enjoy a well-deserved break. I can't wait to see what you do next.

As someone who is not an AI safety researcher, I've always had trouble knowing where to donate if I wanted to reduce x-risk specifically from AI. I think I would have donated a significantly larger share of my donations to AI safety over the past 10 years if something like an AI Safety Metacharity had existed. Nuclear Threat Initiative tends to be my go-to for x-risk donations, but I'm more worried about AI specifically lately. I'm open to being pitched on where to give for AI safety.

Regarding the model, I think it's good to flesh things out like this, so thank you for undertaking the exercise. I had a bit of a play with the model, and one thing that stood out to me is that the impact of an AI safety professional at different percentiles doesn't seem to depend on the ideal size, which doesn't seem right (I may be missing something). Shouldn't the marginal impact of one AI safety professional be lower if it turned out the ideal size of the AI safety workforce were 10 million rather than 100,000?
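One way that dependence could enter, as a toy sketch (my own assumption, not the post's actual model): if the field's total value V is fixed and accrues roughly linearly until the workforce reaches its ideal size, the marginal professional is worth V / N_ideal, so a 100x larger ideal size implies a 100x smaller marginal impact.

```python
# Toy model (my own assumption, not the post's model): total impact
# rises linearly to a fixed value V as the workforce approaches its
# ideal size, so each marginal hire below that size contributes
# V / n_ideal, which falls as the ideal size grows.

def marginal_impact(v_total: float, n_ideal: int, n_current: int) -> float:
    """Impact of one additional professional under linear returns."""
    if n_current >= n_ideal:
        return 0.0  # field already at or beyond its ideal size
    return v_total / n_ideal

V = 1.0  # normalised total value of a fully staffed field
print(marginal_impact(V, 100_000, 10_000))     # smaller ideal size
print(marginal_impact(V, 10_000_000, 10_000))  # 100x larger ideal size
```

Under this toy model the marginal impact falls by exactly the factor the ideal size grows, which is the dependence I would have expected the model to show.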

No problem! I think my main concern is just that you make sure the water properties at 0.5-1m depth match the water properties at the surface, or at least, you can work out how they vary to apply corrections to the satellite data. But overall I'm positive about this venture.
