Owner @ Infinite Possibilities Landscape & Design
5 karmaJoined Working (6-15 years)Seeking workToronto, ON, Canada



Aspiring deep generalist & eternal optimist, able to adapt and fill roles that need attention - while wearing a smile! Practised entrepreneur and systematizer. Experienced & empathetic conflict manager. Insatiably curious and constantly looking for connections across networks & disciplines. Willing to work remotely or relocate to maximize my impact as I shift careers, guided by effective altruist principles.


Yes, I have no kids but a large family. When I started to create my will, I planned to leave all of my estate to the next generation of my family. I intended to create a fund they could draw on to participate in extracurricular educational activities they otherwise could not afford.

I realized it was going to be a lot harder for me to donate 10% while working, at least at my current salary. I still needed to build a bit of a safety net and wanted to contribute to an RRSP. So, while working on the draft, I decided to add a 10% bequest to GWWC. However, by the time the final copy was drafted, I had raised the number to 50%. Once I had decided to add the clause for the bequest pledge, raising the number was very easy, as it would have minimal impact on my life and my ability to prepare for economic hardship or emergencies (compared to an income pledge, which would be more difficult).

I believe this is an excellent way for longtermists to have a larger impact on the future, acting as a self-imposed inheritance tax, so long as the giving is directed towards high-impact, effective charities. I have been considering how to present this belief more objectively and rationally: first, by discovering how many more people would create a bequest of their estate vs. an income pledge if it were made easy (Q3, for example), and then by comparing the output of that over the span of 1-2 generations against convincing a smaller number of people to donate immediately based on income.

@david_reinstein did you gather any data?

I also got a lot out of the Charity Entrepreneurship talk. It led to me applying and also doing a 1-1 with Steve. This talk changed my thinking about what to do with my career!

I heard an amazing comment in the live chat of the 'Will AI be an Existential Risk: An Intro to AI Safety Risk Arguments' talk. (Note: this is from memory, so not an exact quote.)

"I feel that Human-AI alignment is no different than Human-Human Alignment, we need to get better at Human-Agent Alignment as a whole"  (note this is from memory so not an actual quote)

I found this a profound statement. I wonder: if the bucket of Human-AI alignment were expanded to include the most tractable topics and causes in Human-Agent alignment, what might emerge that could otherwise have been missed?