I'm a senior software developer in Canada (earning ~US$70K in a good year) who, being late to the EA party, earns to give. Historically I've had a chronic lack of interest in making money; instead I've developed an unhealthy interest in foundational software that free markets don't build, because its effects would consist almost entirely of positive externalities.

I dream of making the world better by improving programming languages and developer tools, but AFAIK no funding is available for this kind of work outside academia. My open-source projects can be seen at loyc.net, core.loyc.net, ungglish.loyc.net and ecsharp.net (among others).




Honoring Petrov Day on the EA Forum: 2021

I just want to say, I'm having a really bad day, and this made me feel a little better about myself.

So I won't be launching the missiles, but er... I was also offered codes on LessWrong, so launching still kinda seems like blowing up my own side.

EA's abstract moral epistemology

I wrote my response (as a clueless non-ivory-tower non-academic) to Crary's second incarnation here: "Crary avoids explaining her arguments against Effective Altruism"... let me know if you want the "next 45%" sequel.

Could GiveWell create a cryptocurrency to raise a lot of money?

Holy crap you guys, what a terrible idea to offhandedly reject and ignore. Spreadlove's version of the idea isn't exactly what I would propose, and the post's formatting leaves something to be desired, but... look.

Today I was looking at the top cryptocurrencies by market cap. XRP, in third place, stood out at $94 billion. There is a fixed amount of it, 100 billion tokens, of which the founders "gifted" 80 billion to their own company (and the rest, I assume, to themselves?). It's set up so that whenever you buy it, you're making the founders richer. I am considering buying some myself, despite having no interest whatsoever in helping those founders earn billions of dollars.

If there were a cryptocurrency that enriched the poor instead of the wealthy, I'd hop on that train right now, today, and I think a lot of non-EAs would do that too. And here you are saying no, we shouldn't do that?

Unlike Bitcoin and Ethereum, XRP is designed above all to process transactions as quickly and efficiently as possible. This should make it better as a currency than Ethereum and (obviously) Bitcoin. I don't know the technical details, or the social details of the deals they're making with financial institutions, but I do think cryptocurrency has some potential value as an efficient medium of exchange: a system in which transactions are secure and transaction costs are competitive. When I buy something with a credit card, I do so despite the fact that Visa and affiliates are most likely getting a cut of 2.5% or more.

If a cryptocurrency could cut transaction costs to, say, 15 cents, that's a genuinely valuable thing, not simply a "speculative investment vehicle"; 5 cents would be substantially better still. For some 20 years I've dreamed of being able to pay 10-20 cents to read an article or web page, or, on the flip side, to charge 10-20 cents for my own blog posts (if I ever became popular). Still today I hate the subscription model because it centralizes power and says "well, you paid us $8/mo., you may as well get your money's worth and get all your news from us". 5-cent transactions would make my dream possible (though, yes, it doesn't strictly need to be a distributed trustless blockchain system).
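To make the micropayment arithmetic concrete, here's a quick sketch in Python. The 2.5% card rate comes from above; the ~$0.30 fixed per-transaction charge is my own assumed figure for a typical card network, as are the function names:

```python
# Rough arithmetic behind the micropayment point: a percentage fee plus a
# fixed per-transaction charge (assumed ~$0.30) makes tiny payments
# impossible, while a flat 5-cent fee leaves most of a 20-cent payment
# for the author.

def card_fee(price, rate=0.025, fixed=0.30):
    """Assumed card-network fee: 2.5% of the price plus a fixed charge."""
    return rate * price + fixed

def crypto_fee(price, flat=0.05):
    """Hypothetical flat 5-cent crypto transaction fee."""
    return flat

price = 0.20  # a 20-cent article
print(round(card_fee(price), 3))   # 0.305 -- the fee exceeds the price itself
print(crypto_fee(price))           # 0.05  -- 25% of the price, but viable
```

Under these assumptions the card fee alone is larger than the article's price, which is why 10-20 cent payments don't exist today.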

In addition, while speculation is rampant, the technology is young enough that it is still plausible to create a new system that is superior (in terms of cost, efficiency, security and human-centric safety) to most or all of the top 20 coins. In particular I think there is a very strong need for better trust systems. Simply put, we need a coin that is more easily safeguarded from theft, while simultaneously making it easier to avoid permanently losing one's coins.

GiveWell doesn't have the necessary expertise to build a best-in-class crypcur (can I call it crypcur? "Crypto" ought to refer to cryptography), but you know who I bet would have some good ideas about how to build an altruistic crypcur? Sam Bankman-Fried. I'd love to hear his thoughts on this, and if he doesn't have good ideas himself, he's probably already hired some people who would. But a negative-scoring post is likely to be immediately forgotten, and Sam's probably not going to notice this or comment.

But me, I imagine a 'public benefit corporation' or something, which would reserve the first $F per year of token sales for R&D costs, operating costs and marketing (where F is a floor amount such as $1 million, plus maybe an amount proportional to the logarithm of the coin's popularity), so that in the beginning the money raised would only be used to support the initial cost of development and so on, but as the coin becomes more popular, more and more money goes to effective charities.
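The proposed split can be sketched in a few lines of Python. The constants and function names here are my own illustrative guesses, not part of the proposal; only the shape of the formula (a $1M base plus a log-of-popularity term, with everything above the floor going to charity) comes from the paragraph above:

```python
import math

# Sketch of the proposed revenue split: the first F dollars per year cover
# operations, where F grows with the log of the coin's popularity;
# everything above F goes to effective charities.

BASE_FLOOR = 1_000_000  # illustrative $1M base floor
K = 100_000             # illustrative scaling constant for the log term

def operating_floor(popularity):
    """F = base + K * log(popularity), with popularity >= 1."""
    return BASE_FLOOR + K * math.log(max(popularity, 1))

def to_charity(annual_token_sales, popularity):
    """Whatever exceeds the operating floor is donated."""
    return max(annual_token_sales - operating_floor(popularity), 0)

# Early on, all sales fund development; at scale, most goes to charity.
print(to_charity(500_000, popularity=1))            # 0
print(to_charity(50_000_000, popularity=1_000_000)) # roughly $47.6 million
```

The log term keeps overhead growing far more slowly than revenue, so the charity share approaches 100% of sales as the coin scales.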

At what level of risk of birth defect is it not worth (trying) having a (biological) child for the median person?

While I don't have any expertise here, one of the things I hate most in life is "drive-by downvoters" who have nothing constructive to say and only stick around long enough to downvote. So mainly I'm just here to say sorry about that.

I don't see the case for the "average person" having positive externalities. In my view, most people are close to neutral, a minority leave the world worse than they found it, and another minority make the world better. Whether people make the world better or worse probably depends mostly on culture: most people are sheep and will work to fit in with their culture, and the goodness or badness of that culture's effects on the world won't make much difference to their choice to conform. This explains the difference between an average person who grows up in, say, Germany in the 1940s and becomes a Nazi soldier, and one who grows up in, say, Canada in the 2020s and becomes a plumber. It's not genetics driving this sort of thing, at any rate.

I can't answer your headline question, but I'd be curious to hear others' thoughts on the matter. I will say that while the expected value of having a child may be high, the risk of any kind of defect is scary in a way that feels out of proportion to the EV: if you get unlucky, you will probably spend 18+ years with a child whom you love, of course, but who won't be able to achieve what you hoped.

dpiepgrass's Shortform

After further thought, I decided 2038 was probably at least a few years too early for the highly general-purpose nanotechnology I described. Still, people may be able to go a long way with precursor technologies that can't build arbitrary nanostructures, but can still build an interesting variety of nanostructures.

Meanwhile I would be surprised if a superintelligent AGI emerged before 2050—though if it does, I expect it to be dangerously misaligned. But I have little specific knowledge I could use to estimate nanotech timelines accurately, and my uncertainty on AGI is even greater because the design space of minds is so unknown — AFAIK not just to me but to everyone. This AI alignment newsletter might well improve my understanding of AGI risk, but then again, if there were a "nanotech risks newsletter", maybe it would teach me how nanotech is incredibly dangerous too.

dpiepgrass's Shortform

Not with a bang, but with a quadrillion tiny robots

The year is 2038. A large company has spent the last 18 years developing an additive nanofactory that can produce and scan almost any object at the atomic scale, using a supply of "element cartridges" which contain base elements and compounds that are easily broken into base elements (e.g. graphite, ammonia, silicon crystals, water, table salt). The company worked with a university to develop advanced algorithms, published freely in open-access scientific literature, for constructing virtually any molecule or molecular lattice from the elements in the cartridge, including proteins (though it is not optimized for this, as there are already other companies that specialize in bioengineering). Each factory can produce objects up to one cubic centimetre in size in a vacuum-sealed chamber, and printing something so large could take 2 or 3 weeks. The units also have a "3D scan" ability that builds an atomic-scale model of any outer surface by "feeling" it; this ability is also used during 3D printing to verify that the object isn't moving during fabrication, or, if the object moves, to characterize and potentially correct the problem. The units, which supersede a simpler and less flexible model, have just gone on sale for $10 million apiece. Several universities and companies order one.

In 2040, a millionaire who loves nanotechnology wants to democratize the technology, dreaming of various benefits it could bring to the world. He hires a chip designer, a nanotechnology expert and a few grad students enthusiastic about the technology, and starts a small business that designs a USB-C powered Nanofab™ based on a royalty-free nanoconstructed silicon RISC-V chip and a Linux-based OS in flash storage, plus a custom ROM designed to help load the initial firmware (as electron patterns cannot be 3D printed). Its external dimensions are 7 mm x 11 mm x 4 mm and it can build objects up to 7 mm x 3 mm x 2 mm in size; it is about the same speed as the original factory it was made from, and supports 3D scanning too. In particular, the factory is designed to make hand-assembled copies of itself, by producing and ejecting a sequence of 8 pieces over 7 days, which a person can snap together into a complete unit. One must place an element cartridge on top, which is twice the size of the factory itself; the empty cartridge is also designed to be constructed by the factory, and it has a connector allowing it to be quickly filled from a larger cartridge.

The millionaire's company is located in rented office space next to a university, from which it rents blocks of time on one of the university's nanofactory units. Three months after completing and testing the first factory, the office is filled with over a thousand Nanofabs, all spawned from the first copy ever made. Employees tire of snapping together factories by hand, so they rent a house and hire low-wage employees to spend their days snapping together factories and cartridges for sale to the public, while the office remains devoted to technology development. Factories with cartridges initially sell for $75 each, and refills cost $30. The blueprints for the factory are free for noncommercial use, but the cartridges are patented, proprietary modules that must be purchased (along with raw elements) from the company. The company puts service booths in malls for selling Nanofabs™ and refilling cartridges, though it also sells everything online.
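The numbers in the story are just exponential doubling: each Nanofab copies itself in about a week, so (ignoring assembly time and cartridge supply) the population roughly doubles weekly. A minimal Python sanity check, with the 13-week figure being my own reading of "three months":

```python
# Weekly doubling of self-replicating Nanofabs over roughly three months.
weeks = 13       # ~3 months at one self-copy per factory per week
population = 1   # the first copy ever made
for _ in range(weeks):
    population *= 2
print(population)  # 8192 -- consistent with "over a thousand" Nanofabs
```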

By 2052 the millionaire is a billionaire, and other companies spring up to sell competing cartridges, raw materials, and eventually, high-speed nanofactories. Nanofabs are soon used by millions of people and companies for printing a vast assortment of tiny devices, from skin-adhesive smartphones, to contact lenses that can record and transcribe video/audio of every moment of every day, to carbon-fiber "ringguns", small untraceable handguns that attach to a pair of human fingers. Meanwhile, a field of "artificial life" emerges, illegal nanofabricated drugs are rampant, the presidents of China and/or Russia have long since given themselves absolute power (while maintaining a pretense that they haven't), and research into AGI has started to bear fruit....

Question: how might these developments lead to the end of human civilization? Is it more likely to be destroyed by AGIs or humans? What if the original technology hadn't been so open - might one group of humans or AGIs gain a supreme technological advantage over everyone else, or does this just delay the democratization process?

Examples of loss of jobs due to Covid in EA

I lost my job at an oil & gas software company, so now I have less money to donate to clean energy and other causes. (It's not one of the worst effects of COVID, I'll grant you.)

Googling around, I'm surprised how hard it is to find articles about job losses that were not published in March or April. Oilfield services were hit hardest after April, whereas restaurants, leisure & hospitality were hit hardest in March and April. The best info I could find by sector was this BLS report ("nonprofit" was not among the categories), and this chart shows that unemployment in the U.S. jumped from 3.5% in February to 14.7% in April, then fell steadily to 10.2% in July.

Geographic diversity in EA

I think the calculator you mentioned is kinda... broken. I notice that the local cost of living is ignored and no recommendation is given for incomes under $40,000 USD (or rather the recommendation is "we recommend giving whatever you feel you can afford without undue hardship"). A "well-paying" job in a LMIC is usually below $40,000/year. My highest gross income ever was about $100,000 CAD, and for this they recommend a 1% donation. Nah, I'll stick with 10%+ thanks. You have to make over $83,000 USD for the recommendation to inch past 1%.

Geographic diversity in EA

I'm thinking that it makes relatively more sense for EAs in low-income countries to work at local nonprofits, while EAs in high-income countries are relatively more effective earning to give. Does that sound right to you?

However this does require that a suitable nonprofit job be available in your country! I just checked the 80000 Hours job board and found that the total number of jobs in the "biggest impact" category in Low-Mid Income Countries was 15, versus 336 jobs in the (less populated!) first world. It could well be that there are fewer EAs in LMICs, but probably not 22 times fewer.
