All of Lukas Trötzmüller's Comments + Replies

Related: this quote from the FAQ on their website

We encourage people to do both: support charities now AND support their futures. When you support charities directly, they will spend your money directly. The difference with Give For Good is that with us, your one-time donation results in a stable income for the charity year in, year out.

"They will spend your money directly" - seems like a strong statement, makes it sounds like charities usually do not invest any of the money they receive. Is that true? I don't know, just flagging this for further discussion.

1
Rik
1y
Dear Lukas, thanks for having such a detailed look at our website, much appreciated! To answer your statement, see my reply to Tom: most charities that I know of face legal limits on how much money they are allowed to invest; those investments are intended for backup/reserve purposes. I don't know many that invest as a source of income (although they should!!). Best, Rik

Thanks for sharing about your initiative. I do have some significant doubts about this project.

Have you interviewed charities and asked them whether they prefer donations through your scheme vs donations made directly to them?

Is there a chance that this project has negative impact, by cannibalizing direct donations and turning them into indirect donations via your platform - potentially against the will of the charities themselves, i.e., against their judgement that they could have more impact with direct donations?

Or alternatively, looking at the opposite... (read more)

1
Rik
1y
Dear Lukas, thank you for your reply. To answer your questions: yes, all the charities on our website were 'onboarded', meaning they all specifically approved being listed there. Many actually enthusiastically embrace the idea! The reasons for this are twofold:

1. Many charities currently see their stable, periodic donations trending downwards. The reason is an ongoing shift in how people donate. It used to be very normal to transfer a fixed amount periodically to your favourite charities. Nowadays, however, this is changing towards one-time gifts based on campaigns (think ice bucket challenge) and tips from influencers and blogs/vlogs/podcasts. As a result, many charities can count less and less on a stable, annual income. This is a challenge that Give For Good helps to solve.

2. We were told by several of the larger charities that their research has shown that the more 'methods' of giving there are, the higher the overall income. There is some degree of cannibalization between the different donation methods, but overall the income is greater. So we understood from them that this is an extra method of income, which they expect will increase their overall income (especially long-term, given our model). Also, they expect that because of our model, we may be able to attract donors from sectors that are normally hard for them to reach (e.g. the financial sector).

Re your second question, most of our donors today enter via our general website and are not looking for one specific charity to support via our platform using our methods (which also happens). So yes, our platform generates 'extra' attention for the charities we list that would otherwise not have occurred.

Finally, to answer your question about why our scheme and not direct donations: this is discussed quite extensively on our website, but in a nutshell there are two major benefits: 1. Over time the money that goes to the charities is much more. For example

Related: There is EA the actual movement, and EA the philosophy. I wonder how much we are losing out on by not having a clear line between the two. Maybe internally this distinction can be carefully navigated, but to an outsider it is one and the same. I wonder if that might be one of the things that could be improved about EA.

I imagine it feels challenging to share that and I applaud you for that.

While my EA experiences have been much more positive than yours, I do not doubt your account. For many of the points you mention, I can see milder versions in my own experience. I believe your post points towards something important.

Not if this just destroys momentum towards sustainable funding for AI safety and other longtermist causes.

3
freedomandutility
1y
But if you think AGI is very close, then there isn't a lot of time for you to get caught, and there isn't a lot of time for future AI safety funding to emerge.

Downvoted for several reasons: because I would expect colleagues in any work environment to hook up, because I think it's very unkind to assume sexual relations in the workplace are indicative of a problem, and because I'm against outing people's sex lives unless directly relevant to a scandal. And finally, because it seems unnecessary to mention polyamory when talking about two people hooking up.

(Retracted after more consideration. I still disagree with the wording of the comment I responded to but can now see it points towards a real problem)

[This comment is no longer endorsed by its author]
Sabs
1y

This is nonsense. Financial firms typically have strict disclosure rules about relationships between colleagues because people will commit fraud out of loyalty to people they're fucking. As, y'know, may well have happened here!

Strongly disagree-voted because "I wish they had sat down" doesn't address the publicly stated reason why Binance pulled out. It makes it seem like they had no good reason, and a good conversation would have fixed the issues. Without knowing much, this seems implausible to me.

Also, I consider "I don't think he did anything in bad faith" to be somewhat irresponsible. If SBF actually did something wrong, then EAs going around and supporting him by saying "I don't think he did anything wrong" will hurt the optics of this further.

8
Brendon
1y
It was an attack from Binance that caused the entire episode. CZ chose the nuclear option of dumping FTT, which he knew would hurt people, and most of all Sam. This started with the Bankless interview: CZ and others didn't like SBF's pragmatic approach to regulation, and this was the response. He could have gone to Sam, given him options, and made it clear that if he didn't change course both on regulation and on the use of FTT, then this is what he would do. However, he didn't; he went nuclear too soon. Sam should have been more sensitive to the situation and prevented it before it got to this point. In terms of bad faith, it's very much in line with EA reasoning that he could have thought: "I'm taking risk X, the probability of massive failure is very low, and the benefit is high. Therefore I can do more good by taking risk X. If I do X over 10 years, I can give more than if I didn't do X." This fits his profile more than complete bad faith. Personally I think what Sam did was reckless: he shouldn't have used FTT the way he did, and he should have been the leader in merkle-tree proof-of-reserves. However, I think the probability of complete bad faith is very low.

Curious why this is getting downvoted. It seems like another initiative in the Applied Rationality space, which sounds quite useful to me.

While I'm personally not interested in the bootcamp, I am curious if the people who downvoted have specific criticism or reservations about the program.

1
Czynski
1y
Simple: It's another meta thing. Those have a very poor track record and seem to require extraordinary competence to be net-positive.
3
Ti Guo
2y
Thanks very much Lukas for this message! We also do not know why there are downvotes for this post. If you downvoted the post, or have an idea why people did, please comment below, give us anonymous feedback at https://forms.gle/qgPojwg7q1QQ4YsSA, or book me a call at https://calendly.com/ti-guo/book-a-call. This is very important to us and we sincerely appreciate the feedback.

Location: Graz, Austria

Remote: Yes

Willing to relocate: For the right opportunity

Skills:

  • Startup Founder & Software Developer with 12 years of experience
  • Basic knowledge of most aspects of running a business
  • Strong Technical Skills
    • Computer Graphics and Game Development
    • Web Scraping and Automation
    • Performance Optimization
    • Algorithmic Problems
    • General backend development
    • Microsoft .NET
    • Desktop Development
  • Some experience in
    • Data Pipelines
    • Network Engineering
    • Computer Vision
    • Applied Math
    • Software Testing
    • UI Design & Usability
  • Other non-technica
... (read more)

I'm embarrassed to admit that I frequently catch myself being more likely to upvote posts from users I know. I also find myself anchoring my vote to the existing vote count (if a post has a lot of upvotes, I am less likely to downvote it). I'm pretty sure I'm not the only one.

Furthermore, I observe how vote count influences my reading of each post more than it should. Groupthink at its best.

I suspect that if the forum hid the vote count for a month, there would be significant changes in voting patterns. That being said, I'm not sure these changes would actually influence the vote-sorted order of the postings - but they might. I suspect it would also change the nature of certain discussions.

3
Phil Tanny
2y
Admirable honesty, well done.  

In order to make this even remotely plausible, the rules for tax deductible charities would need to be far more stringent. And then you get a situation like we currently have in Austria, where not a single EA-aligned charity is tax-deductible at all.

4
RyanCarey
2y
Maybe you could do 70% with some intermediate level of stringency. And plenty of EA charities are tax-deductible in the US, which is where much more of the wealth is.

Nevertheless it does send a certain signal to the public. The way things look is important, especially when it comes to completely legal ways to circumvent taxes - where intent plays a role.

The justification of crypto regulation requires background information that outside observers don't have. Also, it's impossible to judge from the outside whether or not tax savings was one of the arguments considered in addition to the regulatory situation.

There is no extreme poverty or starvation in democratic countries

This seems like a strong claim to me. What's your source for that?

and access to education and health care is one hundred percent, at least in older democracies. Younger ones are getting there fast.

Where do you draw the line between older and younger democracies? Isn't the US pretty old compared to other democracies [1] - and does it provide "100% access to health care" to its citizens?

all countries and all people lived in democracies the major problems of humanity would be solved or

... (read more)
3
Guy Raveh
2y
I'm not endorsing OP (I think democracy is a good cause, but not enough), but: I'm pretty sure it will. Democracies don't do enough about these at all, but the question is what this stands as an alternative to. Autocracies have two important characteristics which make them very dangerous in terms of human-created X-risks, and AI in particular:

1. Leaders are much less accountable, and more prone to act violently or irresponsibly. Thus, they can drive existential risks higher using state-level resources.

2. Leaders have an incentive to consolidate their power and take away citizens' autonomy - and AI gives excellent tools to do that. If a non-democratic government somehow manages to use its control to build an aligned, superintelligent AGI, it will be used to dominate the rest of the world and create an eternal totalitarian regime. This is as bad as extinction, or worse.

pretty much generally agreed upon in the EA community that the development of unaligned AGI is the most pressing problem

While there is significant support for "AI as cause area #1", I know plenty of EAs who do not agree with this. Therefore, "generally agreed upon" feels like too strong a wording to me. See also my post on why EAs are skeptical about AI safety.

For viewpoints from professional AI researchers, see Vael Gates's interviews with AI researchers on AGI risk.

I mention those pieces not to argue that AI risk is overblown, but rather to shed... (read more)

2
Olivia Addy
2y
Thanks for linking these posts, it's useful to see a different perspective to the one I feel gets exposed the most.

I found myself confused about the quotes, and would have liked to hear a bit more where they came from. Are these verbatim quotes from disillusioned EAs you talked to? Or are they rough reproductions? Or completely made up?

2
Helen
2y
A mix! Some things I feel or have felt myself; some paraphrases of things I've heard from others; some ~basically made up (based on vibes/memories from conversations); some ~verbatim from people who reviewed the post.

The sample is biased in many ways: because of the places where I recruited, interviews that didn't work out because of timezone differences, people who responded too late, etc. I also started recruiting on Reddit and then dropped that in favour of Facebook.

So this should not be treated as a representative sample; rather, it's an attempt to get a wide variety of arguments.

I did interview some people who are worried about alignment but don't think current approaches are tractable. And quite a few people who are worried about alignment but don't think it should ge... (read more)

I'm not quite sure I read the first two paragraphs correctly. Are you saying that Cotra, Carlsmith and Bostrom are the best resources but they are not widely recommended? And people mostly read short posts, like those by Eliezer, and those are accessible but might not have the right angle for skeptics?

3
niplav
2y
Yes, I think that's a fair assessment of what I was saying. Maybe I should have said that they're not recommended widely enough on the margin, and that there are surely many other good & rigorous-ish explanations of the problem out there. I'm also always disappointed when I meet EAs who aren't deep into AI safety but curious, and the only things they have read are the List of Lethalities & the Death with Dignity post :-/ (which are maybe true but definitely not good introductions to the state of the field!)

I'm a software entrepreneur transitioning into higher-impact ventures. I had a mentoring call with Yonatan a couple of months ago. What I really liked about his approach was the structure of the call: first, gathering an overview of the issues on the table; second, going through them in a fast-forward kind of way; and third, figuring out which ones are the most important to talk about.

The outcomes of the call directly led into the next steps I needed to explore this path.

The fact that he asked for feedback at the end of the call shows me that Yonatan is seri... (read more)

The German-speaking EA meetups I know are all very happy to switch to English whenever non-German-speakers are present. Can't imagine it would be a problem anywhere in the German-speaking world!

Would you say that inexperienced people benefit less from a Mastermind than experienced people? Or would you say that they benefit so little that a Mastermind is not worth for them?

If your claim is that Masterminds are only worthwhile for experienced people, then I disagree for two reasons:

First, the way I see Masterminds, one core aspect is that a group of peers can be much more effective in thinking through problems than a single individual. This is true even if none of my peers have any experience that I don't have. It is surely not true for any imagina... (read more)

I believe a documentary could be a great vehicle to explain EA and get people interested.

Obviously it would need to explain EA principles. But there is also room to include emotion and personal stories. Which might be much more important, in terms of the effect on the viewer.

Perhaps the emotion and personal stories could make up more than half of the film. One documentary that does this really well is "Chasing Ice". It's about James Balog, a photographer documenting climate change by filming glaciers. The film presents the science in a clear way, but it's... (read more)

Chi
2y

Not OP, but I'm guessing it's at least unclear for the non-safety OpenAI positions listed, though it depends a lot on what a person would do in those positions. (I think they are not necessarily good "by default", so the people working in these positions would have to be more careful/more proactive to make it positive. Still think it could be great.) Same for many similar positions on the sheet, but I point out OpenAI since a lot of roles there are listed. For some of the roles, I don't know enough about the org to judge.

What does your definition of "offsetting" include? Only projects that reduce CO2 in a very direct way (e.g. building clean power plants)?

Or would you include political advocacy and research? If so, check out the work of Founders Pledge:

  1. https://founderspledge.com/funds/climate-change-fund
  2. https://founderspledge.com/research/fp-climate-change
2
Marek Veneny
2y
Sorry for the late reply, I need to change my notification settings! In theory, anything that reduces CO2 in the atmosphere and is labeled as such would work for my purposes. I know political advocacy and research can also move the needle, but that's not so easily "marketable" for my purposes (i.e. it's difficult to attribute causes and quantify the effects). I'll look over the links regardless; maybe it's something I can work with, thanks!

According to Founders Pledge estimates, the CO2 savings from donating 100 USD (maybe 1 ton per USD, with high uncertainty) will greatly exceed the emissions from your flight (which are likely on the order of 1 ton) [1]. Donating 100 USD to Atmosfair, while less effective, would also offset this flight [2]. If you include the value of your time, the cost of the train trip might be far, far higher.
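To make the comparison concrete, here is a rough back-of-envelope sketch; the per-dollar offset figure and the flight emissions are the highly uncertain assumptions from the estimates above, not measured values:

```python
# Back-of-envelope: offset from a donation vs. emissions from one flight.
# Both input figures are rough, highly uncertain assumptions.
donation_usd = 100
offset_tons_per_usd = 1.0     # Founders Pledge rough estimate, high uncertainty
flight_emissions_tons = 1.0   # order-of-magnitude figure for a single flight

tons_offset = donation_usd * offset_tons_per_usd
ratio = tons_offset / flight_emissions_tons
print(f"Offset: ~{tons_offset:.0f} t CO2, flight: ~{flight_emissions_tons:.0f} t, "
      f"ratio: ~{ratio:.0f}x")
```

Under these assumptions the donation offsets roughly 100 times the flight's emissions, which is why the conclusion is robust to the large uncertainty in either number.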

Plane emissions are further complicated, if you live in the EU, by emission certificates - which might cause a counterfactual CO2 saving, when deciding ... (read more)

4
Guy Raveh
2y
Just to make this explicit: that would imply donating that value in addition to those 100 USD.

A new report from Founders Pledge just came out - although it's just an overview article and doesn't go into much depth. https://founderspledge.com/stories/changing-landscape

3
David_Moss
2y
Thanks! It's cool they have done a study on the 'full-room' approach. I think full-room approaches are worth looking into, but it's worth noting that they are usually less bright than using SAD lamps (and this goes for the setup described in the pre-print too). As noted in the pre-print, they put out more light, but because you are usually much further away from the lightbulbs distributed around the room than you would be from a light box on your desk, the mean illuminance at eye level was 1433-1829 lux. By comparison, I have three of the light boxes photographed above on my desk (quite a distance, more than arm's length), and they're each around 5000-7680 lux at eye level. Of course, it's possible to get much larger amounts of light from either approach.

As the pre-print notes, natural summer sunlight exposure has a few characteristics that might be advantageous over typical light therapy: i) much brighter, ii) covers the whole visual field, iii) exposure for many hours of the day, not just a short period in the morning. I agree that (ii) might be important and potentially a significant advantage of the full-room approach over light boxes.

IME, one significant difficulty with getting a large amount of light from a small/concentrated source (such as a light box) is that it's subjectively very uncomfortable. Daylight, conversely, usually provides much more light but without the uncomfortable glare of having a single, very bright light in front of you. Ironically, when I tried setting up a large number of bright lightbulbs around my room previously, they were actually less comfortable than light boxes: without being fully covered by diffusers (as SAD lamps usually are, though that reduces the brightness a lot), the single points of very bright light coming from the bulbs were more unpleasant than the light boxes. Of course, there are very many different ways you could set these up in a room, including having more bulbs, all with diffusers, so YMMV.

Yes, that's what it is. "We" as in "the author and the reader". There is no co-author or organization involved in this.

I like your post because it puts some more backstory behind an argument that many people usually accept at face value.

I don't quite understand this argument:

If we can geoengineer or capture enough to offset 60% of our emissions in 2030, and then in 2031 we reduce our emissions by 1% (as measured at the smokestack), then the environmental damage will not fall from 40% to 39.6%, it will fall to 39%. So it's still a one-percentage-point change whether or not we do geoengineering and carbon dioxide removal.

This assumes that geoengineering will cause effec... (read more)

3
kbog
3y
Hm, I suppose I don't have reason to be confident here. But as I understand it: stratospheric aerosol injection removes a certain wattage of solar radiation per square meter. The additional greenhouse effect from human emissions constitutes only a tiny part of our overall temperature balance, shifting us from 289 K to 291 K, for instance. SAI cuts nearly the entire energy input from the Sun (excepting that which is absorbed above the stratosphere). So maybe SAI could be slightly more effective in terms of watts per square meter or CO2 tonnes offset under a high-emissions scenario, but it will be a very small difference. I would like to see an expert chime in here.

The most interesting part of your post, to me, is your risk model [1]. I would be curious to hear some more feedback from other people on it.

I turned it into a Guesstimate, making some adjustments to some of the numbers and using population figures from Austria [2].

[1] https://docs.google.com/document/d/1A0jcxj4n0BvNt_jMunHT5WSsAKFzuVJJyaaqcK9Z1HU/edit#

[2] https://www.getguesstimate.com/models/15367

Thanks for the explanation on extreme individual precautions, that made things clearer.

I'm curious what you're thinking of when you say "adopt measures that can plausibly be sustained for one year or even longer"?

I'm thinking of simple, low-cost changes to habits and my living environment that reduce chances of infection with coronavirus and other illnesses. For example: improving personal hygiene practices (how to handle laundry, when to disinfect hands, how to keep the kitchen super clean, disinfecting electronic devices), chan... (read more)

3
eca
4y
This makes sense. To say the obvious, it is sensible for everyone to judge their risk individually and adjust precautions as we have more info. A particularly large factor is your age and comorbid conditions, as well as those of people who you would have the opportunity to infect (who may have higher risk and lower risk tolerance). I think it is likely enough that most people will consider the risk "very high" at some point before we get a treatment to recommend preparing for that eventuality.

You mention exponential spread, working from home, and avoiding travelling after March.

But what is the endgame here? How long do we need to stop travelling for? Should we apply these measures, as far as possible, starting in April and keep them up until a vaccine is available in 1-2 years? Will the number of cases level off eventually?

I assume there is no scientific consensus on these questions. If the virus is here to stay, then there might be little value in adopting extreme individual precautions for just one or two months. Afterwards, when you stop tak... (read more)

I know next to nothing about this stuff, but I was thinking that it would be good to at least avoid the virus in the period when there might not be enough hospital beds and the health system is very overwhelmed. So it might make sense to take more extreme precautions in that time.

7
eca
4y
All good questions. I don't have great answers, but here are a few things.

The disease CAN burn itself out: when the density of susceptible individuals is low enough (either because many are recovered and immune, or because of social distancing), the disease is predicted to burn out. Google "SIR model" for more info. It is really hard to guess when this will be, obviously. It does look like the social distancing measures taken by China, even after alleged number fudging and diagnostic shortages, made the disease spread much more slowly (and MAYBE would have burned it out if China was completely isolated from the world - very dubious though).

Re extreme individual precautions and the long game: I don't expect this to blow over in 1-2 months, and I wouldn't advocate that view to anyone else. The recommendations I made are intended to be risk-reducing in the medium-to-long term as well as the short term. If you have food stock for 1+ month, then you can choose the safest time to go to the grocery store, or leave your food delivery for 10 days to sanitize, and thereby reduce your risk. Likewise, each time you avoid travel or work from home reduces risk. You definitely do want to avoid sheltering in place, only to desperately need food or other supplies later when the risk is higher. But as I said above, having food stocks and having taken other precautions means you have more options.

It's also not the case that you will always be at higher risk if you wait. While exponential doubling is a good approximation in the short term (and important IMO for people to wrap their heads around), things like safe delivery infrastructure, overall proficiency treating the disease, and availability of medical countermeasures like remdesivir will probably improve in the medium term.

I'm curious what you're thinking of when you say "adopt measures that can plausibly be sustained for one year or even longer"?
2
Tsunayoshi
4y
There will not be a vaccine soon, but anti-viral drugs are currently in an FDA-approved Phase 3 trial and, from what I have heard, could be both approved and available in May. There is evidence that higher temperatures will limit the spread: Africa has so far been mostly spared, and warm places like Singapore are doing much better than Japan or South Korea.

I would think hard about what the relevant resources are that you're trading off against each other. Are your hobbies important for your well-being and relaxation? Is it possible that by starting to monetize your hobbies, you might get less enjoyment out of them? Maybe it will also create some imbalance as you spend more time on them than you otherwise would? Or perhaps it's the opposite and monetizing your hobbies would actually increase the quality of your leisure time? Perhaps you can run a time-limited experiment to find out.

Also, as a full-t... (read more)

2
warrenjordan
4y
My goal isn't to become a huge blogger or streamer. Their purpose is leisure, and any money I make, I donate to charity. I feel like this would increase the quality of my leisure time and give me more fulfillment and satisfaction - the warm fuzzies in that article. Meanwhile, my day job is optimized for utilons. Thanks for sharing the article. It sounds like I was trying to optimize for both, while the best approach is to keep them separate.