Lukas Trötzmüller

Research & Writing (Freelance) @ AI Safety Field Building Hub
Working (6-15 years of experience)
Seeking work
538 karma · Graz, Austria · Joined Aug 2019

Bio

President 2021 of EA Austria | Co-Founder of EA Graz | Entrepreneur | Circling Facilitator | Running Applied Rationality Workshops

Open to meeting potential cofounders or joining an existing EA-aligned startup. Read more: https://lukastr.me/about

Current businesses: https://www.fwsim.com/ and https://www.geosci.de

Comments (30)

Curious why this is getting downvoted. It seems like another initiative in the Applied Rationality space, which sounds quite useful to me.

While I'm personally not interested in the bootcamp, I am curious whether the people who downvoted have specific criticisms or reservations about the program.

Location: Graz, Austria

Remote: Yes

Willing to relocate: For the right opportunity

Skills:

  • Startup Founder & Software Developer with 12 years of experience
  • Basic knowledge of most aspects of running a business
  • Strong Technical Skills
    • Computer Graphics and Game Development
    • Web Scraping and Automation
    • Performance Optimization
    • Algorithmic Problems
    • General backend development
    • Microsoft .NET
    • Desktop Development
  • Some experience in
    • Data Pipelines
    • Network Engineering
    • Computer Vision
    • Applied Math
    • Software Testing
    • UI Design & Usability
  • Other non-technical skills
    • Workshop Facilitation (trained circling facilitator)
    • Filmmaking
    • Photography
  • Previous Research I've done:

Résumé/CV/LinkedIn: https://lukastr.me/about

Email: lukas@fwsim.com

Available from and until: To be discussed

I'm embarrassed to admit it, but I frequently catch myself being more likely to upvote posts from users I know. I also find myself anchoring my vote to the existing vote count: if a post already has a lot of upvotes, I am less likely to downvote it. I'm pretty sure I'm not the only one.

Furthermore, I notice that the vote count influences my reading of each post more than it should. Groupthink at its best.

I suspect that if the forum hid vote counts for a month, voting patterns would change significantly. That said, I'm not sure those changes would actually alter the vote-sorted order of the posts, but they might. I suspect it would also change the nature of certain discussions.

To make this even remotely plausible, the rules for tax-deductible charities would need to be far more stringent. And then you end up with a situation like we currently have in Austria, where not a single EA-aligned charity is tax-deductible at all.

Nevertheless, it does send a certain signal to the public. How things look matters, especially when it comes to completely legal ways of circumventing taxes, where intent plays a role.

Justifying this on the grounds of crypto regulation requires background information that outside observers don't have. It's also impossible to judge from the outside whether tax savings were among the arguments considered, in addition to the regulatory situation.

"There is no extreme poverty or starvation in democratic countries"

This seems like a strong claim to me. What's your source for that?

"...and access to education and health care is one hundred percent, at least in older democracies. Younger ones are getting there fast."

Where do you draw the line between older and younger democracies? Isn't the US pretty old compared to other democracies [1] - and does it provide "100% access to health care" to its citizens?

"[If] all countries and all people lived in democracies the major problems of humanity would be solved or be dramatically smaller."

Would you classify X-Risks like AI and pandemics as major problems? Do you think having more countries be democratic would reduce these risks - given that the existing democracies don't do enough on either?

[1] https://www.weforum.org/agenda/2019/08/countries-are-the-worlds-oldest-democracies

"pretty much generally agreed upon in the EA community that the development of unaligned AGI is the most pressing problem"

While there is significant support for "AI as cause area #1", I know plenty of EAs who do not agree with this. "Generally agreed upon" therefore feels too strong to me. See also my post on why EAs are skeptical about AI safety.

For viewpoints from professional AI researchers, see Vael Gates's interviews with AI researchers on AGI risk.

I mention those pieces not to argue that AI risk is overblown, but rather to shed more light on your question.

I found myself confused by the quotes and would have liked to hear a bit more about where they came from. Are these verbatim quotes from disillusioned EAs you talked to? Are they rough reproductions? Or completely made up?

The sample is biased in many ways: because of the places where I recruited, interviews that fell through due to time-zone differences, people who responded too late, and so on. I also started recruiting on Reddit and then dropped that in favour of Facebook.

So this should not be treated as a representative sample; rather, it's an attempt to capture a wide variety of arguments.

I did interview some people who are worried about alignment but don't think current approaches are tractable, and quite a few who are worried about alignment but don't think it should get more resources.

Referring to the two basic questions listed at the top of the post: a lot of people said "yes" to (1), so they are worried about alignment. I originally planned to provide statistics on agreement and disagreement for questions (1) and (2), but it turned out to be impossible to draw a clear distinction between them: when discussing (2) in detail, most people kept referring back to (1) in complex ways.

I'm not quite sure I'm reading the first two paragraphs correctly. Are you saying that Cotra, Carlsmith, and Bostrom are the best resources, but they are not widely recommended? And that people mostly read short posts, like those by Eliezer, which are accessible but might not have the right angle for skeptics?
