
trevor1

181 karma · Joined Sep 2019

Comments (165)

trevor1 · 10d · 74

Really grateful for the focus on construction instead of destruction. It might not be as dramatic or exciting, but it's still kind of messed up that damaging large parts of EA counts as a costly signal of credibility, even though people other than the poster carry the entire burden of those costs.

I think another dimension of interest for cause-first vs. member-first is how much faith you have in the people who make up the causes. If you think everyone is dropping the ball, you focus on the cause; if you trust their expertise and skill enough to defer to them, you focus on the people.

trevor1 · 19d · 180

I only worry about and work on AI safety, but I have a profound appreciation for animal welfare work, especially when it comes to sociology and public outreach. There are incredible insights to be made, and points to prove and demonstrate, by being as overwhelmingly correct about something as EA is on animal welfare. I'm really glad that this new book can cover the ~50 years of sociological changes since the last one; detailed research on the phenomenon of large numbers of people being slow to update their thinking on easily provable moral matters is broadly applicable to global health and existential risk as well.

I've done a lot of reading about soft power: elites around the world are attracted to things that are actually good, like freedom of speech and science, which ends up giving the US and Europe a big strategic advantage in global affairs, while hard power like military force and economic influence systematically repels elites. I'm optimistic about the competence of people who find out about EA through animal welfare, because they were able to recognize the sheer asymmetry between EA and non-EA takes on animal welfare.

I just wish there were a way to scale it up more effectively, e.g. at university EA groups doing outreach, since the elephant in the room is the first impression new people get when they reflexively default to thinking "oh no, a vegetarian is trying to convince me to change my diet even though I'm satisfied with it". If there were some galaxy-brained way around that, e.g. a perfect combination of 40 words that lets you introduce wild animal welfare to someone without being looked at funny, it would plausibly be worth putting a lot of effort into figuring out a near-optimal template for the pitch.

trevor1 · 22d · 84

That's a real shame. It was a good idea, they were well put together, I found them genuinely helpful, and they definitely looked like something that would be extremely high-EV. But I almost never encountered them, because they frequently didn't seem to get enough upvotes to persist on the main page for long (maybe some of the many people whose posts didn't make the cut strong-downvoted them?). I definitely think they should have been integrated into the site somehow.

I'll go over the archive when I have time to do some reading, since they really were a great way to find a post that interested me.

trevor1 · 1mo · 10

I noticed that the book and article you recommended are both less recent than the 80k appearance. Do you have any information about a more recent project, paper, or appearance?

trevor1 · 2mo · 10

Did your experience with behavior change work influence your decision to start focusing particularly on behavioral addiction support? If so, how?

trevor1 · 2mo · 65

It seems like august!aella was really on the ball here: commenters just shouldn't make people feel so bad that they can't bring themselves to post there again. That's pretty simple and to the point.

Where was this originally posted?

But many of the actual claims being responded to in this fashion are not powerful snippets of propaganda, or nascent hypnotic suggestions, or psychological Trojan horses.  They aren't the workings of an antagonist.  They're just half-baked ideas, and you can either respond to a half-baked idea by helping to bake it properly...

...or you can shriek "food poisoning!" and throw it in the trash and shout out to everyone else that they need to watch out, someone's trying to poison everybody.

The problem is that there are antagonists out there, especially on Twitter, and those antagonists do make fully-baked snippets of propaganda, clever suggestions, and psychological Trojan horses. You'll find examples everywhere you look. Twitter might feel friendly and safe, but it's actually just really good at posing that way (Goodharting on the feeling of safety); the idea that Twitter is actually as friendly and safe as it feels (especially compared to the EA Forum and LessWrong) is liable to cause a lot of harm, and is thus worth pointing out as harmful and wrong.

There is basically no chance of improving Twitter, whereas the entire premise of this post is that the EA Forum and LessWrong ought to improve, because they clearly can (and have a record of doing so).

trevor1 · 2mo · 10
  • Various private lists and works in progress

Will there be a second post, or will this post be edited to include those? These omnibuses are extremely valuable for preventing people from reinventing the wheel, but the whole point of centralizing knowledge is that the knowledge actually ends up in one findable place. If these are living documents, people might miss new ideas, when ideally a batch of potentially game-changing new ideas would be slapped on their desk in a way they would actually notice.

trevor1 · 2mo · 32

I agree that behavioral science might be important for creating non-brittle alignment, and I am very, very bullish on behavioral science being critically valuable for all sorts of factors related to AI alignment support, AI macrostrategy, and AI governance (including but not limited to neurofeedback). In fact, I think that behavioral science is currently the core crux deciding AI alignment outcomes, and that it will be the main factor determining whether enough quant people end up going into alignment. I suspect the behavioral scientists will be remembered as the superstars, while the quant people will be seen as interchangeable.

However, the overwhelming impression I get from the current ML paradigm is that we're largely stuck with black-box neural networks, and that these are extremely difficult and dangerous to align at all; they have a systematic tendency to generate insurmountably complex spaghetti code that is unaligned by default. I'm not an expert here; I specialized in a handful of critical elements of AI macrostrategy. But what I've seen so far indicates that the neural-network "spaghetti code" is much harder to work with than the human alignment elements. I'm not strongly attached to this view, though.

trevor1 · 2mo · 40

you may need to just "put your head down" for a couple years and just focus on studying really hard and not think too much about the distant future.

I think it's worth adding that it's a good idea to simultaneously consider the risk of throwing your mind away. Focusing on remembering why math matters to you might help your mind thrive and operate at full capacity.

Don't worry about getting straight A's. B's in hard classes are better than A's in easy classes. If you can understand, say, scientific computing, MV calc, linear algebra, real analysis + functional analysis (optional), probability, Bayesian statistics, and machine learning/deep learning, and you also have a background in bio (esp. if you have some research experience), you will likely be accepted to good PhD programs (let alone master's) in the U.S. and elsewhere.

Agreed. In my experience, it's actually easier to do extracurriculars, office hours, and independent research if you accept B's in hard classes instead of chasing A's in easy classes. Going for A's in easy classes requires a low tolerance for risk: to avoid getting any problems wrong on tests, you have to study twice as long and twice as hard.

Answer by trevor1 · Apr 15, 2023 · 20

This seems like the perfect situation for 80k career advising; definitely give that a try. These are the kinds of problems they know the answers to.

With this caveat: I like bioinformatics a lot, but I think that 80k is currently biased against bioinformatics. Things like neurofeedback have good odds of becoming an extremely high-impact area, because they might soon be able to make people extremely good at brainstorming, which would make solvable problems easier to solve. That would make neurology a better bet for computer science than biochemistry.

It will take 3+ years for neurofeedback to start contributing to meat substitutes; but, at the same time, meat substitutes might be finished in 3+ years anyway, and the only remaining problem will be convincing everyone to switch to them. I think we currently have the technology to make neurofeedback good enough to help people solve difficult problems, and the only tasks remaining are large-scale trials, research, and engineering to adapt existing technology.

I recommend going to Berkeley/Oakland if you can. Most EA people are there, and they are very easy to talk to, even if you are not good at socializing. If 80k advising thinks it's a good idea, you could even try visiting for a couple of weeks this summer (make sure to ask whether you're a good fit for that); the EA people in Berkeley/Oakland have long lists of valuable advice, including advice for getting accepted into a really good master's program at UC Berkeley.
