All of electroswing's Comments + Replies

On the EA forum redesign: new EAs versus seasoned EAs

In the recent Design changes announcement, many commenters reacted negatively to the changes.

In response, a member of the Forum team commented (bolded emphasis mine):

One of our goals on the Forum team is to make the Forum accessible to people who are getting more engaged with the ideas of EA, but haven’t yet been part of the community for a long time. Without getting into a full theory of change here, I think we’ve neglected designing for this user group a bit over the last sever

... (read more)

Question about the timing of this program: it runs during the school year rather than during summer break. It is also meant for EU/UK students, who may not have slack during the school year, because EU/UK university admissions often specifically require very high grades with little room for error. Do you think your application pool would be stronger if this were a summer program instead?

(pointed) Questions about "puzzle quiz": 

  1. These synonym questions (and to a lesser extent the analogy questions) are dramatically easier for native English speakers
... (read more)

"Gender Differences in Accepting and Receiving Requests for Tasks with Low Promotability": https://www.aeaweb.org/articles?id=10.1257/aer.20141734

"We examine the allocation of a task that everyone prefers be completed by someone else (writing a report, serving on a committee, etc.) and find evidence that women, more than men, volunteer, are asked to volunteer, and accept requests to volunteer for such tasks."

Promotability isn't exactly the word that applies to EA. Instead, here I mean a more nebulous term, like "low promotability = grunt work, lack of prestig... (read more)

Short productivity videos. For example: "what is a TAP", "what is Murphyjitsu", "what is goal factoring", etc. 

This, alongside your current "worldview expanding" content, curates an audience that is interested in tackling big questions and also cares about optimizing their personal impact.

Expanding on the "big questions" side, I would like to see more content which inspires altruism (example). 

There is a relevant Rob Miles Computerphile video. It does not have a demo component like you are planning, but it does seem to click with laypeople (1M views, top comments generally engaged). 

Two more organizations which try to get a narrow group of people (athletes / founders) to give to effective causes: 

One more organization trying to get the everyday person to give 1%: 

1
Seth Ariel Green
1y
Nice, thanks! I didn't include niche groups, but maybe I'd put them into a separate category if I were to do this work myself (?). Another one is https://www.effectivegivingquest.org/ for game creators 

Perhaps one source of downvotes is that the main idea of this post is unoriginal. Anyone putting on an intro fellowship has put some amount of thought into: 

  • Do I call it a "fellowship" to give it prestige, or do I call it a "seminar" / "reading group" to make it sound academic, or do I call it a "program" for a more neutral tone, ...
  • Do I call it "Arete" to sound fancy, or do I call it "intro" to sound welcoming, ...
  • Do I explicitly put "EA" in the title?

The one new thought here seems to be having the acronym "IDEA" stand for "Introduction to making a Di... (read more)

4
Abby Hoskin
1y
Somewhat embarrassing (for me) how you made the same arguments here, but with more clarity and detail than I have, months before my post 😅  100% endorse everything you said! Would have linked to this earlier, just didn't see it when you originally posted, sorry!

What does she think policymakers should be trying to do to prevent risks from misaligned AI? 

Now that Rational Animations has the human capital, budget, and experience to make high quality videos like this one, I think they should develop a more consistent brand.

They should have a single, consistent face or voice for the channel. Popular edutainment channels often take off when viewers connect with a likeable personality. Examples: 

  • Tom Scott, VSauce, Veritasium, Physics Girl, ...
  • Channels which don't show their face in their typical format: Wendover Productions, 3Blue1Brown
  • Even high-budget channels like Vox are starting to lean into this format
... (read more)

What about a subreddit?

If it's OK to answer: how much did you end up dictating the content/wording of the ad?

3
Bella
2y
So, I don't dictate per se, but I have input in two ways:

  1. I send creators an initial doc with 'talking points' (things I'd like them to say about 80k) and example ad reads we've liked in the past.
  2. I review the script they'll read from and/or the final filmed video (mostly to veto anything I don't like, because rewording once the video is filmed is pretty costly).

I always encourage creators to use their own tone, say things they think will resonate with their audience, and make judgement calls on what will work best in their context. IME that has higher conversion rates :) In Tom's case we talked a bit about alternatives, especially the 'call to action' at the end. I signed off on a script and the final filmed video.

This might be better received as an April Fools' Day post.

I should clarify—I think EAs engaging in this behavior are exhibiting cult indoctrination behavior unintentionally, not intentionally. 

One specific example would be in my comment here.

I also notice that when more experienced EAs talk to new EAs about x-risk from misaligned AI, they tend to present an overly narrow perspective. Sentences like "Some superintelligent AGI is going to grab all the power and then we can do nothing to stop it" are thrown around casually, without stopping to examine the underlying assumptions. Then newer EAs repeat the... (read more)

I worry that the current format of this program might filter out promising candidates who are risk-averse. Specifically, the fact that candidates are only granted the actual research opportunity "Assuming all goes well" is a lot of risk to take on. For driven undergraduates, a summer opportunity falling through is costly, and they might not apply just because of this uncertainty. 

Currently your structure is like PhD programs which admit students to a specific lab (who may be dropped from that lab if they're not a good fit, and in that case... (read more)

Can undergraduates who already know ML skip weeks 1-2? Can undergraduates who already know DL skip weeks 3-5?

2
ThomasW
2y
We'll consider this if there's enough demand for it! But especially for the latter option, it might make sense for students to work through the last three weeks on their own (ML Safety lectures will be public by then).

You may already have this in mind but—if you are re-running this program in summer 2023, I think it would be a good idea to announce it further in advance.

5
ThomasW
2y
I completely agree! Summer plans are often solidified quite early, so promoting earlier is better. I'm no stranger to the idea of doing things early! In this case, we saw the need for this program only a few weeks ago and we're now trying to fill it. If we do run it again next year, we'll announce it earlier, though there's definitely still some benefit to having applications open fairly late (e.g. for people who may not have gotten other positions because they lacked ML knowledge).

I was in the process of writing a comment trying to debunk this. My counterexample didn't work, so now I'm convinced this is a pretty good post. It is a nice way of thinking about ITN quantitatively. 

The counterexample I was trying to make might still be interesting for some people to read as an illustration of this phenomenon. Here it is:

Scale "all humans" trying to solve "all problems" down to "a single high school student" trying to solve "math problems". Then tractability (measured as % of problem solved / % increase in resources) for this person... (read more)
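
For readers new to the quantitative ITN framing: the tractability measure used in this comment is the middle factor of the standard decomposition (a sketch following 80,000 Hours' framing; the notation is mine, not from the post), in which the three factors telescope into cost-effectiveness:

$$\frac{\text{good done}}{\text{extra resources}} = \underbrace{\frac{\text{good done}}{\%\text{ problem solved}}}_{\text{importance}} \times \underbrace{\frac{\%\text{ problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}} \times \underbrace{\frac{\%\text{ increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}$$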

8
MichaelStJules
2y
We've had several researchers working on technical AI alignment for multiple years, and there's still no consensus on a solution, although some might think some systems are less risky than others, and we've made progress on those.

Say 20 researchers working 20 hours a week, 50 weeks a year, for 5 years. That's 20 * 20 * 5 * 50 = 100,000 hours of work. I think the number of researchers is much larger now. This also excludes a lot of the background studying, which would be duplicated.

Maybe AI alignment is not "one problem", and it's not exactly rigorously posed yet (it's pre-paradigmatic), but those are also reasons to think it's especially hard. Technical AI alignment has required building a new field of research, not just using existing tools.
3
Thomas Kwa
2y
(posting this so ideas from our chat can be public) Ex ante, the tractability range is narrower than 2 orders of magnitude unless you have really strong evidence. Say you're a high school student presented with a problem of unknown difficulty, and you've already spent 100 hours on it without success. What's the probability that you solve it in the next doubling?

  • Obviously less than 100%
  • Probably more than 1%, even if it looks really hard-- you might find some trick that solves it! And you have to have a pretty strong indication that it's hard (e.g. using concepts you've tried and failed to understand) to even put your probability below 3%.

There can be evidence that it's really hard (<0.1%), maybe for problems like "compute tan(10^123) to 9 decimal places" or "solve this problem that John Conway failed to solve". This means you've updated away from your ignorance prior (which spans many orders of magnitude) and now know the true structure of the problem, or something.
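
As a toy illustration of the ignorance-prior point above (my own sketch, with hypothetical numbers, not something from the thread): if your prior over the total effort a problem requires is log-uniform over several orders of magnitude, the probability of solving it in the next doubling is just the fraction of the remaining log-space that one doubling covers.

```python
import math

# Minimal sketch: assume the total effort T (hours) required to solve the
# problem is log-uniform between lo and hi. These bounds are illustrative
# assumptions, not numbers from the comment above.
lo, hi = 1.0, 1e6     # prior spans 6 orders of magnitude
spent = 100.0         # hours already spent without success

def p_solve_next_doubling(lo: float, hi: float, spent: float) -> float:
    """P(T <= 2 * spent | T > spent) under a log-uniform prior on T."""
    # Conditioning on "not solved after `spent` hours" truncates the prior
    # to [spent, hi]; probabilities are proportional to log-length.
    remaining = math.log(hi) - math.log(spent)
    window = math.log(min(2 * spent, hi)) - math.log(spent)
    return window / remaining

print(p_solve_next_doubling(lo, hi, spent))  # ~0.075, i.e. about 7.5%
```

With these assumed bounds the answer is about 7.5%, comfortably inside the 1%-100% range argued for above; pushing it below 3% would require a prior spanning roughly ten more orders of magnitude, which matches the claim that you need strong evidence to get there.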

I think the diagram which differentiates "Stay in school" versus "Drop out" before splitting further actually makes some sense. The way I read that split, it is saying "Stay in school" versus "Do something strange".

In some cases it might be helpful, in the abstract, to figure out the pros and cons of staying in school before recursing down the "Drop out" path. Otherwise, you could imagine a pro/con list for ORGs 1-3 having a lot of repetition: "Not wasting time taking useless required classes" is a pro for all 3, "Losing out on connections / credentials" is a con for all 3, etc. 

Yannic Kilcher's YouTube channel profiles fairly recent papers and "ML news" events. The videos on papers run 30-60 minutes, so they are more in-depth than reading an abstract and less time-consuming than reading the paper yourself. The "ML news" videos are less technical but are still a good way to keep up to date on what DeepMind, Meta, NVIDIA, etc. are up to. 

4
Sam Anschell
2y
This shows how new I am to the forum! Had I found the other post I probably would have just answered some of the questions in the comments rather than posting myself. Thank you for sharing.

You must be located in New York or another eligible state while signing up and making the bets.

Just to confirm -- do these bets require New York residency, or just being physically present in New York? What forms of identification are requested -- does it have to be a New York state ID (e.g. driver's license)? 

1
Robi Rahman
2y
Physical presence is enough. They have geolocation applets on their site and prohibit you from placing bets unless you are in an eligible location. I live in Massachusetts but went to NYC for about a day. I used my passport for identification because I didn't bring my driving license.
3
[anonymous]
2y
It's physical presence only. It does ask for your address for ID verification purposes.  I think ID requirements vary from person to person, so I didn't have to submit pictures of my ID, but I have some friends who had to. Doesn't have to be NY ID though.
2
david_reinstein
2y
It sounds like physical presence only, from his post. He only used a passport.

I often run into the problem of EA coming up in casual conversation and not knowing exactly how to explain what it is, and I know many others run into this problem as well.

Not rigorously tested or peer-reviewed, but this is an approach I've found works decently. The audience is a "normal person".

My short casual pitch of EA:

"Effective altruism is about doing research to improve the effectiveness of philanthropy. Researchers can measure the effects of different interventions, like providing books versus providing malaria nets. GiveWell, an effective alt

... (read more)
1
ag4000
2y
Does the short casual pitch not run the risk of limiting EA's scope too much to philanthropy? To me, it seems to miss the core of EA: figuring out how to better improve the world, given the resources we have.

When I say "repeating talking points", I am thinking of: 

  1. Using cached phrases and not explaining where they come from. 
  2. Conversations which go like
    • EA: We need to think about expanding our moral circle, because animals may be morally relevant. 
    • Non-EA: I don't think animals are morally relevant though.
    • EA: OK, but if animals are morally relevant, then quadrillions of lives are at stake.

(2) is kind of a caricature as written, but I have witnessed conversations like these in EA spaces. 

My evidence for this claim comes from my personal experie... (read more)

15
Mau
2y

I think both (1) and (2) are sufficiently mild/non-nefarious versions of "repeating talking points" that they're very different from what people might imagine when they hear "techniques associated with cult indoctrination"--different enough that the latter phrase seems misleading.

(E.g., at least to my ears, the original phrase suggests that the communication techniques you've seen involve intentional manipulation and are rare; in contrast, (1) and (2) sound to me like very commonplace forms of ineffective (rather than intentionally manipulative) communicat... (read more)