All Posts

Sorted by Magic (New & Upvoted)

Monday, June 21st 2021

No posts for June 21st 2021
Shortform
Aaron Gertler (2h, 2 points): New EA music: José Gonzalez (GWWC member, EA Global performer, winner of a Swedish Grammy award) just released a new song inspired by EA and (maybe?) The Precipice [https://consequence.net/2021/06/jose-gonzalez-shares-the-origins-of-new-song-head-on-stream/]. Lyrics include: It's not as direct as the songs in the Rationalist Solstice, but it's more explicitly EA-vibey than anything I can remember from his (apparently) Peter Singer-inspired 2007 album, In Our Nature.

Sunday, June 20th 2021

Shortform
RyanCarey (17h, 37 points): A case of precocious policy influence, and my pitch for more research on how to get a top policy job.

Last week Lina Khan was appointed as Chair of the FTC, at age 32! How did she get such an elite role? At age 11, she moved to the US from London. In 2014, she studied antitrust topics at the New America Foundation (a centre-left think tank). She got a JD from Yale in 2017, publishing work relevant to the emerging Hipster Antitrust movement at the same time. In 2018, she worked as a legal fellow at the FTC. In 2020, she became an associate professor of law at Columbia. This year, 2021, she was appointed by Biden.

The FTC chair role is an extraordinary level of success to reach at such a young age. But it kind of makes sense that she was able to get such a role: she has elite academic credentials that are highly relevant for the role, has ridden the hipster antitrust wave, and has experience of, and willingness to, work in government. I think biosecurity and AI policy EAs could try to emulate this. Specifically, they could try to gather elite academic credentials while also engaging with regulatory issues and working for regulators or, more broadly, in the executive branch of government. Jason Matheny's success is arguably a related example.

This also suggests a possible research agenda on how people get influential jobs in general. For many talented young EAs, it would be very useful to know. Similar to how Wiblin ran some numbers [https://80000hours.org/2015/07/what-are-your-odds-of-getting-into-congress-if-you-try/] in 2015 on the chances of a seat in Congress given a background at Yale Law, we could ask about the White House, external political appointments (such as FTC commissioner), and the judiciary. Also, this ought to be quite tractable: all the names are public, e.g. here [https://cdn.govexec.com/media/gbc/docs/pdfs_edit/070317whsalaries.html] [Trump years] and here [https://obamawhitehouse.archives.gov/21stcenturygov/tools/salaries] [Obama years].
Wiki/Tag Page Edits and Discussion

Saturday, June 19th 2021

Wiki/Tag Page Edits and Discussion

Friday, June 18th 2021

Shortform
Linch (3d, 12 points): I've started trying my best to consistently address people on the EA Forum by username whenever I remember to do so, even when the username clearly reflects their real name (e.g. Habryka). I'm not sure this is the right move, but overall I think it creates slightly better cultural norms, since it pushes us (slightly) toward pseudonymous commenting/"Old Internet" norms, which I think are slightly better for truth-seeking and for judging arguments by their quality rather than being too conscious of status-y/social-monkey effects. (It's possible I'm more sensitive to this than most people.) Some years ago there seemed to be a belief that people would be less vicious (in the mean/dunking way) and more welcoming under real-name policies, but I think reality has mostly falsified this hypothesis.
Wiki/Tag Page Edits and Discussion

Thursday, June 17th 2021

Wiki/Tag Page Edits and Discussion

Wednesday, June 16th 2021

Shortform
Nathan_Barnard (5d, 3 points): If preference utilitarianism is correct, there may be no utility function that accurately describes the true value of things. This will be the case if people's preferences aren't continuous or aren't complete, for instance if they're expressed as a vector. This generalises to other forms of consequentialism that don't have a utility function baked in.
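A standard illustration of this point (not from the original post) is lexicographic preference over a vector of goods: it is complete and transitive, yet not continuous, and it is a textbook result (due to Debreu) that no real-valued utility function represents it. A minimal sketch of such a preference relation:

```python
# Sketch: lexicographic preferences over two-good bundles (a, b).
# Complete and transitive, but not continuous, and provably not
# representable by any real-valued utility function.

def lex_prefers(x, y):
    """True if bundle x is strictly preferred to bundle y
    under a lexicographic ordering (first good dominates)."""
    if x[0] != y[0]:
        return x[0] > y[0]
    return x[1] > y[1]

# The first coordinate dominates: (1, 0) beats (0, 1000) no matter
# how much of the second good the alternative offers.
assert lex_prefers((1, 0), (0, 1000))
# Ties on the first good are broken by the second.
assert lex_prefers((1, 5), (1, 4))
```

Any scalar utility would have to squeeze uncountably many disjoint intervals into the reals to represent this ordering, which is impossible; hence the shortform's point that vector-valued preferences can defeat utility representation.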
Wiki/Tag Page Edits and Discussion

Tuesday, June 15th 2021

Shortform
evelynciara (6d, 15 points): Crazy idea: when charities apply for funding from foundations, they should be required to list 3-5 other charities they think should receive funding. The grantmaker can then run a statistical analysis to find orgs that are mentioned often but haven't applied before, reach out to those charities, and encourage them to apply. This way, the foundation can build a more diverse pool of applicants by learning about charities outside its network.
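The "statistical analysis" here can be as simple as counting peer nominations and filtering out orgs that already applied. A minimal sketch with hypothetical charity names (all data below is made up for illustration):

```python
from collections import Counter

# Hypothetical data: each applicant lists 3-5 charities they think
# should also receive funding.
nominations = {
    "Charity A": ["Charity D", "Charity E", "Charity F"],
    "Charity B": ["Charity D", "Charity F", "Charity G"],
    "Charity C": ["Charity D", "Charity E"],
}

applicants = set(nominations)
counts = Counter(c for noms in nominations.values() for c in noms)

# Frequently mentioned orgs that never applied themselves are
# candidates for proactive outreach, most-mentioned first.
outreach = [(c, n) for c, n in counts.most_common() if c not in applicants]
print(outreach)
```

With real data one might also weight nominations by the nominator's own track record, but a plain frequency count already surfaces the orgs outside the foundation's network.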
evelynciara (5d, 2 points): Practical/economic reasons why companies might not want to build AGI systems. (Originally posted on the EA Corner Discord server.)

First, most companies using ML or data science are not using SOTA neural network models with a billion parameters, at least not directly; they're using simple models, because no competent data scientist would use a sophisticated model where a simpler one would do. Only a small number of tech companies have the resources or motivation to build large, sophisticated models (here I'm assuming, as OpenAI does, that model size correlates with "sophisticated-ness").

Second, increasing model size has diminishing returns with respect to model performance. Scaling laws usually relate model size to training loss via a power law, so every doubling of model size yields a smaller increase in training performance. And this is training performance, which is not the same as test-set performance: increases in training performance above a certain threshold are considered not to matter for the model's ultimate performance. (This is why techniques like early stopping exist: you stop training the model once its true performance stops improving.) (Counterpoint: software systems typically have superstar economics, e.g. the best search engine is 100x more profitable than the second-best search engine. So there could be a non-linear relationship between model performance and profitability, such that increasing a model's performance from 97% to 98% makes a huge difference in profits whereas going from 96% to 97% does not.)

Third, and this reason only applies to AGI, not powerful narrow AIs: it's not clear to me how you would design an engineering process that ensures an AGI system can perform multiple tasks very well and generalize to new tasks. Typically, when we design software, we create a test suite that evaluates its suitability for the tasks for which it's designed. Before releasing a new version of an AI system, we have to ru
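The diminishing-returns point can be made concrete. Under a power-law scaling of loss with parameter count, each doubling of model size multiplies the loss by a constant factor, so absolute improvements shrink as the model grows. A sketch with an illustrative (not empirically fitted) constant and exponent:

```python
# Illustrative power-law scaling: loss(N) = c * N ** (-alpha)
# for model size N. The constants below are hypothetical,
# chosen only to show the shape of the curve.
c, alpha = 10.0, 0.076

def loss(n_params):
    """Training loss as a power law in parameter count."""
    return c * n_params ** (-alpha)

# Each doubling multiplies loss by the constant 2**(-alpha),
# so the absolute improvement per doubling keeps shrinking.
ratio = loss(2e9) / loss(1e9)
print(round(ratio, 4))  # 2**(-0.076) ≈ 0.9487, regardless of starting size
```

This is why "just make it bigger" buys a fixed fractional loss reduction per doubling of compute spend, not a fixed absolute one, which weakens the commercial case for ever-larger models unless profits are highly non-linear in performance, as the counterpoint above suggests.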
Ramiro (6d, 1 point): IMF climate change challenge: "How might we integrate climate change into economic analysis to promote green policies? To help answer this question, the IMF is organizing an innovation challenge on the economic and financial stability aspects of climate change." https://lnkd.in/dCbZX-B
Wiki/Tag Page Edits and Discussion

Sunday, June 13th 2021

Shortform
Linch (7d, 18 points): I think it might be interesting/valuable for someone to create a "list of numbers every EA should know", in a similar vein to Latency Numbers Every Programmer Should Know [https://gist.github.com/jboner/2841832] and Key Numbers for Cell Biologists [https://bionumbers.hms.harvard.edu/keynumbers.aspx]. One obvious argument against is that EA may be too broad, and the numbers we actually care about too domain-specific to particular queries/interests, but I still think it's worth investigating.
Wiki/Tag Page Edits and Discussion

Saturday, June 12th 2021

Wiki/Tag Page Edits and Discussion
