All of catehall's Comments + Replies

As the former COO and briefly co-CEO of Alvea, I also endorse Kyle's reflections!

Don't know where you are, but there might be enough people here in the bay for it to make sense.

1
anotherburneraccountsorry
1y
It probably doesn’t narrow it down too much for me to say that I’m in New York City (and honestly even if it does, I don’t worry so much about some people being able to guess my identity as just not wanting it to be very public), so there might be enough people by me as well.
Answer by catehall · Feb 17, 2023

Hi! I've struggled loads with addictions to alcohol and other drugs, spending large chunks of my 20s and 30s totally in thrall to one substance or another. I spent several years trying and failing to get sober, and finally succeeded 2.5 years ago. I'm sorry you're going through it; it's fucking agonizing.

One thing I found indispensable in early sobriety was fellowship, and I think this is true of a very large % of people who successfully recover. 12 Step programs can be an awkward fit, but also have huge fellowships in many areas, and I was able to get a f... (read more)

3
anotherburneraccountsorry
9mo
Since posting this, I've written more about my experience and given updates here: https://forum.effectivealtruism.org/posts/F2MfbmRAiMx2PDhaD/some-observations-on-alcoholism As a result, a few more people have expressed interest in an EA recovery group, so I created a Discord chat to coordinate and discuss the possibility further: https://discord.gg/DwMw9C6p This might wind up being temporary and I'm happy to switch to something else, but it seemed like I should at least set it up to get started.
1
anotherburneraccountsorry
1y
Thanks, I appreciate you sharing this. My point is roughly: I have told most but not all of the people closest to me about it, and I'm gradually decreasing the amount I drink each night to avoid rapid withdrawal problems (and failing many nights). I think an EA fellowship/get-together on this would be a great idea, though my wi-fi is terrible, so I personally would be unlikely to attend unless it was in person.

Thanks so much for your post here! I spent 5ish years as a litigator and couldn't agree more with this. As an additional bit of context for non-lawyers, here's how discovery works in a large civil trial, from someone who used to do it:

  1. You gather an ocean of potentially relevant documents from a wide range of sources
  2. You spend a ton of time sifting through them looking for quotes that, at least if taken out of context, might support a point you want to make
  3. You gather up all these potentially useful materials and decide what story you want to tell with them

Like a bird building a nest at a landfill, it's hard to know what throwaway comment a lawyer might make something out of.

I don't think this matters, FYI -- all funds came directly or indirectly from a bankrupt entity or person.

This is not legal advice, I am not a lawyer, and this may be entirely wrong, but it may be helpful to know that, to the best of my current understanding, I have not seen any evidence of a double-entity clawback occurring before, of the kind that would be involved if grants were clawed back from FTX Philanthropy grantees despite FTX Philanthropy not being bankrupt. And I have been looking.

I think you're right that "FTX Philantrophy" was  acting as a regrantor of FTX funds, and that doesn't protect its grantees. However, the fact that FTX Philantrophy isn't in the bankruptcy filing could be relevant. It might mean that the 90-day clock started when an entity in the bankruptcy transferred funds to FTX Philantrophy.  I am unclear on whether a court would see FTX Philantrophy as an insider for purposes of 11 USC 547.

More generally, the exact origin/flow of funds could potentially be a total defense in some cases. For instance, the Fut... (read more)

Quick thoughts -- this isn't intended to be legal advice, just pointing in a relevant direction. There are a couple types of "clawbacks" under bankruptcy law:

  • Preference action (11 USC 547): Generally allows clawback of most transfers by an insolvent entity or person made within 90 days of filing for bankruptcy. The concept here is that courts don't want people to be able to transfer money away to whoever they want to have it just before filing for bankruptcy. My GUESS (this really really isn't legal advice, I'm really not a bankruptcy lawyer) is that any m
... (read more)
1
joshcmorrison
1y
Edit: I threw the below together pretty quickly and now think it was wrong (because I hadn't reviewed the whole statute closely). Sorry about that... "For what it's worth (and I haven't been licensed/practiced as an attorney in a while), my intuition is that the charitable exception here seems pretty solid for the grants that took the form of charitable donations to 501c3s/other charitable entities. The key question to me doesn't seem to be whether the granting entity was a 501c3/foundation but whether the recipient was a charity / the purpose of the donation was charitable. (The money given to individuals may be dicier, but 1. I think it's still fine and 2. it's not that much in absolute terms, such that I can't imagine a bankruptcy trustee going hard after it.)"
7
Jason
1y
"Fraudulent transfer" under 11 USC 548 is a bit of a misnomer. Subsection (a)(1) explains what makes a transfer "fraudulent." One option, subparagraph (A), requires intent to mess over creditors. But subparagraph (B) does not require any ill intent at all -- it only requires that the debtor "received less than a reasonably equivalent value in exchange" for the transfer (check), and that one of four criteria concerning the debtor's financial condition is met (e.g.,  that the debtor "was engaged in business or a transaction, or was about to engage in business or a transaction, for which any property remaining with the debtor was an unreasonably small capital"). The underlying idea is that if a company that is insolvent, on the brink of insolvency, etc. has no business handing out money to favored entities or persons in preference to the claims of its creditors. For the curious, the shorter "preference" period is more about favoring creditors close to the bankruptcy filing date.  So, e.g., if I owe my friend and the bank 100K each, I can't make a big payment to my friend and then file for bankruptcy, because that violates the bankruptcy norm of treating like creditors alike. A trustee can also seek clawbacks under 11 USC 544(b) if allowed under applicable state law, which can sometimes look back up to six years. Of course, the relevance of this discussion is dependent on whether a US bankruptcy court would apply US law if an FTX debtor filed in US bankruptcy court (for which the standards are low -- https://www.skadden.com/insights/publications/2021/06/quarterly-insights/international-companies-turn-to-us-restructurings) or sought ancillary proceedings under Chapter 15 (which I know very little about). If the target of a clawback isn't in the US, there is also a question of whether the target is effectively beyond the reach of a US court order. Complicated stuff, and I'm not qualified to offer an opinion beyond "consult with your lawyer if you think you have exposu

Do you know how likely it is that United States law applies? I haven't thought about this properly, but it seems like the main entity that is insolvent is an Antigua and Barbuda company doing business in the Bahamas? And I'm also uncertain which FTX entities were actually distributing the grants.

I dunno, man. I just want to be able to afford a house and a family while working, like, every waking hour on EA stuff. Sure, I’d work for less money, but I would be significantly less happy and healthy as a result — I know having recently worked for significantly less money. There’s some term for this - “cheerful price”? We want people to feel cared for and satisfied, not test their purity by seeing how much salary punishment they will take against the backdrop of “EA has no funding constraints.” I apologize for the spicy tone, but I think this attitude, ... (read more)

Hmm for some reason I feel like this will get me downvoted, but: I am worried that an AI with "improve animal welfare" built into its reward function is going to behave a lot less predictably with respect to human welfare. (This does not constitute a recommendation for how to resolve that tradeoff.)

5
Fai
2y
Hi Cate, thank you for your courage in expressing potentially controversial claims, and I upvoted (but not strongly) for this reason. I am not a computer or AI scientist. But my guess is that you are probably right, if by "predictable" we mean "predictable to humans only". For example, in a paper (not yet published) Peter Singer and I argue that self-driving cars should identify animals that might be on the way and dodge them. But we are aware that the costs of detection and computation will rise, and that the AI will have more constraints in its optimization problem. As a result, the cars might be more expensive, and they might be willing to sacrifice some human welfare, such as by causing discomfort or scaring passengers while braking violently for a rat crossing.

But maybe this is not a reason to worry. If, just as most of the stakes/wellbeing lie in the future, most of the stakes and wellbeing lie with nonhuman animals, maybe that's a bullet we need to bite. We (longtermists) probably wouldn't say we worry that an AI that cares about the whole future would be a lot less predictable with respect to the welfare of current people; we are likely to say this is how it should be.

Another reason not to over-worry is that human economics will probably constrain that from happening to a high extent. Using the self-driving car example again: if some companies' cars care about animals and some don't, the cars that don't will, other things being equal, be cheaper and safer for humans. So unless we so miraculously convince all car producers to take care of animals, we probably won't have the "problem" (and for me, the fact that we won't get "that problem" is the actual problem). The point probably goes beyond just economics; politics, culture, and human psychology possibly all have similar effects. My sense is that as far as humans are in control of the development of AI, AI is more likely to be too human-centric than not human-centric enough.
6
Charles He
2y
I think this is exactly correct and I don't think you should be downvoted? Uh... this comment here is a quick attempt to try to answer this concern most directly.

Basically, longtermism and AI safety have the ultimate goal of improving the value of the far future, which includes all moral agents.

  • So in a true, deep sense, animal welfare must already be included. Instructions that sound like "improve animal welfare" should be accounted for already in "AI alignment".
  • Now, despite the above, most current visions/discussions of the far future that maximize welfare ("make the future good") focus on people. This focus on people seems reasonable for various reasons.
  • If you wanted to interrogate these reasons, and figure out what kind of people, what kind of entities, or what animals are involved, this seems to involve looking at versions of "Utopia".
  • However, getting a strong vision of Utopia seems not super duper promising at the immediate moment.
  • The reason why it's not promising is because of presentation reasons and the lower EV. Trying to have people sit around and sketch out Utopia is hard to do, and maybe we should just get everyone on board for AI safety.
  • This person went to a conference and wrote a giant paper (I'm not joking, it's 72 pages long) to try to understand how to present this.
  • Because it is relevant (for example, to this very concern and many other issues in various ways), someone I know briefly tried to poke at work on "utopia" (they spent like a weekend on it).
  • To get a sense of this work, the modal task in this person's "research" was a 1-on-1 discussion (with a person from outside EA but senior and OK with futurism). The discussions basically went like: "OK, exploring the vision of the future is good. But let's never, ever use the word Utopia, that's GG. Also, I have no idea how to start."

Thank you for the labor of writing this post, which was extremely helpful to me in clarifying my own thinking and concerns. I plan to share it widely.

"I think it would be tempting to assume that the best of these people will already have intuited the importance of scope sensitivity and existential risk, and that they’ll therefore know to give EA a chance, but that’s not how it works." This made my heart sing. EA would be so much better if more people understood this.

Happy to see this being discussed :) I may come back and write more later, but a couple quick points:

  • I've been having lots of convos with different people in this vein, and am feeling optimistic there's growing momentum behind recognizing the importance of recruiting mid+ career professionals -- not as a matter of equity and diversification, but as one of bringing critical and missing talent into the movement. I think EA has, on the whole, significantly overvalued "potential" and significantly undervalued "skills" and "capacities" in the past.
  • One of the ad
... (read more)

Hi all -- Cate Hall from Alvea here. Just wanted to drop in to emphasize the "we're hiring" part at the end there. We are still rapidly expanding and well funded. If in doubt, send us a CV.

Could you please post a specific hiring request on Twitter so we can share it? Also, what skills are you looking for, and are the jobs remote or, if based somewhere, where?

Thanks so much for your detailed comment, and sorry for not seeing it earlier!

I'm a bit unclear on what's going on in the Thermo Fisher example: The second question from the initial letter makes it sound like TF had been granted a license to export under the EAR, but I don't see a claim that the technology was covered by the Commerce Control List, and the response from Ross seems to suggest otherwise (from what I can tell; I'm behind the WSJ paywall).

In any event, I think this is just the same issue that comes up generally with regulation of dual-use technol... (read more)

Hiya -- EA lawyer here. While the US legal system is generally a mess and you can find examples of people suing for all sorts of stuff, I think the risk of giving honest feedback (especially when presented with ordinary sensitivity to people you believe to be average-or-better-intentioned) is minimal. I'd be very surprised if it contributed significantly to the bottom-line evaluation here, and would be interested to speak to any lawyer who disagreed about their reasons for doing so.

I just totally missed that the info was in the job ads -- so thank you very much for providing that information, it's really great to see.  Sorry for missing it the first time around!

2
Peter Wildeford
3y
No problem - sorry that wasn't clear!
2
Linch
3y
Feel free to apply if the salary range and other relevant job details make sense for your personal and professional priorities!

Just a quick note in favor of putting more specific information about compensation ranges in recruitment posts. Pay is by necessity an important factor for many people, and it feels like a matter of respect for applicants that they not spend time on the application process without having that information. I suspect having publicly available data points on compensation also helps ensure pay equity and levels some of the inherent knowledge imbalance between employers and job-seekers, reducing variance in the job search process. This all feels particularly true for EA, which is too young to have standardized roles and compensation across a lot of organizations.

3
Charles He
3y
Eh... If I was writing a similar comment, instead of "reducing variance" I think I would write something like "improving efficiency and transparency, so organizations and candidates can maximize impact". Maybe instead of "standardized roles and compensation across a lot of organizations" I would say something like "a mature market arising from impactful organizations, so that candidates have a useful expectation of wage" (e.g. the sense that a seasoned software developer knows what she could get paid in the Bay Area and it's not just some uniform prior between $50k and $10M). The main reason this is relevant is shown by this comment chain, where Gregory Lewis has a final comment, and his comment seems correct.

Uh. The rest of this comment is low effort and a ramble, and isn't on anyone to know, but I think I will continue to write because it's just good to know about, or something. Why I think someone would care about this:

  • Depending on the cruxes of whether you accept the relevant worldview/cause area/models of talent, I think the impact and salaries being talked about here, driven by tails (e.g. "400k to 4M"), would make it unworkable to have "standardized" salaries or "ensure pay equity" in the sense most people would mean. Like, salary caps wouldn't work out; people would just create new entities or something, and it would just add a whole layer of chicanery.
  • Credibility of the EA movement seems important, so it's good to be aware of things like "anti-trust", "fiduciary duty" and, as Gregory Lewis puts it, "colourably illegal". Knowing what these do would be useful if you are trying to build institutions and speak to institutions to edit AI policy and literally stop WW3.

But wait, there's more! While the above is probably true, here's some facts that make it even more awkward:

  • The count of distinct funders for AI and longtermist EA initiatives is approximately one and a half. So creating a bunch of en
6
Linch
3y
For people wondering and who haven't clicked through to the job ads on the website, below are the compensation ranges for the Researcher roles:

I'm not sure if you are giving us accolades for putting this information in the job ads or missed that specific salary information is in the job ads. But we definitely believe in salary transparency for all the reasons you mentioned and if there's anything we can do to be more transparent, please let us know!

I’ve been on the EA periphery for a number of years but have been engaging with it more deeply for about 6 months. My half-in, half-out perspective, which might be the product of missing knowledge, missing arguments, all the usual caveats but stronger:

Motivated reasoning feels like a huge concern for longtermism.

First, a story: I eagerly adopted consequentialism when I first encountered it for the usual reasons; it seemed, and seems, obviously correct. At some point, however, I began to see the ways I was using consequentialism to let myself off the hook, ... (read more)

It’s so easy to collapse into the arms of “if there’s even a small chance X will make a very good future more likely …” As with consequentialism, I totally buy the logic of this! The issue is that it’s incredibly easy to hide motivated reasoning in this framework. Figuring out what’s best to do is really hard, and this line of thinking conveniently ends the inquiry (for people who want that).

I have seen something like this happen, so I'm not claiming it doesn't, but it feels pretty confusing to me. The logic pretty clearly doesn't hold up. Even if you acce... (read more)