All of Jan_Kulveit's Comments + Replies

FWIW ... in my opinion, retaining the property might have been the better decision. 

Also, I think some people working in the space should not make an update against plans like "have a permanent venue", but plausibly should make some updates about the "major donors". My guess is this almost certainly means Open Philanthropy, and also likely they had most of the actual power in this decision. 

Before delving further, it's important to outline some potential conflicts of interest and biases:
- I co-organized or participated in multiple events a... (read more)

8
Forumite
1mo
FWIW, I thought that the choice of venue for EAG Bay Area 2024 was quite good. It was largely open plan - so lots of chance encounters. A nice mixture of privacy and openness for the 1-on-1s, which (rightly, to my mind) the event focusses on. Comfortable, but not flashy. I just got normal, professional vibes from it - it felt like a solid, appropriate choice. 

In my view the basic problem with this analysis is that you probably can't lump all the camps together and evaluate them as one entity. Format, structure, leadership and participants seem to have been very different.

7
Gavin
3mo
When producing the main estimates, Sam already uses just the virtual camps, for this reason. Could emphasise more that this probably doesn't generalise.
7
Gavin
3mo
The key thing about AISC for me was probably the "hero licence" (social encouragement, uncertainty reduction) the camp gave me. I imagine this specific impact works 20x better in person. I don't know how many attendees need any such thing (in my cohort, maybe 25%) or what impact adjustment to give this type of attendee (probably a discount, since independence and conviction is so valuable in a lot of research). Another wrinkle is the huge difference in acceptance rates between programmes. IIRC the admission rate for AISC 2018 was 80% (only possible because of the era's heavy self-selection for serious people, as Sam notes). IIRC, 2023 MATS is down around ~3%. Rejections have some cost for applicants, mostly borne by the highly uncertain ones who feel they need licensing. So this is another way AISC and MATS aren't doing the same thing, and so I wouldn't directly compare them (without noting this). Someone should be there to catch ~80% of seriously interested people. So, despite appearances, AGISF is a better comparison for AISC on this axis.
8
Sam Holton
3mo
Yes, we were particularly concerned with the fact that earlier camps were in-person and likely had a stronger selection bias for people interested in AIS (due to AI/AIS being more niche at the time) as well as a geographic selection bias. That's why I have more trust in the participant tracking data for camps 4-6 which were more recent, virtual and had a more consistent format.  Since AISC 8 is so big, it will be interesting to re-do this analysis with a single group under the same format and degree of selection.

Based on public criticisms of their work and also reading some documents about a case where we were deciding whether to admit someone to some event (and they forwarded their communication with CH). It's limited evidence, but still some evidence.

 

This is a bit tangential/meta, but looking at the comment counter makes me want to express gratitude to the Community Health Team at CEA. 

I think here we see a 'practical demonstration' of the counterfactuals of their work:
- an insane amount of attention sucked up by this
- the court of public opinion on fora seems basically strictly worse on all relevant dimensions, like fairness, respect for privacy, or compassion to the people involved

As 'something like this' would quite often be the counterfactual to CH trying to deal with stuff, it makes clear how much value they are creating by dealing with these problems, even if their process is imperfect.

While I agree that the discussion here is bad at all those metrics, I'm not sure how you infer that the CH team does better at e.g. fairness or compassion.

Sorry for the delay in response.

Here I look at it from a purely memetic perspective - you can imagine the thinking as a self-interested memplex. Note I'm not claiming this is the main useful perspective, or that this should be the main perspective to take. 

Basically, from this perspective

* the more people think about the AI race, the easier it is to imagine AI doom. Also the specific artifacts produced by the AI race make people more worried - ChatGPT and GPT-4 likely did more for normalizing and spreading worries about AI doom than all the previous AI safety outreach to... (read more)

Personally, I think the 1:1 meme is deeply confused.

A helpful analogy (thanks to Ollie Base) is with nutrition. Imagine someone hearing that "chia seeds are the nutritionally most valuable food, top rated in surveys" ... and subsequently deciding to eat just chia seeds, and nothing else! 

In my view, sort of obviously, an intellectual conference diet consisting of just 1:1s is poor and unhealthy for almost everyone. 

In my view this is a bad decision. 

As I wrote on LW:

Sorry, but my rough impression from the post is you seem to be at least as confused about where the difficulties are as the average alignment researcher you think is not on the ball - and the style of somewhat strawmanning everyone & strong words is a bit irritating.

In particular I don't appreciate the epistemics of these moves together:

1. Appeal to seeing things from close proximity. Then I got to see things more up close. And here’s the thing: nobody’s actually on the friggin’ ball on this o... (read more)

Copy-pasting here from LW.

Sorry, but my rough impression from the post is you seem to be at least as confused about where the difficulties are as the average alignment researcher you think is not on the ball - and the style of somewhat strawmanning everyone & strong words is a bit irritating.

Maybe I'm getting it wrong, but it seems the model you have for why everyone is not on the ball is something like "people are approaching it too much from a theory perspective, and a promising approach is very close to how empirical ML capabilities research works"... (read more)

(crossposted from Alignment Forum)

While the claim - the task ‘predict next token on the internet’ absolutely does not imply learning it caps at human-level intelligence - is true, some parts of the post and reasoning leading to the claims at the end of the post are confused or wrong. 

Let’s start from the end and try to figure out what goes wrong.

GPT-4 is still not as smart as a human in many ways, but it's naked mathematical truth that the task GPTs are being trained on is harder than being an actual human.

And since the task that GPTs are be

... (read more)

You are correct with some of the criticism, but as a side-note, completeness is actually crazy. 

All real agents are bounded, and pay non-zero costs for bits, and as a consequence, don't have complete preferences. Complete agents do not exist in the real world. If they existed, the correct intuitive model of them wouldn't be 'rational players' but 'an utterly scary god, much bigger than the universe it lives in'. 

6
Habryka
1y
Oh, sorry, totally. The same is true for the other implicit assumption in VNM, which is doing bayesianism. There exist no bayesian agents. Any non-trivial bayesian agent would similarly be a terrifying alien god, much bigger than the universe it lives in.
6
Jonas Moss
1y
Do I understand you correctly here? Each agent has a computable partial preference ordering x≤y that decides if it prefers x to y. We'd like this partial relation to be complete (i.e., defined for all x,y) and transitive (i.e., x≤y and y≤z implies x≤z). Now, if the relation is sufficiently non-trivial, it will be expensive to compute for some x,y. So it's better left undefined...? If so, I can surely relate to that, as I often struggle computing my preferences. Even if they are theoretically complete. But it seems to me the relationship is still defined, but might not be practical to compute. It's also possible to think of it in this way: You start out with partial preference ordering, and need to calculate one of its transitive closures. But that is computationally difficult, and not unique either. I'm unsure what these observations add to the discussion, though.
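To make the cost concrete, here is a toy sketch (hypothetical outcomes and preferences, Python just for the bookkeeping): the transitive closure is mechanical to compute, but it only adds pairs already implied by transitivity; completeness would require paying to decide every remaining undefined pair.

```python
from itertools import product

# Hypothetical toy preference data, purely for illustration.
outcomes = {"walk", "bike", "car", "train"}
prefs = {("walk", "bike"), ("bike", "car")}  # (worse, better) pairs; the rest undefined

def transitive_closure(pairs):
    """Add every pair implied by transitivity (a<b and b<c gives a<c)."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        implied = {(a, d) for (a, b), (c, d) in product(closure, repeat=2)
                   if b == c and (a, d) not in closure}
        if implied:
            closure |= implied
            changed = True
    return closure

closed = transitive_closure(prefs)
total = len(outcomes) * (len(outcomes) - 1)
print(f"{len(closed)} of {total} ordered pairs defined")  # 3 of 12: transitive, still incomplete
```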
  1. a. Sequoia led FTX round B in Jul 2021 and had notably more time to notice any irregularities than grant recipients.
     b. I would expect the funds to have much better expertise in something like "evaluating the financial health of a company".

    Also, it seems you are somewhat shifting the goalposts: Zoe's paragraph opens with "On Halloween this past year, I was hanging out with a few EAs." It is reasonable to assume the reader will interpret it as hanging out with basically random/typical EAs, and the argument should hold for these people. Your ar
... (read more)
-3
Milan_Griffes
1y
It seems like we're talking past each other here, in part because as you note we're referring to different EA subpopulations:
1. Elite EAs who mentored SBF & incubated FTX
2. Random/typical EAs who Cremer would hang out with at parties
3. EA grant recipients
I don't really know who knew what when; most of my critical feeling is directed at folks in category (1). Out of everyone we've mentioned here (EA or not), they had the most exposure to and knowledge about (or at least opportunity to learn about) SBF & FTX's operations. I think we should expect elite EAs to have done better than Sequoia et al. at noticing red flags (e.g. the reports of SBF being shitty at Alameda in 2017; e.g. no ring-fence around money earmarked for the Future Fund) and acting on what they noticed. 

In my view this is an example of a mistake in bounded/local consequentialism.

From a deontic perspective, there is a coordination problem: posting under "at least a consistent handle" can be somewhat costly for the poster, but an atmosphere of earnest discussion among real people has large social benefits. Vice versa, discussion with a large fraction of anonymous accounts - in particular if they are sniping at real people and each other - decreases trust, and is vulnerable to manipulation by sock puppets and nefarious players. 

Also, I think there ... (read more)

Seems worth trying

At the same time, I don't think the community post / frontpage attention mechanism is the core of what's going on, which is, in my guess, often best understood as a fight between memeplexes over hearts and minds.

The quality of reasoning in the text seems somewhat troublesome. Using two paragraphs as an example: 

On Halloween this past year, I was hanging out with a few EAs. Half in jest, someone declared that the best EA Halloween costume would clearly be a crypto-crash — and everyone laughed wholeheartedly. Most of them didn’t know what they were dealing with or what was coming. I often call this epistemic risk: the risk that stems from ignorance and obliviousness, the catastrophe that could have been avoided, the damage that could have been abated, by simply kno

... (read more)
9
Milan_Griffes
1y
Two things:
1. Sequoia et al. isn't a good benchmark –
(i) those funds were doing diligence in a very hot investing environment where there was a substantial tradeoff between depth of diligence and likelihood of closing the deal. Because EAs largely engaged FTX on the philanthropic side, they didn't face this pressure.
(ii) SBF was inspired and mentored by prominent EAs, and FTX was incubated by EA over the course of many years. So EAs had built relationships with FTX staff much deeper than what funds would have been able to establish over the course of a months-long diligence process.
2. The entire EA project is premised on the idea that it can do better at figuring things out than legacy institutions. 
3
Milan_Griffes
1y
I read Cremer as gesturing in these passages to the point Tyler Cowen made here (a): 

Also: Confido works for intuitively eliciting probabilities

I think this is a weird response to what Buck wrote. Buck also isn't paid either to reform the EA movement or to respond to criticism on the EA Forum, and decided to spend his limited time expressing how things realistically look from his perspective.

I think it is good if people write responses like that, and such responses should be upvoted, even if you disagree with the claims. Downvotes should not express 'I disagree', but 'I don't want to read this'.

Even if you believe EA orgs are horrible and should be completely reformed, in my view, you should be gla... (read more)

Thank you for the reply Jan. My comment was not about whether I disagree with any of the content of what Buck said. My comment was objecting to what came across to me as a dismissive, try harder, tone policing attitude (see the quotes I pulled out) that is ultimately antithetical to the kind, considerate and open to criticism community that I want to see in EA. Hopefully that explains where I'm coming from.

Thanks for all the care and effort which went into writing this!

At the same time, while reading, my reactions were most of the time "this seems a bit confused", "this likely won't help" or "this seems to miss the fact that there is someone somewhere close to the core EA orgs who understands the topic pretty well, and has a different opinion".

Unfortunately, to illustrate this in detail for the whole post would be a project for ...multiple weeks.

At the same time I thought it could be useful to discuss at least one small part in detail, to illustrate how the ... (read more)

In the spirit of the communication style you advocate for... my immediate emotional reaction to this is "Eternal September has arrived".

I dislike my comment being summarized as "brings up the "declining epistemics" argument to defend EA orgs from criticism".  In the blunt style you want, this is something between distortion and manipulation. 

On my side, I wanted to express my view on the Wytham debate. And I wrote a comment expressing  my views on the debate.

I also dislike the way my comment is straw-manned by selective quotation.

In the next bul... (read more)

3
Wil Perkins
1y
Thanks for responding, I wasn’t trying to call you out and perhaps shouldn’t have quoted your comment so selectively. We seem to have opposite intuitions on this topic. My point with this post is that my visceral reaction to these arguments is that I’m being patronized. I even admit that the declining epistemic quality is a legitimate concern at the end of my post. In some of my other comments I’ve admitted that I could’ve phrased this whole issue better, for sure. I suppose to me, current charities/NGOs are so bad, and young people feel so powerless to change things, that the core EA principles could be extremely effective if spread.
2
Sharmake
1y
This. Aumann's Agreement Theorem tells us that Bayesians who have common priors and trust each other to be honest cannot disagree. The in-practice version of this is that a group agreeing on similar views around certain subjects isn't automatically irrational, unless we have outside evidence or one of the conditions fails.
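For reference, a compact statement of the theorem being invoked (notation is mine: $\mathcal{I}_i$ is agent $i$'s information):

```latex
% Aumann (1976): two agents with a common prior P cannot
% "agree to disagree" about an event A.
\[
q_i = P(A \mid \mathcal{I}_i), \quad i \in \{1, 2\},
\qquad q_1, q_2 \text{ common knowledge}
\;\Longrightarrow\; q_1 = q_2 .
\]
```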

I will try to paraphrase, please correct me if I'm wrong about this: the argument is that this particular bikeshed is important because it provides important evidence about how EA works, how trustworthy the people are, or what the levels of transparency are. I think this is a fair argument.

At the same time I don't think it works in this case, because while I think EA has important issues, this purchase does not really illuminate them.

Specifically, object-level facts about this bikeshed

  • do not provide all that much evidence, beyond basic facts like "peopl
... (read more)
17
[anonymous]
1y

(Disclosure about step 2: I had seen the list of candidate venues, and actually visited one other place on the list. The process was in my view competent and sensible, for example in the aspect it involved talking with potential users of the venue)

Was there no less luxurious option available?

In previous discussion, Geoffrey Miller mentioned the benefits of a luxurious venue. In my opinion, the benefits of a non-luxurious venue equal or outweigh those of a luxurious venue -- for example, as a method to deter grifters. The fact that a luxurious venue wa... (read more)

For me, unfortunately, the discourse surrounding Wytham Abbey seems like a sign of epistemic decline of the community, or at least of the EA forum.
 

  • The amount of attention spent on this seems to be a textbook example of bikeshedding

    Quoting Parkinson: "The time spent on any item of the agenda will be in inverse proportion to the sum [of money] involved." A reactor is so vastly expensive and complicated that an average person cannot understand it (see ambiguity aversion), so one assumes that those who work on it understand it. However, everyone
... (read more)
6
NickLaing
1y
1. Although I understand the sentiment, I don't think this is a slam-dunk textbook example of "bikeshedding". This has many features of an important, non-trivial issue (although I have low certainty). It might not be complicated technically, but there is plenty of social complexity that could have big implications. This purchase is a complex issue that raises questions about both the identity and practical outworkings of the EA community. There could be a lot at stake in terms of community engagement and future donations. Essays (or at least long posts) could reasonably be written on the pros and cons and issues around this purchase, which like the OP has said include:
- How important transparency is or isn't within the community
- How promptly and comprehensively big decisions should be communicated within the EA community
- Whether the purchase is actually worth the money (taking into consideration value vs renting facilities, optics, counterfactuals etc.)
- How important optics should or shouldn't be in EA decision making (I'd love to see more serious maths around this)
On a related note, I personally have not found this easy to form a clear opinion on. You are right in that this is easier to analyse than a lot of AI related stuff, but it's not easy to form an integrated opinion which considers all the issues and pros and cons. I still haven't clearly decided what I think after probably too much (maybe you're a bit right ;) ) consideration.
2. I haven't noticed the tone to be like "I've read a tweet by Émile Torres, I got upset, and I'm writing on EA forum". That seems unfair on the well written and thought out post, and also very few of the comments I've read about this on the original post have been as shallow or emotive as this seems to insinuate. There has been plenty of intelligent, useful discussion and reflection. Perhaps this discussion could even be part of epistemic growth, as the community learns, reflects and matur
45
Arepo
1y

It seems extremely uncharitable to call this bikeshedding.

It's just not that small an amount of money, relative to one-off projects and grants in the EA world. It seems perfectly reasonable to expect that projects above a certain size have increased transparency, and it's hard to imagine this wouldn't qualify as big enough. 

These things are relative to money in EA space - if a high proportion of the actual money moving around EVF space is going to projects like this, it doesn't help to observe that billions of dollars are going from other sources to... (read more)

62
nikos
1y

I think your criticism of bikeshedding somewhat misses the point people are raising. Of course the amount of money spent on WA is tiny compared to other things. The reason it's worth talking about it is that it tells you something about EA culture and how EA operates. 

This is in large parts a discussion about what culture the movement should have, what EA wants to be and how it wants to communicate to the world. The reason you care about how someone builds a bike shed is because that carries information about what kind of person they are, how trustwor... (read more)

2
ElliotJDavies
1y
Discussion around Wytham Abbey is almost certainly bikeshedding. However, don't throw the baby out with the bathwater: it has absolutely flagged an important issue with poor communications and PR.
15
Sabs
1y

Not sure why this is getting negative votes or w/e, it's basically correct. And even in the PR stakes, the cost of the Abbey on the most pessimistic assumptions is absolutely peanuts compared to FTX! No one will remember, no one will care (whereas they absolutely will remember FTX, that's a real reputational long-term hit). 

Nice post, but my rough take is:
 

  • it's relatively common for markets to be inefficient but unexploitable; trading on "everyone dies" seems a clear case of hard-to-exploit inefficiency (see the toy calculation below)
  • markets are not magic; impacts of one-off events with complex consequences are difficult to price in, and what all the magical market aggregation boils down to is a bunch of human brains doing the trades; e.g. I was able to beat the market and get an n-times return at a point where markets were insane about covid; later, I talked about it with someone in one of
... (read more)
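A toy expected-value calculation of why the "everyone dies" inefficiency is unexploitable (all numbers hypothetical): a position that pays off only in doom-worlds pays off exactly when the payout is worthless.

```python
# Toy numbers, purely hypothetical, to show why the inefficiency is unexploitable.
p_doom = 0.30             # trader's credence in "everyone dies"
stake = 100.0             # capital put into the doom bet
nominal_win = 500.0       # what the bet pays out if doom happens...
usable_win = 0.0          # ...but money is worthless in worlds where everyone dies
residual_if_fine = 0.0    # the stake is lost if the world turns out fine

ev = p_doom * usable_win + (1 - p_doom) * residual_if_fine
print(f"EV of betting on doom: {ev:.0f}, vs. {stake:.0f} for just keeping the money")
# 0 < 100 at any credence: the bet can never pay in a world where collecting
# matters, so no trader moves prices toward the doom probability.
```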
3
Ozzie Gooen
1y
I think there are a ton of Transformative AI scenarios where not "everyone dies". I think many AI Safety researchers are currently expecting less than a 40% chance of everyone dying. I also really have a hard time imagining many financial traders actually seriously believing:
1. Transformative AI is likely to happen
2. It's very likely to kill everyone, conditional on happening. (95%++)
Both of those are radical right now. You need to believe (1) to believe we're likely doomed soon. I haven't seen any evidence of people with money seriously discussing (2).
2
Yonatan Cale
1y
Sounds like I'd like a hedge fund to write the news for me (after they trade on it, no problem. but they must have great teams doing the analysis)

When the discussion is roughly at the level of 'seems to me obviously worth doing', it seems to me fine to state dissent of the form 'often seems bad or not working to me'.

Stating an opinion is not an 'appeal to authority'. I think in many cases it's useful to know what people believe, and if I have to choose between a forum where people state their beliefs openly and more often, and a forum where people state beliefs only when they are willing to write a long and detailed justification, I prefer the first.

I'm curious in which direction you think the suppos... (read more)

9
Jason
1y
I don't think (almost) anyone is trying to silence you here; the agreevotes on your top comment are pretty high and I'd expect a silencing campaign to target both. That suggests to me that the votes are likely due to what some perceive as an uncharitable tone toward Zoe, or possibly a belief that having the then-top comment be one that focuses heavily on Zoe's self-portrayal in the media risks derailing discussion of the original poster's main points (Zoe's potential involvement being a subpoint to a subpoint).
  • If we had incentivised whistle-blowers to come forward around shady things happening at FTX, would we have known about FTX fraud sooner and been less reliant on FTX funding? Very plausibly yes. She says "likely" which is obviously not particularly specific, but this would fit my definition of likely.

 

Why do you think so? Whistleblowers inside of FTX would have been protected under US law, and US institutions like the SEC offer them multi-million dollar bounties. Why would an EA scheme create a stronger incentive?

Also: even if the possible whistleblowers inside ... (read more)

6
Denkenberger
1y
It depends on how you define wealthiest minority, but if you mean billionaires, the majority of philanthropy is not from billionaires. EA has been unusually successful with billionaires. That means if EA mean-reverts, perhaps by going mainstream, the majority of EA funding will not be from billionaires. CEA deprioritized GWWC for several years - I think if they had continued to prioritize it, funding would have gotten at least somewhat more diversified. Also, I find that when talking with midcareer professionals it's much easier to mention donations rather than switching their career. So I think that more emphasis on donations from people of modest means could help EA diversify with respect to age.

The only real option for having much less FTX money in EA was to not accept that much FTX funding. This was a tough call at the time, in part because the FTX Future Fund seemed like the biggest step toward decentralized distribution of funding, and a big step toward diversifying from OP.

 

And even then, decisions about accepting funding are made by individuals and individual organizations. Would there be someone to kick you out of EA if you accept "unapproved" funding? The existing system is, in a sense, fairly democratic in that everyone gets to decide whether they ... (read more)

I was reacting mostly  to this part of the post

I’ve honestly been pretty surprised there has not been more public EA discussion post-FTX of adopting a number of Cremer's proposed institutional reforms, many of which seem to me obviously worth doing 
...
Also, insofar as she'd be willing (and some form of significant compensation is clearly merited), integrally engaging Cremer in whatever post-FTX EA institutional reform process emerges would be both directly helpful and a public show of good faith efforts at rectification.

I think it's fine for a co... (read more)

I think you lack part of the context, where Zoe seems to claim to the media that the suggested reforms would have helped:

- this Economist piece, mentioning Zoe about 19 times
- WP
- this New Yorker piece, with Zoe explaining "My recommendations were not intended to catch a specific risk, precisely because specific risks are hard to predict”  but still saying ... “But, yes, would we have been less likely to see this crash if we had incentivized whistle-blowers or diversified the portfolio to be less reliant on a few central donors? I believe so.”
-this twitter thread
 ... (read more)


I'm not confident what the whole argument is. 

In my reading, the OP updated toward the position  "it’s plausible that effective altruist community-building activities could be net-negative in impact, and I wanted to explore some conjectures about what that plausibility would entail" based on FTX causing large economic damage.  One of the conjectures based on this is "Implement Carla Zoe Cremer’s Recommendations".

I'm mostly arguing against the position that 'the update of probability mass on EA community building being negative due to FTX evi... (read more)

Unfortunately not in detail - it's a lot of work to go through the whole list and comment on every proposal. My claim is not 'every item on the list is wrong', but 'the list is wrong on average', so commenting on three items does not resolve the possible disagreement. 

To discuss something object-level, let's look at the first one

'Whistleblower protection schemes' sound like a good proposal on paper, but the devil is in the details:

1. Actually, at least in the EU and UK, whistleblowers pointing out things like fraud or illegal stuff are protected by the law.... (read more)

The Cremer document mixes two different types of whistleblower policies: protection and incentives. Protection is about trying to ensure that organisations do not disincentivize employees or other insiders from trying to address illegal/undesired activities of the organisation through for example threats or punishments. Whistleblower incentives are about incentivizing insiders to address illegal/undesired activities. 

The recent EU whistleblowing directive for example is a rather complex piece of legislation that aims to protect whistleblowers from e.g... (read more)

Just wanted to flag that I personally believe
- most of Cremer's proposed institutional reforms are either bad or zero impact, this was the case when proposed, and is still true after updates from FTX
- it seems clear the proposed reforms would not have prevented or influenced the FTX fiasco 
- I think part of Cremer's reaction after FTX is not epistemically virtuous; "I was a vocal critic of EA" - "there is an EA-related scandal" - "I claim to be vindicated in my criticism" is not sound reasoning, when the criticisms are mostly tangentially related to the s... (read more)

21
Jason
1y

"It seems clear proposed reforms would not have prevented or influenced the FTX fiasco" doesn't really engage with the original poster's argument (at least as I understand it). The argument, I think, is that FTX revealed the possibility that serious undiscovered negatives exist, and that some of Cremer's proposed reforms and/or other reforms would reduce those risks. Given that they involve greater accountability, transparency, and deconcentration of power, this seems plausible.

Maybe Cremer is arguing that her reforms would have likely prevented FTX, but that's not really relevant to the discussion of the original post.

0
linnea
1y
I strongly downvoted this for not making any of the reasoning transparent and thus contributing little to the discussion beyond stating that "Jan believes this".  This could sometimes be reasonable for the purpose of deferring to authority, but that is riskier in this case because Jan has severe conflicts of interest due to being employed by a core EA organisation and being a stakeholder in for example a ~$4.7 million grant to buy a chateau. 

I don't think this is a fair comment, and aspects of it read more as a personal attack than an attack on ideas. This feels especially the case given the above post has significantly more substance and recommendations to it, but this one comment just focuses in on Zoe Cremer. It worries me a bit that it was upvoted as much as it was. 

For the record, I think some of Zoe's recommendations could plausibly be net negative and some are good ideas; as with everything, it requires further thinking through and then skillful implementation. But I think ... (read more)

Thanks Jan! Could you elaborate on the first point specifically? Just from a cursory look at the linked doc, the first three suggestions seem to have few drawbacks to me, and seem to constitute good practice for a charitable movement.

  • Set up whistleblower protection schemes for members of EA organisations 
  • Transparent listing of funding sources on each website of each institution
  • Detailed and comprehensive conflict of interest reporting in grant giving

Sorry for the critical feedback, but

  • Not sure if this is best run as a non-profit on a grant. I think people who want to buy this should preferably pay for it in the price of the merch.
  • Also, the products are somewhat expensive compared to manufacturing costs. This is because it's actually for-profit, just the profits are with Printful.
  • The design of the website is quite bad, in my opinion. Have you considered hiring a professional?
  • Also the designs of the majority of products which try to do something more complicated than just printing the logo on something are ...mediocre at
... (read more)
3
Ines
1y
It's an MVP—we will upgrade to a better website in due time. Hopefully the release of more products will mean there will be more options to suit a greater variety of tastes. If you have any ideas for designs or aesthetic styles that would appeal to you, I encourage you to submit them. Unfortunately the products cannot get any cheaper than they are as we cannot operate the store without using a service like Printful, and products may in fact go up in price in the future if we change the funding model. 

I disagree that the design of the products, or the website, are bad.

This is just my personal aesthetic opinion, but so is yours. I think your comment should have been phrased with more humility & awareness that you were just reporting your aesthetic taste. I also object to the statement about this being a "step back in EA visual culture", which I think is just mean.

ETA: Also, I just checked the prices and they are almost all surprisingly cheap, so the second point seems wrong to me.

4
james
1y
I also think the website design seems a bit off to me
4
Writer
1y
Why do you consider it bad? Nothing jumps out to me that makes me think "this is bad/ugly". Your other points make more sense to me though.

Could be time as well: EAGx Prague had dedicated time where it was not possible to book 1:1s on SwapCard, and I think it worked well.

3
Neel Nanda
1y
Hmm, that feels much more annoying to me - I personally think 1-1s tend to be a much better use of conference time, and being restricted from scheduling them at certain times in the app sounds irritating (and the kind of thing that gets me to bail on Swapcard and use Calendly). For me a space is good, because if someone no shows, or I have a break and want low-intensity chat, I can go there.

For example, the OP could have contacted the person he talked to at the EAGx and asked her whether his interpretation of what she said was correct. If you read OP's other post about a conflict resulting from asking for feedback about the grant application, you have one datapoint where someone was talking with him about the unsuccessful grant application, and was surprised by OP's interpretation. 

As I mentioned in the comment, in the case of this post, unless the person he talked to decides to reveal herself (and she may have good reasons not to do that), ... (read more)

7
Ariel Pontes
1y
  That is a fair point. I thought of doing that but I ended up choosing not to because I've been overthinking this post for quite a while and I wanted to get it over with. But now that you mention it I realize that it is indeed important, so I will do it :) I would also bet the same, sorry if I gave the wrong impression! I just think they have a higher bar for Romanian projects, which again, is fair enough as long as there is more transparency regarding how countries are prioritized.

I don't know what the actual grantmakers think, but if I was deciding about the funding

- you can get funding to do EA localization basically in any country, if you come up with a reasonable strategy & demonstrate competence and understanding of EA
- difficulty of coming up with a reasonable strategy in my view varies between places; e.g., if you wanted to create, for the sake of discussion, EA Norway, a reasonable strategy may be 'just' supporting people in attempts to do impactful work, supporting uni groups, routing donations and maybe engaging with N... (read more)

While I don't want to go into discussing specific details, I want to make clear I don't think 'not writing something egregious in the forum' is the standard which should be applied here.

In my view, national-level groups are potentially important and good but need to be able to do a lot of complex work to be really successful. This means my bar for 'who should do this as a funded  full-time job' is roughly similar to 'who should work at median-impact roles at CEA' or 'work as a generalist at Rethink Priorities' or similar.

I don't think the original pos... (read more)

2
Jason
1y
Since you've obviously thought a lot about this issue, it would be interesting to hear more about the extent to which your bar is dependent on the adjectives "funded" and/or "full-time." Do you think the bar is significantly lower for a half-FTE, quarter-FTE, or expenses-only funding? I don't mean with respect to the original poster, but more generally -- especially in countries where the primary language is neither English nor another language with a significant amount of EA content.

Downvoted. Let me explain why:

1. I'm not really convinced by your post that what actually happened with your grant applications was at all caused by you applying from Romania. (Personally, if I was a grantmaker, based on your EA Forum presence, I would reject your application to create/grow a new national-level group if you applied from basically anywhere. You can read more of my thinking about the topic of national EA groups in this post

2. I really dislike the type of discourse exemplified by this paragraph: "I said that I felt Romania was being di... (read more)

2
Ariel Pontes
1y
As I explained in the post, I never meant to suggest that my application was rejected primarily because it was coming from Romania. It is clear to me, however, that this was one of the reasons, because I've been told that quite explicitly. The stories I shared here are not very detailed because I didn't want to identify anybody, so I'm in a tricky position where it's hard for me to provide additional evidence. But let me clarify a few things:
1. I'm not making the allegation that the EA leadership is discriminating against Romania in some evil Machiavellian way (sorry if it seemed like that). I think it's legitimate to prioritize some countries over others in principle, as long as this is transparent. My guess is that the people in charge of this decision did some calculations and concluded that Romania is not very high priority (which is fair enough), but they're afraid to be transparent about it because they're afraid it might be controversial. I personally think this reveals a dangerous pattern of avoiding controversy in EA, a pattern that is not sustainable because it just creates unnecessary drama when things do eventually come out.
2. The allegation that "Westerners have a bad impression of Romanians" didn't come from me, and I think this provides extra evidence for the view that EA is reluctant to fund Romania because, if this person was told that the problem was only with me and not at all with Romania, she wouldn't have made this allegation. I already felt that Romania was not a priority at that point, and they clearly felt something similar if they made this comment.
In any case, I didn't want to focus so much on my particular story; I am more interested in having a discussion about how EAIF prioritizes countries. Do you think they don't prioritize countries differently at all? You think there would never be a situation where an application is almost good enough, but the country is too low in priority so it doesn't get approved?

This is a somewhat serious allegation, but also seems... a bit like a free-floating rumor, without much fact? Unless the person you talked to decides to reveal herself, and explain who the people she talked to are and what exactly they said to her, it's really hard to say what happened, and there is a decent chance your guesses are just wrong, or this is some sort of telephone-game scenario.

 

Assuming the OP is telling the truth, what alternative do you expect them to do here? They have made no specific slander against anyone; they have simply mention... (read more)

3
Guy Raveh
1y
Regardless of OP's specific case, I think it would be interesting and important to know if grantmakers in the community building area have strategies or priorities in this area, and what they are. On the one hand, as Berke pointed out, there are community building projects being funded in various countries poorer than Romania. On the other hand, you can see prominent forum users writing about prioritising countries based on how rich they are, and if this point of view is also found among grantmakers, that's concerning. It's getting clearer and clearer that EA has a problem with diversity and isn't promoting it nearly enough, instead directing most of its efforts into audiences similar to already existing members. Given the increasing number of people who are troubled by this, I think transparency in this regard would be beneficial.

Personally, if I was a grantmaker, based on your EA Forum presence, I would reject your application to create/grow a new national-level group if you applied from basically anywhere.

Huh? It's a little unfair to say this without substantiating. I looked at OP's Forum history to see if there was something egregious there and didn't see anything that would justify a claim like this. Could you elaborate more?

Rationalists do a lot of argument mapping under the label double crux (and similar derivative names: crux-mapping). I would even argue that the double crux approach to argument mapping is better than the standard one, and that rationalists integrate explicit argument mapping into their lives more than likely any other identifiable group.

Also: more argument mapping / double-cruxing / ... is currently unlikely to create more clarity around AI safety, because we are constrained by Limits to Legibility, not by the ability to map arguments.

I do agree net positive is too soft, but I don't think this is what anyone is seriously advocating for here.

The main implicit theory of impact for event venues is 
venues -> events -> {ideas / intellectual progress / new people engaging with the ideas / sharing of ideas}

I think in domains like "thinking about future with powerful AI" or "how to improve humanity's epistemics" or "what are some implications of digital minds existing" it seems the case that a noticeable amount of thinking and discussion is happening at various in-person gatherings. ... (read more)

4
Quadratic Reciprocity
1y
It would be helpful to see your thoughts on community building fleshed out more in a post or longer comment. 

The idea that EAs should take only actions which maximize EV according to some sort of straightforward calculation is wrong. 

argmax(EV(action)) is a stupid decision strategy, and people should not be criticized for not following it.

Agree with your soft max idea but “net positive EV” is too soft - as I’ve said elsewhere donating to the university you went to or your local animal shelter is still net positive EV.
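For concreteness, a minimal sketch of the three decision rules being contrasted in this thread (all EV numbers are made up): strict argmax, a bare "net positive EV" filter, and one way to cash out a "softer" maximizer as a softmax over EV.

```python
import math
import random

# Hypothetical EV numbers, purely for illustration.
actions = {"top charity": 9.0, "local shelter": 0.5, "alma mater": 0.1}

# Rule 1: argmax(EV) - always pick the single best action.
argmax_choice = max(actions, key=actions.get)

# Rule 2: bare "net positive EV" - admits anything above zero, however weak.
net_positive = [a for a, ev in actions.items() if ev > 0]

# Rule 3: a softer maximizer - sample with probability increasing in EV
# (softmax), so better options dominate without the brittleness of argmax.
temperature = 1.0
weights = [math.exp(ev / temperature) for ev in actions.values()]
soft_choice = random.choices(list(actions), weights=weights)[0]

print(argmax_choice)  # 'top charity'
print(net_positive)   # all three pass - the "too soft" complaint above
print(soft_choice)    # almost always 'top charity' at this temperature
```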

I noted the same in the following sentence: To get realistic comparisons, you would need to adjust for many other factors sometimes of order 0.3-3x, like occupancy, costs of adjusting venues to your needs, costs of the staff,...

For example, if you use your venue 65% of time, you should multiply the mentioned figure by 0.65.

What's pushing in the opposite direction is for example this: if you use a rented venue, and often spend 1-2 days before and 1 day after the event on setting it up according to your needs / returning to original state, you need to accoun... (read more)
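A back-of-envelope sketch of the occupancy and setup-time adjustments described above (all figures hypothetical):

```python
# A rough model of the two opposing adjustments (all numbers hypothetical).
rental_equivalent = 4500      # GBP/day a comparable rented venue would cost
occupancy = 0.65              # fraction of days the owned venue is in use

# Value an owned venue delivers per calendar day, after the occupancy haircut:
owned_value_per_day = rental_equivalent * occupancy  # 2925

# Renting pushes the other way: a 4-day event needing 2 setup days and
# 1 teardown day pays rent for 7 days.
event_days, setup_days = 4, 3
rented_cost_per_event_day = rental_equivalent * (event_days + setup_days) / event_days  # 7875

print(f"owned: ~£{owned_value_per_day:.0f} per calendar day; "
      f"rented: ~£{rented_cost_per_event_day:.0f} per event day")
```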

Thank you for your detailed addition of context, but I'm honestly slightly disappointed.

You accuse me of making false claims yet this response doesn't say which claims in either my post or my comment are false. In fact, you don't even quote my writing at all, and the comment is mainly about why you think this project is a good idea.

You say:

I am not trying to prevent people expressing their opinions, but would ask that any speculation (or, for example, digging for the physical addresses of vaguely related entities)

For the people who got the impression I dig... (read more)

I wanted to provide some context on the recently purchased events venue near Prague that has been discussed here in one of the subthreads. The subthread contains some misleading content, and is not clearly visible, so I am posting this as a top level comment to provide more information on the project. It is extremely long, sorry.

1. My relationship to the project
2. Context for Forum readers
3. Context for outsiders
4. About the project
5. About the venue
6. Communication timelines

1. My relationship to the project

The project is led by Irena Kotikova and was... (read more)

I am the writer of the subthread Jan is responding to. You can find my reply here.

Since I’m running the project in question (not Wytham Abbey), I would like to share my perspective as well.  (I reached out to the author of the comment, Bob, in a DM asking him to remove the previously posted addresses and we chatted briefly about some of these points privately but I also want to share my answers publicly.)

  1. ESPR can't return the property or the money at the moment because there is currently no mechanism that we are aware of that would make it possible to legally send money "back to FTX" such that it would reliably make its way back to
... (read more)

Multiple claims in this post are misleading, incomplete or false.

I'm writing a longer response to a similar comment by the same author under a different post, and hope to post it reasonably soon.

4
Kris Merrells
1y
Still forthcoming?

Just flagging that the claim in the question is false - CFAR did run 4 public workshops in the autumn (although not in the mentioned venue), and did run various smaller retreats in the venue in 2022.

-1
chaosak
1y
Just flagging that your other organization, the European Summer Program on Rationality, in July 2022 purchased yet another castle in the Czech Republic, worth 3.5 million pounds, thanks to a "grant" from the FTX Foundation. What a coincidence...
5
Chris Leong
1y
Thanks for the correction.

I'm not going to comment on the emotional/anger/PR side, but here are some numbers for the discussion to be somewhat connected with Oxford conference accommodation reality; speaking just in my personal capacity as someone who did run events in Oxford.

According to the first public price list in my google results, conference accommodation in a college in Oxford in 2020 was >£70 (standard room) + >£45 on meals + >~£1000 for 4 lecture rooms, per day. With a 30-person event, it's >£4500 per day. With 40 ppl and more lecture rooms, it would be more l... (read more)
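For transparency, the arithmetic behind that figure (using the quoted lower-bound prices):

```python
# Checking the back-of-envelope above (lower-bound 2020 price-list figures).
room, meals = 70, 45            # GBP per person per day
lecture_rooms = 1000            # GBP per day for 4 lecture rooms
people = 30

per_day = people * (room + meals) + lecture_rooms
print(f">£{per_day} per day")   # >£4450, i.e. roughly the >£4500 quoted
```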

3
Closed Limelike Curves
1y
Is Wytham Abbey being used 365 days a year, though? A couple thousand a day is a perfectly reasonable cost for a conference. But why would you need to own a space that large permanently? I’d be just as shocked at the idea of renting out Oxford College’s conference hall 365 days a year. Nobody is having that many conferences.
8
lastmistborn
1y
As someone who has experience with this, what do you (or others doing cost calculations in this thread) think about the cost/benefit of having this type of dedicated venue in a less expensive location and moving these types of events out of Oxford, which seems to be a particularly expensive area? Your calculation seems to imply that the events would be frequent enough that the staff would be working on them full time, and room and board being a major factor implies that the expectation is that most people would be traveling for them anyway. In this case, why is Oxford the basis for cost calculations?
  1. Yes / I mostly tried to describe the "pure" version of the theory, not moderated by applications of other types of reasoning.

  2. I don't think the way I'm using 'deontology' and 'virtue ethics' reduces to 'conventional morality' either.

For example, I currently have something like a deontic commitment to not do/say things which would predictably damage epistemics - either mine or others'. (Even if the epistemic damage would have an upside like someone taking some good actions)

I think this is ultimately more of an 'attempt at approximating true consequent... (read more)

Reposting from twitter: It's a moderate update on the prevalence of naive  utilitarians among EAs.

Expanded:

A classical problem with this debate on utilitarianism is that the vocabulary used makes a motte-and-bailey defense of utilitarianism too easy.
1. Someone points to a bunch of problems with an act consequentialist decision procedure / cases where naive consequentialism tells you to do bad things
2. The default response is "but this is naive consequentialism, no one actually does that"
3. You may wonder that while pe... (read more)

  • In practice I think utilitarians should adopt mostly a skillful combination of virtue ethics, deontic rules, and explicit calculations. 
  • I think what the FTX case does provide some evidence for is some fraction of smart EAs exposed to utilitarianism being prone to attempting to rely on explicit act utilitarianism, despite the warnings. 

    I think part of the story here is a weird status dynamic where...
    1. I would basically trust some people to try the explicit direct utilitarian thing: e.g. I think it is fine for Derek Parfit or Toby Ord. 
    2
... (read more)

Yeah, I think it's a severe problem that if you are good at decision theory you can in fact validly grab big old chunks of deontology directly out of consequentialism including lots of the cautionary parts, or to put it perhaps a bit more sharply, a coherent superintelligence with a nice utility function does not in fact need deontology; and if you tell that to a certain kind of person they will in fact decide that they'd be cooler if they were superintelligences so they must be really skillful at deriving deontology from decision theory and therefore they... (read more)

I would suggest actually reading, and trying to understand, the post?

The papers you link mostly use the notion of 'consequentializing' in the sense that you can re-cast many other theories as consequentialist. But often this is almost trivial, if you allow yourself the degree of freedom of 'changing what's considered good' on the consequentialist side (as some of the papers do). For some weird reason, you have a deontic theory prohibiting people from drinking blue liquids? Fine, you can represent that in consequentialist terms, by ranking al... (read more)

Pretty confident. I typically have a stack of drafts of the type this was, and they end up public at some point. 

Btw, I think parts of Ways money can make things worse describe what I think the EA community actually did wrong even ex post: succumbing to the epistemic distortion field too much. 

Crossposting from LW

Here is a sceptical take: anyone who is prone to being convinced by this post to switch from attempts to do technical AI safety to attempts at "buying time" interventions is pretty likely not a good fit to try any high-powered buying-time interventions. 

The whole thing reads a bit like "AI governance" and "AI strategy" reinvented under a different name, seemingly without bothering to understand the existing state of the field.

Figuring out that AI strategy and governance are maybe important, in late 2022, after spending substanti... (read more)
