All of calebp's Comments + Replies

calebp's Shortform

(crosspost of a comment on imposter syndrome that I sometimes refer to)

I have recently found it helpful to think about how important and difficult the problems I care about are and recognise that on priors I won't be good enough to solve them. That said, the EV of trying seems very very high, and people that can help solve them are probably incredibly useful. 

So one strategy is to just try to send lots of information into the world that might help the community work out whether I can be useful (by doing my job, taking actions in the world, writing p... (read more)

Interesting vs. Important Work - A Place EA is Prioritizing Poorly

the organizations you listed are also highly selective so only a few people will end up working at them.
 

Which organisations? I think I only mentioned CFAR, which I am not sure is very selective right now (due to not running hiring rounds).

Interesting vs. Important Work - A Place EA is Prioritizing Poorly

... But the number of people we need working on them should probably be more limited than the current trajectory ...

I’ll therefore ask much more specifically, what are the most intellectually interesting topics in Effective Altruism, and then I’ll suggest that we should be doing less work on them - and list a few concrete suggestions for how to do that.

I feel like the OP was mostly talking about direct work. Even if they weren't, I think most of the impact that EA will have will eventually cash out as direct work, so it would be a bit surprisin... (read more)

1Davidmanheim11d
"I feel like the op was mostly talking about direct work." No - see various other comment threads
Interesting vs. Important Work - A Place EA is Prioritizing Poorly

(I think CSER has struggled to get funding for some of its work, but this seems like a special case so I don't think it's much of a counterargument)

I think if this claim is true, it's less because of motivated reasoning arguments/the status of interesting work, and more because object-level research is correlated with a bunch of things that make it harder to fund.

I still don't think I actually buy this claim though; it seems, if anything, easier to get funding to do prosaic alignment/strategy-type work than theory (for example).

Interesting vs. Important Work - A Place EA is Prioritizing Poorly

I agree in principle with this argument but ....

Here are some of my concrete candidates for most interesting work: infinite ethics, theoretical AI safety, rationality techniques, and writing high-level critiques of EA[1].

I really don't think there are many people at all putting substantial resources into any of these areas.

  • Theoretical AIS work seems really important and, depending on your definition of 'theoretical', there are probably 20-100 FTE working on this per year. I would happily have at least 10-100x this amount of work if the level of qualit
... (read more)
5Chris Leong12d
"Who is actually working on infinite ethics?"- I'm actually very interested in this question myself. If you know anyone working on infinite ethics, please connect me :-).

Those may seem like the wrong metrics to be looking at given that the proportion of people doing direct work in EA is small compared to all the people engaging with EA. The organizations you listed are also highly selective so only a few people will end up working at them. I think the bias reveals itself when opportunities such as MLAB come up and the number of applicants is overwhelming compared to the number of positions available, not to mention the additional people who may want to work in these areas but don't apply for various reasons. I think if one... (read more)

I claim that if you look at funding and at what EA organizations are viewed as central - and again, GPI, FHI, CSER, and MIRI are all on the list - the emphasis on academic and intellectual work becomes clearer. I would claim the same is true for what types of work are easy to get funding for. Academic-like research into interesting areas of AI risk is far easier to get funded by many funders than direct research into, say, vaccine production pipelines.

Hiring Programmers in Academia


That the money is coming from a grant doesn't resolve this: the university would still not let you pay a higher salary because you need to go through university HR and follow their approach to compensation. 
 

 

Would the following solution work?
1. academic applies for funding but asks for it not to be paid out until they make a hire
2. academic finds a hire
3. uni pays the hire as normal
4. funder tops up the hire's salary to market rate (or whatever was agreed on)

Alternatively, you can just get rid of step 3, but maybe the hire loses benefits like a uni affiliation, pension contributions, etc.

calebp's Shortform

More EAs should give rationalists a chance

My first impression of meeting rationalists was at an AI safety retreat a few years ago. I had a bunch of conversations that were decidedly mixed and made me think that they weren't taking the project of doing a large amount of good seriously, reasoning carefully (as opposed to just parroting rationalist memes), or any better at winning than the standard EA types that I felt were more 'my crowd'.

I now think that I just met the wrong rationalists early on. The rationalists that I most admire:

  • Care deeply about their va
... (read more)
calebp's Shortform

‘EA is too elitist’ criticisms seem to be more valid from a neartermist perspective than a longtermist one

I sometimes see criticisms around

  • EA is too elitist
  • EA is too focussed on exceptionally smart people

I do think that you can have a very outsized impact even if you're not exceptionally smart, dedicated, driven, etc. However, I think that from some perspectives focussing on outliery talent seems to be the right move.

A few quick claims that push towards focusing on attracting outliers:

  • The main problems that we have are technical in nature (particularl
... (read more)
calebp's Shortform

I adjust upwards on EAs who haven't come from excellent groups

I spend a substantial amount of my time interacting with community builders and doing things that look like community building.

It's pretty hard to get a sense of someone's values, epistemics, agency, etc. by looking at their CV. A lot of my impression of people that are fairly new to the community is based on a few fairly short conversations at events. I think this is true for many community builders.

I worry that there are some people who were introduced to some set of good ideas first, and then ... (read more)

2jwpieters18d
It's all about the Caleb points man
Announcing the Center for Space Governance

Sounds exciting.

The main thing that I am interested in when I read announcement posts or websites from very young orgs is who is on the core team.

I don't know if this has been left out intentionally, but if you did want to add this to the post I'd be interested in seeing that.

Thanks Caleb. This is being led by myself and Gustavs, as mentioned in the post, and we are currently in the process of bringing on board our first team members. We will be adding everyone to the website once that has been finalized.

Critiques of EA that I want to read

I found this helpful and I feel like it resolved some cruxes for me. Thank you for taking the time to respond!

Critiques of EA that I want to read

Thanks for writing this post, I think it raises some interesting points and I'd be interested in reading several of these critiques.

(Adding a few thoughts on some of the funding related things, but I encourage critiques of these points if someone wants to write them)

Sometimes funders try to play 5d chess with each other to avoid funging each other’s donations, and this results in the charity not getting enough funding.

I'm not aware of this happening very much, at least between EA Funds, Open Phil and FTX (but it's plausible to me that this does happen ... (read more)

Thanks for the response!

RE 5d chess - I think I've experienced this a few times at organizations I've worked with (e.g. multiple funders saying, "we think it's likely someone else will fund this, so are not/only partially funding it, though we want the entire thing funded," and then the project ends up not fully funded, and the org has to go back with a new ask/figure things out). This is the sort of interaction I'm thinking of here. It seems costly for organizations and funders. But I've got like an n=2 here, so it might just be chance (though one person at... (read more)

Transcript of Twitter Discussion on EA from June 2022

I know this isn't the point of the thread, but I feel the need to say that if people think a better laptop will increase their productivity, they should apply to the EAIF.

https://funds.effectivealtruism.org/funds/ea-community

(If you work at an EA org, I think that your organisation normally should pay unless they aren't able to for legal/bureaucratic reasons)

Is the time crunch for AI Safety Movement Building now?

I think that Holden assigns more than a 10% chance to AGI in the next 15 years; the post that you linked to says 'more than a 10% chance we'll see transformative AI within 15 years'.

Sam Bankman-Fried should spend $100M on short-term projects now

SBF/FTX already gives quite a lot to neartermist projects afaict. He's also pretty open about being vegan and living a frugal lifestyle. I'm not saying that this mitigates optics issues, just that I expect to see diminishing marginal returns on this kind of donation wrt optics gains.

https://ftx.com/foundation

5Yitz2mo
Other than the donations towards helping Ukraine, I’m not sure there’s any significant charity on the linked page that will have really noticeable effects within a year or two. For what I’m talking about, there needs to be an obvious difference made quickly—it also doesn’t help that those are all pre-existing charities under other people’s names, which makes it hard to say for sure that it was SBF’s work that made the crucial difference even if one of them does significantly impact the world in the short term.
Some unfun lessons I learned as a junior grantmaker

The policy that you referenced is the most up-to-date policy that we have, but I do intend to publish a polished version of the COI policy on our site at some point. I am not sure right now when I will have the capacity for this, but thank you for the nudge.

Some unfun lessons I learned as a junior grantmaker

My impression is that Linch's description of their actions above is consistent with our current COI policy. The fund chairs and I have some visibility over COI matters, and fund managers often flag cases when they are unsure what the policy should be, and then I or the fund chairs can weigh in with our suggestion.

Often we suggest proceeding as usual or a partial but not full recusal (e.g. the fund manager should participate in discussion but not vote on the grant themselves).

5ofer3mo
Thank you for the info! I understand that you recently replaced Jonas as the head of the EA Funds. In January, Jonas indicated [https://forum.effectivealtruism.org/posts/ek5ZctFxwh4QFigN7/ea-funds-has-appointed-new-fund-managers?commentId=KHgvFs43yP9epm49X] that the EA Funds intends to publish a polished CoI policy. Is there still such an intention?
Deferring

(I think that the pushing-towards-a-score thing wasn't a crux in downvoting; I think there are lots of reasons to downvote things that aren't harmful, as outlined in the 'how to use the forum' post/moderator guidelines)

I think that karma is supposed to be a proxy for the relative value that a post provides.

I'm not sure what you mean by zero-sum here, but I would have thought that the control-system-type approach is better, as the steady-state values will be pushed towards the mean of what users see as the true value of the post. I think that this score + tota... (read more)

Deferring

I don't think we should only downvote harmful things; we should instead look at the amount of karma and use our votes to push the score to the value we think the post should be at.

I downvoted the comment because:

  • Saying things like "... obviously push an agenda ..." and "I'm pretty sure anyone reading this ..." has persuasion-y vibes which I don't like.
  • Saying "this post says people should defer to authority" is a bit of a straw/weak man and isn't very charitable.

Using votes to push towards the score we think it should be at sounds worse than just individually voting according to some thresholds of how good/helpful/whatever a post needs to be? I'm worried about zero sum (so really negative sum because of the effort) attempts to move karma around where different people are pushing in different ways, where it's hard to know how to interpret the results, compared to people straightforwardly voting without regard to others' votes.

At least, if we should be voting to push things towards our best guess I think the karma system should be reformed to something that plays nice with that -- e.g. each individual gives their preferred score, and the displayed karma is the median.
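To make the two voting rules in this thread concrete, here is a minimal simulation sketch (purely my own illustration, not code from the Forum; the voter targets are made up) comparing independent voting, "push the score towards your preferred value" voting, and the median-of-preferred-scores idea:

```python
import statistics

def independent_karma(votes):
    """Status quo: each voter casts +1/-1 once; karma is the sum."""
    return sum(votes)

def push_towards_target(targets, start=0, rounds=100):
    """'Control system' voting: each round, every voter nudges the running score
    one point towards the score they personally think the post deserves."""
    score = start
    for _ in range(rounds):
        for target in targets:
            if score < target:
                score += 1
            elif score > target:
                score -= 1
    return score

def median_of_targets(targets):
    """Proposed alternative: everyone states their preferred score; display the median."""
    return statistics.median(targets)

# Three hypothetical voters who think the post deserves 10, 40 and 70 karma.
targets = [10, 40, 70]
print(independent_karma([1, 1, 1]))   # 3: all upvote; preferences beyond +/-1 are lost
print(push_towards_target(targets))   # settles near 40 after enough rounds
print(median_of_targets(targets))     # 40: order-independent and easy to interpret
```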

Deferring

I think I roughly agree, although I haven't thought much about the epistemic vs authority deferring thing before.

Idk if you were too terse; it seemed fine to me. That said, I would have predicted this would be around 70 karma by now, so I may be poorly calibrated on what is appealing to other people.

Deferring

Thanks for writing this, I thought it was great.

(Apologies if this is already included, I have checked the post a few times but possible that I missed where it's mentioned.)

Edit: I think you mention this in social deferring (point 2).

One dynamic that I'm particularly worried about is belief double counting due to deference. You can imagine the following scenario:

Jemima: "People who's name starts with J are generally super smart."

Mark: [is a bit confused, but defers because Jemima has more experience with having a name that starts with J] "hmm, that seems ri... (read more)

3Owen Cotton-Barratt3mo
Yeah I briefly alluded to this but your explanation is much more readable (maybe I'm being too terse throughout?). My take is "this dynamic is worrying, but seems overall less damaging than deferral interfering with belief formation, or than conflation between epistemic deferring and deferring to authority".
EA Tours of Service

Thanks for writing this, it's a cool idea.

I'll consider doing this when I next run a hiring round!

Effective altruism’s odd attitude to mental health

I think I agree with the general thrust of your post (that mental health may deserve more attention amongst neartermist EAs), but I don't think the anecdote you chose highlights much of a tension.

>  I asked them how they could be so sceptical of mental health as a global priority when they had literally just been talking to me about it as a very serious issue for EAs.

I am excited about improving the mental health of EAs, primarily because I think that many EAs are doing valuable work that improves the lives of others and good mental health is going... (read more)

7MichaelPlant3mo
Hmm, not sure you've spotted the tension. The tension arises from recognising that X is a problem in your social world, but not then incorporating that thought into your cause prioritisation thinking. This is the puzzling phenomenon I've observed re mental health. Of course, if someone has - as you have - both recognised the problem in their social world, and then also considered whether it is a global priority, then they've connected their thinking in the way I hope they would! FWIW, I think improving the mental health of EAs is plausibly very sensible purely on productivity grounds, but I wasn't making a claim here about that either way.
8IanDavidMoss3mo
I think you could be right about this AND that Michael's anecdote could also be pointing to something true about the idea that personal or proximate experience with a problem could increase the salience of it for people conducting supposedly dispassionate analysis. We shouldn't pretend that the cognitive biases that apply to everyone else don't also apply to people in the EA community, even if the manifestation is sometimes more subtle.

Pretty much this. I don’t think discussions on improving mental health in the EA community are motivated by improving wellbeing, but instead by allowing us to be as effective as a community as possible. Poor mental health is a huge drain on productivity.

If the focus on EA community mental health were based on direct wellbeing benefits, I would be quite shocked. We're a fairly small community and it's likely to be far more cost-effective to improve the mental health of people living in lower-income countries (as HLI's StrongMinds recommendation suggests).

Three Reflections from 101 EA Global Conversations

I really liked this post, one of the best things that I have read here in a while.

+1 for 'taking weird ideas seriously' and 'considering wide action spaces' being underrated.

2Akash3mo
Thank you, Caleb!
My experience with imposter syndrome — and how to (partly) overcome it

This is a bit weird and not really a framing that I expect to be helpful for most people here, so I recommend that you probably don't internalise the following, or maybe don't even read it. I think that it is worth making this comment partly for transparency and in case it is useful to a few people.

I have recently found it helpful to think about how important and difficult the problems I care about are and recognise that on priors I won't be good enough to solve them. That said, the EV of trying seems very very high, and people that can help solve them are probably incredi... (read more)

My experience with imposter syndrome — and how to (partly) overcome it

I think this might be one of my current hypotheses for why I am doing what I am doing.

Or maybe I think it's ~60% likely I'm ok at my job, and 40% likely I have fooled other people into thinking I'm ok at my job.

FTX/CEA - show us your numbers!

I as an individual would endorse someone hiring an MEL consultant to do this for the information value, and would also bet $100 on this not providing much value due to the analysis being poor.

Terms to be worked out of course, but if someone was interested in hiring the low context consultant, I'd be interested in working out the terms.

FTX/CEA - show us your numbers!

Fwiw, I personally would be excited about CEA spending much more on this at their current level of certainty if there were ways to mitigate optics, community health, and tail risk issues.

FTX/CEA - show us your numbers!

Oh right, I didn't pick up on the 'FTX said they'd like to see if this was popular' thing. This resolves part of this for me (at least on the FTX side, as opposed to the CEA side).

Is the EA Librarian still a thing? If so, what is the current turnaround?

Hi Jeremy, I'm very sorry about how slow the turnaround has been recently.

I've had very little capacity to manage the project after being ill, and I'm sorry that we haven't gotten back to you yet. I also don't feel like I can indicate a turnaround time right now, as some of the librarians have left recently.

We will certainly aim to answer all submitted questions but I expect that I will close the form this/next week, at least until I work out a more sustainable model.

1Jeremy4mo
Sorry to hear about your illness. I hope you feel better! It was very helpful the one time I used it, so I hope you guys can find a way to keep it going. Thanks!
FTX/CEA - show us your numbers!

Broken into a different comment so people can vote more clearly

In many ways, if the outcome is that there isn't a clear/shared/approved expected value rationale being used internally to guide a given set of spending, that seems to validate some of the concerns that were expressed at EAG.

I think that there are likely different epistemic standards between cause areas, such that this is a pretty complicated question, and people underappreciate how much of a challenge this is for the EA movement.

FTX/CEA - show us your numbers!

I think what I'm getting at is that 'burden of proof' is generally an unhelpful framing, and an action that you could take that might be helpful is communicating the model that makes you sceptical of their spending.

Hiring consultancies to do this seems like it's not going to go well unless it's Rethink Priorities or they have a lot of context, and on the margin I think it's reasonable for CEA to say no, they have better things to do.

I feel confused about the following, but I think that as someone who runs an EA org you could easily have reached out directly to... (read more)

1Jack Lewars4mo
Like you, I'm fairly relaxed about asking people publicly to be transparent. Specifically in this context, though, someone from FTX said they would be open to doing this if the idea was popular, which prompted the post. As a sidenote, I think also that MEL consultancies are adept at understanding context quickly and would be a good option (or something that EA could found itself - see Rossa's comment). My wife is an MEL consultant, which informs my view of this. But that's not to say they are necessarily the best option.
FTX/CEA - show us your numbers!

I think the main crux here is that even if Jessica/CEA agrees that the sign of the impact is positive, it still falls in the neutral bracket because on the CEA worldview the impact is roughly negligible relative to the programs that they are excited about. 

If you disagree with this, maybe you agree with the weaker claim of the impact being comparatively negligible weighted by the resources these companies consume? (there's some kind of nuance to 'consuming resources' in profitable companies, but I guess this is more gesturing at a 'leaving value on the table' framing, as opposed to just 'is the organisation locally net negative or positive'.)

FTX/CEA - show us your numbers!

One of the EA Forum norms that I like to see is people explaining why they downvoted a post/comment, so I'm a bit annoyed that NegativeNuno's comment that supported this norm was fairly heavily downvoted (without explanation).

FTX/CEA - show us your numbers!

I kind of like the general sentiment, but I'm a bit annoyed that it's just assumed that the burden of proof falls so strongly on the funders.

Maybe you want to share your BOTEC first, particularly given that the framing of the post is "I want to see the numbers because I'm concerned" as opposed to just curiosity?

9Jack Lewars4mo
I'm not sure why the burden wouldn't fall on people making the distribution of funds? (Incidentally, I'm using this to mean that the funders could also hire external consultancies etc. to produce this.) But, more to the point, I wrote this really hoping that both organisations would say "sure, here it is" and we could go from there. That might really have helped bring people together. (NB: I realise FTX haven't engaged with this yet.) In many ways, if the outcome is that there isn't a clear/shared/approved expected value rationale being used internally to guide a given set of spending, that seems to validate some of the concerns that were expressed at EAG.
4freedomandutility4mo
I think it makes sense to have the burden of proof mostly on the funders given that they presumably have more info about all their activities, plus having the burden set this way has instrumental benefits of encouraging transparency which could lead to useful critiques, and extra reputation-related incentives to use good reasoning and do a good job of judging what grants do and do not meet a cost-effectiveness bar.
Free-spending EA might be a big problem for optics and epistemics

I think more money to AMF / GiveDirectly/ StrongMinds are pretty good mechanisms to convert money into utility.

 

I meant from a LT worldview.
 

One concrete action - pay a random university student in London who might not be into EA but could do with the money to organise a dinner event and invite EAs interested in AI safety to discuss AI safety. I think this kind of thing has very high EV, and these kind of things seem very difficult to max out (until we reach a point, where say, there are multiple dinners everyday in London to discuss AI Safety).

... (read more)
1freedomandutility4mo
Yeah it’s hard to tell whether we disagree on the value of the same quality of conversation or on what the expected quality of conversation is. Just to clarify though, I meant inviting people who are both already into EA and already into AI Safety, so there wouldn’t be a need to communicate EA to anyone. I also don’t actually know if anyone has tried something like this - I think it would be a good thing to try out.
Free-spending EA might be a big problem for optics and epistemics

I don't think this is right because there aren't good mechanisms to convert money into utility. I don't think there are reasonable counterfactuals to this money that aren't already maxed out.

That said, if you can point to some actions that should get a few hundred pounds in LT community building, that aren't happening due to a lack of money, and that seem positive in EV, I'd be happy to fund these actions (in a personal capacity).

3freedomandutility4mo
I think more money to AMF / GiveDirectly/ StrongMinds are pretty good mechanisms to convert money into utility. I also think it's very difficult for counterfactuals to become maxed out, especially in any form of community building. One concrete action - pay a random university student in London who might not be into EA but could do with the money to organise a dinner event and invite EAs interested in AI safety to discuss AI safety. I think this kind of thing has very high EV, and these kind of things seem very difficult to max out (until we reach a point, where say, there are multiple dinners everyday in London to discuss AI Safety). I think one cool thing about some aspects of community building is that they can only ever be constrained by funding, because it seems pretty easy to pay anyone, including people who don't care about EA, to do the work.
Free-spending EA might be a big problem for optics and epistemics

I think that it is worth separating out two different potential problems here.

1. It is bad that we wasted money that could have directly helped people.
2. It is bad that we alienated people by spending money.

I am much more sympathetic to (2) than (1). 

Maybe it depends on the cause area but the price I'm willing to pay to attract/retain people who can work on meta/longtermist things is just so high that it doesn't seem worth factoring in things like a few hundred pounds wasted on food.

2freedomandutility4mo
I think if we value longtermist/meta community building extremely highly, that’s actually a strong reason in favour of placing lots of value on that couple hundred of pounds - in this kind of scenario, a lot of the counterfactual use of the money would be using it usefully towards longtermist / meta community building.
Public reports are now optional for EA Funds grantees

(I am the new interim project lead for EA funds and will be running EA funds going forward.)

I completely understand that you want to know that your donations are used in a way that you think is good for the world. We refer private grants to private funders so that you know that your money is not being used for projects that you get little or no visibility on.

I think that EA Funds is mostly for donors that are happy to lean on the judgment of our fund managers. Sometimes our fund managers may well fund things like mental health support if they think it is one ... (read more)

Go apply for 80K Advising - (Yes, right now)

I've done 80k coaching ~3 times. If you think your context has changed enough for it to be useful again, then 80k will probably agree. Giving you a coaching session is pretty low cost relative to the outputs (from their perspective). 

Fwiw I didn't find the coaching very useful, but the advisors are lovely, I enjoyed our chat, and the time investment is low. Since I last spoke to 80k there has been more of a push on 'being ambitious', so maybe their advice would be more useful for me now? Idk.

I think Julia is really great, and is creating a lot of value for the community. Maybe from some worldviews she is creating as much or more value than Will (e.g. if you think that Will is having a negative influence because he is quite a patient longtermist???), but I think that under most reasonable views it's an exaggeration to say that she's as important as Will (where importance is code for expected impact).

5Charles He5mo
No, I don't think my answer involves a subtext/proxy for values/worldview. I’m really exactly saying that, whatever your worldview, to the degree you think EA is good, the class of people like Julia Wise might have allowed EA to happen. A group of talented people working together can often fragment or diminish. It's possible this fragility can be easy to overlook. As you say, maybe I'm wrong in some way. Maybe these virtues/efforts aren't concentrated in Julia and are performed by many without an explicit role. Maybe it includes Will himself, who probably understands the issues perfectly well.
Where would we set up the next EA hubs?

I think I have the opposite impression, although I haven't spent much time in the Bay.

When I lived in London I didn't feel connected to oxf/cam communities. I think that oxf and cam uni groups are trying to collaborate more but it takes about 2 hours to get from one place to the other.

EA Librarian Update

People can still use the forum to submit questions. 

Above, I didn't mean to suggest that I was reacting to Ian's message and had only added the question to our system after seeing it. I had already added the question (and will continue to add any questions submitted on the forum); we are just running a bit slow at the moment.

What will be some of the most impactful applications of advanced AI in the near term?

(EA Librarian)

You can look at the "timelines" section of MetaculusExtras and scroll through the period of interest to see the relevant Metaculus forecasts. There is no way to exclude questions unrelated to AI, so you'll have to do the filtering manually. Very few people seem to be aware of this resource, but I think it's pretty helpful for allowing one to form a more concrete and accurate picture of what the future will be like in the coming years/decades.

2IanDavidMoss5mo
Really interesting, thanks!
EA Librarian Update

I've added your question to our system, but our turnaround time is a bit slow at the moment.

I'll make sure that answers include

(EA Librarian)

at the start, so you know it has been answered by the librarians.

3Kirsten5mo
Are questions asked via the tag going to be added to your system going forward or should people stop using that route?
2Peter Wildeford5mo
Thanks! Sorry I missed that
We're Aligned AI, we're aiming to align AI

For more discussion on this, you can see the comment threads on this LessWrong post and this Alignment Forum post.

If for some reason OP thinks this is unhelpful feel free to remove my comment.

We need 40,000h or maybe even 20,000h

Thanks for writing this; it's definitely something that I have heard before in EA groups that I have been part of.

I don't know your situation, but I think that EA is probably particularly in need of value-aligned mid/senior people in quite a few areas.

I don't work at 80k but my sense is that they have chosen to work with early-stage people as they have more flexibility in career direction and more malleable views. Crucially, I don't think that 80k chose this demographic because young and inexperienced people are likely to have a comparative advantage in... (read more)

7martyna6mo
OK, me back! As for the part about 80k objectives, it's hard for me to tell. I haven't done enough research to have an opinion on that matter. But I guess, influencing anyone with EA values gives a lot of positive returns. - That's the thing, I don't fit any of them. I'm a ux/ui designer (I'll post my links on the bottom) and after 2 years of (let's call it) research I realized two things: 1. In those few companies or orgs that ever posted offers for uxd, UX is not necessarily as impactful or even important. It just needs to be as good as to not let "things fall apart". So, not much impact on the cause area there. 2. Earning to give makes much more sense. I can still do what I love and am good at, but it comes with this pressure to overexcell myself to give more in near future. In regards to mentoring others, that's complicated too. How can I advise others if I can't even move on with my own goals that I defined so clearly for myself? That's rhetorical of course. I struggle with impostor syndrome a lot and I've been looking for a job more actively since November (I want to move up and upskill). Being declined over and over again doesn't help with the syndrome. But I'm not saying "no", only "not now". I've done some brainstorming with senior IT people, but I'm not very good at cold emailing them, so I haven't had as many insights as I wish to. EAG/x - I am signing up for 5 events this year, this is a great way of meeting people smarter than me and in every age! Thank you a lot @calebp! Links: https://www.linkedin.com/in/martynatomaszewska/ [https://www.linkedin.com/in/martynatomaszewska/] - feel free to connect. https://eahub.org/profile/martyna/ [https://eahub.org/profile/martyna/] - let me know if it's available, if not I'll make it public.
2martyna6mo
Thank you so much for that, I'll get back with my response in a few h if you don't mind.
Complexities of wedding gifts; Thoughts?

I don't have plans to get married soon, but I have thought a bit about whether there could be a way to substitute some gift for the equivalent amount of time from the person. I could then ask the person to read an article or watch a short video or similar as the gift to me.

So, for example, my friends would probably spend something like £40 on a gift for me if I were getting married now. I imagine that they value their time pretty cheaply, so I could maybe get between 1 and 4 hours of their time instead. I can think of quite a lot of content that they could read/watch ... (read more)

EA Forum feature suggestion thread

Mouse-over probability distributions
Likelihood qualifiers (likely, unlikely) are a common source of miscommunication. Lots of content on the forum feels pretty nuanced to me, and subtle differences in priors can often be cruxes, e.g. most important century.

A step in the right direction could be being able to add prob. distributions as tooltips (maybe using an Elicit-like interface or maybe just 'freehand') to illustrate these qualifiers better. The user can highlight a word in their draft and press a button; this will bring up the prob. distribution entering... (read more)
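As a rough illustration of the suggestion (purely my own sketch, not an existing Forum feature; the class, field names, and credence range below are made up), the data attached to a highlighted qualifier might look something like this:

```python
from dataclasses import dataclass

@dataclass
class ProbabilityAnnotation:
    """A probability range the author attaches to a highlighted qualifier word."""
    phrase: str   # the highlighted word, e.g. "likely"
    low: float    # author's ~10th-percentile credence
    high: float   # author's ~90th-percentile credence

    def tooltip(self) -> str:
        """Short text shown when a reader mouses over the highlighted word."""
        return f'"{self.phrase}": roughly {self.low:.0%}-{self.high:.0%} credence'

# Hypothetical example: the author highlights "likely" and records their credence range.
note = ProbabilityAnnotation(phrase="likely", low=0.60, high=0.85)
print(note.tooltip())  # "likely": roughly 60%-85% credence
```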
