All of PabloAMC's Comments + Replies

I already give everything, except what's required for the bare living necessities, away.

While admirable, consider whether this is healthy or sustainable. I think donating less is OK; that's why Giving What We Can suggests 10% as a calibrated point. You can of course donate more, but I would recommend against the implied current situation.

FWIW, I believe not every problem has to be centered around “cool” cause areas, and in this case I’d argue both animal welfare and AI Safety should not be significantly affected.

I divide my donation strategy into two components:

  1. The first one is a monthly donation to Ayuda Efectiva, the effective giving charity in Spain, which also allows for tax deductions. For the time being, they mostly support global health and poverty causes, which is boringly awesome.

  2. Then I make one-off donations to specific opportunities that appear. Those include, for example, one donation to Global Catastrophic Risks, to support their work on recommendations for the EU AI Act sandbox (to be first deployed in Spain), some volunteering work for FLI existe

... (read more)

I think the title is a bit unfortunate, at the very least. I am also skeptical of the article's thesis that population growth is itself the problem.

5
Daniel_Eth
11mo
I want to echo the sentiment that this piece would be improved a fair bit if the word "plague" wasn't in the title. The current wording could be misinterpreted to imply that the humans involved in this "plague" are a "disease" which should be "eradicated," paralleling some old racist talking-points. (Of course, having both read the piece and noticed from scrolling over your profile that you yourself are Black, I obviously don't think you were trying to imply anything like that – I just figured you might want this feedback as to why your piece might be generating an adverse reaction from some readers.)

You understood me correctly. To be specific, I was considering the third case, in which the agent has uncertainty about its preferred state of the world. It may thus refrain from taking irreversible actions that may have a small upside in one scenario (protium water) but large negative value in the other (deuterium), due to e.g. decreasing returns, or if it thinks there's a chance to get more information on what the objectives are supposed to mean.

I understand your point that this distinction may look arbitrary, but goals are not necessarily defined at the phy... (read more)

Separately and independently, I believe that by the time an AI has fully completed the transition to hard superintelligence, it will have ironed out a bunch of the wrinkles and will be oriented around a particular goal (at least behaviorally, cf. efficiency—though I would also guess that the mental architecture ultimately ends up cleanly-factored (albeit not in a way that creates a single point of failure, goalwise)).

I'd be curious to understand why you believe this happens. Humans (the only general intelligence we have so far) seem to preserve some un... (read more)

9
RobBensinger
1y
If I understand you right, you're thinking of scenarios like "the AI initially tries to create lots of watery looking stuff, but then it later realizes that watery looking stuff can be made of different substances (e.g., oxygen paired with protium vs. deuterium)". We can imagine different outcomes here, like:

  1. Some part of the AI feels like protium is important for "real water", while another part feels that deuterium is important for "real water". So the AI spends a lot of its resources going back and forth between the two goals, undoing its own work regularly.

  2. The AI thinks about its values, and realizes that (for some complicated reason related to how it does reflection and how its goals work) it's really deuterium-containing water that it likes, not protium-containing water. So it switches to making heavy water exclusively.

  3. The AI thinks about its values, and realizes that (for some complicated reason related to how it does reflection and how its goals work) it wants to put 90% of its resources into producing heavy water, and 10% into producing light water.

Whether 1 counts as "one agent that's internally conflicted" versus "multiple agents in a tug-of-war for control" might turn out to be a matter of semantics, depending on whether there turns out to be a crisp and natural interpretation of the word "agent". Whether 2 counts as "the agent self-modifying to change its goals" versus "the agent keeping the same goals but changing its probability distribution about which physical things those goals are pointing at", may also turn out to be an unimportant or arbitrary distinction. It at least doesn't seem very important from a human perspective: the first kind of agent may have a different internal design than the second kind of agent, but the behaviors are likely to look the same from the outside, since sufficiently coherent agents optimize expected utility (probability times utility) in practice, and it may be hard to say from the outside which part
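For reference, the "expected utility (probability times utility)" point above is the standard expected-utility criterion. A minimal statement, with the notation (action set, outcome set, beliefs P, utility U) introduced here purely for illustration rather than taken from the comment:

```latex
% Expected-utility maximization: the agent picks the action whose
% probability-weighted utility over outcomes is largest.
a^* = \arg\max_{a \in \mathcal{A}} \; \mathbb{E}[U \mid a]
    = \arg\max_{a \in \mathcal{A}} \sum_{o \in \mathcal{O}} P(o \mid a)\, U(o)
```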

With respect to the last question I think it is perhaps a bit unfair. I think they have clearly stated they unconditionally condemn racism, and I have a strong prior that they mean it. Why wouldn’t they, after all?

3
Guy Raveh
1y
Tegmark wrongly implied that the newspaper does not hold and advocate the pro-Nazi views that it has been demonstrated to hold.

But if we were to eliminate the EA community, an AI safety community would quickly replace it, as people are often attached to what they do. And this is even more likely if you add any moral connotation. People working at a charity, for example, are drawn to build an identity around it.

The HuggingFace RL course might be an alternative in the Deep Learning - RL discussion above: https://github.com/huggingface/deep-rl-class

1
Gabriel Mukobi
2y
Good find, added!

Yeah, perhaps I was being too harsh. However, the baseline scenario should be that current trends will go on for some time, and they predict at least cheap batteries and increasingly cheaper H2.

I mostly focused on these two because the current problem with green energy sources is more related to energy storage than production; photovoltaics are currently the cheapest source in most places.

7
Miguel Lima Medín
2y
I agree the baseline scenario is that current trends will go on. In geology, the resource availability trend (for both fossil energy and mining) follows Hubbert's curve. It doesn't follow a straight line up to infinity: after a period of going up, there follows a period of going down, once we pass the peak. The peak doesn't mean that the resource is completely depleted, but it means that the amount we can extract each year is less than the year before. To date I'm not aware of any other scientific explanation better than Hubbert's curve, and this should be our baseline. It is more difficult to predict exactly where we are on the Hubbert curve for each resource and whether the peak will happen this decade, but it is a fact that it will take place.
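For readers unfamiliar with the model: Hubbert's curve treats cumulative extraction as roughly logistic, so the yearly production rate rises to a single peak and then declines. Below is a minimal sketch with made-up, purely illustrative parameters (the function name and numbers are assumptions for this example, not a forecast from the comment):

```python
# Minimal sketch of Hubbert's curve: cumulative extraction Q(t) follows a logistic,
# so the yearly production rate (its derivative) peaks once and then declines.
import numpy as np

def hubbert_production(t, urr, t_peak, steepness):
    """Yearly production rate under a logistic-derivative (Hubbert) model.

    urr       -- ultimate recoverable resource (total ever extracted)
    t_peak    -- year of peak production
    steepness -- how fast the curve rises and falls
    """
    x = np.exp(-steepness * (t - t_peak))
    return urr * steepness * x / (1.0 + x) ** 2

years = np.arange(1950, 2101)
production = hubbert_production(years, urr=2_000.0, t_peak=2030, steepness=0.05)
print(f"Modelled peak year: {years[production.argmax()]}")  # ~2030 by construction
```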

I think I quite disagree with this post, because batteries are improving quite a lot, and if we are capable of also improving hydrogen production and usage, things should work pretty well. Finally, nuclear fusion no longer seems so far away. Of course, I agree with the author that this transition will take quite a long time, especially in developing countries, but I expect this to work out well anyway. One key argument of the author is that we are limited in the amount of different metals available, but Li is very common on Earth, even if not super cheap, so I am not totally convinced by this. Similar thoughts apply to land usage.

8
Corentin Biteau
2y
Oh, yes, batteries and hydrogen are improving a lot; I mentioned that. But the issues lie elsewhere. For batteries, it's the fact that there are not enough materials for seasonal storage, even if you doubled battery capacity - plus the energy cost of making batteries. For hydrogen, it's the explosiveness, the dependency on platinum, and the fact that it leaks easily without costly infrastructure. For fusion, it's the time required - even fast-track scenarios project that it may provide 1% of energy by 2060. And as pointed out by Jaime Sevilla, Li being common is not the main point - what matters is the energy cost of mining. Less energy going to mining means fewer metals. Another topic that matters is time: lithium mines take 7 to 15 years to go from exploration to production. Current plans by the IEA are already deemed unrealistic given the current mining pipeline. There is a summarized version of this in post 1. I also have a specific document with a detailed version of the issues at hand. You might want to check the sections on metals, storage, hydrogen, and fusion.
2
Miguel Lima Medín
2y
Materials: I recommend chapters 8 and 9 of this paper https://www.15-15-15.org/webzine/download/100-decarbonization-with-100-renewable-energy-systems-through-power-to-gas-and-direct-electrification/ [Update]: I noticed the attached document by Corentin already has information about materials. It is probably better to check there first.
1
Nebulus
2y
While I have the same intuition as you, I wonder if the author means that other kinds of metal could be the bottleneck. Also, my intuition is that minerals are not a bottleneck if we can make it cost-efficient to extract extraterrestrial minerals (i.e., from asteroids and comets). But can we?
8
Jaime Sevilla
2y
I don't buy many of the claims in the post, but I think this comment is a bit uncharitable, since it posits many technological developments that might not happen. I think I am more interested in the discussion of what is likely to happen in the absence of technological improvements, as an upper bound on the scale of the problem. We may then look into factoring in technological improvement to get a more realistic picture. I understood the point of the author as not that metals are uncommon, but that they are very energy-intensive to extract?

In the Spanish community we often have conversations in English, and I think at least 80% of the members are comfortable with both.

Maybe worth having some 'recap discussions in Spanish' or a few Spanish-only sessions for the remaining 20%. I expect there are a good number of people who are comfortable in English but much more comfortable, much more efficient, and more willing to speak out in their native language.

I am, and am interested in technical AI Safety

Point 1 is correct, but there is a difference: when you do research, you often need to live near a research group. Distillation is more open to remote and asynchronous work.

Thanks for the answer. The problem is that this is likely pointing in the wrong direction. Immigration by itself has quite large benefits for immigrants, and almost all studies of the impact of immigration find positive or no effects for locals. From "Good Economics for Hard Times" by Duflo and Banerjee, there is only one case where locals ended up worse off: during the USSR era, Hungarian workers were allowed to work but not live in East Germany, forcing them to spend their money at home. Overall, it is well known that open border situations would probably boost... (read more)

I think it is wrong to say that the Syrian refugee crisis might have cost Germany 0.5T. My source: https://www.igmchicago.org/surveys/refugees-in-germany-2/. To be fair, though, I have not found a more recent analysis, and I am far from an expert.

2
Hauke Hillebrandt
2y
Thanks for the link - I think the economists surveyed were not unanimous in saying that it's a slam-dunk win, and as I wrote, 'might' and 'big, if true' - also note that I'm citing a link from the very left-wing think tank associated with the German Green party. Also note that while the case for immigration boosting the economy in the long run is strong based on economic theory, there might still be upfront costs that could have bad effects, such as displacing traditional aid: https://www.givingwhatwecan.org/blog/using-aid-to-finance-the-refugee-crisis-a-worrying-trend  It could also be that, a la David Autor's China-shock literature, while the average economic effects of migration are positive, some low-skilled domestic workers might face increased competition, which can cause populism. For instance, immigration can predict Brexit votes. Again: big, if true, and there should be more analysis. The main lesson here is that if you're dealing with trillion-dollar numbers, it might be very important.

My intuition is that grantmakers often have access to better experts, but you could always reach out to the latter directly at conferences if you know who they are.

No need to apologize! I think your idea might be even better than mine :)

Mmm, that's not what I meant. There are good and bad ways of doing it. In 2019, someone reached out to me before EA Global to check whether it would be OK to get feedback on an application I had rejected (as part of some team). And I was happy to meet and give feedback. But I think there is no harm in asking.

Also, it's not about networking your way in; it's about learning, for example, why people did or did not like a proposal, or how to improve it. So I think there are good ways of doing this.

A small comment: if feedback is scarce because of a lack of time, this increases the usefulness of going to conferences where you can meet grantmakers and speak to them.

I also think that it would be worth exploring ways to give feedback with as little time cost as possible.

A closely related idea that seems slightly more promising to me: asking other EAs, other grantmakers and other relevant experts for feedback - at conferences or via other means - rather than the actual grantmakers who rejected your application. Obviously the feedback will usually be less relevant, but it could be a way to talk to less busy people who could still offer a valuable perspective and avoid the "I don't want to be ambushed by people who are annoyed they didn't get money, or prospective applicants who are trying to network their way into a more fa... (read more)

9
Linch
2y
A solution that I'm more excited about is one-to-many channels of feedback where people can try to generalize from the feedback that others receive.  I think this post by Nuño is a good example in this genre, as are the EAIF and LTFF payout reports. Perhaps some grantmakers can also prioritize public comms  even more than they already do (e.g. public posts on this Forum), but of course this is also very costly.

[I]f feedback is scarce because of a lack of time, this increases the usefulness of going to conferences where you can meet grantmakers and speak to them.

That sounds worse to me. Conferences are rare and hence conference-time is more valuable than non-conference time. Also, I don't want to be ambushed by people who are annoyed they didn't get money, or prospective applicants who are trying to network their way into a more favourable decision.

I don't think we have ever said this, but this is what some people (e.g. Timnit Gebru) have come to believe. That is why, as the EA community grows and becomes more widely known, it is important to get the message of what we believe right.

See also the link by Michael above.

My intuition is that there is also some potential cultural damage, not from the money the community has, but from not communicating well that we also care a lot about many standard problems such as third-world poverty. I feel that too often the cause prioritization step is taken for granted or treated as obvious, which can lead to a culture where "cool AI Safety stuff" is the only thing worth doing.

1
[comment deleted]
2y

Can you give an example of communication that you feel suggests "only AI safety matters"?

Thanks for posting! My current belief is that EA has not become purely about longtermism. In fact, it has recently been argued in the community that longtermism is not necessary to pursue the kinds of things we currently do, as work on pandemics or AI Safety can also be justified in terms of preventing global catastrophes.

That being said, I'd very much prefer the EA community's bottom line to be about doing "the most good" rather than subscribing to longtermism or any other cool idea we might come up with. These are all subject to change and debate, whether doing the... (read more)

Without thinking much about it, I'd say yes. I'm not sure buying a book will get it more coverage in the news, though.

I would not put it as strongly. My personal experience is a bit of a mixed bag: the vast majority of people I have talked to are caring and friendly, but I still occasionally have moments that feel a bit disrespectful. And really, this is the kind of thing that could push new people out of the movement.

Hey James!

I think there are degrees, like everywhere: we can focus our community-building efforts on more elite universities without rejecting or being dismissive of people from the community on the basis of potential impact.

2
james.lucassen
2y
Yes, 100% agree. I'm just personally somewhat nervous about community building strategy and the future of EA, so I want to be very careful. I tried to be neutral in my comment because I really don't know how inclusive/exclusive we should be, but I think I might have accidentally framed it in a way that reads implicitly leaning exclusive, probably because I read the original post as implicitly leaning inclusive.

I agree with the post, and the same point has been noticed previously.

However, there is also a risk here: as a community, we have to take care to avoid being elitist, and should be welcoming to everyone, even those whose personal circumstances are not ideal for changing the world.

Hey Sjlver! Thanks for your comments and for sharing your experience. That's my assessment too; I will try. I have also been considering how to create an EA community at the startup. Any pointers? Thanks

6
Sjlver
2y
Oh... and for some companies, all you need to do to start a community is get some EA-related stickers that people can put on their laptops ;-) (It's a bit tongue-in-cheek, but I'm only half joking... most companies have things like this. At Google, laptop stickers were trendy, fashionable, and in high demand. I'm sure that after being at Xanadu for a while, you'll find an idea that works well for this particular company)

At Google, most employees who came in touch with EA-related ideas did so thanks to Google's donation matching program. Essentially, Google has a system where people can report their donations, and then the company will donate the same amount to the same charity (there's an annual cap, but it's fairly high, like US$ 10k or so).
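For concreteness, the matching mechanics described above amount to something like the following minimal sketch (the numbers and function name are hypothetical; the real program's rules and cap may differ):

```python
# Sketch of dollar-for-dollar donation matching with an annual cap (hypothetical values).
def matched_amount(reported_donations, annual_cap=10_000):
    """Company match: equal to the employee's reported donations, up to an annual cap."""
    total = sum(reported_donations)
    return min(total, annual_cap)

print(matched_amount([3_000, 4_500]))   # 7500  -- fully matched
print(matched_amount([8_000, 5_000]))   # 10000 -- capped at the annual limit
```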

There is a yearly fundraising event called "giving week" to increase awareness of the donation matching. On multiple occasions during this week, we had people from the EA community come and give talks.

When considering starting an EA c... (read more)

Thanks for sharing Jasper! It's good to hear the experience of other people in a similar situation. 🙂 What do you plan to do? Also, good luck with the thesis!

8
JasperGo
2y
Thanks Pablo, good luck to you too! I'll apply to a few interesting remote positions and have some independent projects in mind. I'll see :)

Thanks a lot Max, I really appreciate it.

So viewpoint diversity would be valuable. Definitely. In particular, this is valuable given that the community also centers on cause neutrality. So I think it would be good to have people with different opinions on which cause areas are best to support.

I recall reading that top VCs are able to outperform the startup investing market, although the causal relationship may go the other way around. That being said, the very fact that superforecasters are able to outperform prediction markets should signal that there are (small groups of) people able to outperform the average, shouldn't it?

On the other hand, prediction markets are useful; I'm just wondering how much of a feedback signal there is for altruistic donations, and whether it is sufficient for some level of efficiency.

6
Brendon_Wong
2y
Yep, there's definitely return persistence with top VCs, and the last time I checked I recall there was uncertainty around whether that was due to enhanced deal flow or actual better judgement. I think that just taking the average is one decentralized approach, but certainly not representative of decentralized decision making systems and approaches as a whole. Even the Good Judgement Project can be considered a decentralized system to identify good grantmakers. Identifying superforecasters requires having everyone do predictions and then find the best forecasters among them, whereas I do not believe the route to become a funder/grantmaker is that democratized. For example, there's currently no way to measure what various people think of a grant proposal, fund that regardless of what occurs (there can be rules about not funding downside risk stuff, of course), and then look back and see who was actually accurate. There haven't actually been real prediction markets implemented at a large scale (Kalshi aside, which is very new), so it's not clear whether that's true. Denise quotes Tetlock mentioning that objection here. I also think that determining what to fund requires certain values and preferences, not necessarily assessing what's successful. So viewpoint diversity would be valuable. For example, before longtermism became mainstream in EA, it would have been better to allocate some fraction of funding towards that viewpoint, and likewise with other viewpoints that exist today. A test of who makes grants to successful individuals doesn't protect against funding the wrong aims altogether, or certain theories of change that turn out to not be that impactful. Centralized funding isn't representative of the diversity of community views and theories of change by default (I don't see funding orgs allocating some fraction of funding towards novel theories of change as a policy).
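As one illustration of the track-record measurement described above (everyone rates proposals, the rated proposals get funded regardless, outcomes are observed later, and accuracy is compared), here is a minimal sketch with hypothetical data, using the Brier score as the accuracy measure:

```python
# Hypothetical sketch: rank would-be grant evaluators by how well their predicted
# success probabilities matched observed outcomes (lower Brier score = more accurate).
forecasts = {
    # evaluator -> predicted success probability for each funded proposal
    "alice": [0.9, 0.2, 0.7],
    "bob":   [0.5, 0.5, 0.5],
}
outcomes = [1, 0, 1]  # observed results of the same proposals (1 = success, 0 = failure)

def brier(preds, outs):
    """Mean squared error between predicted probabilities and binary outcomes."""
    return sum((p - o) ** 2 for p, o in zip(preds, outs)) / len(outs)

for name, preds in sorted(forecasts.items(), key=lambda kv: brier(kv[1], outcomes)):
    print(f"{name}: Brier score = {brier(preds, outcomes):.3f}")
```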

One advantage of centralized grantmaking, though, is that it can convey more information, due to the experience of the grantmakers. In particular, centralized decision-making allows for better comparisons between proposals. This can lead to only the most effective projects being carried out, as would be the case with startups if one restricted oneself to only top venture capitalists.

5
Brendon_Wong
2y
Do you have any evidence for this? There's definitely evidence to suggest that decentralized decision making can outperform centralized decision making; for example, prediction markets and crowdsourcing. I think it's dangerous to automatically assume that all centralized thinking and institutions are better than decentralized thinking and institutions.

EA aims to be cause neutral, but there is actually quite a lot of consensus in the EA movement about what causes are particularly effective right now.

Actually, notice that the consensus might be based more on internal culture, because founder effects are still quite strong. That being said, I think the community puts effort into remaining cause-neutral, and that's good.

5
Stefan_Schubert
2y
I don't think a consensus on what cause is most effective is incompatible with cause-neutrality as it's usually conceived (which I called cause-impartiality here).

Indeed! My plans were to move back to Spain after the postdoc, because there is already one professor interested in AI Safety and I could build a small hub here.

Thanks acylhalide! My impression was that I should work in person more at the beginning; once I know the tools and the intuitions, this can be done remotely. In fact, I am pretty much doing my Ph.D. remotely at this point. But since it's a postdoc, I think the speed of learning matters.

In any case, let me say that I appreciate you poking into assumptions; it is good and may help me find acceptable solutions :)

Hey Lukas!

If the concrete problems are too watered down compared to the real thing, you also won't solve AI alignment by misleading people into thinking it's easier.

Note that even MIRI sometimes does this:

  1. We could not yet create a beneficial AI system even via brute force. Imagine you have a Jupiter-sized computer and a very simple goal: Make the universe contain as much diamond as possible. The computer has access to the internet and a number of robotic factories and laboratories, and by “diamond” we mean carbon atoms covalently bound to four other
... (read more)
7
Lukas_Gloor
2y
It sounds like our views are close! I agree that this would be immensely valuable if it works. Therefore, I think it's important to try it. I suspect it likely won't succeed because it's hard to usefully simplify problems in a pre-paradigmatic field. I feel like if you can do that, maybe you've already solved the hardest part of the problem. (I think most of my intuitions about the difficulty of usefully simplifying AI alignment relate to it being a pre-paradigmatic field. However, maybe the necessity of "security mindset" for alignment also plays into it.) In my view, progress in pre-paradigmatic fields often comes from a single individual or a tight-knit group with high-bandwidth internal communication. It doesn't come from lots of people working on a list of simplified problems. (But maybe the picture I'm painting is too black-and-white. I agree that there's some use to getting inputs from a broader set of people, and occasionally people who aren't usually very creative can have a great insight, etc.) That's true. What I said sounded like a blanket dismissal of original thinking in academia, but that's not how I meant it. Basically, my picture of the situation is as follows: Few people are capable of making major breakthroughs in pre-paradigmatic fields because that requires a rare kind of creativity and originality (and probably also being a genius). There are people like that in academia, but they have their quirks and they'd mostly already be working on AI alignment if they had the relevant background. The sort of people I'm thinking about are drawn to problems like AI risk or AI alignment. They likely wouldn't need things to be simplified. If they look at a simplified problem, their mind immediately jumps to all the implications of the general principle and they think through the more advanced version of the problem because that's way more interesting and way more relevant. In any case, there are a bunch of people like that in long-termist EA

Yes, I do indeed :)

You can frame it if you want as: founders should aim to expand the range of academic opportunities, and engage more with academics.

Hi Steven,

Possible claim 2: "We should stop giving independent researchers and nonprofits money to do AGI-x-risk-mitigating research, because academia is better." You didn't exactly say this, but sorta imply it. I disagree.

I don't agree with possible claim 2. I just say that we should promote academic careers more than independent research, not that we should stop giving them money. I don't think money is the issue.

Thanks

2
Steven Byrnes
2y
OK, thanks for clarifying. So my proposal would be: if a person wants to do / found / fund an AGI-x-risk-mitigating research project, they should consider their background, their situation, the specific nature of the research project, etc., and decide on a case-by-case basis whether the best home for that research project is academia (e.g. CHAI) versus industry (e.g. DeepMind, Anthropic) versus nonprofits (e.g. MIRI) versus independent research. And a priori, it could be any of those. Do you agree with that?

Sure, acylhalide! Thanks for proposing ideas. I've done a couple of AI Safety camps and one summer internship. I think the issue is that to make progress I need to become an expert in ML as well, not just understand it as I do now. That was my main motivation for this. That's perhaps the reason why I think it is beneficial to do some kind of in-person postdoc, even if I could work part of the time from home. But it's also that long-distance relationships are costly, so that's the issue.

Hey Simon, thanks for answering!

We won't solve AI safety by just throwing a bunch of (ML) researchers on it.

Perhaps we don't need to buy ML researchers (although I think we should try at least), but I think it is more likely we won't solve AI Safety if we don't get more concrete problems in the first place.

AGI will (likely) be quite different from current ML systems.

I'm afraid I disagree with this. For example, if this were true, interpretability from Chris Olah or the Anthropic team would be automatically doomed; Value Learning from CHAI would al... (read more)

2
Simon Skade
2y
Wow, the "quite" wasn't meant that strongly, though I agree that I should have expressed myself a bit more clearly/differently. And the work of Chris Olah, etc. isn't useless anyway, but yeah, AGI won't run on transformers and a lot of what we found won't be that useful, but we still get experience in how to figure out the principles, and some principles will likely transfer. And AGI forecasting is hard, but certainly not useless/impossible, though you do have high uncertainties. Breakthroughs happen when one understands the problem deeply. I think I agree with the "not when people float around vague ideas" part, though I'm not sure what you mean by that. If you mean "academic philosophy has a problem", then I agree. If you mean "there is no way Einstein could derive special or general relativity mostly from thought experiments", then I disagree, though you do indeed need to be skilled to use thought experiments. I don't see any bad kind of "floating around with vague ideas" in the AI safety community, but I'm happy to hear concrete examples from you where you think academia's methodology is better! (And I do, btw, think that we need that Einstein-like reasoning, which is hard, but otherwise we basically have no chance of solving the problem in time.) I still don't see why academia should be better at finding solutions. It can find solutions to easy problems. That's why so many people in academia are goodharting all the time. Finding easy subproblems whose solutions allow us to solve AI safety is (very likely) much harder than solving those subproblems. Yes, in history there were some Einsteins in academia who could even solve hard problems, but those are very rare, and getting those brilliant not-goodharting people to work on AI safety is uncontroversially good, I would say. But there might be better/easier/faster options than building the academic field of AI safety to find those people and make them work on AI safety. Still, I'm not saying it's a bad idea to promot
3
Lukas_Gloor
2y
If the concrete problems are too watered down compared to the real thing, you also won't solve AI alignment by misleading people into thinking it's easier. But we probably agree that insofar as some original-thinking genius reasoners can produce useful shovel-ready research questions for not-so-original-thinking academics (who may or may not be geniuses at other skills) to unbottleneck all the talent there, they should do it. The question seems to be "is it possible?" I think the best judges are the people who are already doing work that the alignment community deems valuable. If all of EA is currently thinking about AI alignment in a way that's so confused that the experts from within can't even recognize talent, then we're in trouble anyway. If EAs who have specialized in this for years are so vastly confused about it, academia will be even more confused. Independently of the above argument that we're in trouble if we can't even recognize talent, I also feel pretty convinced that we can on first-order grounds. It seems pretty obvious to me that work tests or interviews conducted by community experts do an okay job at recognizing talent. They probably don't do a perfect job, but it's still good enough. I think the biggest problem is that few people in EA have the expertise to do it well (and those people tend to be very busy), so grantmakers or career advice teams with talent scouts (such as 80,000 Hours) are bottlenecked by expert time that would go into evaluations and assessments.

I think it is easier to convince someone to work on topic X by arguing it would be very positive than by warning them that everyone could literally die if they don't. If someone comes to me with that kind of argument I will get defensive really quickly, and they'll have to spend a lot of effort to convince me there is a slight chance that they're right. And even if I have the time to listen to them all the way through and give them the benefit of the doubt, I will come out with awkward feelings, not precisely the ones that make me want to put effort into their topic.

Perh

... (read more)
3
Chris Leong
2y
Well, if you have a low risk preference it is possible to incrementally push things out.

My question is more about what the capabilities of a superintelligence would be once equipped with a quantum computer

I think it would be an AGI very capable of chemistry :-)

one might even wonder what learnable quantum circuits / neural networks would entail.

Right now they just mean lots of problems :P More concretely, there are some results that indicate that quantum NNs (or variational circuits, as they are called) are not likely to be more efficient for learning classical data than classical NNs are. Although I agree this is still a bit up in the air... (read more)

From your description, it seems like you might be more likely to end up in the tail of ability for quantum computing, if one of the best quantum computing startups is trying to hire you.

I think this is right.

You don't say that some of the top AI safety orgs are trying to hire you.

I was thinking of trying an academic career. So yeah, not really anyone seeking me out; it was more me trying to go to Chicago to learn from Victor Veitch and change careers.

Then you have to consider how useful quantum algorithms are to existential risk.

I think it is qu... (read more)

Ah, dang. And how difficult would it be to reject the startup offer, independently and remotely work on concretizing AI safety problems full-time for a couple of months to test your fit, and then, if you don't feel like this is clearly the best use of your time, (I imagine) very easily get another job offer in the quantum computing field?

The thing that worries me is working on some specific technical problem, not being able to make sufficient progress, and feeling stuck. But I think this will happen after more than 2 months, perhaps after a ... (read more)
