This sounds reasonable to me actually. The rest of the post was about making a specific case for funding my entrepreneurial work, rather than expounding on the widespread bottlenecks entrepreneurs seem to face in getting funded to do good work and develop it further. I started writing a 10-page draft to try to analyse, more detachedly, the work done by entrepreneurs and funders and the interactions between them.
This does resonate with me. There are quite a few projects I worked on making happen behind the scenes that I wouldn't want to stamp my name on. I've talked with others who mentioned similar bottlenecks (e.g. GoodGrowth people in 2019). Thank you for your good wishes, JJ!
Thank you for the clarification! This makes a lot of sense.
The Forum's moderators have had some discussion in the past on whether job listings should ever appear on Frontpage; it was a close call, but we think a few such posts once in a while is okay. However, I expect that there are many more potential job applicants than potential grantmakers on the Forum, so posts like this are less likely to be relevant to a random reader than a job listing.
Could you disambiguate some terms here? I see I misread this paragraph before, and I'm now more confused about what you're specifically saying. E.g.:
- Were you trying to say that there are 'many more potential grantees than grantmakers'? (Clearly true, though this post was aimed more at smaller funders looking for an argued case.)
- Or were you implying I was posting as a job applicant? (That doesn't seem right, as explained two comments above.)
Hence, posts like this should be "Personal Blog" unless they involve discussion of other topics as well.
Most of the introductory paragraphs of this post were pointing to more general gaps in entrepreneurial support (i.e. other topics).

To be clear, I think the decision you made may have been reasonable. However, this post doesn't match the criteria you stated for setting posts to Personal Blog. I think for moderation to be credible here, the criteria and underlying reasons must be clear to readers.
Thank you for sharing your reasoning. I can see how a post like mine could trigger a series of other people basically posting open requests for jobs. From a purely pragmatic standpoint, I get where the Forum's moderators are coming from – drawing the line before it becomes a slippery slope.

Note that this post does not seem to be a job listing (edit: I misread that – I'm confused about what you actually mean by posts of this type), unless you really stretch the meaning of that category.
I would appreciate it if Forum moderators worked out specifically how to deal with edge cases like this one. It would set a bad precedent if your decision convinces readers that for future write-ups they should come up with a snazzy new project name and sprinkle in opaque orgspeak. Note: Rupert is a friend of mine, but I wasn't aware that he had read this post before he posted his comment.
Interesting! Let me watch it
Looking for more projects like these
AI Safety Camp is seeking funding to professionalise management.
Feel free to email me at remmelt[at]effectiefaltruisme.nl. Happy to share an overview of past participant outcomes + sanity checks, and a new strategy draft.
First off, I really appreciate the straight-shooting conclusion of 'QC is unlikely to be helpful to address current bottlenecks in AI alignment', even though you both spent many hours looking into it.
Second, I'm curious to hear any thoughts on the amateur speculation I threw at Pablo in a chat at the last AI Safety Camp:
Would quantum computing afford the mechanisms for improved prediction of the actions that correlated agents would decide on?
As a toy model, I'm imagining hundreds of almost-homogeneous reinforcement learning agents within a narrow distribution of slightly divergent maps of the state space, probability weightings/policies, and environmental inputs. Would current quantum computing techniques, assuming the hardware to run them on is available, be able to more quickly/precisely derive what percentages of those agents at, say, State1 would take Action1, Action2, or Action3?
I have a broad, vague sense that if that set-up works out, you could leverage it to create a 'regulator agent' for monitoring some 'multi-agent system' composed of quasi-homogeneous autonomous 'selfish agents' (e.g. each negotiating on behalf of its respective human interest group) that has a meaningful influence on our physical environment. This regulator would interface directly with a few of the selfish agents. If agents in that subset are about to select Action1, it would predict what percentage of the other, slightly divergent agents would also decide on Action1. If the regulator forecasts that an excessive number of Action1s will be taken – leading to reduced rewards to, or robustness of, the collective (e.g. a Tragedy of the Commons case of over-utilisation of local resources) – it would override that decision by commanding a compensating number of the agents to instead select the collectively-conservative Action2.
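Setting the quantum-speedup question aside, the regulator idea above can at least be sketched classically. Here is a minimal toy simulation – my own illustrative construction, not anything from the post or the QC analysis. The agent count, the action-value table, the noise scale, and the 50% cap are all made-up assumptions; the point is just the shape of the mechanism: near-homogeneous agents form intentions, the regulator estimates the fraction intending Action1, and overrides the excess toward Action2.

```python
import random

random.seed(0)

NUM_AGENTS = 300
ACTIONS = ["Action1", "Action2", "Action3"]
CAP = 0.5  # hypothetical cap: at most 50% of agents may take Action1

def make_agent():
    # Near-homogeneous preferences: a shared base value table plus small
    # Gaussian noise, standing in for "slightly divergent maps/policies".
    base = {"Action1": 1.0, "Action2": 0.8, "Action3": 0.5}
    return {a: v + random.gauss(0, 0.15) for a, v in base.items()}

def intended_action(agent):
    # Greedy choice over the agent's (noisy) action values.
    return max(agent, key=agent.get)

agents = [make_agent() for _ in range(NUM_AGENTS)]
intents = [intended_action(a) for a in agents]

# Regulator step 1: estimate the fraction of agents intending Action1.
frac_a1 = intents.count("Action1") / NUM_AGENTS

# Regulator step 2: if the cap is exceeded, override the excess agents
# to the collectively-conservative Action2.
final = list(intents)
if frac_a1 > CAP:
    excess = round((frac_a1 - CAP) * NUM_AGENTS)
    overridden = 0
    for i, act in enumerate(final):
        if act == "Action1" and overridden < excess:
            final[i] = "Action2"
            overridden += 1

print(f"intended Action1: {frac_a1:.2f}, "
      f"after regulation: {final.count('Action1') / NUM_AGENTS:.2f}")
```

With these numbers, most agents intend Action1 (its base value dominates the noise), so the override fires and the final Action1 share lands at the cap. The open question from my speculation is whether a quantum approach could estimate `frac_a1` faster or more precisely than just counting, when the agents' policies are only approximately known.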
That's a lot of jargon, half of which I feel I have little clue about... But curious to read any arguments you have on how this would (not) work.