> ... I was deeply confused, until I heard a dear friend and colleague in academic AI, one who’s long been skeptical of AI-doom scenarios, explain why he signed the open letter. He said: look, we all started writing research papers about the safety issues with ChatGPT; then our work became obsolete when OpenAI released GPT-4 just a few months later. So now we’re writing papers about GPT-4. Will we again have to throw our work away when OpenAI releases GPT-5? I realized that, while six months might not suffice to save human civilization, it’s just enough for the more immediate concern of getting papers into academic AI conferences.
In other words, the people who wrote and are signing open letters to slow down AI scaling appear to be more concerned with their inability to benefit from and control the dialog around AI scaling than with any societal risks posed by these advances in the near term. Meanwhile, to the folks at organizations like Microsoft/OpenAI, Alphabet, Facebook, etc., the scaling of AI looks like a shiny rainbow with a big pot of gold -- money, fame, glory, etc. -- on the other side. Why would they want to slow down now?
I don’t think Scott is serious about that (or if he is, he’s being uncharitable). I think what the quoted speaker is saying is that nobody is able to keep up with what these models are doing internally. Even OpenAI (and Meta et al.) only seem to be making “so much progress” by pressing the accelerator to the floor and letting the steering take care of itself. And one of the major lessons of technological progress is that deep understanding (at least when humans are necessary for that, gulp) is much slower than engineering, largely because the latter can be parallelized and scaled.
>In other words, the people who wrote and are signing open letters to slow down AI scaling appear to be more concerned with their inability to benefit from and control the dialog around AI scaling than any societal risks posed by these advances in the near term.
That's just a joke the author makes. He is not seriously suggesting this is the case.
Just judging from the volume of papers, it seems to me that there are more academics writing papers on "AI safety" and "AI ethics" than there are publishing research papers on actual AI. It's become one of the hottest topics among legal academics, philosophers, ethicists, and a variety of connected disciplines, in addition to its niche among some computer scientists, and the amount of work required to produce a paper in these fields is an order of magnitude less than what it takes to publish technical research.