> On the other hand, I’m deeply confused by the people who signed the open letter, even though they continue to downplay or even ridicule GPT’s abilities, as well as the “sensationalist” predictions of an AI apocalypse.
Says the quantum computing professor turned so-called 'AI safety employee' at O̶p̶e̶n̶AI.com, who would rather watch an unregulated, hallucination-laden language model run off the rails and be sold as the new AI snake oil than actually admit the huge risks of GPT-4's black-box nature, poor explainability, and opaque reasoning that are laid out in the letter.
Once again, he hasn't disclosed that he is working for O̶p̶e̶n̶AI.com. I guess he has a large set of golden handcuffs to defend with yet another total straw man of an argument.
> Readers, as they do, asked me to respond. Alright, alright. While the open letter is presumably targeted at OpenAI more than any other entity, and while I’ve been spending the year at OpenAI to work on theoretical foundations of AI safety, I’m going to answer strictly for myself.