
> On the other hand, I’m deeply confused by the people who signed the open letter, even though they continue to downplay or even ridicule GPT’s abilities, as well as the “sensationalist” predictions of an AI apocalypse.

Says the quantum computing professor turned so-called 'AI safety employee' at O̶p̶e̶n̶AI.com, who would rather watch an unregulated, hallucination-laden language model run off the rails and be sold as the new AI snake oil than actually admit to the huge risks of GPT-4's black-box nature, poor explainability, and lack of transparent reasoning that are spelled out in the letter.

Once again, he hasn't disclosed that he is working for O̶p̶e̶n̶AI.com. I guess he has a hefty pair of golden handcuffs to defend with yet another total straw man of an argument.



> Once again, he hasn't disclosed that he is working for O̶p̶e̶n̶AI.com.

From the article:

> ... and while I’ve been spending the year at OpenAI to work on theoretical foundations of AI safety, I’m going to answer strictly for myself.

(Not to say that OpenAI's name isn't dumb, or that there won't be issues from people directly plugging LLMs into important decisions.)


No conflict, no interest?


I'm not saying a conflict of interest can't exist, I'm just saying it's false that he didn't disclose his affiliation with OpenAI.


> Readers, as they do, asked me to respond. Alright, alright. While the open letter is presumably targeted at OpenAI more than any other entity, and while I’ve been spending the year at OpenAI to work on theoretical foundations of AI safety, I’m going to answer strictly for myself.



