I always found it funny OpenAI staff tried to delay the release of GPT to the world because they feared the consequences of giving the public such a power. Hearing stuff like this makes it even funnier:
> In the pit, [Sutskever] had placed a wooden effigy that he’d commissioned from a local artist, and began a dramatic performance. This effigy, he explained, represented a good, aligned AGI that OpenAI had built, only to discover it was actually lying and deceitful. OpenAI’s duty, he said, was to destroy it. … Sutskever doused the effigy in lighter fluid and lit it on fire.
Sutskever was one of the people behind the coup against Sam Altman over AI safety concerns. He also said this in 2022:
> "It may be that today's large neural networks are slightly conscious." [1]
A good question is whether these AI safety proponents are a bit loony, or whether they actually believe this stuff. Maybe it's both.
That's the sort of convenient framing that lets you get away with hand-wavy statements the public eats up, like calling LLM development 'superintelligence'.
It makes for a good conversation starter on Twitter or a pitch deck slide, but there is real, measurable technology they are producing, and it's pretty clear what it is and what it isn't.
In 2021 they were already discussing these safety ideas in a grandiose way, when all they had were basic Lego building blocks (GPT-1). This isn't just a thought experiment to them.
As they well should have. Because they have more foresight than a pile of rocks.
They knew they had the beginnings of an incredibly capable technology at their hands, and they knew that intelligence is an extremely dangerous thing.
And so far? The capabilities of today's systems are already impressive, and keep improving. If you're thinking "what these systems are doing today isn't that bad", that's the wrong frame. Concern yourself with the capabilities of a bleeding-edge AI in the year 2035.
[1] https://futurism.com/the-byte/openai-already-sentient