Hacker News

A great many would likely join you, making this entire fiasco a time-wasting distraction at best and a grave risk at worst. The technologies will continue to be developed, moratorium or not. A moratorium only enables the hidden to get there first.

The risks need to be discussed and understood, along with the benefits, and publicly. That's the only sensible way forward. Denying that the technology is already here and pretending it can be "paused" doesn't help alleviate anyone's concerns.

It's absurd to think any of it can be put back in the box it came out of. Now that it's here, how do we best mitigate whatever downsides it has? Simple: continue to develop it, as that will be the only viable source of effective countermeasures.



> A moratorium only enables the hidden to get there first.

That's simply not true. Nobody would have gotten where GPT is today without transformers. That's not a trivial bit of insight anybody could have had. Stopping research funding and publications will prevent rapid evolution.


I mean given the current state. The technology is already sufficiently advanced, and in so many people's hands, that "stopping" it now is just an exercise in pushing it underground. Only continued open development can be a useful safeguard.

Rapid evolution is well underway. Lone individuals are able to push the envelope of what's possible even with just a new basic interop, maybe in an afternoon. It's much too late to be discussing things like moratoriums.

Maybe such things could prevent emergence when the basics don't exist yet, but not when we're all already walking around holding a capable factory in our hands and can create a new product line in a few lines of Python.
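The "few lines of Python" above can be made concrete. Here is a minimal sketch, assuming the `openai` package and an `OPENAI_API_KEY` environment variable; the helper names, prompt, and model name are illustrative, not anything specified in the thread:

```python
import os


def build_prompt(product_idea: str) -> list:
    """Turn a one-line product idea into a chat-completion message list."""
    return [
        {"role": "system", "content": "You are a product-naming assistant."},
        {"role": "user", "content": f"Suggest a tagline for: {product_idea}"},
    ]


def tagline(product_idea: str) -> str:
    """Call a hosted LLM (requires network access and an API key)."""
    from openai import OpenAI  # pip install openai
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=build_prompt(product_idea),
    )
    return resp.choices[0].message.content
```

That really is the whole "factory": the model does the heavy lifting, and the application code is a thin wrapper around one API call.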


It's almost impossible to tell.

Yes, there's plenty of low-hanging fruit around; heck, I could probably literally ask ChatGPT to implement a few of my ideas.

OTOH, I've known since secondary school of two distinct ways to make a chemical weapon using only things commonly found in a normal kitchen, and absolutely none of the post-9/11 aftershock attacks that made the news over the following decade did anything remotely so simple. That example makes me confident that even bad rules passed in haste (as many of them were and remain) can actually help.

(And that's despite my GCSE Chemistry being only grade B).


Right, it's amazing to me the extent to which people are throwing up their hands and saying "There's absolutely NOTHING that can be done! We must accept the AGIs however they manifest"...

Clearly, it's a very hard problem with massive uncertainties. But we can take actions that will significantly decrease the risk of utter catastrophe.

I don't even think world-ending catastrophe is that likely. But it seems a real enough possibility that we should take it seriously.


I suspect that the people who are saying "nothing can be done" are people who want nothing to be done.


You're not financially incentivized, in most instances, to make chemical bombs with under-sink materials.


Of course they would. It's just ridiculous.

If people are genuinely concerned about lack of access to the OpenAI models then work at training open ones!

OpenAI has maybe a six-month lead, and that's nothing. Plus, it's much easier being the follower when you know what's possible.

(To be clear, I know of at least a few projects already working on this. I just want to make it clear that that is the intellectually honest approach.)


I think their lead might be a bit bigger than that. ChatGPT, running GPT-3.5, was released four months ago and I still haven't seen another LLM come close to it.


A slightly more paranoid me asks whether there’s some magic they’re using that no one is completely aware of. Watching Google fumble around makes me more paranoid that that’s the case.


Have you tried Anthropic's "Claude"? Between it and ChatGPT, I'm hard pressed to say which is better, though I'm tempted to give the edge to Claude.


Alpaca on the 13B Llama is enough to convince me that the same fine-tuning on the 65B Llama would match GPT-3.5 for most tasks.

Perplexity AI's app is definitely better than GPT-3.5 for many things, although it isn't clear how they're doing everything there.


There is already public discussion, even here, about the benefits and risks, and I hope also some understanding. Then again, the general public doesn't have a good understanding of most issues anyway, so what else would you suggest be done for this particular matter? Wait until the discussion is over and everything is understood? Can such a moment actually exist? I think now is just as good as last year or next.


I hope that we'll eventually reach a point where a good public discussion about the risks/benefits can be had. Right now, though, it's simply impossible. The fog of hype actively prevents it.



