I think that “AI” (there is not even a drop of actual AI in the current technology) is overhyped and can barely handle any of the jobs people wanted it to do. It’s maybe good for sketching, be that code, text or images, but not for a final product, and won’t be any time soon. As with all “AI” products in the past, it’s oversold by ML bros and grifters.
Half an hour spent trying to force one of the models to spit out a manga-style picture of a character... with the correct number of arms and legs. No success so far. It's frankly disgusting most of the time, probably due to an uncanny-valley-like effect: it's kinda close, but not quite there. Not to mention, I'm sure I did not put "amputee" in the prompt, but I got a few pictures that should honestly be tagged "guro"... and I wasn't even trying for NSFW!
Same experience with code: GitHub Copilot is 90% right 100% of the time. Its suggestions for comments and docstrings are so bad I would never accept them in a code review, basically your old "a++ # adds 1 to a"; the generated code is always wrong, and the bigger the chunk of code generated, the more wrong it is. It's kind of OK as a replacement for Ctrl+C/Ctrl+V and for generating boilerplate... which shouldn't be there in good code anyway.
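To make it concrete, here's a made-up example of the kind of suggestion I mean (illustrative, not an actual Copilot transcript): the docstring and comments just restate the code and add zero information.

    # Hypothetical example of a low-value suggested docstring/comment
    # (illustrative only, not real Copilot output).
    def add_user(users: list, user: dict) -> list:
        """Adds a user to the list of users and returns the list of users."""
        users.append(user)  # append the user to the list
        return users        # return the list

That kind of comment is exactly what gets flagged and deleted in any decent code review.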
I suspect that for every impressive output we see there are tens of thousands of trash outputs someone had to wade through. Further, the less popular your prompt and the smaller the representation of what you want in the model, the harder it is to get something even remotely resembling what you want. Though, I'm not a "prompt engineer" (WTF is that...), so maybe I'm just using it wrong...
You guys need to understand that the skill ceiling for AI art is very high. It's not some iPhone-selfie-tier technology.
Go to Civitai.com and Hugging Face to start comprehending how many AI models there are, and how powerful the latest ones are.
It takes at least two weeks of intense usage to get the hang of the basics/parameters and produce good output. AI art in the end will probably be dominated by professionals because of the increasing skill requirements.
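For example, here's roughly what tweaking those parameters looks like with the Hugging Face diffusers library (a minimal sketch, assuming Stable Diffusion v1.5; the model name, prompt and values are illustrative, not recommendations):

    # Minimal text-to-image sketch with diffusers; values are illustrative.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe(
        prompt="manga-style character, full body, clean lineart",
        negative_prompt="extra limbs, missing limbs, deformed hands",
        guidance_scale=7.5,      # how strongly to follow the prompt (CFG)
        num_inference_steps=30,  # denoising steps; more = slower, often cleaner
    ).images[0]
    image.save("character.png")

And that's before you get into samplers, LoRAs, ControlNet and inpainting, which is where the real skill gap shows up.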
But that's exactly what I'm saying? Do you believe such professionals hit the nail on the head on the first try at a prompt? If it's complex enough, even mastery won't shield you from the need to experiment, and to discard failed experiments. Looking at how it is for programmers, rewriting a single line of code multiple times is quite normal before arriving at something that works. And programming languages are artificial and much simpler than natural languages, so I suspect crafting those prompts will require even more experimentation.
Pray tell, what are you going to do the moment it is not overhyped?
Sitting around on our asses going "We don't have AGI/ASI yet, nothing to worry about" is kinda like saying 'nuclear bombs are nothing to worry about' in 1940 while watching your neighbor amass an ever larger pile of uranium. To think we're not going to accomplish this eventually is foolishness, and once we have accomplished it, it's going to be very difficult to set up a legal framework that reins in the corporate interests around it.
What are you even going to do? We have no agency over the march of society and technology. Business interests rule, and we are fed information that aligns our interests with theirs as much as possible. I mean, climate change is a much more real evil than Star Trek-ian fantasies about AI, and yet no one here is doing anything, or marching in the street, or blowing up oil refineries. There should be ten threads a day on the environment here if people gave a crap like they should. That should give you a preview of the level of political engagement of the current residents of the earth.
I’m gonna be doing whatever I’m gonna be doing. The job market changes all the time, and people adapt. I feel pretty secure (at least for some time) in my field, which is a lot of R&D in wireless networks/radio and embedded systems.
My point is that current “AI” is just very well-trained models: it can’t really do intelligent work with its knowledge, can’t innovate, can’t infer, and doesn’t have an actual thought process.