Yeah, I'm not an absolute GenAI hater. I've used it quite a bit myself, and I think there are ways to be creative with it. However, 95% of what we see online, especially in the ads space, is bottom-of-the-barrel quality. Most of the time it's obviously basic AI-generated imagery/video, and for me that's an instant "I'm not going to bother with that product" marker.
One of the worst ones was an allegedly "illustrated history" book. All the ads were AI-generated history-book pages with tons of historical inconsistencies. I looked up the real book and it actually looked decent: hand drawn, well formatted, etc. Why not use pictures of the actual book instead of whatever mess I was seeing?
I might be wrong, but to me most of the art looks AI generated, and the few pages they show just don't make any historical sense. Yet they sell it as "hand drawn". From the animations it seems like some stuff was AI generated and then redrawn by hand? But the drawings themselves are plain weird: the nonsensical castle, the archers and a scoped crossbow on a page about medieval crossbows, the silly submarine.
Yeah, I like NeuralViz too. Honestly the writing is what makes it funny in the first place; the AI imagery just adds that extra weirdness on top. My favorites are the street interviews and the gluron that reviews the Zillow page, etc.
Yeah for me the three main issues are:
- overly defensive programming. In Python that means try/except everywhere without catching specific exceptions, hasattr checks, and, when replacing an approach with a new one, adding a whole "backward compatibility" layer in case we need to keep the old approach. That leads to obfuscated errors, silent failures, and bad values triggering old code paths
- plain editing things it is not supposed to. That is, you ask "change A into B" and it does "ok, I did B but I also removed C and D because they had nothing to do with A" or "I also changed C into E, which doesn't cover all the edge cases, but I liked it better"
- keeps re-implementing logic instead of reusing it
Oh, the defensive programming! That thing must have been trained on job-interview code, or some enterprise stuff. Heaps of "improvements" and "corrections" that retry, stub, and simply avoid doing things correctly for no reason. (Fix the deserialization bug the thing just caused? No, why! Let's instead assume the API and the docs are wrong and things are failing silently, so let's retry all API calls N times, then insert some insane "default value in case the API is unreachable", then run it, corrupt the local DB by writing that default everywhere, run some brain-damaged test that checks that all values are present (they are, clonk just nuked them), claim extraordinary success, and commit it with a message full of emoji medals and rockets.)
And the "oh, I understand, C is completely incorrect" followed by completely sabotaging and invalidating everything.
Or assembling some nuclear Python script like MacGyver and running it, nuking even the repo itself if possible.
Best AAA comedy text adventure. Poor people who are forced to "work" like that. But the cleanup work will be glorious. If the companies survive that long.
It seems like it is fiction, from what I could find. I doubted it at times, but given how old it is, some of the tech wasn't quite there yet.
It's fiction; there are breadcrumbs at the top that list it in the "Fiction" category. qntm is good at plausible sci-fi, e.g. https://qntm.org/mmacevedo
Obviously much simpler Neural Nets, but we did have some models in my domain whose role was to speed up design evaluation.
E.g. you want to find a really good design. Designs are fairly easy to generate but expensive to evaluate and score: we can quickly generate millions of designs, but evaluating one takes 100ms-1s, with simulations that are not easy to parallelize on a GPU. We ended up training models that try to predict that score. They don't predict perfectly, but you can be 99% sure the actual score of a design is within a certain distance of the predicted score.
So if you normally want the 10 best designs out of your 1 million, you can first have the model pick the predicted best 1000 and be reasonably certain your true top 10 is a subset of those 1000. Then you only need to run the expensive simulation on those 1000.
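The two-stage filtering idea can be sketched like this. Everything here is a stand-in: `true_score` plays the slow simulator, `predicted_score` plays the surrogate model with a known error bound `ERR`, and the numbers are scaled down (100k designs, shortlist of 1000) so it runs instantly:

```python
import random

random.seed(0)

# Assumed error bound: the surrogate is always within ERR of the true score.
ERR = 5.0

def true_score(design):
    # Stand-in for the expensive simulation (imagine 100ms-1s per call).
    return design * 0.01 + (design % 97)

def predicted_score(design):
    # Stand-in for the fast learned model: true score plus bounded noise.
    return true_score(design) + random.uniform(-ERR, ERR)

designs = list(range(100_000))

# Stage 1: rank everything with the cheap surrogate, keep a generous shortlist.
shortlist = sorted(designs, key=predicted_score, reverse=True)[:1000]

# Stage 2: run the "expensive" simulation only on the shortlist.
top10 = sorted(shortlist, key=true_score, reverse=True)[:10]

# Sanity check: the shortlist really contains the global top 10,
# because the true top-10 scores exceed the shortlist cutoff by far
# more than 2 * ERR.
true_top10 = sorted(designs, key=true_score, reverse=True)[:10]
assert set(top10) == set(true_top10)
```

The key design point is the margin: the shortlist size must be generous enough that no design within 2×ERR of the cutoff can be wrongly excluded, which is exactly the "reasonably certain" guarantee described above.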
Yes, Levenshtein in that case gives too big an exploration space. A keyboard-aware edit distance would probably work better: deletes and swaps still cost 1, but replacements and additions should be within, say, one key at most.
My guess is also that not all typos are equal. There should be a stricter edit version for 1-keystroke-away edits (that is: delete, swap, or add/replace a key at most one key away) instead of pure Levenshtein. "Fqcebook" is a more likely typo than "Fjcebook", but they are both edit-1.
If I understand the paper correctly, what qualifies as edit distance 1 is pure Levenshtein distance-1, right?
Just curious, because while the edit-1 space can be fairly big, I'd assume the edits have very different probabilities, so the squatted domains probably skew toward higher-probability edits; by that I mean mostly keyboard typos. E.g. on a phone, the "cwt" typo is more likely than "cpt" for "cat" because of the a/w keyboard proximity. I wonder what the squatting rate is when you filter for edits within one keystroke, for example (this only really changes the add and replace types of edits, not delete or swap).
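A rough sketch of that stricter filter, assuming a hand-rolled QWERTY adjacency map (letters only; the function name and map are made up, and a real one would cover digits, punctuation, and other layouts). It accepts deletes and adjacent transpositions unconditionally, but replacements and insertions only when the extra key is physically near a plausible one:

```python
# Minimal QWERTY adjacency map (lowercase letters only).
ADJ = {
    "q": "wa", "w": "qesa", "e": "wrds", "r": "etfd", "t": "rygf",
    "y": "tuhg", "u": "yijh", "i": "uokj", "o": "iplk", "p": "ol",
    "a": "qwsz", "s": "wedxza", "d": "erfcxs", "f": "rtgvcd",
    "g": "tyhbvf", "h": "yujnbg", "j": "uikmnh", "k": "iolmj",
    "l": "opk", "z": "asx", "x": "zsdc", "c": "xdfv", "v": "cfgb",
    "b": "vghn", "n": "bhjm", "m": "njk",
}

def adjacent(a, b):
    # Same key, or neighboring keys on the QWERTY layout.
    return a == b or b in ADJ.get(a, "")

def keyboard_edit1(typo, word):
    """True if `typo` is one plausible keystroke away from `word`:
    a deletion, an adjacent transposition, an adjacent-key replacement,
    or an insertion next to a neighboring key."""
    t, w = typo.lower(), word.lower()
    if abs(len(t) - len(w)) > 1:
        return False
    if len(t) == len(w):
        diffs = [i for i in range(len(w)) if t[i] != w[i]]
        if len(diffs) == 1:                          # replacement
            i = diffs[0]
            return adjacent(w[i], t[i])
        if len(diffs) == 2:                          # transposition
            i, j = diffs
            return j == i + 1 and t[i] == w[j] and t[j] == w[i]
        return False
    short, long_ = (t, w) if len(t) < len(w) else (w, t)
    for i in range(len(long_)):                      # deletion / insertion
        if long_[:i] + long_[i + 1:] == short:
            if len(t) < len(w):
                return True                          # typo dropped a char
            # Typo inserted long_[i]: accept if it's next to a nearby key.
            neighbors = long_[max(i - 1, 0):i] + long_[i + 1:i + 2]
            if any(adjacent(n, long_[i]) for n in neighbors):
                return True
    return False

print(keyboard_edit1("fqcebook", "facebook"))  # True  (q is next to a)
print(keyboard_edit1("fjcebook", "facebook"))  # False (j is nowhere near a)
```

Running the paper's edit-1 candidate generation through a predicate like this would shrink the space to the typos people are actually likely to make.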
Yeah, same as a French speaker first living in the US: I sometimes have to stop myself from calling things "just fine", "will do", or "not bad". These are still used in American English, but I tend to use them in cases where people normally use a more positive/stronger version.
Like at a grocery store: "Is that enough?" "That will do, yes" -> should be "Yes, that's perfect".
Scales of goodness of expressions are shifted relative to English: "good" (gut) to a German means "it totally fulfills all my needs and expectations, so it is perfect for my purpose". "very good" (sehr gut) means "it exceeds all my expectations" and to a German already sounds like total hyperbole. Anything like "delightful" or "excellent" to a German sounds either totally sleazy or sarcastic.
When something is not perfect but adequate and we are happy with it, we would say something like "not bad", "it's fine", or "you can leave it like that". Which to the English-speaking world has totally different connotations and can lead to rather interesting misunderstandings.
And especially "not bad" ("nicht schlecht") can be confusing in that it is sometimes something rather positive: in German, said in the right tone of voice, it can mean "this is surprisingly good".
I also keep getting ads for this other history book that drives me nuts: https://www.kickstarter.com/projects/vilno/the-codex-book