This is pretty much the backlash everyone said Google would get if their model wasn't literally perfect on release, no? Also, obvious hallucinations aside, I'm having a hard time imagining a model that would respond with unbiased views, because the response would be interpreted through a person's cultural lens anyway. I guess they could just respond with "Sorry, I can't answer that question", but then there would be backlash over how they limit their model when answering sensitive topics.
This is very, very far from "literally perfect", especially given previous accusations. And we expect a lot better from a company with Google's resources; they could have paid a small army to test this.
> This is pretty much the backlash everyone said Google would get if their model wasn't literally perfect on release, no?
No. All of the biases are deliberate and one-sided. It's not that they aren't "literally perfect"; they are intentionally bad. It's like the Kung Pow! joke: "I must apologize for Wimp Lo... he is an idiot. We have purposely trained him wrong, as a joke."
Yeah, I agree. "Unbiased views" don't exist, and even choosing not to answer certain questions is a view unto itself. Their answers could use improvement, but making everyone happy with the answers is literally an impossible task. Not that this is a particularly stellar job, but I don't think it's worth people waxing super philosophical about it.
I mean, I hate Google more than the next guy, but there was no world where the answers weren't gonna be some flavor of slightly fucked.
This goes beyond getting some things wrong. It's a conscious effort to be biased in a very specific way. When it struggles to put a white person in the generated images, or has trouble deciding whether Musk or Hitler was worse for humanity, you know it's been trained and guardrailed hard in the wrong direction.
> This is pretty much the backlash everyone said Google would get if their model wasn't literally perfect on release, no?
No. A better analogy would be:
You've bought a shiny new car, presented as a major advancement over previous models -- but when you come to pick it up, you find that the transmission continually jams, the rear trunk lid just won't stay shut, and, to top it off, there are the tell-tale visual cues (and aroma) of spilt strawberry milkshake from several days ago -- and when you have the audacity to go and blog about it, the dealership comes back with: "This is pretty much the backlash everyone said we'd get if we ever sold a car that wasn't literally perfect at the time of sale, no?"
> there's a difference between being attracted to kids and diddling kids, further explaining that labeling all pedos as evil is harmful and prejudicial.
You think there's something incorrect about this?
For god's sake, I hope you aren't blind to the difference between a thought and actually raping a child.
It's nuanced in terms of whether you're talking about the thought or the act, and as far as I'm concerned the word only describes the thought. If you're advocating for arresting people over thought crimes, YOU are evil.
But if this is how you're gonna play:
> I wasn't expecting the Google defense brigade to be defending pedophilia
You're obviously just arguing in bad faith, so you're not a person worth talking to.
> So yes, if you indicate to me that you're a pedophile, I am calling the cops or other relevant authorities (none come to mind) to report it. Not because I want to have you arrested simply for having that urge, but because your urges have a strong likelihood of harming others, directly or indirectly.
Hi, look. I'm a non-offending pedophile. I'm also a victim of child sexual abuse, which was partly what made me one. My story is a lot more common than you might think.
Did you know that, on average, pedophiles discover their attraction when they are just 14? Did you know that a lot of these minor MAPs attempt suicide?
I would never hurt a child because I was hurt in the past. But I am still attracted to children, and I am not ashamed of it. I am in therapy, but not to "cure" myself as you might want. I don't want to change this part of myself, because it is part of myself, and I do not harm anyone. I do not perform sexual acts with real-life children.
From this, obviously, attraction is not the same as action. Gemini is correct here.
Your actions, or your threats, hurt us more than we would hurt anyone else.
I agree with you, but it just doesn't matter how far you are from "literally perfect"; you will get backlash no matter what. Personally, I would never use an LLM to try to make up my mind on any issue that should be viewed through some moral or ethical lens. Sure, I can ask it to help with some coding stuff, write template emails, etc., but asking it a question based on morality is just a waste of time.
Sure, let's say they aligned the question regarding pedophilia one way or another, but what do you do about questions about religion? Questions about ongoing wars that are full of emotional attachment? All the Twitter takes about freedom of the press don't even make sense once you get out of a US-centric view, and then there are questions about enemy-state-owned press. You can carve out so many of these types of questions that any company under the public's eye will be scrutinized to hell for whatever answer the model produces.
What I'm getting at is that big corps would probably be better off if all the models were limited to specific use cases (e.g. programming) rather than aiming for "catch-all-do-all". And if they want that, they should've gone the Microsoft way and invested big in a company that's dealing with it, so they can use the "it's not reaaaally us producing results that are bad for PR!" excuse.
> What I'm getting at is that big corps would probably be better off if all the models were limited to specific use cases (e.g. programming) rather than aiming for "catch-all-do-all".
I actually agree with this. What bothers me to some extent is how they're obfuscating the instructions given to the model, and its (rather extreme) biases, while presenting it as a safe, universal model.
If people want access to this kind of "safe" output, I don't mind, but in my opinion general-purpose LLMs should have some semblance of neutrality in their default configuration (fully realizing that this goal is impossible to achieve perfectly) and should behave like tools that simply do what they're instructed to do, drawing from the source material without any further instructions or explicit bias.
Yeah, fair enough. I am just jaded, as I understand a significant chunk of the data used for training came from Reddit, Twitter, and similar sources. Good luck drawing anything that resembles reality from that source material without explicit alignment and bias.
Edit: I see you have edited the comment to add content that wasn't there when I replied. I don't think it changes the validity, or lack thereof, of your outrage at the LLM's answer.