Because advertising works. Full stop. It doesn't matter if it is valuable or not. It just works. Definitely not with P(buy this crap) = 1. But the effect is still there and real and measurable, and Google has made colossal amounts of money out of exploiting it.
It might as well be a magic spell. You show the user the thing, and they buy/subscribe/click through with some probability, according to a massive ML model that knows everything there is to know about them.
Yes - people are capable of making decisions in their own self-interest. But there is a gap: not _all_ of people's decision-making works that way. And that gap can be exploited, systematically.
The existence of that gap is the actual problem. At scale, you can own a nontrivial quantity of human agency because that agency is up for grabs. Google and companies like it make their money by charging rent on that 'freely exploitable agency'. Not by providing value to people. The very idea is ridiculous. Value? How are you going to define a loss function over value?
ML models trained on click-through or whatever else don't figure out how to provide value. They find the gap. The gap is made of things like 'sharp, contrasting borders _here_ increase P by 0.0003', 'flashing text X when recently viewed links contain Y increases P by 0.031', and so on.
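To make that concrete, here's a toy sketch of the kind of loop that finds the gap: an epsilon-greedy bandit over presentation variants whose only objective is observed click probability. The variant names and click rates are invented for illustration, and no real ad stack is this simple - but notice that nothing in the objective says anything about value to the user.

    import random

    # Toy epsilon-greedy bandit over ad presentation variants.
    # The only signal it ever optimizes is P(click); "value to the user"
    # appears nowhere. Variant names and click rates are made up for
    # illustration -- this is not any real ad system.

    VARIANTS = {
        "plain":             0.0100,  # baseline presentation
        "sharp_border":      0.0103,  # small lift from a contrasting border
        "flashing_retarget": 0.0410,  # big lift when it echoes recently viewed items
    }

    def show_ad(variant):
        """Simulate one impression; returns 1 on click, 0 otherwise."""
        return 1 if random.random() < VARIANTS[variant] else 0

    def run(impressions=200_000, epsilon=0.1):
        clicks = {v: 0 for v in VARIANTS}
        shows = {v: 0 for v in VARIANTS}
        for _ in range(impressions):
            if random.random() < epsilon or not any(shows.values()):
                v = random.choice(list(VARIANTS))  # explore a random variant
            else:
                # exploit: pick the variant with the best observed click rate
                v = max(VARIANTS, key=lambda k: clicks[k] / max(shows[k], 1))
            shows[v] += 1
            clicks[v] += show_ad(v)
        for v in VARIANTS:
            rate = clicks[v] / max(shows[v], 1)
            print(f"{v:20s} impressions={shows[v]:7d} observed P(click)={rate:.4f}")

    if __name__ == "__main__":
        run()

Run it and it converges on whichever presentation trick moves P(click) the most, regardless of whether the underlying product does anyone any good.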
Yes? Of course advertising works, I'm not sure who's even debating that point. But the fact is, people wouldn't click on an ad, look at a product, add it to their cart, enter their credit card, and check out if that product did not bring them value. You're acting as if people are forced to perform this series of actions, which is simply false, which is why I implied the parent's comment is nonsensical.
You have cause and effect reversed. The only reason the ML model can predict whether someone will buy a product is because people have bought it in the past. Why did they buy it? Because it provides them value. The ML prediction is descriptive, not prescriptive. I can similarly create an ML model to predict the weather; that does not mean my model causes the weather, which is basically what you're saying.
It is true that people are not forced to buy things. But even if one is not _forced_ into something, one can be _manipulated_ into something. This is what happens with ads: they're misleading most of the time (and in many cases they lie, the tobacco industry being the classic example), they encourage addictive or compulsive behaviors, they try to manipulate you emotionally (which is easier if they know a lot about you), etc. Ads have so much power nowadays that they even shape reality; they're not purely descriptive, as you say - that's way too naive.
And ML models are not only based on what you've already bought. On Instagram, for instance, I get ads for bird toys/vets/etc. because I follow bird owners.
No person is forced, because a person's agency does not solely consist of the gap. It doesn't matter. The argument isn't: 'advertising is bad because it forces some specific person to do a thing they don't value'. The argument is: 'advertising is bad because it forces things to happen, and those things are bad'.
It's not a moral argument, but a practical one: agency is being extracted on a massive scale, and being used for what?
Human beings might as well be abstracted into point sources of agency, for all it matters to the argument being made. If you can extract 0.1% of the agency of anyone who looks at a thing, and you show it to 3 billion people, _you have a lot of agency_. If you then sell it to the highest bidder, you find yourself quickly removing "don't be evil" from the set of any principles you may once have had.
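On the premise that those fractions add up linearly across viewers (which is the assumption the argument rests on), the back-of-envelope arithmetic is

$0.001 \times 3 \times 10^{9} = 3 \times 10^{6}$

i.e. roughly the undivided agency of three million people, pooled in whoever is selling it.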
My overarching point is that value-as-decision-mediator is meaningless in this calculus. It's the part of the equation that doesn't matter, the part you can't manipulate, the part that _is not a source of manipulable agency_. It's not relevant. I'm not saying it doesn't exist, or that it doesn't affect people's decisions: I'm saying it _doesn't matter_. It can be 99.99% of how you make your decisions, and it _still doesn't matter_. As long as that 0.01% gap exists.
> The only reason the ML model can predict whether someone will buy a product is because people have bought it in the past.
Yes. This is how you gather evidence that something works. It is not the reason it works. The ML model _knows about the spell_ because people have let it affect them in the past. But the spell works because it's magic. It doesn't need anything other than: Y follows X.
> The ML prediction is descriptive, not prescriptive. I can similarly create an ML model to predict the weather; that does not mean my model causes the weather, which is basically what you're saying.
Not all models describe actions which are possible for you to take. Weather models are basically not like that. Advertising models _are_.
You aren't in a position where you can meaningfully manipulate the weather, even if you knew exactly how to manipulate it to maximize your profit. As a general argument it's vacuous: models are just knowledge. Obviously some knowledge is useful, some isn't; some is dangerous, some isn't; some can be used only by specific people, some can be used by anyone; and so on.
It's not the model that is causing things to happen. It's a machine that uses the knowledge in the model, where the model describes actions possible for the machine to take. It is automated greed.
The fundamental concern is not that knowledge is bad, or that ML models are bad. It is that someone is in the position of having a tap on vast, diffuse sources of agency, and has automated the gathering of that knowledge and its use to maximize profit, causing untold damage to everything, with the responsibility laundered through intermediary actors.