Exactly. LLMs are faster for me when I don't care too much about the exact form the functionality takes. If I want precise results, I end up using more natural language to direct the LLM than it takes if I just write that part of the code myself.
I guess we find out which software products just need to be 'good enough' and which need to match the vision precisely.
That's a good reminder I need to go track down my roborock, it got stuck somewhere again. There's a map thankfully which helps me figure out where to look.
Sadly, it seems like the contingent of people who have a problem with Smart TVs is small but noisy, and has no real market power. If there were any significant number of people who would pay for a dumb high end TV, the market would sell them one.
Sort of reminds me of how we complain loudly about how shitty airline service is, and then when we buy tickets we reliably pick whichever one is a dollar cheaper.
The problem is that consumers are not savvy. They go to the store and compare TVs based on the features presented: colors, refresh rate, size, etc.
It's only when they get home (and likely not even right away) that they discover their TV is spying on them and serving ads.
This is a perfect situation where government regulation is required. Ideally, something that protects our privacy. But, minimally something like a required 'nutrition label' on any product that sends our data off device.
As far as I know, there is nothing to prevent Samsung from selling you a TV, then sending out a software update in two years which forces you to accept a new terms of service that allows them to serve you ads. If you do not accept, they brick your TV.
So it’s not a question of being savvy. As a consumer you can’t know what a company will choose to do in the future.
The lawsuit seems to be about using ACR, not the presence of ads.
> As far as I know, there is nothing to prevent Samsung from selling you a TV, then sending out a software update in two years which forces you to accept a new terms of service that allows them to serve you ads. If you do not accept, they brick your TV.
To the parent commenter's point, this is a perfect example of a situation where governments should be stepping in.
The thing that prevents a TV mfg from bricking your device is that they'd be instantly (and successfully) sued. In fact, there have already been many such class actions, e.g. with printer ink.
The downside is that it's sometimes easier and cheaper to just pay off the class and keep doing it.
That ought to be a slam dunk win in court. Especially since they probably won't show up to my local small claims court and I'll just send them the judgement.
I wouldn't say they aren't savvy. Many aren't, but I also don't blame them. Often you can buy a perfectly reasonable device, and then they add spying and adverts after you bought it. Most reviewers also don't talk about this stuff, and there are no standards for any of it (unlike e.g. energy consumption).
I went with Philips Hue smart lighting specifically because it could work without an account or any internet access for the bulbs or hub.
Guess what became required this year? At least it seems I can still use them offline if I don't use the official app. But the official app is now just a popup requiring me to create an account. I'm not sure if I could add new lights using third party apps. Not like I'm ever buying a Hue product again though.
True. But it does work for food safety, and to help curb underage drinking and smoking, to stop lousy restaurants from serving unsafe food and for lots of other stuff we take for granted.
Top down governance isn't a silver bullet, but it has its place in a functioning society.
A situation in which many people care a little, but a few people care a lot in the other direction, is almost exactly what government is for. Ken Paxton has issues, for sure, but good on him in this case.
I don’t agree with this. The only way this would make sense is if consumers were made aware of spying vs not spying prior to purchase.
But TV manufacturers can change the TV’s behavior long after it is purchased. They can force you to agree to new terms of service which can effectively make the TV a worse product. You cannot conclude the consumer didn’t care.
This 'Wild West' is easily solved with decent consumer law. Spying could be shut down overnight if laws levied fines on TV manufacturers pro rata, i.e. fines that multiply by the number of TV sets in service.
If each TV attracted a fine of two to three times the amount manufacturers received from selling its data, the practice would drop stone dead.
All it takes is proper legislation. Consumers just need to lobby their politicians.
We're past the point when most people can claim ignorance. And surely we have enough protection to at least defend against the "changed the terms and conditions after purchase" situation? They can't force me to do anything, and then stop working if I refuse.
For now, maybe? Consumer protections are at an all-time low at the moment. Your exact argument of "we all know this, just nobody cares, so stop whining" is exactly what will be cited if you attempt to take action when they brick your device.
There is a market, and people do pay for it. However, it's mostly not TVs but commercial monitors, and those buyers have the budget to pay far more. This market will always exist, because some of those monitors show safety messages in factories, and if a monitor messes those up in any way there will be large lawsuits.
The problem is lack of information at time of purchase, in both cases. It's so onerous to figure out what these products are doing that people give up. Same in the airline case. If any of the airlines actually provided better service at a higher price, they'd have a market, but it's impossible to assess that as an end user with all the fake review bullshit that's all over the Internet these days.
The only cases where it's clear-cut are a few overseas airlines like Singapore Airlines, which have such a rock-solid reputation for great service that people will book them even if the price is 2x.
> Sadly, it seems like the contingent of people who have a problem with Smart TVs is small but noisy, and has no real market power.
No one cares. Smart TVs are super awesome to non-tech people, who love them. Plug it in, connect to WiFi: Netflix and chill, ready. I have a friend who just bought yet another smart TV so he can watch the hockey game from his bar.
> If there were any significant number of people who would pay for a dumb high end TV, the market would sell them one.
What happened to that Jumbo (dumbo?) TV person who was on here wanting to build these things? My guess is they saw the economics and the demand and gave up. I applaud them for trying, though. I still cling to my two dumb 1080 Sony TVs that have Linux PCs hooked to them.
Wouldn’t smart TVs that didn’t spy on you also be awesome? Seems like a knowledge gap to me. This gets solved as soon as people realize what’s happening. Right now they don’t realize TVs are cheap because of the ad subsidy.
> If there were any significant number of people who would pay for a dumb high end TV, the market would sell them one.
The problem is easily solved, and I'm surprised more people don't do it. For years I've just connected a PVR/STB (set top box) to a computer monitor. It's simple and straightforward: just connect the box's HDMI output to a computer monitor.
Moreover, PVR/STBs are very cheap, less than $50 at most, and I have three running in my household.
If you want the internet on the same screen, just connect a PC to another input on your monitor. This way you have total isolation; spying just isn't possible.
Do you have a nice 65” OLED monitor option with solid display settings supporting Dolby modes, etc I can examine? I tried to find one and nobody is selling.
Not 65", but for a really large picture I just use the HDMI input on the smart TV sans internet, and it's fine (the TV also makes a good large monitor). It works well on the projection monitor too.
...and constant notifications that the network is not connected, that there are WiFi APs nearby (do you want to configure one?), that it's been 157 days since the last software update, and that you should connect your TV to the internet to get the newest bestest firmware with 'new features'.
I think government is the only way to regulate below pain threshold nonsense that weighs down society.
But I think small issues in society might translate to small issues for government action, and regulatory capture has a super-high ROI when overturning "minor" stuff.
I suspect showing real harm is the only way to get these things a high enough priority for action.
I kind of wonder if the pager attacks, or the phone nonsense in Ukraine/Russia, might make privacy a priority?
If no one manufactures such a product, how does the "market" express this desire?
A toaster that would last your lifetime could easily be manufactured today, and yet no company makes such a thing. This is true across hundreds of products.
The fact is, manufacturing something that isn't shit is less profitable, so what we're gonna get is shit. It doesn't really matter what people "want".
Not the person you're asking, but about as frequently as I replace washing machines. The fact that I'm doing it at all is the problem, especially since both machines had been "solved" by the late 1970s.
The non-electric office tools I have from that era are perpetual. Eternal.
How often are you replacing washing machines? As we had more kids, we upgraded our toaster from a 2-slice to a 4-slice, somewhere in the neighborhood of 11 years ago. Can't imagine we paid more than ~$20 for it. Still going strong today. If it lasts 10 more years, all my kids will be moved out of the house, and I suppose we could downgrade to a 2-slice model again. Unless the grandkids like toast.
I've been shopping around specifically for this type of thing. There are two options: one is to buy a monitor display similar to what's in restaurants and retail stores, and the other is to switch to a projector without smart features. The monitor displays, like your computer monitor, are even more expensive than regular TVs because they have special features that make them better to have on all the time at retail stores. They don't even have sound systems. Projector displays are generally the saner option, but they are not as easily installed. I suspect that privacy-conscious consumers will go for projector displays, as they aren't bundled with spyware. There are still risks, as with the Roku TV box, but it's much easier to replace the streaming unit. Apple TV claims that it doesn't utilize ACR, so that's a solid choice, but I would personally go for a Linux box with an HDMI out.
> If there were any significant number of people who would pay for a dumb high end TV, the market would sell them one.
I don't think they would. There are some TV manufacturers that are better about not nagging you (which is one of the reasons why I bought a Sony last year), but as time moves forward, companies have been less likely to leave money on the table. This is just the logical result of capitalism. Regulation will be the only way to protect consumer privacy.
Similarly, air travel gets worse as consumer protection regulations get rolled back.
This isn't really an accurate analysis because it assumes the only parties involved are the TV manufacturers and the purchasing consumers. In fact the third party is ad brokers and so the calculus to alienate some users in pursuit of ad dollars is different.
This sounds like victim blaming to me. "What do you mean you don't understand how software and the internet works and thought this was just a TV?!"
If you want to make a free market argument you need to look up what a free market is. In particular, consumers need to have perfect information. Do you really think if manufacturers were obligated to make these "features" clear that most people wouldn't care?
Cloud providers will use cheap investment capital to buy chips at increasing prices, while the public will be economically forced to get computational services from these cloud providers. After a few years, most software will work only when connected to cloud infrastructure, either for performance or for "safety" reasons. We're already seeing this with AI.
The cloud has been around for many years, and it's not that cheap compared to ordinary servers you can buy. It's not clear how anything will change in the future.
Because of this I hope the current AI fad is a bubble and it bursts soon. So instead of cheap investment drying up the market for individual consumers, we'll have lots of used corporate hardware selling at scrap prices to end users.
So do not repaint them. We see that often with far more recent examples of collectible items -- clean it up and repaint it and the value tanks, even if the original is in terrible shape. If a statue is so priceless that we cannot tolerate a modern artist's skilled take on how it should look, then just leave it be.
I won't be redeeming any, that's for sure. I've been lucky so far, but I had a brush with this experience a couple of years ago. I logged into my Apple account from a web browser on my work computer. Turns out my company has pretty shitty security, and our NATs were on the naughty list (I should have known better; I had been getting CAPTCHA'd every day if I browsed outside our network). Because I logged into the Apple account from a naughty network, they instantly locked the account until I could prove it was really me and that everything was okay.
I did get it resolved relatively quickly, but for the next couple weeks I was randomly running into the fallout from that. It became really clear just how far reaching the impact would be if I lost the account and could not recover it. Ever since then I've tried hard to disentangle myself completely so that the blast radius will be much smaller.
At this point the biggest worry I have is what would happen to my MBP and iPhone. All of my cloud services are non-Apple, but they might be able to keep me out of my own machine and that would be devastating.
This does not scale; the amount of abuse is huuuuge. But I think with a prerequisite, it could:
Companies should be required to provide access to a service that verifies identity. I know such companies exist, so it is doable. And then, once it is provable that they are dealing with an actual human who can be identified, your rules can be applied.
Apple made $100 billion in profit last year. They can surely afford to build this. Just because it would cost them profit doesn't mean we shouldn't require it.
For Apple, yes, but in the context of rules that apply across the board we should address the scaling issue. People who've had to deal with the filth of the Internet know how hard the problem is to solve, and not everyone has Apple money.
If you can't charge your customers enough to spend enough on this challenge, you don't really have a viable business, you've got a theft organization. Externalizing your failure to build a solid business by screwing customers is not okay.
It could be a death-by-a-thousand-cuts situation and we don't have enough context. My company has spent the last few years really going 1000% on the capitalization of software expenses, and now we have to include a whole slew of unrelated attributes in every last Jira ticket. Then the "engineering team" (there is only one of these, somehow, in a 5K employee company) decrees all sorts of requirements about how we test our software and document it, again using custom Jira attributes to enforce. Developers get a little pissy about being messed with by MBAs and non-engineer "engineers" trying to tell them how to do their job. (as an aside, for anybody who is on the giving end of such requirements, I have to tell you that people working the tickets will happily lie on all of that stuff just to get past it as quickly as possible, so I hope you're not relying on it for accuracy)
But putting the ticket number in the commit ... that's basically automatic, I don't know why it should be that big a concern. The branch itself gets created with the ticket number and everything follows from that, there's no extra effort.
> The branch itself gets created with the ticket number and everything follows from that, there's no extra effort.
The only problem there is the potential for a deeply ingrained assumption that the Jira key being in the branch name is sufficient for traceability between the Jira issue and commits to always exist. I've had to remind many people I work with that branch names are not forever, but commit messages are.
I haven't quite succeeded in getting everyone to throw a Jira ID somewhere in the changeset, but I try...
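One way to make that happen automatically, regardless of tooling: a `prepare-commit-msg` hook can copy the key from the branch name into the message itself. A minimal sketch of the core logic, with a hardcoded branch name and message standing in for what a real hook would read from `git symbolic-ref --short HEAD` and the message file in `$1` (the branch/key names here are hypothetical):

```shell
# Hypothetical prepare-commit-msg logic: pull the Jira key out of the
# branch name and prepend it to the commit message, so the key survives
# in history even after the branch is deleted.
branch="PROJ-123-fix-widget"       # real hook: $(git symbolic-ref --short HEAD)
msg="fix the widget rendering"     # real hook: contents of the file in "$1"

# Jira keys look like PROJECT-123 at the start of the branch name
key=$(printf '%s' "$branch" | grep -oE '^[A-Z]+-[0-9]+')

case "$msg" in
  *"$key"*) final="$msg" ;;        # key already present, leave the message alone
  *)        final="$key: $msg" ;;  # otherwise prepend it
esac

echo "$final"
```

Dropped into `.git/hooks/prepare-commit-msg`, something like this keeps the traceability in the commit message rather than in the (temporary) branch name.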
> But putting the ticket number in the commit ... that's basically automatic, I don't know why it should be that big a concern. The branch itself gets created with the ticket number and everything follows from that, there's no extra effort.
That poster said "attach a JIRA Ticket to the PR", so in their case, it's not that automatic.
A lot of Jira shops use the rest of the stack, so it becomes automatic. The branch is named automatically when created from a link on the Jira task. Every time you push it gives you a URL for opening the PR if you want, and everything ends up pre-filled. All of the Atlassian tools recognize the format of a task ID and hyperlink automatically.
I haven't dealt with non-Atlassian tools in a while but I assume this is pretty much bog standard for any enterprise setup.
As someone else mentioned, the process is async. But I achieve a similar effect by requiring my team to review their own PRs before they expect a senior developer to review them and approve for merging.
That solves some of the problem with people thinking it's okay to fire off a huge AI slop PR and make it the reviewer's responsibility to see how much the LLM hallucinated. No, you have to look at yourself first, because it's YOUR code no matter what tool you used to help write it.
Reviewing your own PR is underrated. I do this with most of my meaningful PRs, where I usually give a summary of what/why I'm doing things in the description field, and then reread my code and call out anything I'm unsure of, or explain why something is weird, or alternatives I considered, or anything that I would catch reviewing someone else's PR.
It makes it doubly annoying, though, whenever I go digging in `git blame` and find a commit with a terrible title, no description, and an "LGTM" approval.
> requiring my team to review their own PRs before they expect a senior developer to review them
I'm having a hard time imagining the alternative. Do junior developers not take any pride in their work? I want to be sure my code works before I submit it for review. It's embarrassing to me if it fails basic requirements. And as a reviewer, what I want to see more than anything is how the developer assessed that their code works. I don't want to dig into the code unless I need to -- show me the validation and results, and convince me why I should approve it.
I've seen plenty of examples of developers who don't know how to effectively validate their work, or document the validation. But that's different than no validation effort at all.
> Do junior developers not take any pride in their work?
Yes. I have lost count of the number of PRs that have come to me where the developer added random blank lines and deleted others from code that was not even in the file they were supposed to be working in.
I'm with you -- I review my own PRs just to make sure I didn't inadvertently include something that would make me look sloppy. I smoke test it, I write comments explaining the rationale, etc. But one of my core personality traits (mostly causing me pain, but useful in this instance) is how much I loathe being wrong, especially for silly reasons. Some people are very comfortable with just throwing stuff at the wall to see if it'll stick.
That is my charitable interpretation, but it's always one or two changes across a module that has hundreds, maybe thousands of lines of code. I'd expect an auto-formatter to be more obvious.
In any case, just looking over your own PR briefly before submitting it catches these quickly. The lack of attention to detail is the part I find more frustrating than the actual unnecessary format changes.
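Git can also flag that kind of noise mechanically before a human ever sees the PR. A sketch using `git diff --check`; it builds a throwaway repo purely to demonstrate the behavior, and in a real branch only the last command matters:

```shell
# Demonstrate `git diff --check`, which reports stray trailing whitespace
# and similar noise in uncommitted changes. Throwaway repo for demo only.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you

printf 'line one\n' > f.txt
git add f.txt
git commit -qm "base"

printf 'line one   \n' > f.txt     # simulate accidental trailing whitespace

out=$(git diff --check || true)    # non-empty report when noise exists
echo "$out"
```

Running that before opening a PR catches exactly the accidental-whitespace changes being complained about here, without relying on anyone's attention to detail.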
Why would you care about blank lines? Sounds like aborted attempts at a change to me: making them, then realizing you don't need them, seeing them in your PR, and figuring they don't actually do anything.
> Yes. I have lost count of the number of PRs that have come to me where the developer added random blank lines and deleted others from code that was not even in the file they were supposed to be working in.
That's not a great example of a lack of care; if you use code formatters, this can happen very easily and be overlooked in a big change. It's also really low stakes. I'm frankly concerned that you care so much about this that you'd label a dev careless over it. I'd label someone careless who didn't test every branch of their code and left a nil pointer error or something, but missing formatter changes seem like a very human mistake for someone who was still careful about the actual code they wrote.
I think the point is that a necessary part of being careful is reviewing the diff yourself end-to-end right before sending it out for review. That catches mistakes like these.
> I want to be sure my code works before I submit it for review.
No kidding. I mean, "it works" is table stakes, to the point I can't even imagine going to review without having tested things locally at least to be confident in my changes. The self-review for me is to force me to digest my whole patch and make sure I haven't left a bunch of TODO comments or sloppy POC code in the branch. I'd be embarrassed to get caught leaving commented code in my branch - I'd be mortified if somehow I submitted a PR that just straight up didn't work.
It’s cultural. It always seemed natural to me, until I joined a team that treated review as some compliance checkbox that had nothing to do with the real work.
Treating real review as an important part of the work requires a culture that values it.
One thing I've pushed developers on my team to do since way before AI slop became a thing was to review their own PR. Go through the PR diff and leave comments in places where it feels like a little explanation of your thought process could be helpful in the review. It's a bit like rubber duck debugging, I've seen plenty of things get caught that way.
As an upside, it helps with AI slop too. Because as I see it, what you're doing when you use an LLM is becoming a code reviewer. So you need to actually read the code and review it! If you have not reviewed it yourself first, I am not going to waste my time reviewing it for you.
It helps obviously that I'm on a small team of a half dozen developers and I'm the lead, and management hasn't even hinted at giving us stupid decrees like "now that you have Claude Code you can do 10x as many features!!!1!".
Yeah, I always think it's kinda rude to throw something to someone else to review without reviewing it yourself, even if you were the one to write it. Looking at it twice yourself can help with catching things even faster than someone else getting up to speed with what you were doing and querying it. Now it seems like with LLMs people are putting code up for review that hasn't even been looked at once.
My coworker does this: PRs with random files from other changes left in, console logs everywhere, blatant issues everywhere.
I find it extremely rude they chuck this stuff at me without even having read it themselves. At least these days I can just chuck the AI reviewer thing on it and throw it back to them.