> These people have no problem with Facebook, Instagram, et al. being naughty because they profit from it.
Facebook and Instagram are not naughty. They're well embedded in the political-economic ruling elite of the country. They amplify or mute whatever messages that elite wants amplified or muted. The US can't make rules for TikTok to do the same, because that would be illegal, besides being too obviously partisan.
The idea that TikTok is somehow less corruptible than Facebook or Instagram is laughable, as American investors were among the largest investors in ByteDance to begin with. If it were only about Israel, TikTok could be pressured into censorship the same way Meta supposedly is*. The difference is that TikTok can also be pressured by China, where its parent company resides.
*What is more likely is that TikTok isn't actually more pro-Palestinian than Meta, but that the demographics using it are, which affects the algorithm and user reports.
Completely disagree. Look around: mass protests start and governments fall when social media gets banned. There are 80 million daily active users of TikTok in the US; Trump would never, ever be so stupid as to piss off 80 million people by suddenly blocking their favourite app. That's the whole reason for this standoff.
Tesla worked as long as it had a technological advantage over its competitors (and when its competitors didn't exist at all). Now that the technology has gone mainstream and the competition is Chinese (high tech and high quality, much cheaper), there is no reason for consumers to choose an American brand. Also, Musk (at least the one who started Tesla and SpaceX, I don't know about now) isn't someone who's interested in competing in a mature market. I guess that's also the reason for pivoting to humanoid robots: it gives him the kind of extreme stimulus he needs to keep going.
I think it's also a cultural thing... I mean, it takes time for companies and professionals to get used to the idea that it makes sense to pay hundreds of dollars per month to use an AI, and that that expense (which for some is relatively affordable and for others can be a serious one) actually converts into much higher productivity or quality.
Hey Claude, can you help me categorise the tone/sentiment of this statement, in three words?
"After everyone has been exposed to the patterns, idioms and mistakes of the parrots only the most determined (or monetarily invested) people are still impressed."
The original post and the rest of the comment are about invent vs arrive (discover?). I'm sure I'll be able to find (parts of) your comments, too, that diverge in sentiment.
bgwalter is clearly dismissing AI. The post has all the telltale signs.
* Rather than the curious "What is it good at? What could I use it for?", we instead get "It's not better than me!". That lacks insight and intentionally sidesteps the point that it has utility for a lot of people who need coding work done.
* Using a bad analogy protected by scare quotes to make an invalid point, as if a human could argue with a photocopier or a philosophical treatise. Of those, only an LLM can actually be argued with, due to the interactive nature of the dialogue.
* The use of the word "steal" to indicate theft of material when training AI models, again intentionally conflating theft with copyright infringement. But even that suggestion is not accurate: model training is currently considered fair use, and court findings were already trending in this direction. So even the suggestion that it's copyright infringement doesn't hold water. Piracy of material would invalidate that, but I don't expect that's what happened in the case of bgwalter's code. I expect bgwalter published their code online and it was scraped.
Agree with the sibling comment that posted Claude's assessment, which mirrors this analysis. Dismissive and cynical is a good way to put it.
My position is that AI is going to end up being good at certain things, and it's going to be not so good at others, and that mix will change over time, but generally improve. I don't think it's going to replace all jobs and I don't think it's a world-ender.
My job as an engineer is to understand the technology and understand how to deploy it for the benefit of the people that I work for, up to and including myself. There's no room for dogma here. It's purely curiosity, investigation, and trial and error. See what works, see what doesn't.
Personally, I dislike centralized power because I think it's dangerous. And so, one of my goals is to find ways to use AI in a more distributed context that people have control over. Technology accrues benefits to those who deploy it. Therefore, I'd like to find ways for everyone to be able to deploy good technology.
Just because they were filled with Westerners. They didn't have any trouble bombing refugee camps and apartment blocks full of Palestinians.