
> I disagree with most comments that the brusque moderation is the cause of SO's problems

Questions that got downvoted on SO by heavy-handed moderation would have been answered by an LLM without any of the flak whatsoever.

Those who downvoted others' questions on SO for not being good enough must be asking plenty of equally not-good-enough questions to an LLM today.

Sure, the SO system worked, but it was user-hostile, and I'm glad we no longer have to deal with it.


I've seen every new OS update degrade my M1 Air's performance; at this point I'm pretty convinced Apple is doing it intentionally.

Edit: Same experience with my iPhone X.

Edit 2: I still remember the feeling when I first got them - that Apple was on the customer's side - but now I feel totally helpless, like I'm being forced to upgrade.


I haven't noticed this, to be honest: macOS 26 Tahoe is the first update that significantly hurt the performance of my MacBook Air M1, even with the Electron _cornerMask fix and auto heuristics disabled at the OS level.


I haven't either, though I've purposely kept my M1 on Sonoma for now because the newer OS releases have dorked up my ham radio software.


Same here, thinking about reinstalling :(


Let me direct you to the Reddit AMA where people were literally begging for 4o to be brought back.


Yeah, anyone saying "Normal, non-technical users can't tell the difference between these models at all" isn't talking to that many normal, non-technical users.


A weekly one-hour call - pair programming or exploring an ongoing issue or technical idea - is enough to replace face-to-face time with seniors. This has been working great for us at a multi-billion-dollar, profitable public company that's been fully remote.


> how did the pilot realize it was the cut-off switches?

A pilot explains the answer here: https://youtu.be/00ooqCuRoU8?t=731

The pilots can hear the engines spool down.


The article links to a forum post that somewhat explains how engagement is maximised: https://community.openai.com/t/uncovering-the-intent-behind-...


Poetic, but I don't think that really explains anything.


Ever thought about how there's a magnetic quality to mirrors that keeps us looking? I see GPT in a similar light: it functions as a mirror, reflecting aspects of our reality.


I don't think GPTs reflect much of us at all, because we ultimately have to translate into language the very things chatbots lack (motivation, caring about things, emotions, biased stimuli like pain and pleasure, etc.).

Language is a large part of how we think about ourselves, but I don't think chatbots can tap much into what it is to be human or what we care about/feel outside of what we've already written, and are thus kind of useless as fetishes for humanity. So far, anyway.


I too bought a 2020 M1 MBA. It was great initially, but now it seems like it's getting throttled; same goes for my iPhone X. I used to love Apple, but it's just pathetic that they throttle older devices just to get users to upgrade.


No one is throttling your M1 MBA.


If AdGuard is able to work on MV3, why isn't uBlock able to?
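
My rough mental model of what MV3 imposes, as a hedged TypeScript sketch (this is not AdGuard's or uBlock's actual code, and the filter domain is made up): blocking moves from a dynamic webRequest listener into declarative rules registered up front, e.g.:

    // Hedged sketch of an MV3 block rule via chrome.declarativeNetRequest;
    // the filter below is a made-up tracker domain, for illustration only.
    const blockRule: chrome.declarativeNetRequest.Rule = {
      id: 1,
      priority: 1,
      action: { type: chrome.declarativeNetRequest.RuleActionType.BLOCK },
      condition: {
        urlFilter: "||ads.example.net^", // hypothetical domain
        resourceTypes: [chrome.declarativeNetRequest.ResourceType.SCRIPT],
      },
    };

    // Rules are swapped in wholesale; there is no per-request JS hook.
    chrome.declarativeNetRequest.updateDynamicRules({
      addRules: [blockRule],
      removeRuleIds: [blockRule.id], // replace any earlier version of rule 1
    });

As I understand it, AdGuard ships a cut-down build on top of rules like these, while uBlock Origin's dynamic and procedural filtering has no equivalent under the rule caps - is that the whole story?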


Please open-source it.


Correlation is not causation. It feels more like smarter automation to me.


> It feels more like smarter automation to me.

Yep. Elevators used to be proof that machines can think. Then compilers, and chess, and Go, and search, and …

The problem with AI is that as soon as it works, we stop thinking about it as “artificial intelligence” and it becomes “just automation”. Then AI moves to the next goalpost.


> Elevators used to be proof that machines can think

Did they? When? By whom?


Early elevators needed professional human operators (you sometimes see them in old movies). Stopping at a selected floor was something a machine was unable to do on its own - until it could, and elevator operators lost their jobs, and we just took it for granted that machines could do it.


But compilers, chess, Go, and search are all proof that computers can think. We've been discovering, as we scale up the hardware, that those things appear to be converging on human intelligence with minor tweaks (it turns out tree search for chess needed to be combined with matrices, and we're most of the way there). ChatGPT can out-reason many people I know and can out-argue a fair number of comments I see on the internet.
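
To make the "tree search plus matrices" point concrete, here's a toy sketch (everything below - the Board interface, the valueNet stub - is made up for illustration, not any engine's real code): classical search, with the handwritten leaf heuristic swapped for a learned evaluation.

    // Toy sketch: plain negamax search where the leaf evaluation is a
    // learned function (stubbed here) instead of a handcrafted heuristic.
    interface Board {
      legalMoves(): string[];
      apply(move: string): Board;
      isTerminal(): boolean;
    }

    // Stand-in for a neural net: maps a position to a score in [-1, 1].
    // A real net is a stack of matrix multiplications - the "matrices".
    function valueNet(board: Board): number {
      return 0; // placeholder
    }

    function negamax(board: Board, depth: number): number {
      if (depth === 0 || board.isTerminal()) return valueNet(board);
      let best = -Infinity;
      for (const move of board.legalMoves()) {
        best = Math.max(best, -negamax(board.apply(move), depth - 1));
      }
      return best;
    }

(Real engines in the AlphaZero family use MCTS with a policy net rather than fixed-depth negamax, but the division of labor is the same: the search supplies the reasoning, the matrices supply the judgment.)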

If we take this comment at face value, we end up with a definition of "think" that excludes reasoning, playing games, and recalling information - otherwise machines would outdo it. Thinking obviously isn't very important!


> Thinking obviously isn't very important!

This is not obvious in the least. Thinking was required to produce the thing that could achieve the outcome that was defined by thought.

I could just as easily say that AI, as currently implemented, is really just another muscle.


The problem seems to be defining good tests for intelligence. FWIW, because GPT-4's answers have a detectable pattern, it should presumably fail the Turing test.

At some level, intelligence requires logic, rationality, and conceptualization, all of which have evaded clear definition despite millennia of philosophy directly addressing them.


When you say we need ‘good’ tests for intelligence, you mean ‘tests that humans can pass but machines can’t’.

You’re demanding this because you aren’t comfortable with the implication that a computer can pass our existing tests for intelligence, so you rationalize that with the comforting thought that those tests were not meant to identify intelligence. Tests like the SAT or the bar exam or AP English. Or tests for theory of mind or common sense or logic. Those tests aren’t testing for ‘intelligence’ - they can’t be. Because a computer passed them.

It’s okay. We can make new tests.


Those are a lot of leaps to make about my motivations!


Humans also answer in predictable ways, so if you apply that criterion to a Turing test, humans will fail it too.


I suppose it's a bit of a no-true-Scotsman argument, but the Turing test is to see whether an observer can correctly guess whether the interlocutor is _human_, so by definition the test would be passed if the other correspondent were human.

To the point underneath: humans do not answer in as predictable a way as ChatGPT does. Your answer, for example, I'm confident did not come from ChatGPT.

Edit: if I've horribly mangled the Turing test definition, please let me know


I just imagined we could look at the oldest examples of intelligence in human history. In contrast with how we treat AI, our chauvinism has us pretending even the earliest monkey had it - fish? insects? - and so on. If it can rub two sticks together, it gets the diploma.

The Turing test is easy: I once had two chatbots talk about the other users in a channel while ignoring, apart from a few trigger words, what those users actually said. The human subjects successfully got angry, which means it mattered to them.


I had someone on HN state that Stockfish is intelligent. If that's your definition of intelligence, then sure, GPT is also intelligent. I don't think that's a common definition, though!


The point is that we move the bar every time computers reach it - at least in part because we want to keep feeling special, and in part because we go, “Well, that can’t have been the bar, then.”

I suspect even full AGI will be considered “just a machine” for many decades, even centuries, before it gains the same rights as humans. We love to find reasons we’re special. Look how long it took us to admit animals are intelligent.

For many humans, computers definitionally can’t be intelligent. It’s important to recognize that.

