That's a lot of words from someone who doesn't seem to have tried using these tools yet?

The hype is smoke and mirrors, but there are tons of real use cases where things became MUCH easier than before. Things like text to speech or image recognition that previously required a team of 10 ML engineers a couple of years to build now take a weekend.
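For instance, image recognition or speech recognition is now a few lines wrapped around a pretrained model. A minimal sketch, assuming the Hugging Face transformers package is installed with a backend such as PyTorch; the file names are placeholders:

    # Minimal sketch of the "weekend project" claim, assuming
    # `pip install transformers` plus a backend such as PyTorch.
    # "photo.jpg" and "recording.wav" are placeholder file names.
    from transformers import pipeline

    # Off-the-shelf image recognition: no ML team required.
    classifier = pipeline("image-classification")
    print(classifier("photo.jpg"))       # e.g. [{"label": "tabby", "score": 0.91}, ...]

    # Speech recognition is just as short with a pretrained model.
    transcriber = pipeline("automatic-speech-recognition")
    print(transcriber("recording.wav"))  # e.g. {"text": "..."}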

It's smoke and mirrors to the "business world" because none of these translate into bottom-line numbers, though, and I think that's all this guy is able to see.



When a person or company comes up with something substantive, they won't call it 'AI': either they will want (I assume) to differentiate it from all the junk that is out there, or they won't need to hype it up by calling it AI.


OpenAI has that name because, from its inception, it was working towards AI. But when they actually released a product, they did not call it “AI”. They called it “GPT”.


It’s not, on average, being developed as a product with the affordances one expects from a product: saving and restoring state, seamlessly maintaining and modifying a working set of artifacts, resetting possibly corrupt cached state.

I think everyone who saw ChatGPT and thought something like “there’s a good website for instruct transformers? they’re going to have a zillion users” was acknowledging a milestone: if a tech demo is enough of a capability increase, it’s a product in spite of its limitations.

But it’s been a long time, and we still don’t have save/restore, let alone “go curl this”.

We’re maximizing something other than consumer utility.
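To make the complaint concrete: a chat session is basically a list of messages, so save/restore need not be exotic. A hypothetical sketch (not any vendor's actual API), reducing it to plain JSON serialization:

    # Hypothetical sketch: a chat session as a plain message list,
    # so "save" and "restore" reduce to JSON serialization.
    import json

    session = [
        {"role": "user", "content": "Summarize this design doc."},
        {"role": "assistant", "content": "Here is a summary: ..."},
    ]

    # Save the working state.
    with open("session.json", "w") as f:
        json.dump(session, f)

    # Later: restore it and continue where you left off.
    with open("session.json") as f:
        session = json.load(f)
    session.append({"role": "user", "content": "Now refine point 2."})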


Part of the core problem of our whole economic setup is the widening gap between "where money comes from" and "what do things do?". The VC hype machine makes this gap so large that it can disrupt attempts to design for a specific functional requirement. It's way, way, waaaayyy more important (in the near term, anyway) to get the attention of Mr. Moneybags than to make a self-assembling AI wiring diagram. Especially when these systems are so resource-hungry; that's a lot of money up front.

We see a flavor of this in the defense industry, where the users of the Product have ... a whole ecosystem of fusspots[0] ... between them and the money. Say, for example, you get a gigantic requirements document. It might say "You will make the Product System, but you must do the work according to Process X, Process Y, Process Z, and any related Process [X.1.b through X.6.z, Z.1 through Z.921, etc.]". Then, when you show you have these Processes/Certificates, you get a pile of money. Thing is, it could be decades - or never - before the Product ever sees anything like a user. Every time someone refreshes the Giant Requirements Document, more money gets paid. The actual transaction taking place is Paperwork for Money, so the red tape is what's valued. Sometimes, almost by accident, a product comes out of this, but very often it doesn't[1], and the whole machine just keeps cranking on.

[0] I am being really nice here, but yeah, without trying to be nice ... there's a TON of corruption, and despite what the NAFO fanboys say, a LOT of procurement officers have their corner offices already picked out at LockBoNorthRay. It's an absolutely widespread practice; every company I've ever been with has BRAGGED about hiring on Procurement Officer X or Y from the latest bagged contract.

[1] Sometimes a Product comes out completely FUBARed[1.a], and then the techs and engineers have to figure out a way to make it fight. It would all be a hilarious TV show, but in real life all I can see are the dead airmen and sailors who'll be stuck inside when push comes to shove. Well, "shoving" someone other than five Arab teenagers playing with fireworks in an RV; you can fight those guys with a Garfield body pillow and some body odor.

[1.a] But within that ever-shifting requirements spec!


I think you've misconstrued the argument being made. The historical analogy is kind of interesting -- "smoke and mirrors" was a real technology with genuinely useful applications, but also attracted hucksters promising the impossible. Is it really that implausible that the supposedly world-ending powers of chat bots might have something in common?


I'm just kinda sad at how cynical so many of us have become. I mean, I see tons of comments like "But it hallucinates!!!!!" Yeah, no shit. At the same time, you're having a perfectly grammatical, natural, sensical conversation with a computer. Just a couple of years ago that was purely the realm of Star Trek.


I'm 58. It is hard not to be cynical when you've seen as many hype cycles as I have. Especially when you suspect the person hyping LLMs is the same person who was hyping blockchain and NFTs.


In my personal experience, cynicism seems more like a fixed personality trait than one rationally borne out of experience.

If you had bought a representative basket of tech stocks right before the dot-com bust (a classic "hype cycle") and just held on, you would be beating the S&P 500 today, even though most of those companies went bankrupt. Even on crypto, it is too early to call.

The fact that you bridge so easily from crypto to LLM suggests to me that you're not really considering these things on their merits, and are over-focused on who is behind various inventions and their cultural alignment rather than on the creations themselves. /2c


I've definitely got more cynical as I've got older. I'm sure I'm not alone in that.

>you bridge so easily from crypto to LLM

They are both things that have been massively hyped. I'm not saying they are the same in other ways.


Yeah, I have a few "lottery tickets". I put a smallish amount of money into a wide variety of things that I don't actively manage. It includes crypto, tech stocks, hell, even Magic: The Gathering cards from my childhood.

The plan is to see how it nets out when the kids are college-aged and pay as much as I can from that.


I'm not exactly a spring chicken either, and I 100% agree on the insane, unwarranted hype of blockchain and crypto (plenty of examples in my comment history).

But the huge difference with, for example, blockchain is that despite all the attempts by the hypesters to explain what revolutionary tech it was, I was always left thinking one of "Umm, OK, but what do I actually do with it?", "How is this better than just a normal database?", or "Actually, that's not possible." Blockchain was nearly always a cool tech looking for a use case beyond cryptocurrencies.

But with LLMs, I never felt that way. I was immediately blown away by how cool ChatGPT was, and how much real value I got out of it. Sure, I think there is plenty of overhyping (and I think it's a little funny how companies are trying to add that little star icon everywhere, even when it's just annoying), but even if I ignore what everyone says about it and just use the tech personally, by myself, I'm still left thinking "Damn, this is just magic" many times a day.


I am in my 40s myself, but it's difficult for me not to separate the very little utility we actually saw from NFTs from the utility I get from AI today. These are things people have in production, with millions of users getting actual value out of them.

If a very unreliable person told me it was raining, I'd check for myself, instead of just assuming it was sunny because they are prone to lying.

Given what we have working today, it's a matter of how transformative it all is, not whether it will be useful at all: more like the dot-com boom than blockchain.


I really blame crypto, especially NFTs, for this.


Luckily, I don't think this is trending worse; HN has always been like this.

If anything, it was worse a few years ago, especially during the peak of the pandemic and the Reddit issues. Nowadays people seem more positive.


The only difference between AI hallucination and a regular person misremembering is that sometimes the person will communicate the fact that they're unsure. It's not like the outcome is any different either - if you blindly trust, you've got the same problem.


> The only difference between AI hallucination and a regular person misremembering is that sometimes the person will communicate the fact that they're unsure. It's not like the outcome is any different either - if you blindly trust, you've got the same problem.

There are a few key differences. One being: if we compose a system with the output of a "regular person", and that person makes a mistake, they can be held accountable.


True - it is amazing. Shows how fast we get used to things.

If someone created an anti-gravity suit, then in a couple of years we'd just be whining about it - how uncomfortable it is, how inconvenient it is to be banging into people all the time ...


But you're not having a conversation with a computer; you are simply typing something in and the computer is producing a response.


How defensible do you think this distinction actually is? Like, if you were fighting a philosophical battle, is this the hill you would choose? Or would you retreat to more defensible ground?


Are we actually having a conversation? I am saying something and you're just producing a response to that.


"But it hallucinates!!!!!"

Yeah, and it's getting quantitatively better all the time. There is a clear progress curve.

Go back and look at the comical quality of GPT-2 outputs, and the things people were getting excited about on Twitter. I wrote GPT off at the time because it just seemed like Markov-model shenanigans to me; then I tried GPT-3.5 and had my mind blown (though I was still skeptical due to all the hallucinations), and GPT-4 has finally convinced me that this is inevitable, and that it's now just an iterative improvement game.


> "But it hallucinates!!!!!" Yeah, no shit.

> sensical conversation

Was this 'sensical' comment written by an LLM?


Whether you agree with the article or not, if AI is indeed going to have a future, it must eventually produce something of value to the "business world".


But I think that point of view is important for the 99% of people who will use AI, since most of them will be potential consumers, and the more you can hype and mislead, the better for some looking to take advantage. I mean, I'm seeing it with every new startup or company now, and I'm not even that super knowledgeable, but the numbers do not look correct at all. Everyone lives in the "business world".


At the end of the day, there will always be people making sweeping generalizations and counter-positioning themselves against the hype in order to drive engagement.

There are plenty of companies capitalizing on the AI hype cycle which won't manage to build durable businesses, but there are also plenty of use cases where AI is meaningfully accelerating people's workflows.

Situations where it's effort-intensive to create something from scratch but cheap to validate the output of a model and iterate on it until you're happy seem to be the sweet spot right now: code co-pilots, generative artwork, copywriting, etc. Granted, these are all incremental improvements rather than fundamental changes to how we work, so that aspect seems overblown, but writing it all off as smoke and mirrors is disingenuous.
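That sweet spot is essentially a generate-then-validate loop. A schematic sketch, where generate() and validate() are hypothetical stand-ins for a model call and a cheap check (tests, a linter, a human skim):

    # Schematic sketch of the "cheap to validate, iterate until
    # you're happy" workflow. generate() and validate() are
    # hypothetical stand-ins, not a real API.
    def generate(prompt):
        return "draft for: " + prompt   # imagine an expensive model call

    def validate(draft):
        return len(draft) > 10          # imagine tests, a linter, or a skim

    def iterate(prompt, max_tries=5):
        for _ in range(max_tries):
            draft = generate(prompt)
            if validate(draft):         # cheap check on costly-to-write output
                return draft
            prompt += " (try again)"    # fold feedback into the next attempt
        return None                     # give up rather than ship a bad draft

    print(iterate("write a docstring for parse_config"))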


> in order to drive engagement

This is where your comment went off the rails. Is it possible the author simply disagrees with you? Or is the future of AI so clear that the only reason a person could disagree is because they're driving engagement?


> Things like text to speech or image recognition that previously required a team of 10 ML engineers a couple of years to build now take a weekend.

---

> This might well be the most fraught moment in generative AI’s young lifespan.

> an anti-generative AI cultural backlash is growing

> to describe the generative AI world’s shadier products and promise

> it’s still remarkably nebulous as to what generative AI systems actually can and cannot do

> and the contexts in which generative AI and the magic lantern

> Report after report indicates that generative AI services are underperforming

> that propels the generative AI industrial complex onward

> And why a large percentage of Fortune 500 companies are reportedly using generative AI.

> the product of generative AI’s smoke and mirrors:

> is a threat to the generative AI enterprise at large.

> generative AI desperately needs that mythos.

> Unlike trends of the very recent past, generative AI has real gravitational pull

> Even critics admit there are some good uses for generative AI

> And so, with generative AI, we’re once again witnessing a core problem with entrusting

---

    User: What is "Generative AI"?

    OpenZephyrChat: Generative AI refers to the use of artificial intelligence algorithms that generate new content based on input data or a specific prompt provided by users. These systems can create text, images, music, and even software code using machine learning models trained on vast amounts of data. Some popular examples include GPT-3 for generating human-like text and DALL-E for creating original artwork from user prompts. Generative AI has the potential to revolutionize various industries by automating content creation, enhancing creative processes, and enabling more personalized experiences for users. However, it also raises concerns about copyright infringement, accuracy, and the potential for misuse of these powerful tools.

    User: Would automated machine transcription or image recognition of existing content count as "Generative AI"?

    OpenZephyrChat: No, while automated machine transcription and image recognition can be considered part of artificial intelligence, they do not fall under the category of generative AI. Generative AI refers specifically to systems that generate new content based on input data or a specific prompt provided by users, whereas these other applications are focused on analyzing existing content for various purposes like search engine optimization, accessibility, and content moderation.


    User: If somebody responds to an article about the harms of "Generative AI" by purposefully conflating "Generative AI" with systems such as machine transcription and image recognition, how should that be interpreted in regards to their argument being made in good faith or bad faith?
    
    OpenZephyrChat: If someone responds to an article about the harms of "Generative AI" by purposefully conflating it with systems like machine transcription and image recognition, it may indicate a lack of understanding or intentional misrepresentation. In either case, this kind of argument would not be made in good faith as it fails to accurately engage with the specific topic being discussed. Conflation can lead to confusion and misunderstanding about the potential risks and benefits associated with generative AI technology, which is why it's important for people to have a clear understanding of what these systems are capable of before making informed decisions or opinions about them.

I did not actually expect it to be quite so direct. Heh.



