AI Destroys Institutions (ssrn.com)
91 points by sean_the_geek 17 days ago | 157 comments


I would summarize the central claim of the paper as follows: the widespread use of AI to mediate human interaction will rob people of agency, understanding, and skill development, and will destroy the social links necessary to maintain and improve institutions, while at the same time allowing powerful unaccountable actors (an AI cabal) to interject themselves into those relations and impose their own institutional goals. By "institution" the authors mean a shared set of beneficial social rules, not merely an organization tasked with promoting them (think "justice" vs. the "US justice system").

The authors then break down the mechanisms by which AI achieves these outcomes (mechanisms that seem quite reductive and dated compared to the frontier; for example, they take it for granted that AI cannot be creative, that it can only work prospectively and can't react to new situations and events, etc.), and exemplify those mechanisms already at work in a few areas like journalism and academia.


AI is by its nature an entropy machine.


And I think that's about right. Despite the marketing, I think AI (especially if the hyped capabilities arrive) will be one of the most destructive technologies ever invented. It only looks good to blinkered and deluded technocrats.


We should be more worried about what AI will do to the average human's ability to think.

Not that I think there is a lot of thinking going on now anyway, thanks to our beloved smartphones.

But just think about a time when the human ability to reason has atrophied globally. AI might even give us true Idiocracy!


You think smartphones are the cause of the atrophy?

No sir, there was nothing there to begin with. If you read recent history, you'll see that it's full of stupidity, and of a few rabble rousers leading entire nations by the nose.

With the mollification of the smartphone, we've merely taken the edge off this killing machine.


> We should be more worried about what AI will do to the average human's ability to think.

I had a wake-up call on this yesterday. After a recent HN thread about the Zed editor, I decided to give it another try, so I loaded it up, disabled AI, and tried writing some code from scratch. No AI completion, no IntelliSense. Two things came to mind. First, my editor seems so much more peaceful without being told what to do. Second, it was a bit scary how lost I felt. It was obvious that my own ability to communicate through code had declined a bit since I began using AI coding assistants. It turns out that, as expected, coding assistants really are competitive cognitive artifacts. After that experience, I've decided that I am going to do at least part of my coding with all completions turned off. Unfortunately, at work you are paid to produce quickly, so I think my AI-free editor will have to be reserved for personal projects.
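
For anyone who wants to try the same experiment, something like this in Zed's settings.json should do it. This is a sketch from memory, so treat the key names as assumptions and check the docs for your Zed version:

    {
      // Assumed key: reportedly disables all AI features in recent Zed builds
      "disable_ai": true,
      // Or, more granularly, turn off inline completions only
      "features": {
        "edit_prediction_provider": "none"
      }
    }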

Further related to your statement about thought: the hallucinations persist, and just last night I got a response about '80s pop culture that was over 50% bullshit. Just imagine what intentional persuasion through LLMs will do to society. Independent thought has never been more important.


Similar rhetoric has always accompanied new technologies. Calculators, radio, cameras, phones, computers, smartphones, social networks...

Regulation does more harm than letting people learn from their mistakes.


Yes, we've had this before, and every time it was correct.


What was correct? That we became idiots with new tools?


> That we became idiots with new tools?

Yes, a little bit each time. But AI will finish the job.

Because earlier it was writing skills or attention spans that were at stake.

This time it is literally the ability to think.


Pray tell, what makes you impervious to the atrophy and mental decline caused by these inventions? Do you just not use "calculators, radio[s], cameras, phones, computers, smartphones, [or] social networks"? And so, you have avoided the trap of technologies through defiance?


I mean we were seeing this even before AI. It's the same type of person. To slop is human.

It's like for some reason we thought that some good percentage of us aren't just tribal worker drones who fundamentally just want fats, sugars, salts, dopamine and serotonin. People actively vote against things like UBI, higher corporate taxes, making utilities public. People actively choose to believe misinformation because it suits their own personal tribal narratives.


This is the way "AI" will deliver on the promise to become more intelligent than humans. Or at least than humans who believe in it.


Just from reading the abstract, it feels like the authors didn't even attempt to be objective. It's hard to take what they're saying seriously when the language is so loaded and full of judgments. It's the kind of language you'd expect in an op-ed, not a research paper.


I think you may be confused. This is not a research paper, it's an op-ed in a law journal.

SSRN is where most draft law review/journal articles are published, which may be the source of confusion.

For most other fields, it is a source of draft/published science papers, but for law, it's pretty much any kind of article that is going to show up in a law review/journal.


Ah okay, thanks for explaining it! Just based on the name, journal, and metadata, it seemed like a research paper, and I was honestly a bit surprised. But I obviously don't publish law research :))

From what you're saying it seems that for an insider this is clear. I guess that makes more sense then


It is literally called “Boston Univ. School of Law Research Paper No. 5870623”


It's also a submission to the UC Hastings Law Journal, as it also says right before that?

The automated tagging with a BUSL ID is just how BUSL's system for papers of any sort works.

For reference: I did my first year of law school at BUSL, so I'm very familiar with how it all works there :)

This is also very common elsewhere - everything that IBM used to release got tagged with a technical report number too, for example, whether it was or not.

In any case - it is clearly a piece meant to be persuasive writing, rather than deep research.

Law journals contain a mix of essentially op-eds and deeper research papers or factual expositions. They are mostly not like scientific journals, though some are basically all op-eds and some contain none.

Compare something like:

https://repository.uclawsf.edu/cgi/viewcontent.cgi?article=2...

Which is a piece in the UC Law Journal meant as an informative piece cataloguing how California courts adjudicate false advertising law. It does not really take a position.

with

https://repository.uclawsf.edu/cgi/viewcontent.cgi?article=3...

Which is a piece in the UC Hastings Law Journal meant, essentially, as an op-ed, arguing that dog sniff tests are bullshit.

I picked both of these at random from pieces in the UC Hastings Law Journal that had been cited by the Supreme Court of California. There are things that are even more factual and take zero positions, and things that are even more persuasive and less researchy, than either of these, but they are reasonable representatives, I think.


It's an essay. Being opinionated is a feature.


This is nothing but speculation written by lawyers in the format of a scientific paper to feign legitimacy. Of course those $500-an-hour nitpickers are terrified of AI: it threatens the exorbitant income of their cartel-protected profession.


Care to actually engage with the text instead of deciding to paint the entire profession with a crappy brush?

I guess I'll start with this: calling two well-known law professors "$500 an hour nitpickers" when they have been professors for 15+ years (20+ in Jessica's case) and earn nothing close to $500 an hour is not a great start.

I don't know if they are nitpickers; I've never taken their classes :)

Also, this is an op-ed, not a science paper. Which you'd know if you had bothered to read it at all.

You say elsewhere that you didn't bother to read anything other than the abstract because "you didn't need to". So besides being a totally uninformed opinion, complaining about something else being speculation while you are literally speculating on the contents of the paper is pretty ironic.

I also find it amazingly humorous given that Jessica's previous papers on IP have been celebrated by HN, in part because she roughly believes copyright/patents as they currently exist are glorified BS that doesn't help anything, and has written many papers as to why :)


I dismiss the paper for 3 reasons:

1. It is entirely based on speculation of what is going to happen in the future.

2. The authors have a clear financial (and status based) interest in the outcome.

3. I have a negative opinion of lawyers and universities due to personal experience. (This is, of course, the weakest point by far.)

Speculation on future outcomes is not by itself a bad thing, but when that speculation is formatted like a scientific paper describing an experimental result, I immediately feel I am being manipulated by an appeal to authority. And the authors' conflict of interest is about as irrelevant as pointing out that a paper on why Oxycodone is not addictive was paid for by Purdue Pharma. Perhaps Jessica's papers on IP are respected because they do not suffer from these obvious flaws? I owe the author no deference for the quality of her previous writing nor for her status as a professor.


What do you mean "formatted like a scientific paper?"

Law review articles look like this. Scientific journals don't own the concept of an abstract, nor are law review articles pretending to be scientific research.


What does "Research paper" mean to you?

Yeah, I haven't gotten through the 40 pages myself, but skimming the material, it does seem that the arguments rely on an assumption that AI will be employed in a particular manner. For example, when discussing the rule of law, they assert that AI will be making the moral judgments and will be a black box that humans simply consult to decide what to do in criminal proceedings. But that seems like it would be the dumbest possible way to use the technology.

Perhaps that's the point of the paper: to warn us not to use the technology in the dumbest possible way.


Nah, we know the punch lines to this one.

Worries about reduced quality of work are overblown, because there's always a human operator of the AI, reviewing the text between copying and pasting (no different from Stack Overflow!). Enter vibe-coding.

Worries about AI becoming malicious or Skynet are overblown. Again, it's just a text interface, so the worst it can do is to write text that says "launch the nukes". Enter agents and MCP.

It still staggers me that I occasionally read about a judge calling out a lawyer for citing non-existent cases (this far into ChatGPT's life). It was bound to happen to the first moron, but every other lawyer should have heard about it by then. And yet it still happens.

Dumbest possible way is what we do.


> Worries about reduced quality of work are overblown, because there's always a human operator of the AI, reviewing the text between copying and pasting

Unfortunately, no, there is not.

> I occasionally read about a judge calling out a lawyer for citing non-existent cases (this far into ChatGPT's life). It was bound to happen to the first moron, but every other lawyer should have heard about it by then. And yet it still happens.

There you go.


3. Same, including a press that is no longer unbiased and serves as propaganda for political opinions.

One might say that deinstitutionalization is actually good for plurality of opinions (some call it democracy). If AI causes it, I'm fine with that.


And if AI leads to a situation in which the very ability to separate factual reporting from propaganda is almost entirely destroyed for anyone besides those in control of it, will you still be fine with it then?

Pointing to a system with problems and then saying you have no issue with something that has the potential to be orders of magnitude more problematic seems an odd approach to me.


Those in control of it aren't able to distinguish factual reporting today. Remember a few months ago when all the so-called "reputable" news outlets were screaming about an alleged terror attack against the UN that was caught, and it turned out to be nothing but a basic SMS fraud operation? https://www.bbc.com/news/articles/cn4w0d8zz22o


> Those in control of it aren't able to distinguish factual reporting today.

Can't tell if you're referring to media outlets or AI companies here.

I do remember this incident; it was an embarrassment for the outlets that jumped on that story, especially because the general public has come to know there is an overriding tendency towards sensationalism.

But surely this is very different from actual outright propaganda operations?


I'm talking about the media companies. AI companies aren't any better at it, but at least they don't go around sanctimoniously claiming to be the source of truth in the same way as journalists do.

And it isn't different than outright propaganda operations because it is an outright propaganda operation. If you read the link in my comment, you will see that the report is just repeating claims from the government nearly verbatim.


I'm not going to take up the mantle of trying to dissuade you from your beliefs, but needless to say, if you think CNN's sensationalism-for-views model is equivalent to the likes of Musk actively trying to dismantle Wikipedia [0] because he wants to rewrite reality (never mind what Grok is currently doing [1]), then you need to take a hard look in the mirror.

[0] https://www.wired.com/story/elon-musk-launches-grokipedia-wi... [1] https://www.bbc.com/news/articles/ce8gz8g2qnlo

P.S. feel free to "do your own research" if the above are included in your supposed propaganda operation conspiracy.


Why do you lie and say he "tried to dismantle" Wikipedia when what he actually did was start a competitor?


My apologies, I forgot how far he actually went [0]

[0] https://www.theatlantic.com/technology/archive/2025/02/elon-...

> start a competitor

Very charitable way of referencing an observably obvious disinformation generator.


Ok, that's a little better. The first link was just referring to him starting his own. I still think "dismantle" is not an accurate description of asking people not to fund it, but it's within the margin of error. I'm paywalled, though, so I can't read the whole thing.

"Charitable" is irrelevant to my reference because "competitor" is a term completely devoid of any indication of the quality of the product.


> The first link was just referring to him starting his own.

He's pushing a platform that uses AI to generate content riddled with far-right misinformation. The context is that he didn't like that Wikipedia now chronicles the very real fact that he made a Nazi salute. This doesn't constitute just starting an alternative; this is actively pushing an agenda of misinformation while demonizing platforms he doesn't like. He can't buy Wikipedia like he did Twitter, so he's pushing to undermine and harm it, via defunding or other means (see government threats to "investigate" while Musk was running DOGE).

> "Charitable" is irrelevant to my reference because "competitor" is a term completely devoid of any indication of the quality of the product.

I was being nice; your characterization of Musk's platform as a genuine "competitor" is BS. Every indication is that he's doing this because he wants to choose what constitutes fact and what doesn't.


Would you follow AI-generated news? Not me, and I'm sure I'm not the only one.

If AI leads to decentralisation of the press, that sounds better to me. We certainly do not need one or a few big entities that follow political tendencies.


Not if I can identify it, which I fear is going to become a harder task in the future.

> If AI leads to decentralisation of the press, that sounds better to me.

Seems optimistic to me, given that the trend with pretty much everything AI since ChatGPT was announced has been to concentrate as much power as possible in the hands of a few big tech companies.

As an added example: decentralization was a big promise of crypto; at present, it's hard for me to see how that's lived up to the promise. I don't see how the current trend with control over AI will work out any better in this regard.


Local AI exists as well. It's just hard to measure it.

What's wrong with crypto decentralisation?


> Local AI exists as well. It's just hard to measure it.

Yeah, but you're not going to get your news from local AI, are you? You have to connect it to the internet and have it look up news for you, but if a lot of what's found online is AI-generated and there isn't a clear way to distinguish it, then how are you better off?

> What's wrong with crypto decentralisation?

It hasn't really happened? To my knowledge, a large proportion of crypto volume goes through a handful of centralised exchanges. The traditional finance sector is also increasing its presence and hold.


I find consumption of AI-generated news useless. My reaction was primarily about the decentralisation of AI generation tools.

I don't have numbers for crypto, but exchanges are not the only way to buy crypto. And they do not have the power to regulate its value. Isn't that a sign of decentralization?


Tech workers know it all, no way a non-tech job could be worth anything more than 20 dollars an hour.


On one hand, you're right that people tend to dismiss the complexities of jobs they are unfamiliar with, the IT crowd included.

On the other hand, when countries feel the need to legislate new laws requiring that documents be written in human-understandable language, one doesn't need to be an expert to suspect systemic rot in those industries. It is totally valid to cry foul when even a parliament is concerned about being able to read texts produced at $500/h.

https://www.congress.gov/bill/111th-congress/house-bill/946

https://www.legislation.govt.nz/act/public/2022/0054/latest/...


Enough people have gotten owned for using these things in court that I think the more likely response is laughing at the ignorance rather than feeling threatened.


1. Get owned in court because you used an LLM that made a poor legal argument.

2. Get owned out of court because you couldn't afford the $100K (minimum) that you have to pay to the lawyers' cartel to even be able to make your argument in front of a judge.

I'll take number 1. At least you have a fighting chance. And it's only going to get better. LLMs today are the worst they will ever be, whereas the lawyers' cartel rarely gets better and never cuts its prices.


Does it cost 100k minimum in the US to get a lawyer? Or am I misunderstanding something?


There is no "get a lawyer." You pay by the hour. And there are months to years of procedure before the judge even knows your lawsuit exists.


And the minimum to file a lawsuit comes out to $100K at standard rates? Or was that just a random number?


It's going to cost you around $100K if you're lucky, and it could be a lot more. That's what I mean. There are no exact numbers because it depends on how many hours of lawyering it takes to get through the endless process and procedure (designed by lawyers, of course) before you ever even go to court. You can't know that in advance. And if the other side has more money than you, they know it's to their advantage, so they will try to drag out the process and bleed you dry to gain leverage or even force you to drop the case.


Many lawyers work on contingency and take a set proportion of the settlement if they win instead of charging hourly.


That's assuming you are the one doing the suing and not the one getting sued. Even then, it applies to only very limited types of cases. And even then, the contingency is typically 33% (and can sometimes eat over 50%) of your damages awarded, so the cost is massive in any case.

There is the option of small claims court which is massively cheaper, but it has very low limits for damages, so it's barely worth the effort.


> LLMs today are the worst they will ever be

Just wait till you see tomorrow's, trained on the slop fabricated by today's.


Please go to court using only ChatGPT as your legal defense; I'd love to see it. It's going to make for great entertainment. The judge, a little bit less so.

You can criticise the hourly cost of lawyers all you like, and it should be a beautiful demonstration to people like you that no, "high costs mean more people go into the profession and lower the costs" is not and has never been a reality. But to think that any AI could ever be efficient in a system such as common law, the most batshit insane, inefficient, "rhetoric matters more than logic" system, is delusional.


Yeah, unfortunately, it's the lawyers that are using ChatGPT.


Threatens income? It promises to reduce costs, which will lift profit.


> This is nothing but speculation

Did you read the paper?


It's written in the future tense, so I can safely call it speculation. I've read the abstract, which is all I need to decide that the full text is not worth my time.


Cool, then we can safely give your comments exactly the same treatment - since they are completely uninformed speculation about a paper you haven't read.


Is he incorrect that the paper is speculating about future events? I don't think it's completely uninformed either. He said that he's read the abstract, which is supposed to give you an impression of the structure of the argument. Why don't you engage with the criticism?


There is no criticism. He did not read the paper.


I read the entire paper, and his criticism is spot on. I even read through many of the references, which, in my spot checks, don't support the claims in the paper. Very disappointing work, IMHO.


Cool. Perhaps you should have criticized the paper and requested feedback instead of defending someone who did not read the paper!


I did both! I'm not concerned with defending anyone, I'm interested in truth. His criticism was sound, and your comments contribute even less to the discussion than his. Very disappointing.


> Is he incorrect that the paper is speculating about future events? I don't think it's completely uninformed either.

Most people would say this is a defense of the person, or at least a defense of the person's choice to not read the full paper. It is no fun to debate with intellectual dishonesty.


Anyone with experience reading research papers professionally will tell you that one of the responsibilities of a paper's abstract is to meaningfully convey the level of evidence and certainty behind the paper. This paper did very well at that: the abstract indicates it's more of an essay/opinion piece than a scientific one. This is blindingly obvious, and it was a simple observation that everyone for some reason dismissed not on merit, but because the person who made it hadn't read the whole paper, which for a 40-page document is an incredibly high bar that is likely not met by 90% of the people commenting here.

Anyway, I'm tired of this now.


And you must have read all 40 pages of it, right? Because if not, you are a hypocrite. I claim that the Bible is the literal truth. Oh, you haven't read every word of the Bible? Your arguments against me are worthless!


I did actually read all 40 pages of it. I frequently read law journal articles, along with lots of other types of journals and papers.

I also used to maintain up-to-date reading lists of various areas (compiler optimization, for example) because I would read so many of the papers.

Let me give you a piece of advice:

First, gather facts, then respond.

Here you start by sarcastically asserting I wouldn't have read it, but it would generally be better to ask if I read it (fact gathering), and then devise a response based on my answer. Because your assertion is simply wrong, making the rest of it even sillier.

As for the strawman about the Bible: I'm kinda surprised you are really trying to equate not reading any part of something with not reading every part of something, and really trying to defend what you did here, instead of just owning up to it and moving on.

This speaks a lot more about you than anything else.

That said -

When you claim that everything in a book is the literal truth, you only have to find one part that is not the literal truth to prove the claim wrong. Which may or may not require reading the entire thing to start (if it turns out your counter-claim is wrong, you at least have to read on and find another).

In the original comment, you'll note your claim was "This is nothing but speculation", i.e., all of the paper is speculation.

If we are being accurate, this would require you reading the entire thing to be able to say all of it is speculation. How could you know otherwise?

Even if we were being nice, and treat your claim colloquially as meaning "most of it is speculation", this would still require reading some of the paper, which you didn't do either.

Perhaps you should just quit while you are behind, and learn that when you screw up, the correct thing to do is say "yeah, I screwed up, I should have read it before saying that", instead of trying to double down on it.

Doubling down like this just makes you look worse.

As an aside: I was always an avid reader, and very bored in synagogue, so I have read every word of a number of books of the Hebrew Bible because it was more interesting than paying attention to the sermons.


His criticism that the paper is speculation is spot on. Many of the references don't support the claims they are cited for. It's fascinating to me that you want to argue the poster's standing to make a criticism more than you want to actually discuss the content of the paper.


It's a particularly weird criticism given that Danny is a lawyer and has experience in the CS research community. He is especially well suited to address a criticism that the authors are trying to trick people into thinking their work is a scientific paper, which is plainly a ridiculous criticism.


I'd love some clarity on that.

The linked page says this:

    How AI Destroys Institutions

    77 UC Law Journal (forthcoming 2026)

    Boston Univ. School of Law Research Paper No. 5870623

    40 Pages Posted: 8 Dec 2025 Last revised: 13 Jan 2026

What exactly is this document? It reads like a heavily cited op-ed, but is coming out of a law school from a professor there and calls itself a "research paper". Very strange.

EDIT: I looked up the UC Law Journal, and I think I was misled because I'm not familiar with the domain. They describe themselves as:

> Since 1949, UC Law Journal, formerly known as Hastings Law Journal, has published scholarly articles, essays, and student Notes on a broad range of legal topics. With roughly 100 members, UCLJ publishes six issues each year, reaching a large domestic and international audience. Each year, one issue is dedicated to essays and commentary from our annual symposium, which features speakers and panel discussions on an area of current interest and development in the law.

So this is congruent with the Journal's normal content (it's an essay), but having the document call itself a "research paper" conjured an inflated expectation about the rigor involved in the analysis, at least for me.


> So this is congruent with the Journal's normal content (it's an essay), but having the document call itself a "research paper" conjured an inflated expectation about the rigor involved in the analysis, at least for me.

Right. And I think it is weird that people immediately leapt to this being some sort of deception by the authors, and weird that when a lawyer with experience in both domains clarified this, people doubled down.


Yep, I agree that jumping to the "deception" angle would be pretty far down on my list. I always admired the simplicity of HN's guideline to focus on curiosity, since it has far-reaching effects on the nature of the discourse.


> Even if we were being nice, and treat your claim colloquially as meaning "most of it is speculation", this would still require reading some of the paper, which you didn't do either.

I did read some of it. The abstract. Which is there for the specific purpose of giving readers a summary to decide whether it is worth their time to read the whole thing.

And, yeah, obviously I didn't mean literally all of it, because that just isn't how people talk; e.g., the authors' names are not speculation. But the central premise of the paper, "How AI Destroys Institutions", is speculative unless they provide a list of institutions that have been destroyed by AI and prove that they have. The institutions they list, "the rule of law, universities, and a free press," have not been destroyed by AI, so the central claim of the paper is speculative. And speculation on how new tech breakthroughs will play out is generally useless, the classic example being "I think there is a world market for maybe five computers," attributed to the CEO of IBM.

Furthermore, there's their claim here:

> The real superpower of institutions is their ability to evolve and adapt within a hierarchy of authority and a framework for roles and rules while maintaining legitimacy in the knowledge produced and the actions taken. Purpose-driven institutions built around transparency, cooperation, and accountability empower individuals to take intellectual risks and challenge the status quo.

This just completely contradicts any experience I have ever had with such institutions, especially "empower individuals to take intellectual risks and challenge the status quo". Yeah. If you believe that, then I've got a bridge to sell you. These guys are some serious Kool-Aid drinkers. Large institutions are where creativity and risk taking go to die. So no, I'm not reading 40 pages by these guys.

You can tell a lot from a summary, and the entire premise that you have to read a huge paper to criticize it is just bullshit in general.


None of these paper's arguments are AI specific. The IRS doesn't need AI to make mistakes and be unable to tell you why it did so. You can find stories of that happening to people already.


i think when most people bring up mistakes that these models make, much of their concern is that little can be done.

when one of the juniors makes a mistake, i can talk to them about it and help them understand where they went wrong; if they continue to make mistakes, we can change their position to something more suited to them. we can always let them go if they have too much hubris to learn.

who do we hold to account when a model makes a mistake? we’re already beginning to see, after major fuckups, companies null-routing accountability into “not our fault, don’t look at us, the ai was wrong”

the other thing is, if you have done a good job selecting your team, you’ll have people who understand their limits, who understand when to ask for help, who understand when they don’t know something. a major problem with current models is that they will always just guess or reach for something random rather than halt.

so yes, people will make mistakes, but at least you can count on being able to mitigate them afterwards.


> who do we hold to account when a model makes a mistake?

First, we stop anthropomorphising the program as capable of making a "mistake". We recognise it merely as a machine providing incorrect output, so we see that the only mistake was made by the human who chose to rely upon it.

The courts so far agree. Judges are punishing the gulled lawyers rather than their faux-intelligent tools.


Who was held to account when the IRS made a mistake and sent me a demand letter for over $100K of "unpaid taxes" I didn't owe? Who compensated me for the hours I spent on hold and the money I had to pay an accountant to deal with it?


I think the argument makes a central mistake in putting too much trust in institutions. I don't disagree with the conclusions, but the premise of blindly trusting institutions simply because they've been around for a long time kept me from taking most of their arguments seriously, despite opening the article thinking I would agree.


> Civic institutions—the rule of law, universities, and a free press—are the backbone of democratic life. They are the mechanisms through which complex societies encourage cooperation and stability, while also adapting to changing circumstances.

https://www.pewresearch.org/politics/2025/12/04/public-trust...

In the 1960s, trust in institutions was around 70%.

In 2025 it's about 17%.

This is not a case of trust sitting at 70% until 2023 and then AI suddenly dropping it. If anything, AI doesn't even register on the graph.

So the correlation here is practically non-existent. Gallup and Pew show similar trends for journalists and universities.

You don't get to blame AI for this.

There is an interesting bump, though. How did Clinton improve the reputation and then Bush destroy it? Or is that a false hump?


It's not a false hump, but why do you assume the president caused it just because it happened at the same time?


Who do institutions serve? To me, AI democratises information. It allows access to information that would normally be gatekept. AI reduces barriers, and they don't like that, because those barriers gave them authority.


> Purpose-driven institutions built around transparency, cooperation, and accountability empower individuals to take intellectual risks and challenge the status quo.

I am not sure if I am off-topic, but I am having a lot of trouble with this statement. Institutions are often opaque, and I have never belonged to an institution that empowered me to "take intellectual risks and challenge the status quo." Quite the contrary.


"purpose-driven" is the relevant qualifier here.


> The affordances of AI systems have the effect of eroding expertise

Recently, it so happened that I spent an hour reverse engineering and documenting a piece of a system. A co-worker asked an LLM to do the same. It generated some really nice documentation.

The difference is, I (as a team member) now have the understanding. Generating the documentation does not increase the understanding of the team.


I fear the title of this article is going to drive most of the conversation.

I haven't read through the whole thing yet, but so far the parts of the argument I can pull out are about how institutions actually work, as in a collection of humans. AI, as it currently stands, interacts with humans in ways that hollow out the kind of behavior we want from institutions.

“Perhaps if human nature were a little less vulnerable to the siren’s call of shortcuts, then AI could achieve the potential its creators envisioned for it. But that is not the world we live in. Short-term political and financial incentives amplify the worst aspects of AI systems, including domination of human will, abrogation of accountability, delegation of responsibility, and obfuscation of knowledge and control”

An analogy that I find increasingly useful is that of someone using a forklift to lift weights at the gym. There is an observable tendency, when using LLMs, to cede agency entirely to the machine.


> Perhaps if human nature were a little less vulnerable to the siren’s call of shortcuts, then AI could achieve the potential its creators envisioned for it.

I can't see how, given that the potential is 99% shortcuts.


This article's claims are also interesting to me in terms of large-scale software systems. The authors say:

> They (AI systems) delegitimize knowledge, inhibit cognitive development, short-circuit long term thinking processes, and isolate humans by displacing or degrading long term human connection.

This is a pretty good summary of the worry I've seen expressed about extensive use of AI to build large pieces of software. Large pieces of software aren't just the code that describes them. They exist, in some sense, in their authors as well.

Effective projects seek to expand understanding of software systems across the organization so that relevant decisions can be made about their future. By relegating much of the decision-making around the structure of the software to AI, you lose the systemic knowledge shared across the organization.




Not really a dupe if no one really discussed it. And I'm glad; I didn't see it the previous time :)


"Abundance of books makes men less studious" - 15th-century Venetian editor, Hieronimo Squarciafico


This dire warning against AI echoes the anxieties of a much earlier elite: the late-medieval clergy facing the invention of the printing press. For centuries, they held a privileged monopoly on knowledge, controlling its interpretation and dissemination. The printing press threatened to shatter that authority by democratizing access to information and empowering individuals.

Similarly, today's critics, often from within the very institutions they defend, frame AI as a threat to "expertise" and "civic life" when in reality, they fear it as a threat to their own status as the sole arbiters of truth. Their resistance is less a principled defense of democracy and more a desperate attempt to protect a crumbling monopoly on knowledge.


> a desperate attempt to protect a crumbling monopoly on knowledge

More like a war on traditional, human-based knowledge, waged by people who believe that, by cornering the world's supply of RAM, SSDs, GPUs, and whatnot, they can achieve their own monopoly on knowledge under the pretense of liberating it. Note that running your own LLM becomes impossible if you can no longer afford the hardware to run it on.


Better that I'm forced to rent an LLM from a tech monopolist for a few dollars than forced to hire a member of the lawyers' cartel for $500 an hour.


Come now. You mean the highly regulated, more competitive world of law? That too, as it is practiced in America? The once capital of economic competition?

That “cartel”?

Vs the leaders of an industry that built their tools through insane amounts of copyright infringement, and have forced the coining of "enshittification" to describe their all-pervasive business strategies?

The same industry which employs acqui-hire to find ways to cull competition?


The industry where you can be a paralegal for 20 years but not allowed to even attempt the bar exam because you haven't paid your $250K and 3 years of lost earnings to get your degree from the lawyers' cartel? That "competitive" industry?


Yes, it's very competitive. If you were in the unfortunate position of needing a state-appointed attorney to represent you against fallacious claims, you would appreciate the scrutiny and regulation that by and large provides fair representation to all.

The legal profession believes that 3 years of study is required for all lawyers to fully immerse themselves in the study of law, and that without that, something could be lost. Many lawyers think the third year is probably overkill, but these are also amongst the smarter lawyers who recognize that many people come to the profession with no prior interest, and that overall it's preferable to enforce high standards. You could somehow test for whatever it is law school transmits to its pupils, and offer an exam that guarantees that lawyers have been exposed to, and in some sense understood, all the various aspects of the degree, but then the exam just becomes more difficult and law school becomes even more of a prerequisite.

Lawyers are like airline pilots in that lives are always in the balance, and even more critically, they are foundational pillars of a just society; allowing "just anyone, even a smart test taker" to become a lawyer is less favorable than trying to improve on the current system.

The current bubble's effect on hardware is alarming, but if they think they are going to sustain a permanent economic manipulation, they are deluded. The US's hold on export controls is eroding at a faster rate, and China will be making "good enough" all the faster if the price/spec ratio stays absurdly high.

Cryptocurrency makers can impose artificial limits, but no amount of limiting access to GPT-next will cut off access to "good enough".


Surely we'll all beat monopolies by running our own local LLMs, storing whole blockchains on our local storage, building our own atomic power plants, flying our own airlines and launching our own satellites via our own rocket fleets. And producing our own trillion-transistor silicon in our own fabs.

We just have to start printing our own money and buying us some pocket armies and puppet politicians first.


It's so ridiculous to make this argument when the people who stand to benefit the most from this technology are the massive corporations that can subsidize its compute and capital costs. Is it democratization when Google pulls something you wrote on your website and runs it through an LLM so it can serve it directly to a user? You say people see this as a threat to their status, but the reality is that this is a massive consolidation of the information economy of the internet in the hands of a few corporate interests.


The people who stand to benefit are you. If I have to pay a lawyer $1000 to review a contract, or spend $10 in tokens, I win. OpenAI may make $9 off of those tokens, or they may make $1. But that doesn't matter at all to me. I care about the $1000 vs $10, not the $9 vs $1.


$10 until the contract is voided in court because you didn't know the law.


Does the quality of the work matter at all to you?


It certainly does, but that's not guaranteed with humans either. Nor is it the only factor that matters. It's a cost-benefit tradeoff. If I am on trial for a crime, obviously I will pay for the quality. If I want to know what some language in a simple contract means, I will ask an LLM.


If what you say were true, why do people from outside those institutions also try to warn others about the potential downfall of "expertise" and "civic life"? Are they just misinformed? Paid by these "institutional defenders"? Or what is your hypothesis?


In most cases those people are members of the upper class who hold credentials issued by those institutions, and often are in professions protected by state-enforced cartels where the ticket for entry is one of said credentials.


> In most cases those people are members of the upper class who hold credentials issued by those institutions

Right, but in my comment I'm explicitly asking about the ones that don't have any relation yet seem to defend it anyway. "Those people don't actually exist" isn't really an argument...


So you're saying codemonkeys are mad they don't get seen as the 'cool guys', so we have to kill the jobs the 'cool guys' have. The codemonkeys will never be cool; just accept it, there's no way to fix it. These cool guys will for the most part stay 'cool' even if you take away their jobs right now.


> Are they just misinformed?

Not all of them, but given that the same questionable or outright false assumptions (e.g. that AI companies are doing inference at a loss, the exaggerated water consumption numbers, etc.) keep getting repeated on YouTube, Reddit, and even HN, where the user base is far more tech-savvy than the general population, I think misinformation is the primary reason.


The alarm isn't coming from outside the institutions; it's coming from a wider, more modern clergy. The new priestly class isn't defined by a specific building, but by a shared claim to the mastery of complex symbolic knowledge.

The linguists who call AI a "stochastic parrot" are the perfect example. Their panic isn't for the public good; it's the existential terror of seeing a machine master language without needing their decades of grammatical theory. They are watching their entire intellectual paradigm—their very claim to authority—be rendered obsolete.

This isn't a grassroots movement. It's an immune response from the cognitive elite, desperately trying to delegitimize a technology that threatens to trivialize their expertise. They aren't defending society; they're defending their status.


The first weakness of your claim is that it is inherently one of the elite.

You read the works of the cognitive elite when they support AI. When most people sing its praises, it's from the highest echelons of the white-collar priesthood.

AI is fundamentally a tool of the cognitively trained, and shows its greatest capability in the hands of those capable of assessing its output as accurate at a glance. The more complex the realm, the deeper the expertise needed to find value in it.

Secondly, linguists are not the sole group espousing concerns about these tools. I've seen rando streamers and normal folk in WhatsApp groups, completely disconnected from the AI elite, hating what is being wrought. Students and young adults outright wonder if they will have any worthwhile economic future.

Perhaps it is not a “movement”, but there is an all pervasive fear and concern in the population when it comes to AI.

Finally, your position is eerily similar to the dismissal of concerns from mid-level and factory-floor workers in the 80s and 90s. That was forgivable given the then-prevalent belief that people would be retrained and reabsorbed into equivalently sustaining roles in other new industries.


> Their panic isn't for the public good; it's the existential terror of seeing a machine master language without needing their decades of grammatical theory.

That's a wild claim. Every linguist worth their salt has known that you don't need grammatical theory to reach native-level fluency. Grammar being descriptive rather than prescriptive is the mainstream idea, and has been since long before LLMs.

If you actually ask them, I bet most linguists will say they are not even excellent teachers of English (or whichever language they studied most).

Plus, "stochastic parrot" was coined before ChatGPT. If linguists really felt that threatened back when people's concerns over AI were like "sure, it can beat a Go master, but how about League of Legends?", you have to admit they had some special insight, right?


You've mistaken the battlefield. This isn't about descriptive grammar. It's about the decades-long dominance of Chomsky's entire philosophy of language.

His central argument has always been that language is too complex and nuanced to be learned simply from exposure. Therefore, he concluded, humans must possess an innate, pre-wired "language organ"—a Universal Grammar.

LLMs are a spectacular demolition of that premise. They prove that with a vast enough dataset, complex linguistic structure can be mastered through statistical pattern recognition alone.

The panic from Chomsky and his acolytes isn't that of a humble linguist. It is the fury of a high priest watching a machine commit the ultimate heresy: achieving linguistic mastery without needing his innate, god-given grammar.


> LLMs are a spectacular demolition of that premise.

It really isn't. While I personally think the Universal Grammar theory is flawed (or at least Chomsky's presentation is flawed), LLMs don't debunk it.

Right now we have machines that recognize faces better than humans. But that doesn't mean humans do not have some innate biological "hardware" for facial recognition that machines don't possess. The machines simply outperform the biological hardware with their own different approach.

Also, I highly recommend you express your ideas with your own words instead of letting an LLM present them. It's painfully obvious.


I do not see how it can be claimed that "LLMs are a spectacular demolition of that premise", because LLMs must be trained on an amount of text far greater than what a human is exposed to.

I have learned one foreign language just by being exposed to it almost daily, by watching movies spoken in that language, without using any additional means like a dictionary or a grammar (because none were available where I lived; this was before the Internet). However, I was helped in guessing the meanings of the words and the grammar of the language not only by seeing what the characters of the movie were doing, correlated with the spoken phrases, but also by the fact that I knew a couple of languages that had many similarities with the language of the movies I was watching.

In any case, the amount of spoken language to which I had been exposed over the year or so until becoming fluent was many orders of magnitude less than what is used to train an LLM.

I do not know whether any innate knowledge of some grammar was involved, but certainly the knowledge of the grammar of other languages helped tremendously in reducing the need to be exposed to greater amounts of text, because after seeing only a few examples I could guess the generally applicable grammar rules.

There is no doubt that the way an LLM learns is much dumber than how a human learns, which is why it must be compensated for by a much bigger amount of training data.

The current inefficiency of LLM training has already caused serious problems for a great number of people, who either had to give up buying various kinds of electronic devices or had to accept devices of much worse quality than they had desired and planned for, because prices for DRAM modules and big SSDs have skyrocketed due to the hoarding of memory devices by the rich who hope to become richer by using LLMs. So I believe it has been proven beyond doubt that the way LLMs learn is, for now, not good enough and certainly not a positive achievement: more people have been hurt by it than have benefited from it.


> "stochastic parrot" was coined before ChatGPT.

But not before LLMs. https://en.wikipedia.org/wiki/Stochastic_parrot


> it's the existential terror of seeing a machine master language without needing their decades of grammatical theory.

Rehashing text in a language is not mastering that language, and no, it is not feared by linguists.


It was the same clergy (or rather parts of it) that used the printing press to great success.

Martin Luther used it to spread his influence extremely quickly, for example. Similarly, the clergy used new innovations in book layout and writing to spread Christianity across Europe a thousand years before that.

What is weird about LLMs, though, is that they aren't a simple catalyst of human labor. The printing press or the internet can be used to quickly spread information that you have previously compiled or created. These technologies both have a democratizing effect and have objectively created new opportunities.

But LLMs are to some degree parasitic on human labor. I feel like their centralizing effect is stronger than their democratizing one.


Martin Luther was clergy, but he was absolutely not "the same clergy."


Everyone who tells the story of the reformation leaves out that Martin Luther also used this new technology to widely disseminate his deranged anti-Semitic lies and conspiracies, leading to pogroms against Jews, a hundred years of war across Europe, and providing the ideological basis for the rise of Nazism.


You're right that later in his life he spread antisemitism and other terrible opinions as he was extremely elitist towards the peasantry. Definitely not a fan of that sort of thing.

But I didn't want to make a value judgement about Martin Luther's ideological legacy; I wanted to introduce some nuance into the narrative about disruptive innovation.


I think this could be applied to most fields where LLMs move in. Let's take the field we are probably most familiar with.

Currently, companies are shifting from enhancing their employees' productivity by giving them access to LLMs to offshoring to lower-cost countries and giving the cheap labor LLMs to bypass language and quality barriers. The position isn't lost; it's just moving somewhere else.

In the field of software development this won't be the anxiety of an elite or a threat to expertise or status, but rather a direct threat to livelihoods, as people won't be hired and will lose access to the economy until they retrain for a different field. On top of that you can argue about authority and control, but it is economic factors that produce the anxiety.

In that sense, doesn't any knowledge work amount to a monopoly on knowledge? The entire point is to have experts in a field who know the details and have the experience, so that things can be done as expected, since not many have the time or the capability to get into the critical details.

If you believe there is any goodwill when you can centralize that knowledge in the hands of even fewer people, you reproduce the same pattern you are complaining about, especially when it comes to how businesses tweak their margins. It really is a force multiplier and an equalizer, but it is a tool that can be used in good or bad ways depending on how you look at it.


This is a criticism of the author's backgrounds rather than the content of the article.


"Oxycodone is Not Addictive" by employee of Perdue Pharma.

It absolutely makes sense to criticize the author's background.


True. I myself try to read articles without looking up the authors.

It is hard, though. When someone makes an extraordinary claim, I feel the urge to look them up. It is a shortcut to assessing the legitimacy of that claim.


Most of the comments here are. HN hates lawyers.


Many on HN are sad that, even with tech workers/businesses having extreme wealth, they're still not seen as better than the 'old elite'.


It is funny watching people debate at length with your LLM word-vomit. I'm not sure whether you yourself are convinced that the soup you've copypasted across multiple replies means anything, but apparently some people are convinced enough to argue with it, so this is pretty great satire in one way or another.


it feels good to watch the aforementioned clergy kvetch about AI while multiple multi-trillion dollar corporations backed by a friendly administration continue to run their bulldozers :)


The printing press was also used to print witch-hunting books and caused 200 years of mass hysteria around witches and witch trials.

Before the printing press, only the clergy could "identify" witches, but the printing press "democratized knowledge" of witch identification at a larger scale.

The algorithmic version of "It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so" is going to cause huge trouble in the short and medium term.


An institution is worth nothing without the spirit, humanity, and exchange of knowledge among the people behind it. The fostering of real expertise is difficult, but without this expertise you are doomed to believe whatever your Corporate AI is telling you.

So is the AI better?

No. It's quicker, easier, more seductive.


This is a good analogy, but you made it backwards. The "Clergy" fears the "Printing Press", as it acts as a tool of decentralized information spreading. But LLMs are not decentralized and thus are not the "Printing Press". LLMs are what the "Clergy" (say, for example, all the AI companies led by billionaires in cahoots with the west's most powerful government) uses to suppress the real "Printing Press" (the decentralized, open internet, where everybody can host and be reached).


this is much, much closer to going in reverse, back to when the church was the decider, than to liberating knowledge the way the printing press did.

the church did the thinking for the peasants. the church decided what the peasants heard, etc… this is moving absolutely in that direction.

the models now do the thinking for us, the ai companies decide what we get to see, and these companies decide how much we pay to access it. this is the future.


Isn't that just an ad-hominem against the writers? A threat to the status quo is still a threat to people and could have negative consequences.


Is that what happened? In Nexus, Harari looks at this exact situation, the invention of the printing press, and shows how the clergy used it to stoke witch hunts (ahem, misinformation) for decades, if not centuries. It was not until hundreds of years after the invention of the printing press that we had the Enlightenment. What gave rise to the Enlightenment? Harari argues it was modern institutions.

It's not so simple that we can say "printing press good, nobody speak ill of the printing press."


> Civic institutions—the rule of law, universities, and a free press—are the backbone of democratic life.

I disagree. The backbone of democratic life is the rule of law and freedom of speech, which makes a big difference. The press has historically been a counter-power inquiring into privileges and breaches of the rule of law, and thus promoted freedom of speech, but almost only inasmuch as it served the interests of the emerging merchant bourgeoisie. And we are long past that. Universities have never been liberal forces: they backed the Church and refused paradigm shifts. They are still very conservative, though in a peculiar sense, as leftist conservatives.


I was amused at how they quote WarGames.


The arguments that "AI Destroys Institutions" seem pretty iffy. I scrolled down to see what institutions had been destroyed; it went on about Elon Musk's DOGE destroying stuff, but the closing of, say, USAID had, I think, zero to do with AI.

I'm in the UK, and I don't think any institutions here have been destroyed or even noticeably harmed by AI. In the US there is general chaos under Trump, so it may be hard to differentiate.


[flagged]


> > universities [...] empower individuals to take intellectual risks and challenge the status quo.

What a misleading quote you made there. Why?

Full quote:

> Purpose-driven institutions built around transparency, cooperation, and accountability empower individuals to take intellectual risks and challenge the status quo.

I'd agree that universities more often than not don't fall into the group of "purpose-driven institutions built around transparency, cooperation, and accountability", but that's a different thing. There are more institutions than universities.


The actual full quote specifically defines universities to be one of those institutions.

>"Civic institutions—the rule of law, universities, and a free press—are the backbone of democratic life. They are the mechanisms through which complex societies encourage cooperation and stability, while also adapting to changing circumstances. The real superpower of institutions is their ability to evolve and adapt within a hierarchy of authority and a framework for roles and rules while maintaining legitimacy in the knowledge produced and the actions taken. Purpose-driven institutions built around transparency, cooperation, and accountability empower individuals to take intellectual risks and challenge the status quo."


> The actual full quote specifically defines universities to be one of those institutions.

Exactly, one of them, among others. The paper is about a group of civic institutions, not specifically universities; that's why that quote is very misleading. The actual sentence is talking about "purpose-driven institutions built around transparency, cooperation, and accountability". I think that's pretty clear.


> free press

Stopped reading here, as these people still believe in that fairytale of theirs.


The American press isn't perfectly free, but you should see what a state-controlled press is like.


Bezos, whatever the heck this is [1], you people are delusional.

Later edit: For good measure, Zuckerberg, too [2]

[1] https://x.com/AdameMedia/status/2011935282912731453

[2] https://x.com/infolibnews/status/2011196769363697684


> The affordances of AI systems have the effect of eroding expertise, short-circuiting decision-making, and isolating people from each other.

This affordability is HEAVILY subsidized by billionaires who want to destroy institutions for selfish and ideological reasons.


I think you have misread the word “affordances”. It’s not about affordability [0]. The main text also explains what it means.

[0] https://en.wikipedia.org/wiki/Affordance


This is literally Corporate Textbook 101. Subsidize your product, become the market leader, cause lock-in, and make your customers dependent.

Every large enough corporation wants to become the new Oracle.


Given how nobody properly understands LLMs, I doubt that they are intentionally designed like that. But the effect... yeah. I can see that happening.

(By the way, are you confusing affordance, the UX concept, with affordability?)


You can intentionally market the use cases without knowing exactly how they work, though. So it's intentional investment and use-case targeting rather than directly designing for purpose. Though the market also drives the measures... so they iteratively get better at the things you pour money into.


Nobody properly understands dog brains either and yet you can still train a dog to sit.


If you've just met a dog for the first time, you can't :) - my guess is LLMs are somewhere in between. It would be cool to see what happens if somebody tried to make an LLM that somehow has ethical principles (instead of guardrails) and is much less eager to please.


The stochastic parrot LLM is driven by nothing but eagerness to please. Fix that, and the parrot falls off its perch.


> The stochastic parrot LLM is driven by nothing but eagerness to please. Fix that, and the parrot falls off its perch.

I see some problems with the above comment. First, using the phrase "stochastic parrot" in a dismissive way reflects a misunderstanding of the original paper [1]. The authors themselves do not weaponize the phrase; the paper was about deployment risks, not capability ceilings. I encourage everyone who uses the phrase to go re-read the paper and make sure they can articulate what it claims, and to distinguish that from their own usage.

Second, what does the comment mean by "fix that, and the parrot falls off its perch"? I don't know. I think it would need to be reframed in a concrete direction if we want to discuss it productively. If the commenter can make a claim or prediction of the "if-then" form, then we'd have some basis for discussion.

Third, regarding "eagerness to please": this comes from fine-tuning. Even without it (RLHF or similar), LLMs have significant prediction capabilities from pretraining alone (the base model).
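
To make that last point concrete, here's a minimal sketch of the distinction, assuming the Hugging Face transformers API; the model name and prompt are just illustrative:

    # Base models are plain next-token predictors; the "eagerness to please"
    # is layered on afterwards by instruction tuning / RLHF.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def continue_text(model_name: str, prompt: str, max_new_tokens: int = 40) -> str:
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForCausalLM.from_pretrained(model_name)
        inputs = tokenizer(prompt, return_tensors="pt")
        # Pure continuation: no chat template, no system prompt, no reward model.
        outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=True)
        return tokenizer.decode(outputs[0], skip_special_tokens=True)

    # A base model like gpt2 just continues the text statistically; it has no
    # trained-in drive to be agreeable, yet it clearly has predictive capability.
    print(continue_text("gpt2", "The stochastic parrot argument says"))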

All in all, I can't tell if the comment is making a claim I can't parse and/or one I disagree with.

[1]: "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" (Bender et al., 2021)


The institutions have been doing a fine job of destroying all their credibility and utility all on their own for far longer than this new AI hype cycle.

ZIRP, Covid, anti-nuclear power, the immigration crisis across the West, debt enslavement of future generations to buy votes, socializing losses and privatizing gains... Nancy is a better investor than Warren.

I am not defending billionaires; the vast majority of them are grifting scum. But to put this at their feet is not the right level of analysis when the institutions themselves are actively working to undermine the populace for the benefit of those who are supposed to be stewards of said institutions.


AI Destroys Institutions

Working as intended. WONTFIX.


A thought-provoking essay on the impact of AI systems on civic institutions.



