
I don't care that AI development is more fun for the author. I wouldn't care if all the evidence pointed toward AI development being easier, faster, and less perilous. The externalities, at present, are unacceptable. We are restructuring our society in a way that makes individuals even less free and a few large companies even more powerful and wealthy, just to save time writing code, and I don't understand why people think that's okay.

> that makes individuals even less free and a few large companies even more powerful and wealthy

You're what, 250 years behind at this point?

Since the dawn of the Industrial Revolution there has been a general trend that fewer people can make more with less. And really, even bigger than AI were fast fuel-based transportation and then global networks. Long before we started worrying about genAI, businesses have been consolidating down to a few corporations that make enough to supply the world from single large factories.

We fought the war against companies. Companies won.

Now you're just at the point where the fabric makers were, where the man with the pickaxe was, where the telephone switchboard operator was, where the punch card operator was.


Saying "well the world sucks so what's new" isn't a perspective a lot of folks are going to resonate with. Just because we can recognize troubling patterns from the past in the present doesn't mean we just volunteer to lie down and take a boot to the neck. Positive change is always possible. Always.

Good news: the evidence points to it being slower than non-AI workflows. So we're destroying our economy, society, and planet to make worse software, more slowly! :)

We are also making all software much worse at the same time. I don't think every app needs AI, but apparently they do. Notion used to be a Zen writing app back in the day; Canva used to be an app where you could do simple graphics without a complicated tools panel.

I think Pete's article last year made a good case for regarding this as the "horseless carriage" stage, i.e. growing pains around how to use a new technology effectively.

AI Horseless Carriages - https://news.ycombinator.com/item?id=43773813 - April 2025 (478 comments)

Or maybe it's analogous to the skeuomorphic phase of desktop software. Clumsy application of previous paradigm to new one; new wine in old bottles; etc.


Affinity used to be good, now it's Canva++ with AI. Disgusting work

> Disgusting work

Please don't do this here. Thoughtful criticism is fine on this site but snark and name-calling are not.

https://news.ycombinator.com/newsguidelines.html

Edit: on closer look, you've been breaking the HN guidelines so badly and so consistently that I've banned the account. Single-purpose accounts aren't allowed here in any case.


We, as SW engineers, have been doing that to many industries for the last 40+ years. It's silly and selfish to draw the line now that we're in the crosshairs.

I've spent my 20-year career working largely in medical software. The only jobs I've been replacing are pancreases that stop functioning correctly.

Maybe don't speak for all of us.


Computers themselves replaced computers (yeah, a job title). Your medical software certainly automates someone else's job, otherwise no one would pay you to write it. You just don't care about them.

Or you do, but you believe it's worth it because your software helped more patients, or improved the overall efficiency and therefore created more demand and jobs - a belief many pro-AI people hold as well.


The job used to be the patients'. Manually managing type 1 diabetes isn't a fun job. Try reading Think Like a Pancreas for the fun details.

Patient outcomes are significantly better with modern technology.

> You just don't care about them.

Yeah, okay.


My comment wasn't about you in particular but about the industry as a whole.

Much of the software written historically exists to automate stuff people used to do manually.

I'd wager you use email, editors, search engines, navigation tools and much more. All of these involved replacing real jobs that existed. When was the last time you consulted a city map?


> reactionary politics that aim to restore the climate that allowed these aesthetics to blossom

Which policies, specifically, will result in a return to this aesthetic, in your opinion?


Sounds like bullshit to me; the early 90s seem in essence more liberal than today, generally. LGBT rights etc. were not quite there in some places, maybe, but things were moving in the right direction.

> Harvard President Alan M. Garber ’76 said the University “went wrong” by allowing professors to inject their personal views into the classroom, arguing that faculty activism had chilled free speech and debate on campus.

The university does not "allow" professors to express their opinions; that is a fundamental tenet of academic freedom, and is critically important to free speech in and of itself. The idea that a university could _prevent_ professors from giving their opinions in class is laughable anyway; if we didn't value the opinions of professors, we wouldn't need them at all, and could get away with lecturers without PhDs or research obligations. (Of course, many university administrators would quite like that.)

It seems to me that Garber is less interested in preventing faculty from expressing opinions in general and more that he is interested in suppressing a particular set of opinions he and his donors disagree with.


There is a big difference between professors being "free" to publish and express their views on a subject, and teaching that same subject in such a way that their views are presented as the only acceptable views on that subject.

I think you have more fundamental problems if you’re not capable of taking people at their word at that point.

Do you think that the use of a hammer is an innate skill, and that woodworkers learn nothing from their craft?

Okay, so let's say the use of a coding agent isn't an innate skill, so the author was gaining experience with the tool.

You - and many other commenters in this thread - misunderstand the legal theory under which AI companies operate. In their view, training their models is allowed under fair use, which means it does not trigger copyright-based licenses at all. You cannot dissuade them with a license.


While I think OP is shortsighted in their desire for an “open source only for permitted use cases” license, it is entirely possible that training will be found to not be fair use, and/or that making and retaining copies for training purposes is not fair use.

Perhaps you can’t dissuade AI companies today, but it is possible that the courts will do so in the future.

But honestly it’s hard for me to care. I do not think the world would be better if “open source except for militaries” or “open source except for people who eat meat” licenses became commonplace.


The problem is "viral" licences. Must code generated by an AI trained on GPL code be released under a GPL licence?

Also, can an AI be trained with the leaked source of Windows(R)(C)(TM)?


> Also, can an AI be trained with the leaked source of Windows(R)(C)(TM)?

I think you mean to ask the question "what are the consequences of such extreme and gross violations of copyright?"

Because they've already done it. The question is now only... what is the punishment, if any? The GPL requires that all materials used to produce a derivative work that is published, made available, performed, etc. are made available at cost.

Does anyone who has a patch in the Linux kernel and can get ChatGPT to reproduce their patch (i.e. every Linux kernel contributor) get access to all of OpenAI's training materials? Ditto for Anthropic, Alphabet, ...

As people keep pointing out when defending copyright here: these AI training companies consciously chose to include that data, at the cost of respecting the "contract" that is the license.

And if they don't have to respect licenses, then what if I run old Disney movies through a matrix and publish the results (let's say the identity matrix)? How about 3 matrices with some nonlinearities? Where is the limit?
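For what it's worth, "3 matrices with some nonlinearities" is already, structurally, a tiny neural network. A minimal sketch in NumPy, purely illustrative (the shapes and names here are made up):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.random(64)                  # stand-in for input data (say, flattened movie frames)
    W1, W2, W3 = (rng.standard_normal((64, 64)) for _ in range(3))

    h = np.tanh(W1 @ x)                 # matrix 1 + nonlinearity
    h = np.tanh(W2 @ h)                 # matrix 2 + nonlinearity
    y = W3 @ h                          # matrix 3: the "published result"

With the identity matrix, the output is just the input; stack enough matrices and nonlinearities and fit them to data, and you have a modern model. The question is where along that continuum the law draws the line.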

Since copyright law cannot be retroactively changed, any update Congress makes to copyright wouldn't affect the outcome for at least a year ...


Open source except for people who have downvoted any of my comments.

I agree with you though. I get sad when I see people abuse the Commons that everyone contributes to, and I understand that some people want to stop contributing to the Commons when they see that. I just disagree - we benefit more from a flourishing Commons, even if there are freeloaders, even if there are exploiters, etc.


Of course, if the code wasn't available in the first place, the AI wouldn't be able to read it.

It wouldn't qualify as "open source", but I wonder if OP could have some sort of EULA (or maybe it would be considered an NDA). Something to the effect of "by reading this source code, you agree not to use it as training data for any AI system or model."

And then something to make it viral. "You further agree not to allow others to read or redistribute this source code unless they agree to the same terms."


My understanding is that you can have such an agreement (basically a kind of NDA) -- but if courts ruled that AI training is fair use, it could never be a copyright violation, only a violation of that contract. Contract violations can only receive economic damages, not the massive statutory penalties that copyright carries.


Having a license that specifically disallows a legally dubious behavior could make lawsuits much easier in the future, however. (And might also incentivize lawyers to recommend avoiding this code for LLM training in the first place.)


People think that code is loaded into a model as-is, like a massive array of "copy+paste" snippets.

It's understandable that people think this, but it is incorrect.

As an aside, Anthropic's training was ruled fair use, except the books they pirated.


Fair use is a defense to copyright violation, but highly dependent on the circumstances in which it happens. There certainly is no blanket "fair use for AI everything".


This is quite literally the opposite of the tragedy of the commons.


I mean, I'll probably ditch the LLM - after all, it's open source so I can just build my own app to receive the messages - but it seems like a neat bit of kit.


Presumably it's more like an errant Ctrl-C.


Yup, exactly this. Also Ctrl-W, Alt-Tab, etc.


All these issues have been solved already in kiosk setups.


This article makes a distinction between "TV and radio" and "digital devices". I wonder how much of the gap between how much older and younger generations say they get news from the latter category is because younger people are more likely to understand the actual meaning of those words. Most TVs are indeed digital devices!


Why do AI companies get to do whatever they want in order to meet their business goals ("liftoff")?

