Hacker News | bwfan123's comments

> you might temper your expectations of a rational market

TSLA is like a snowball rolling down a hill. It morphs from EVs to autonomous driving to AI to robots to space to the terafab to space datacenters, rolling in the next big narrative or government handout as it picks up speed.


There was also a mini bubble around social media aggregators and RSS feeds culminating in sites like gada.be

I see the dynamic as follows (be warned, cynical take)

1) There are the youth seeking approval from the community ("look, I have arrived"), like the person building the steaming pile of browser code recently.

2) There are the veterans of a previous era who want to stay relevant in the new tech and show they've still got mojo (gastown etc.).

In both cases, the attitude is not one of careful, deep engineering, craftsmanship, or attention to the art; instead it reflects attention-mongering.


Producing BS can be equated to generating statements without caring about their truth value. Generating them is easy. Refuting them requires finding a proof or a contradiction, which is a lot of work and amounts to "solving" the statement. As an analogy, refuting BS is like solving satisfiability, whereas generating BS is like generating propositions.
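To make the asymmetry concrete, here is a minimal Python sketch (my own illustration, with made-up helper names): producing a random CNF proposition takes time linear in its size, while deciding whether it is satisfiable by brute force takes time exponential in the number of variables.

```python
import itertools
import random

def generate_cnf(num_vars, num_clauses, k=3):
    """Generating a proposition is cheap: O(num_clauses) random k-literal
    clauses. A positive int n means variable x_n; negative means its negation."""
    return [
        [random.choice([1, -1]) * random.randint(1, num_vars) for _ in range(k)]
        for _ in range(num_clauses)
    ]

def is_satisfiable(cnf, num_vars):
    """Refuting (or confirming) the proposition requires search: this
    brute-force check tries all 2^num_vars truth assignments."""
    for bits in itertools.product([False, True], repeat=num_vars):
        # A clause is satisfied if any of its literals evaluates to True.
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
               for clause in cnf):
            return True
    return False
```

For example, `is_satisfiable([[1], [-1]], 1)` is `False` (x1 and not-x1 cannot both hold), and the check's cost doubles with every added variable, while `generate_cnf` stays linear.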

I am actually surprised that the LLM came so close. I doubt it had examples in its training set for these numbers. This goes to the heart of "know-how". The LLM should have said "I am not sure", but instead it gets into rhetoric to justify itself. It actually mimics human motivated reasoning. At orgs, management is impressed with this overconfident motivated reasoner because it mirrors themselves. To hell with the facts and the truth; persuasion is all that matters.

> It would be great if those scientists who use AI without disclosing it get fucked for life.

There need to be disincentives for sloppy work. There is a tension between quality and quantity in almost every product. Unfortunately, academia has become a numbers game fed by paper mills.


> If LLMs are covering a gap here maybe there's an opportunity for better, local, lower-tech tooling that doesn't require such a huge tech stack (and subscriptions/rent) to solve simple, tractable problems?

I see this with every new technology stack. Way back, we had folks putting out browser "applets" to do the same things that could be done in Excel. Then we had these apps rebuilt in the cloud, on mobile, on iOS/Android, in React, on a Raspberry Pi, on a GPU, and so on: simple apps reinvented with some new tooling. It is almost the equivalent of printf("hello world") when you are learning a new language. This is not to undermine the OP's efforts; I see it in the spirit of "learning" rather than of solving a hard problem.


Thanks for sharing!

> Understanding (not necessarily reading) always was the real work.

Great comment. Understanding is mis-"understood" by almost everyone. :)

Understanding a thing equates to building a causal model of it. And I still do not see AI as having a causal model of my code, even though I use it every day. Seen differently, code is a proof of some statement, and verifying the correctness of that proof is what a code review is.

There is an analogue of Brandolini's bullshit asymmetry principle here: understanding code is ten times harder than reading it.


IMO, the OP has bad AI-assisted takes on almost every single "critical question". This makes me doubt whether he has breadth of experience in the craft. For example:

> Narrow specialists risk finding their niche automated or obsolete

Exactly the opposite. Those with expertise will oversee the tool. Those without expertise will take orders from it.

> Universities may struggle to keep up with an industry that changes every few months

Those who know the theory of the craft will oversee the machine. Those who don't will take orders from it. Universities will continue to teach the theory of the discipline.


I think this is a fair take (despite the characteristic HN negativity/contrarianism), and succinctly summarizes a point that I was finding hard to articulate while reading the article.

My similar (verbose) take is that seniors will often be able to wield LLMs productively: good-faith LLM attempts will be the first step, but will frequently be discarded when they fail to produce the intended results. (Personally, I find myself swearing at the LLMs when they produce trite garbage: output that gets `gco .`-ed immediately, or LLM MRs/PRs that get closed in favor of accomplishing the prompted task manually.)

Conversely, juniors will often wield LLMs counterproductively, unknowingly accepting tech debt that neither the junior nor the LLM will be able to correct past a given complexity.


I am not sure why the OP is painting this as "us vs. them", pro- or anti-AI. AI is a tool; use it if it helps.

I would draw an analogy here between building software and building a home.

When building a home, we have a user providing the requirements, the architect and structural engineer producing a blueprint that satisfies those requirements, the civil engineer overseeing the construction, and the mason laying the bricks. Some projects may also have a project manager coordinating these activities.

Building software is similar in many aspects to building a structure. Developers who think of themselves only as masons are limiting their perspective. If AI can help lay the bricks, use it! If it can help with the blueprint or the design, use it. It is a fantastic tool in the profession's tool belt. I think of it as a power tool and want to keep its batteries charged so it is ready at any time.

