ccvannorman's comments | Hacker News

By your reasoning, Putin invading the US and kidnapping President Trump for his crimes is equally valid

I would personally pay 2x the market price for a phone, computer, or tablet that guaranteed privacy (via whatever technical means necessary) for all my interactions with the internet.

Is that so much to ask?

Could the next "Apple" produce such a hardware/software stack to black-box this for the consumer -- simply buy "Pineapple" products and this stuff is guaranteed not to touch you? (User obfuscation across all external platforms could be a hard technical challenge, I know - hence the big value if delivered.)


I can't help myself, and surely someone else has already done the same. But the query

  obj.friends
    .filter(x => x.city === 'New York')
    .sort((a, b) => a.age - b.age)
    .map(item => ({ name: item.name, age: item.age }));
does exactly the same thing without any plugin.

Am I missing something?


The verbosity.

To your point, abstractions often multiply and then hide the complexity, creating a facade of simplicity.
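For illustration only (the `query` helper and its option names are hypothetical, not from any particular plugin), here is a minimal sketch of how a thin wrapper can make the call site look simpler while the exact same filter/sort/map chain still runs underneath:

  // Hypothetical wrapper -- the abstraction just hides the same chain:
  const query = (list, { where, orderBy, select }) =>
    list.filter(where).sort(orderBy).map(select);

  // The call site reads more declaratively, but the complexity has only moved:
  const newYorkersByAge = query(obj.friends, {
    where: x => x.city === 'New York',
    orderBy: (a, b) => a.age - b.age,
    select: ({ name, age }) => ({ name, age }),
  });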


they'll take apart this criminal empire brick by brick


no stone will be left unturned


As long as nothing blocks the investigation!


I doubt we see many knee-jerk mistakes on this one...


I think we have stoned this analogy to death now.


That is also not enough. An agent could build an application that functions, but you also need a well-designed underlying architecture if you want the application to be extensible and maintainable - something the original dreamer may not even be capable of - so perhaps an extended dream shared with a Sr. architect is also needed. Oh wait .. I guess we're back to square 1 again? lol


I joined a company with 20k lines of Next/React generated in 1 month. I spent over a week rewriting many parts of the application (mostly the data model and duplicated/conflicting functionality).

At first I was frustrated but my boss said it was actually a perfect sequence, since that "crappy code" did generate a working demo that our future customers loved, which gave us the validation to re-write. And I agree!

LLMs are just another tool in the chest: a curious, lightning-fast jr developer with an IQ of 85 who can't learn and needs a memory wipe whenever they make a design mistake.

When I use it knowing its constraints, it's a great tool! But yeah, if used wrong you are going to make a mess, just like with any powerful tool.


Here's a question for you, then. Imagine your own future years and decades spent doing nothing but rewriting crappy code like that. Not as a one-off thing, but as a job description. Does that sound enticing? Do you think you'd be able to avoid burnout in the long run?



As someone who walked into a 20k+ LOC React/Next project, 95%+ vibecoded, I can say it's a relative nightmare to untangle the snarl of AI-generated solutions. It is particularly bad at separation of concerns and tends to commingle data. I found several places where there were inline awaits for database objects, with db manipulations being done inline too, and I found them in the UX layer, the API layer, and even nested inside other db repo files!
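To make the pattern concrete (the component, table, and client names here are hypothetical, not the actual code), the recurring shape was roughly a UI component awaiting the db client and mutating rows inline, rather than going through a repository:

  // Hypothetical anti-pattern: a UX-layer component talking to the db directly.
  async function ProfileCard({ userId }) {
    const user = await db.users.findOne({ id: userId });              // inline await for a db object
    await db.users.update({ id: userId }, { views: user.views + 1 }); // inline db manipulation in the view
    return <div>{user.name}</div>;
  }

  // Untangled version: data access lives behind a repository; the component only renders.
  async function ProfileCardClean({ userId }) {
    const user = await userRepo.getAndRecordView(userId);
    return <div>{user.name}</div>;
  }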

Someone once quipped that AI is like a college kid who has taken a few programming courses, has access to all of Stack Overflow, lives in a world where hours go by in the blink of an eye, has an IQ of 80, and is utterly incapable of learning.


Yes, better prompting is absolutely essential, and I still love to let Claude do the heavy lifting when it comes to syntax and framing. But in trying to rewrite the data model for this app, Claude continually failed to execute due to prompt-size or context-size limits (I use Claude Max). Breaking it into smaller parts became such a chore that I ended up doing a large part "by hand" (weird that we've come to expect so much automation that "by hand" already feels old school!).

Oh, also, when it broke down and I tried to restart the data model rewrite using a context summary, it started going backwards, migrating back to the old data model because it couldn't tell which one was which .. sigh.


Finally, my vim window can hold 200+ lines on my laptop screen!


I absolutely love this question.

Postulate: You cannot define a largest physically describable number.

My assumption is that, due to the very nature of Kolmogorov complexity (and other Gödel-related / halting-problem-related / self-referential descriptions), this is not an answerable or sensible question.

It falls under the same language-enabled recursion problems as:

- The least number that cannot be described in fewer than twenty syllables.

- The least number that cannot be uniquely described by an expression of first-order set theory that contains no more than a googol (10^100) symbols.
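As a rough sketch of why the self-reference bites (the `kolmogorovComplexity` oracle below is hypothetical -- no such computable function can exist, which is exactly the point):

  // Suppose we had a computable oracle returning the length, in bits,
  // of the shortest program that prints n:
  function smallestIncompressibleNumber(threshold) {
    for (let n = 0; ; n++) {
      if (kolmogorovComplexity(n) > threshold) return n; // hypothetical oracle
    }
  }
  // With `threshold` hard-coded, this program is only on the order of
  // log(threshold) bits long, yet it outputs a number that supposedly needs
  // more than `threshold` bits to describe. That is the Berry paradox in
  // program form, which is why Kolmogorov complexity is uncomputable and why
  // "the largest describable number" resists a well-defined answer.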

