Hacker News | new | past | comments | ask | show | jobs | submit | echelon's comments

Rust isn't magic, but it has an incredibly low software defect rate. Google published a few studies on this.

If Rust code compiles, it probably has a lower defect rate than corresponding code written by the same team in another language, all else being equal.


ZIRP, IRS Section 174, and irrationally exuberant over-hiring caused the first few rounds of layoffs.

The layoffs you see now are due to offshoring disguised as AI taking over. Google, Amazon, and even Hollywood are getting in on the offshoring craze.


I was so angry at this.

I want Amazon and Google to be broken up, but not in this category or along these lines. This wasn't going to create some household appliance monopoly. Amazon has plenty of competition, and Roomba was already behind the curve.

Now America is out of this market category. A category we invented. This felt like our last toehold in consumer robotics.


Gaming GPUs enabled it. That's random, serendipitous connective tissue that was presaged by none of the people who wrote the first papers fifty years ago.

Individual researchers and engineers are pushing forward the field bit by bit, testing and trying, until the right conditions and circumstances emerge to make it obvious. Connections across fields and industries enable it.

Now that the salient has emerged, everyone wants to control it.

Capital battles it out for the chance to monopolize it.

There's a chance that the winner(s) become much bigger than the tech giants of today. Everyone covets owning that.

The battle to become the first multi-trillionaire is why so much money is being spent.


I didn't even need to read the article to know that the headline is 100% correct.

It's the result of stochastic hill climbing of a vast reservoir of talented people, industry, and science. Each pushing the frontiers year by year, building the infra, building the connective tissue.

We built the collection of requirements that enabled it through human curiosity, random capitalistic process, boredom, etc. It was gaming GPUs for goodness sake that enabled the scale up of the algorithms. You can't get more serendipitous than that. (Perhaps some of the post-WWII/cold war tech even better qualifies for random hill climbing luck. Microwave ovens, MRI machines, etc. etc.)

Machine learning is inevitable in a civilization that has evolved intelligence, industrialization, and computation.

We've passed all the hard steps to this point. Let's see what's next. Hopefully not the great filter.


How is that different from "Compact Discs weren't invented, they arrived"?

Point to the single inventor of AI. You're going to have trouble.

Maybe you give it to the authors of a few papers, but even then you'll struggle to capture even a fraction of the necessary preconditions.

The successes also rely on observing the failures and the alternative approaches. Do we throw out their credit as well?

The list would be longer than the human genome paper.


Yes and exactly the same thing could be said for the invention of compact discs. You're just describing "history".

CDs are designed to be exactly the way they are, and you don't get out of them anything more, or different, than what you put in.

Compute and transformers are a substratum, but the stuff that developed on it through training isn't made according to our design.


I don't have a problem with the headline but the article is kind of bad.

And the headline is vague enough that you could read many meanings into it.

My take: going back to Turing, he could see that AI was likely in the future, and that the output of a Turing-complete system is essentially a mathematical function. We just need the algorithms and hardware to crank through it, which he thought we might have in 50 years, though it's taken nearer 75.

The "intelligence did not get installed. It condensed" stuff reads like LLM slop.


What would the equivalent be with LLMs?

I spend all of my time with image and video models and have very thin knowledge when it comes to running, fine tuning, etc. with language models.

How would one start with training an LLM on the entire corpus of one's writings? What model would you use? What scripts and tools?

Has anyone had good results with this?

Do you need to subsequently add system prompts, or does it just write like you out of the box?

How could you make it answer your phone, for instance? Or discord messages? Would that sound natural, or is that too far out of domain?


Simplest way: pack all the text into a prompt.

You could use a vector database.

You could train a model from scratch.

Probably easiest to use OpenAI's tools. Upload documents. Make a custom model.

How do you make it answer your phone? You could use the Twilio API + a script + an LLM + a voice model. Want it to sound natural? Use a service.
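A minimal sketch of the prompt-packing approach, assuming your writings live as plain-text files in a folder (the function name, file layout, and character budget are all illustrative; a real setup would count tokens, not characters, and send the result to whatever chat API you use):

```python
from pathlib import Path

def pack_writings_into_prompt(folder: str, question: str, max_chars: int = 100_000) -> str:
    """Concatenate writing samples into one prompt, truncated to a crude character budget."""
    corpus = "\n\n".join(p.read_text() for p in sorted(Path(folder).glob("*.txt")))
    corpus = corpus[:max_chars]  # crude truncation; token counting would be more precise
    return (
        "Here are samples of my writing:\n\n"
        f"{corpus}\n\n"
        "Answer the following in my voice:\n"
        f"{question}"
    )
```

This works fine for small corpora; for years of material you'd run out of context window, which is where the vector database (retrieve only relevant passages) or fine-tuning comes in.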


I think you're absolutely right about the easiest approach. I hope you don't mind me asking for a bit more difficulty.

Wouldn't fine tuning produce better results so long as you don't catastrophically forget? You'd preserve more context window space, too, right? Especially if you wanted it to memorize years of facts?

Are LoRAs a thing with LLMs?

Could you train certain layers of the model?


A good place to start with your journey is this guide from Unsloth:

https://docs.unsloth.ai/get-started/fine-tuning-llms-guide
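To the LoRA question upthread: yes, LoRA is standard for fine-tuning LLMs, and the Unsloth guide covers it. The core idea is small enough to sketch in a few lines: the pretrained weight W stays frozen, and you train only a low-rank update B·A. This toy numpy version is just the math, not any library's API (shapes, names, and the scaling factor are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                          # hidden size, LoRA rank (r << d)
W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-init so the update starts at 0

def lora_forward(x: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    # Effective weight is W + (alpha/r) * B @ A; only A and B receive gradient updates.
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.normal(size=(d,))
# With B zero-initialized, the adapted model matches the base model exactly.
assert np.allclose(lora_forward(x), x @ W.T)
```

The payoff: you train 2·d·r parameters per adapted matrix instead of d², which is why LoRA adapters for large models fit on a single GPU. This also answers the "train certain layers" question: in practice you attach adapters only to selected layers (often just the attention projections).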


> If you must deploy every service because of a library change

Hello engineer. Jira ticket VULN-XXX has been assigned to you as your team's on call engineer.

A critical vulnerability has been found in the netxyz library. Please deploy service $foo after SHA before 2025-12-14 at 12:00 UTC.

Hello engineer. Jira ticket VULN-XXX has been assigned to you as your team's on call engineer.

A critical vulnerability has been found in the netxyz library. Please deploy service $bar after SHA before 2025-12-14 at 12:00 UTC.

...

It's never ending. You get a half dozen of these on each on call rotation.


My experience doesn't align with yours. I worked at SendGrid for over a decade and they were on the (micro) service train. I was on call for all dev teams on a rotation for a couple of years and later just for my team.

I have seen like a dozen security updates like you describe.


This was at a fintech and we took every single little vuln with the utmost priority. Triaged by severity of course, but everything had a ticking clock.

We didn't just have multiple security teams, we had multiple security orgs. If you didn't stay in compliance with VULN SLAs, you'd get a talking to.

We also had to frequently roll secrets. If the secrets didn't support auto-rotation, that was also a deployment (with other steps).

We also had to deploy our apps if they were stale. It's dangerous not to deploy your app every month or two, because who knows if stale builds introduced some kind of brittleness? Perhaps a change to some net library you didn't deploy caused the app not to tolerate traffic spikes. And it's been six months and there are several such library changes.


I don't know what a call rotation is, but I keep getting email flooded by half a dozen Linux vulnerabilities every day and it's getting old.

> Cryptids are Turing Machines whose behavior (when started on a blank tape) can be described completely by a relatively simple mathematical rule, but where that rule falls into a class of unsolved (and presumed hard) mathematical problems. This definition is somewhat subjective (What counts as a simple rule? What counts as a hard problem?). In practice, most currently known small Cryptids have Collatz-like behavior. In other words, the halting problem from blank tape of Cryptids is mathematically-hard.
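For reference, the "Collatz-like behavior" mentioned in the quote refers to simple iteration rules whose long-run behavior is an open problem. The classic example is the 3n+1 rule, sketched here (the step limit is just a guard, since no proof exists that the iteration always halts):

```python
def collatz_steps(n: int, limit: int = 10_000) -> int:
    """Count steps for the 3n+1 rule to reach 1 (conjectured, not proven, to always halt)."""
    steps = 0
    while n != 1 and steps < limit:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# e.g. 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
```

A Cryptid is a Turing Machine whose halting-from-blank-tape question reduces to exactly this kind of rule, so deciding whether it halts is as hard as the underlying open problem.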

As much as I love Spirited Away and Castle in the Sky, I've been so bummed Miyazaki hasn't returned to more adult storylines.

Princess Mononoke and Nausicaa are two of my top ten films. I'd do anything to have Miyazaki make one more.

I even bought Miramax's old marketing website and kept it online [1].

I was lucky enough to teach English in Hokkaido [2], which is where Ghibli animators drew inspiration for Princess Mononoke. It's such a beautiful place, and you can feel it in the film.

[1] http://www.princess-mononoke.com/ (I should get SSL certs but I haven't touched it in years.)

[2] https://news.ycombinator.com/item?id=46035689


> As much as I love Spirited Away and Castle in the Sky, I've been so bummed Miyazaki hasn't returned to more adult storylines.

What about "The Wind Rises"?


The Wind Rises is my favorite Ghibli, something about it just draws me in like no other. Don't get me wrong, I love Howl's Moving Castle, Princess Mononoke and Spirited Away like the next person, but The Wind Rises is special to me.

Please, don’t make me cry like that.

> I'd do anything to have Miyazaki make one more.

He's "retired" like 3 times now, so you might not have to do very much.


Nausicaa was definitely his best work. It was also one of his earlier works and that can be hard to recapture.

I'd love to see the original manga story animated. There are so many beautiful scenes and plots missing from the movie...

However, it couldn't just be a sequel to the movie, as the movie diverged from the manga storyline near the end.


I love Nausicaa, but I would argue it has some technical weaknesses in narrative and pacing compared to his later works.

So, in case someone comes to this without previous exposure, Nausicaa may not be the best place to start. It _is_ nice.


Looking at [1] really took me back to those early days of the web. It was a special time. Thank you for that.

Depending on how it's hosted, giving it a letsencrypt certificate isn't too much of a hassle. I'd be happy to help.


> I even bought Miramax's old marketing website and kept it online

That’s awesome, private sale or did they let it drop?


Taste, or whatever you want to call this, is orthogonal to enjoyment.

I think Steve Jobs very much enjoyed life, and you know what kind of an attitude he had about things.

We're all wired up differently.

