
A tangential thought: say AI makes a huge dent in labor. Wages will collapse across the board. Who is left to profit from?

There is a more feasible future, IMO, that paints AI as the washing machine, calculator, computer, spreadsheet, automation, etc. Jobs AI can complete don't lead to people getting let go; rather, those people sit on top of an AI that can do their job much better (sometimes at larger scale, sometimes not). Better outputs for the same cost (well, wage + AI costs) -> more purchasing power, more efficient businesses, etc.

I don't know if this is how AI will go, but this exact thing happened to me with deep learning. I did stupid math to optimize algos in 2012, but by 2022 deep learning was 100x better than me. I just babysat the AI, as it (and LLMs) still can't talk to clients, understand business/cultural nuances, navigate an org, politic, innovate, etc.


I think a few people will live like kings while the rest of us live in Terrafoam.

Money is just a proxy for value add. When automation replaces the labor, the only remaining value add may be withholding violence. I hope we don't get there

> Who is left to profit from?

That's already the case: most of the economy is AI-related stock market speculation rather than investment based on actual fundamentals.


I doubt this was ever classified information. It's written all over DoD and NSA requirements and best practices for staff and diplomats.

She was probably briefed repeatedly about this as a member of that committee.

Here's one example:

> Headphones are wired headphones (i.e. not wireless) which can be plugged into a computing device to listen to audio media (e.g. music, Defense Collaboration Services, etc.).[0]

[0]: https://dl.dod.cyber.mil/wp-content/uploads/stigs/pdf/2016-0...


> I doubt this was ever classified information.

The classified part would be the intelligence that the wireless protocol is compromised. I don't see that in your document.


That's not intelligence, just a precaution.

A precaution presumably based on intelligence. The (presumed) intelligence that the wireless protocol is compromised. As I said before.

I have worked on out-of-sample problems, and AI absolutely struggles, but it dramatically accelerates the research process. Testing ideas is cheap, support tools are quick to write, and the LLM is a tremendous research tool in itself.

More generally, I do think LLMs grant 10x+ performance for most common work: most of what people do manually is in the training data (which is why there's so much of it in the first place). 10x+ in those domains can in theory free up more brain space to solve the problems you're talking about.

My advice to you is to tone down the cynicism, and see how it could help you. I'll admit, AI makes me incredibly anxious about my future, but it's still fun to use.


This is so gorgeous compared to the mess that's out there now.

In the US, homeless individuals foremost suffer from financial hardship, not mental illness. Consider that 39% of homeless individuals are in families [0: Page 17], while 40% have a serious mental illness or drug problem.[1] Many develop these problems while homeless.

Homelessness in the US has also increased by 47% since 2018. [0: Page 2] I doubt homelessness or drug abuse has increased accordingly.

People make the mistake of thinking otherwise because it's not the homelessness you often see.

[0]: https://www.huduser.gov/portal/sites/default/files/pdf/2024-...

[1]: https://www.kff.org/medicaid/five-key-facts-about-people-exp...


Take a look at Figure 7 on this page, which indicates that (annual) overdose deaths have more than doubled since 2018: https://nida.nih.gov/research-topics/trends-statistics/overd...

If you have lived anywhere with a significant drug-addict (opioid or fentanyl) population through this time period, you've seen the increase; if you haven't, you may be lucky for it.


> People make the mistake of thinking otherwise because it's not the homelessness you often see.

Indeed. Those homeless people without mental illness likely have more interest in not being seen, and more ability to avoid it.

> Homelessness in the US has also increased by 47% since 2018. [0: Page 2] I doubt homelessness or drug abuse has increased accordingly.

Not sure what the typo is in here. Surely homelessness has indeed increased in accordance with homelessness.


> People make the mistake of thinking otherwise because it's not the homelessness you often see.

I don't think this is a mistake so much as people do not care about the homelessness they don't see.

Ironically when you use the specific words for the homelessness they do care about (unsheltered or unhoused) you're accused of being woke or whatever.


Do you want to fix the typo?

I'm a former Ruby guy who ended up in stats/ML for a time. I think it's all about information density.

Let's use your example of `A = P * (1 + r/n)^(n*t)` -- I can immediately see the shape of the function and how all the variables interrelate. If I'm comfortable in the domain, I also know what the variables mean. Finally, this maps perfectly to how the math is written.
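For instance, a direct transcription into Python (my sketch, not from the post):

  # Compound interest: P = principal, r = annual rate,
  # n = compoundings per year, t = years.
  def amount(P, r, n, t):
      return P * (1 + r / n) ** (n * t)

  amount(1000, 0.05, 12, 10)  # ~1647.01

The code reads exactly like the formula, which is the point.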

If you look at everything in the post, all of the above apply. Everyone in the domain has seen Q = query, K = key, V = value a billion times, and some variation of (B, N_h, T, D_h). Frankly, I've had enough exposure that after I see (B, N_h, T, D_h) once, I can parse (32, 8, 16, 16) without thinking.
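To make that concrete, here's a toy sketch (mine, with made-up dimensions) of reading a tensor as (B, N_h, T, D_h):

  import torch

  B, N_h, T, D_h = 32, 8, 16, 16  # batch, heads, sequence length, head dim
  q = torch.randn(B, N_h, T, D_h)
  k = torch.randn(B, N_h, T, D_h)
  scores = q @ k.transpose(-2, -1)  # (B, N_h, T, T): one T x T map per head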

Like you, I found this insane when I started studying stats, but over time I realized there's a lot to be gained once you've trained yourself to speak the language.


This brought back memories of Hungarian notation. I think I'll now try to use it in my PyTorch code to solve a common problem I have with NN code: keeping track of tensor shapes and their meanings.

  B, T, E = x.size()  # batch size, sequence length, embedding dimensionality

  # project to q, k, v, then split heads: (B, T, E) -> (B, heads, T, E // heads)
  q, k, v = self.qkv(x).split(E, dim=-1)
  q, k, v = map(lambda y: y.view(B, T, self.heads, E // self.heads).transpose(1, 2), (q, k, v))

  # scaled dot-product attention scores: (B, heads, T, T)
  attention = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
  ...
vs

  B, T, E = bteX.size()

  iHeadSize = E // self.heads
  bteQ, bteK, bteV = self.qkv_E_3E(bteX).split(E, dim=-1)
  bhtiQ, bhtiK, bhtiV = map(lambda y: y.view(B, T, self.heads, iHeadSize).transpose(1, 2), (bteQ, bteK, bteV))

  bhttAttention = (bhtiQ @ bhtiK.transpose(-2, -1)) * (1.0 / math.sqrt(iHeadSize))
Looks uglier but might be easier to reason about.


My takeaway is that the clock is ticking on Claude, Codex, et al.'s AI monopoly. If a local setup can do 90% of what Claude can do today, what do things look like in five years?


I think they have already realized this, which is why they are moving towards tool use instead of plain text generation. That also explains why there are no more free APIs nowadays (even for search).


Exactly, imagine what Claude can do in five years!


10% on top of what we have now, while the local models of that time can do everything Claude does today?


This seems like a classic time vs space trade off.

Instead of reconstructing a "wide event" from multiple log lines with the same request id, the suggestion seems to be logging wide events repeatedly to simplify reconstruction from request ids.

I personally don't see the advantage, and in either scenario, if you're not logging what's needed, you're screwed.
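For illustration, a minimal sketch of the two styles (my own; field names invented):

  import json, logging

  logging.basicConfig(level=logging.INFO)
  log = logging.getLogger("app")

  # Narrow lines: cheap to emit, reconstructed later by joining on request_id.
  log.info(json.dumps({"request_id": "r1", "step": "auth", "user": "u42"}))
  log.info(json.dumps({"request_id": "r1", "step": "db", "latency_ms": 12}))

  # Wide event: everything on one line; more bytes per line, no join needed.
  log.info(json.dumps({"request_id": "r1", "user": "u42",
                       "auth_ok": True, "db_latency_ms": 12}))

Same information either way; you're trading write-time volume for read-time joins.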


The Internet is dead. Long live the Internet.


Let's be honest. It's been mostly downhill since AOL.


Today is Wednesday the 11796th of September 1993.


Eternal September came up in a conversation today about how users don't write effort posts any longer; they just want to leave funny comments below reaction videos and then swipe to the next one.

Anyone got any good effort post oases I can lurk and help out in?


I think discussion duration is a big factor. For sites like HN, there is little point in high-effort replies, since no one will be around to see them.


At least Gemini (protocol) users are basically immune to this because of the obscurity of the system.


Small communities. Usenet ~1990 was ~1m people with access, most at universities, tech-oriented companies, or government labs.

Amongst more general discussion platforms: HN, Metafilter, possibly Tildes.net.

Anything large is by definition popular and common, both terms with freighted meanings. The more so if they're advertising-driven.


Discord is still good as long as it's a small server.

