Hacker News | thamer's comments

The March 2025 blog post by Anthropic titled "Tracing the thoughts of a large language model"[1] is a great introduction to this research, showing how their language model activates features representing concepts that eventually get connected as the output tokens are produced.

The associated paper[2] goes into a lot more detail, and includes interactive features that help illustrate how the model "thinks" ahead of time.

[1] https://www.anthropic.com/research/tracing-thoughts-language...

[2] https://transformer-circuits.pub/2025/attribution-graphs/bio...


Is this only about remote MCP servers? The instructions all seem to contain a URL, but personally almost all the MCP servers I'm running locally are stdio based and not networked. Are you planning to support those in some way?

There's also this new effort by Anthropic to provide a packaging system for MCP servers, called MCPB or MCP Bundles[1]. A bundle is a zip file with a manifest inside it, a bit like how Chrome extensions are structured (maybe VSCode extensions too?).

Is this something you're looking to integrate with? I can't say I have seen any MCPB files anywhere just yet, but with a focus on simple installs and given that Anthropic introduced MCP in the first place, I wouldn't be surprised if this new format also got some traction. These archives could contain a lot more data than the small amount you're currently encoding in the URL though[2].

[1] https://www.npmjs.com/package/@anthropic-ai/mcpb

[2] https://github.com/anthropics/mcpb/blob/main/README.md#direc...


That's a good point. We really think the future of MCP servers is remote servers, as running "random" software with little to no boundaries or verification shouldn't be a thing. Is there a specific reason you prefer stdio servers over HTTP servers? Which servers are you using?

Thanks for the mcpb hint, we will look into it.


> Is there a specific reason you prefer stdio servers over HTTP servers?

Yes: the main reason is that I control which applications are configured with the command/args/environment to run the MCP server, instead of exposing a service on my localhost that any process on my computer can connect to (or worse, on my network if it listens on all interfaces).

I mostly run MCP servers that I've written, but otherwise most of the third-party ones I use are related to software development and AI providers (e.g. context7, Replicate, ElevenLabs…). The last two cost me money when their tools are invoked, so I'm not about to expose them on a port given that auth doesn't happen at the protocol level.
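
For reference, this is roughly what a stdio server entry looks like in a client config such as Claude Desktop's claude_desktop_config.json (the server name, package, and token below are placeholders, not a real server):

    {
      "mcpServers": {
        "example": {
          "command": "npx",
          "args": ["-y", "example-mcp-server"],
          "env": { "EXAMPLE_API_TOKEN": "<token>" }
        }
      }
    }

The client spawns that process itself and talks to it over stdin/stdout, so nothing ends up listening on a port.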


> as running "random" software that has little to no boundaries, no verification or similar shouldn't be a thing

Would you class all locally running software this way, and all remotely running software the inverse?


Most software we install locally is at least distributed via a trusted party (App Store, Play Store, Linux package repos, etc.) and has a valid signature (desktop & mobile), or is contained in some way (containers, browser extensions, etc.).

In the case of MCP, remote servers at least protect you from local file leakages.


This is what it looks like, the switch is for "Offers & Promotions": https://i.imgur.com/wodOoBo.jpeg

From the Wallet app, tap on "…" at the top right, then "notifications".


They're not just from AI-generated text. Some of us humans use en dashes and em dashes in the right context, since they're easy to type on macOS: alt+hyphen and alt+shift+hyphen respectively.

On both iOS and modern Android I believe you can access them with a long press on hyphen.


I think you replied to the wrong comment


Does Dia support configuring voices now? I looked at it when it was first released, and you could only specify [S1] [S2] for the speakers, but not how they would sound.

There was also a very prominent issue where the voices would be sped up if the text was over a few sentences long; the longer the text, the faster it was spoken. One suggestion was to split the conversation into chunks with only one or two "turns" per speaker, but then you'd hear two voices then two more, then two more… with no way to configure any of it.
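
For what it's worth, that chunking workaround looks something like this (just a sketch; the [S1]/[S2] splitting and the two-turns-per-chunk limit are my own assumptions, not anything Dia provides):

    import re

    # Split a [S1]/[S2] dialogue into chunks of at most two turns each,
    # to work around the speed-up on long inputs.
    def chunk_dialogue(text, turns_per_chunk=2):
        # Split before each speaker tag, keeping the tag attached to its turn.
        turns = [t.strip() for t in re.split(r"(?=\[S[12]\])", text) if t.strip()]
        return [" ".join(turns[i:i + turns_per_chunk])
                for i in range(0, len(turns), turns_per_chunk)]

    chunks = chunk_dialogue("[S1] Hi there. [S2] Hello! [S1] How are you? [S2] Great.")
    # -> ['[S1] Hi there. [S2] Hello!', '[S1] How are you? [S2] Great.']

Each chunk then has to be synthesized separately, which is exactly where the voice consistency problem shows up.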

Dia looked cool on the surface when it was released, but at that point it was only a demo and not at all usable for any real use case, even for a personal app. I'm sure they'll get to these issues eventually, but most comments I've seen so far recommending it are from people who have not actually used it, or they would know of these major limitations.


The following CSS equivalent worked for me, using the "Custom CSS by Denis" Chrome extension[1]:

    ytd-rich-grid-renderer div#contents {
      /* number of video thumbnails per row */
      --ytd-rich-grid-items-per-row: 5 !important;
    
      /* number of Shorts per row in its dedicated section */
      --ytd-rich-grid-slim-items-per-row: 6 !important;
    }

I first tried it with the "User JavaScript and CSS" extension, but somehow it didn't seem able to inject CSS on YouTube. Even a simple `html { border: 5px solid red; }` would not show anything, while I could see it being applied immediately with the "Denis" CSS extension.

If someone can recommend a better alternative for custom CSS, I'd be interested to hear it. I guess Tampermonkey could work, if you have that.

[1] https://chromewebstore.google.com/detail/custom-css-by-denis...


The main alternative to LVGL seems to be TouchGFX[1], at least that's the one I've seen mentioned the most in conversations around UI libraries for microcontrollers.

As you wrote, these aren't made for desktop apps, but you can use desktop apps to help with UI development using these libraries.

For LVGL there's SquareLine Studio[2]; I used it a few years ago and it was helpful. For TouchGFX there's TouchGFXDesigner[3], which I haven't used myself and which seems to run only on Windows.

[1] https://touchgfx.com/

[2] https://squareline.io/

[3] https://www.st.com/en/development-tools/touchgfxdesigner.htm...


This is for screens usually controlled by microcontrollers, nothing close to running an operating system like Linux, and rarely with a GPU.

See for example ILI9341 or SSD1306 displays[1], or integrated boards with (often) an ESP32 microcontroller and a display attached[2].

[1] displays: https://www.google.com/search?q=SSD1306+OR+ILI9341+display&u...

[2] integrated: https://www.aliexpress.us/w/wholesale-ESP32-LVGL.html?spm=a2...


It's probably not slower than words; spoken English only runs at something like 150-200 words per minute.

That said, the "gibberlink" demo is definitely much slower than even a 28.8k modem (that's kilobit). It sounds cool because we can't understand it and it seems kinda fast, but this is a terribly inefficient way for machines to communicate. It's hard to say how fast they're exchanging data from just listening, but it can't be much more than ~100 bits/sec if I had to guess.
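
The back-of-the-envelope math, with every number an assumption rather than a measurement:

    # Rough comparison: spoken English vs. the demo vs. a 28.8k modem.
    words_per_minute = 175    # spoken English, roughly 150-200 wpm
    bits_per_word = 10        # order-of-magnitude information content of English text
    speech_bps = words_per_minute * bits_per_word / 60   # ~29 bits/sec

    gibberlink_bps = 100      # my guess from listening to the demo
    modem_bps = 28_800        # 28.8 kbit/s modem

    print(round(speech_bps), gibberlink_bps, modem_bps // gibberlink_bps)
    # ~29 bits/sec vs ~100 bits/sec, and the modem would still be ~288x faster.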

Even in the audible range you could absolutely go hundreds of times faster, but it's much easier to train an LLM with some audio input capability if you keep this low rate and very distinct symbols, rather than implementing a proper modem.

But why use a modem at all? Limiting communication to audio-only is a severe restriction. When AIs are going to "call" other AIs, they will use APIs… not ancient phone lines.


> We change the product constantly — we’re talking over 1,700 updates per year!

Good job, the new red is a huge improvement.

Meanwhile the YouTube comment sections are still getting pummeled by bots, trying to scam viewers with fake crypto offerings (90%+ involving an "Elon Musk giveaway") or writing entire threads praising the great investment returns from a genius trader named "Mr Definitely A. RealName" who operates only on WhatsApp.

Take a look at the comments under this video for example, all the references to AMZ6OP are for a scam crypto token that they pretend is being launched by Amazon: https://www.youtube.com/watch?v=JRd_wNHJG4o.

I'm having doubts even reposting this link… please do not believe for a second that any of these claims are real.

I guess changing red to red-ish magenta was apparently more important than addressing the widespread issues that have been plaguing YouTube for years.


>I guess changing red to red-ish magenta was apparently more important than addressing the widespread issues that have been plaguing YouTube for years.

I have a suspicion that the color and design folks are not the same people in charge of comment section spam/bots.


Google laid off over 1000 people (100 in YT) last year. So at some point they did make a conscious decision that the 6 people making the red slightly more purple were more important than 1000 other roles.


Were those 100 people working in anti-spam/bot detection?


Apparently most were working on content creator relations, a lack of which is probably #1 in "widespread issues that have been plaguing YouTube for years" if you surveyed the people who make the platform what it is. You also can't convince me 0 out of 1000 Google employees are capable of taking on a spam prevention role. On a long enough time scale, headcount is fungible.


>You also can't convince me 0 out of 1000 google employees are capable of taking on a spam prevention role.

Similarly, you can't convince me that laying off 6 people would make any difference (let alone solve) any of the spam, bot, or content creator relations issues. So I guess we're at an impasse.


> https://www.youtube.com/watch?v=JRd_wNHJG4o

It's somehow substantially less surprising to me that a video deep dive on cryptocurrency has these comments. YouTube should address it, but to a degree it's very much expected/comes with the territory.

I don't follow cryptocurrency on YouTube (and in fact I made sure to "not interested" the link you posted, no offense), so I haven't run into the issue you described.

YouTube comments have been and always will be a cesspool. You should consider yourself lucky that they actually improved quite a lot (at least in terms of toxicity and "stupidity") when the thumbs down button was nerfed.


Seems off topic. Do you actually think these designers have anything to do with spam detection?

