lucideer's comments | Hacker News

This is a very long list & is still missing obvious well-known surveillance companies like Experian & who knows how many others. I can imagine the task of documenting this network is going to be pretty intensive.

And this is more the corporate kind that operates on the surface - what about underground or state-funded organizations?

This is a really great post, concise & clear & educational. I do find the title slightly ironic though when the code example goes on to immediately do "import anthropic" right up top.

(it's just an HTTP library wrapping Anthropic's REST API; reimplementing it - including auth - would add enough boilerplate to the examples to make this post less useful, but I just found it funny alongside the title choice)
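
For the curious, a rough sketch of what that import wraps - auth is just a couple of headers on a plain REST call. The endpoint & headers are the documented ones; the model name is only a placeholder, check the docs for current ones:

    # approximately what the anthropic package does under the hood
    curl https://api.anthropic.com/v1/messages \
      -H "x-api-key: $ANTHROPIC_API_KEY" \
      -H "anthropic-version: 2023-06-01" \
      -H "content-type: application/json" \
      -d '{
        "model": "claude-sonnet-4-5",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "hello"}]
      }'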


I came across this Github community discussion today - "Unanswered" from April 2023 - & it felt very relevant to current discourse in the context of some recent frontpage submissions here on Claude CLI[0] & comments questioning whether Dependabot is actively maintained[1].

I wonder if we're entering an era of large commercial enterprise SaaS that is simultaneously expensive, in demand & also abandonware.

[0] https://news.ycombinator.com/item?id=46532075

[1] https://news.ycombinator.com/item?id=46538227


The post mentions a number of times that leaks happen "all the time", but the only comparative data shown on this point is for historical leaks from AS8048.

Does anyone have data on what the general frequency of these leaks is likely to be across the network?


I’ve seen leaks impact my company directly 4 or 5 times in 4 years, so I would think often enough, given that we own roughly a /9 and don’t change our routes too often.

BGP is outside of my skillset, and I'm sure the analysis is fair and accurate. However, had Cloudflare - a billion-dollar, US-based company - detected widespread manipulation of routing tables by the US secret services, I certainly wouldn't trust them to publish it.

I’m pretty confident that the US SIGINT agencies wouldn’t manipulate BGP to redirect traffic somewhere, as such a hijack will ALWAYS leave traces that would be observable by anyone impacted, downstream or upstream.

US SIGINT agencies? They’d just pwn the routers they are interested in. And almost certainly they’ve already done it. Like 10+ years ago.

BGP hijacks are really low-tech and trivial to detect. And competent intelligence agencies don't do anything that's either, unless it comes with enough plausible deniability that it would seem insane to even suggest foul play.

I operate a small BGP hobbynet under 2 different AS numbers, and even I keep logs about path changes. Not for any practical purpose, just sheer curiosity.

BGP is a globally distributed and decentralized system. The messages (announcements) propagate across virtually the entire internet. If someone hijacked a route to a prefix I receive, and the path I received is the hijacked one, I'd see that information.

So yes, if that happened, I'd totally expect Cloudflare to publish it, unless they got an NSL. Which they most probably wouldn't get, as NOTHING about the event would be secret - it would be out in the open for everyone to see the instant it happened. There are also tools like https://bgp.tools which operate public route collectors, with the data being publicly available. RIPE has one too.
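
If you want to poke at the collector data yourself, RIPEstat exposes it over a public, no-auth API - a quick sketch (the prefix is just an example, and I'm going from memory on the endpoint name):

    # ask RIPEstat what the route collectors currently see for a prefix:
    # origin AS, visibility across peers, announcement status
    curl -s "https://stat.ripe.net/data/routing-status/data.json?resource=1.1.1.0/24"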


MANRS has some reporting here:

https://observatory.manrs.org/#/overview

And Cloudflare has some publicly available reporting in Radar:

https://radar.cloudflare.com/routing


Yeah pretty sure it's abandonware.

I was expecting it to be replaced once they announced they were integrating Endor Labs into their GitHub Advanced Security enterprise offerings, but all the news I've heard since that announcement has been focused on merging into Microsoft & AI-related layoffs, so I presume someone just forgot to turn the Dependabot light off as they were leaving.


At least this breakage is clear & obvious.

I did some testing of configuring Claude CLI some time ago via .claude JSON config files - in particular I tested:

- defining MCP servers manually in config (instead of having the CLI auto add them)

- playing with various combinations of `permissions` arrays (sketched below)
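
For reference, the kind of config I was testing looked roughly like this - the allow/deny rule shape is from Anthropic's docs, but treat the specific patterns as illustrative:

    # illustrative .claude/settings.json with tool permission rules
    cat > .claude/settings.json <<'EOF'
    {
      "permissions": {
        "allow": ["Read(./**)", "Bash(git status)"],
        "deny": ["Read(./.env)", "WebFetch"]
      }
    }
    EOF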

What I discovered was that Claude is not only vibe coded, but basic local logic around config reading seems to also work on the basis of "vibes".

- it seemed like different parts of the CLI codebase did or didn't adhere to the permissions arrays.

- at one point it told me it didn't have permission to read the .claude directory, & as a result it ran bash commands searching my entire filesystem for MCP server URLs so it could give me a list of available MCP servers

- when restricted to only be able to read from a working directory, at various points it told me I had denied it read permissions to that same working directory & also freely read from other directories on my system without prompting

- restricting webfetch permissions is extremely hit & miss (tested with Little Snitch in alert mode)

---

I have not reported any of the above as Github issues, nor do I intend to. I had a think about why I won't & it struck me that there's a funny dichotomy with AI tools:

1. all of the above are things the typical vibe coder stereotypes I've encountered simply don't care deeply about

2. people that care about the above things are less likely to care enough about AI tools to commit their personal time to reporting & debugging these issues

There are bound to be exceptions to these stereotypes out there but I doubt there are sufficient numbers to make AI tooling good.


Good info. Now I understand why they refused to acknowledge the UX issue behind my bug report: https://github.com/anthropics/claude-code/issues/7988

---

(that it's a big pile of spaghetti that can't be improved without breaking uncountable dependencies)


The permission thing is old and unresolved. Claude, at some points or stages of vibe-coding, can become able to execute commands that are in the Deny list (e.g. rm) without any confirmation.

I highly suspect no one at Anthropic is concerned about this or working on it.


I think at some point the model itself is asked if the command is dangerous, and can decide it's not and bypass some restrictions.

In any case, any blacklist guardrails will fail at some point, because RL seems to make the models very good at finding alternative ways to do what they think they need to do (e.g. if they are blocked, they'll often cat stuff into a bash script and run that). The only sane way to protect against this is to run it in a container / VM.
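
The container route can be as simple as something like this (image, flags & credential handling all illustrative - the npm package name is the documented one, but adjust for your own stack):

    # run the agent in a throwaway container: only the current project
    # is mounted, so filesystem damage is confined to that directory
    docker run --rm -it \
      -v "$PWD":/work -w /work \
      -e ANTHROPIC_API_KEY \
      node:22 \
      bash -c "npm install -g @anthropic-ai/claude-code && claude"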


I love how this sci-fi misalignment story is now just a boring part of everyday office work.

"Oh yeah, my AI keeps busting out of its safeguards to do stuff I tried to stop it from doing. Mondays amirite?"


So just like most developers do when corporate security is messing with their ability to do their jobs.

Nothing new under the sun.


I had Claude run rm once, and when I asked it when I had permitted that operation, it told me "oops". I actually have the transcript if anybody wants to see it.

It goes without saying that VCS is essential to using an AI tool. Provided it sticks to your working directory.

VCS in addition to working inside a VM or a container.
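
Concretely, the checkpoint habit is cheap - a sketch (the reset + clean combo matters, since reset alone leaves the agent's new untracked files behind):

    # checkpoint before letting the agent loose
    git add -A && git commit -m "checkpoint: before agent session"
    # ... agent session runs ...
    git status && git diff HEAD        # review what it added & changed
    git reset --hard && git clean -fd  # roll back wholesale, incl. new files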

Those stereotypes look more like misconceptions (to put it charitably). Vibe coding doesn't mean one doesn't care about software working correctly; it only means not caring about how the code looks.

So unless you're also happy about not reporting bugs to project managers and people using low-code tools, I urge you to reconsider the basis for your perspective.


This isn't remotely true. Vibe coding explicitly does not care about whether software works correctly because the fundamental tenet is not needing to understand how the software works (& by extension being unable to verify whether it works correctly).

That extension doesn't follow. It is possible to verify if software works without knowing how it works internally. This is true with many things. You don't need to know how a plane/car/elevator works to know that it works when you use it.

I would actually argue that only a small percentage of programmers know what happens in code on an instruction level, and near none on a micro-op or register level. Vibe-coding is just one more level of abstraction. The new "code" are the instructions to your LLM.


> You don't need to know how a plane/car/elevator works to know that it works when you use it.

I'm sure the 737 MAX seemed to work just fine to Boeing's test pilots. Observing the external behavior of a system is not a substitute for understanding its internal workings and the failure modes they carry.


No, vibe coding is about not reading the generated code, but you have to check that it works, be it manually or using tests.

If you do not, why are you vibe coding?

Also there are ways to use a coding agent that are different from this and produce great results, like this:

https://friendlybit.com/python/writing-justhtml-with-coding-...


"fundamental tenet"? There's not an engineering pope speaking ex cathedra.

I mean it's new enough to essentially still be a neologism, so you're right - we can give any arbitrary definition to it if we like. I'm just describing my own observations.

the abstractions around this stuff are still a jenga stack with round pieces... I think it will tighten up over the next year or so for real world use cases. Right now it's great if one is a "build your own tools" kinda person.

Nobody cares how the code looks, this is not an art project. But we certainly care if the code looks totally unmaintainable, which vibe-coded slop absolutely does.

I'm using an LLM to write the code for my current project, but I iterate improvements in the code until it looks like code I wrote myself. I sign off on each git commit. I need to maintain and extend this code, it is to scratch my own itch.

LLMs are capable of producing junk, and they are capable of writing decent code. It is up to the operator to use them properly.


> I'm using an LLM to write the code for my current project, but I iterate improvements in the code until it looks like code I wrote myself.

The prevailing research suggests this is not quicker than just writing it in the first place.


It may not be quicker, but it is often more thorough and less stressful on my old joints. It is also far less tiring.

“Take this CSV of survey data and create a web visualization with a choropleth map with panning, zooming, and tooltips.” I bypass permissions and it’s done in 10 minutes while I go do some laundry. If I did it myself I would not even be done researching a usable library and I would have zero lines of code. Those studies are total nonsense.

I could see it in some cases.

LLMs excel at tasks that are fresh; they're wonderful at getting the first 80% of the way there - phenomenally good for a first draft or so.

I've had worse experiences getting LLMs / agents to refactor code. I'd believe that in many cases it could be quicker to go through manually and make refinements than to keep prodding the LLM to try again.


That seems very intuitive to me. If you want extremely specific changes made at extremely specific locations in an extremely specific way then you probably need to do that yourself, because a language model can’t read your mind. I think there is a very large set of problems where implementation details do not actually matter and cheap, disposable code is not a problem. I don’t think vibecoding is a good idea for missile guidance. Probably OK for a dashboard a manager isn’t really going to use anyway.

The operator is incentivized not to use them properly.

I want to be able to extend the code, so I'd say I am incentivized to use it properly.

While true, the only reason anyone has to care that vibe coding* produces technical debt is that the LLM doesn't always properly clean up that technical debt without being prompted to do so, and that when you have too much technical debt your progress slows down regardless of whether there's a human or an LLM doing the coding.

To put it another way, ask what code an LLM can maintain, not just what code a human (of whatever experience level) can maintain.

* in the original sense, no human feedback at any point


Proper vibe coding should involve tons of vibe refactoring.

I'd say spending at least a quarter of my vibe coding time on refactoring + documentation refreshes to keep the codebase looking impeccable is the only way my projects can work at all long term. We don't want to confuse the coding agent.


> it seemed like different parts of the CLI codebase did or didn't adhere to the permissions arrays.

I’ve noticed the same thing and it frustrates me almost every day.


CC works amazingly well but I agree the permissions stuff is buggy and annoying. I have had times where it’s repeatedly asked me for permission for something I had already cleared, then I got frustrated and said “no” to the prompt, then asked it, “why are you asking me for permission for things I’ve already granted?” Then it said “sorry” and stopped asking. I might be naive but don’t we want permissions to be a deterministic, procedural component rather than something the AI gets to decide?

I get the same feeling, but I think it's not just the code agents.

All the AI websites feel extremely clunky and slow.


This is why I run claude inside a thin jail. If I need it to work on some code, I make a nullfs mount of it in there.

Because indeed, one of the first times I played around with claude, I asked it to make a change to my emacs config, which is in a non-standard location. It then wanted to search my entire home directory for it (it did ask permission though).
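
For anyone curious, the jail setup aside, the mount itself is a one-liner on FreeBSD (paths here are made up):

    # expose a single project read-write inside the jail, nothing else
    mount -t nullfs /home/me/src/myproject /jails/claude/root/work
    # detach it again when the session is done
    umount /jails/claude/root/work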


I’d urge you to report it anyway. As someone that does use these tools, I’m always on the lookout for other people pointing this type of stuff out. Like, the .claude directory usage does irk me. Also the terse telegraphing of how some of the bash commands work bugs me. Like, why can it run some commands without asking me? I know why, I’ve seen the code, but that crap should be clearer in the UI. The first time it executed a bash command without asking me I was confused and somewhat livid because it defied my expectations. I actually read the crap it puts out because it couldn’t code its way out of a paper bag without supervision.

It's funnier this way. Let the vibe coders flounder and figure it out themselves. Or not.

It is only funny until that vibe coder is building the data warehouse that holds your data and doesn’t catch the vulnerability that leads to your data leaking.

Perhaps I can laugh at the next Equifax of the world as my credit score gets torched and some dude from {insert location} uses my details to defraud some other party - which I won't find out about until some debt collector shows up months later.


> It is only funny until that vibe coder is building the data warehouse that holds your data and doesn’t catch the vulnerability that leads to your data leaking.

This is unacceptable. Why would I patronize a business that hires vibe coders? I would hope their business fails if they have such pitiful security and such open disdain for their clients.


Between banking, infra, or government institutions, you've already got a relationship with a vibe coder. You can't avoid it unfortunately.

I read or heard somewhere that at least 80% of CC is written by CC and Aider (the latter from before CC was mature enough).

Not sure whether the comments are debating the semantics of vibe coding or confusing ourselves by generalizing anecdotal experiences (or both). So here's my two cents.

I use LLMs on a daily basis. With the rules/commands/skills in place the code generated works, the app is functional, and the business is happy it shipped today and not 6 months from now. Now, as a super senior SWE, I have learned through my professional experiences (now an expert?) to double check my work (and that of my team) to make sure the 'logical' flows are implemented to (my personal) standard of what quality software should 'look' like. I say personal standard since my colleagues have their own preferred standard, which we like to bikeshed during company time (a company standard is after all made of the aggregate agreed upon standards of the personal experiences of the experts in the room).

Today, from my own personal (expert) anecdotal experiences, ALL SOTA LLMs generate functional/working code. But the quality of the 'slop' varies with the model, prompts, tooling, rules, skills, and commands. Which boils down to "the tool is only as good as the dev that wields it". Assuming the right tool for the right job. Assuming you have the experiences to determine the right tool for the right job. Assuming you have taken the opportunities to experience multiple jobs to pair the right tool.

Which leads me to: "Vibe coding" was initially coined (IMO) to describe those without any 'expertise' producing working/functional code/apps using an LLM. Nowadays, it seems like vibe coding means ANYONE using LLMs to generate code, including the SWE experts (like myself of course). We've been chasing quality software pre-LLM, and now we adamantly yell and scream and kick and shout about quality software from the comment sections because of LLM. I'm beginning to think quality software is a mirage we all chase, and like all mirages it's just a little bit further.

All roads that lead to 'shipping' are made with slop. Some roads have slop corners, slop holes, misspelled slop, slop nouns, slop verbs, slop flows and slop data. It's just with LLMs we build the roads to 'shipping' faster.


No matter which stereotypes you think the developers adhere to, you should file the bugs. Or stop complaining about them.

Right? The general case just doesn't make sense to me when people do that, where "that" is "I have a problem with person/organization, but rather than talk to person/organization about thing, I'm going to complain about it to everyone except person/organization and somehow be surprised that the problem never gets fixed"! Like, how do you want things to get better?

It’s not a strategy for improving the outside world. It’s an automatic emotional pressure relief valve for reducing internal discomfort.

These are "AI"-addicted developers that you're talking to.

They have been tricked into a world-view which validates their continual, lazy use of high-tech auto-generators.

They have been tricked into gleefully opting in to their own deskilling.

Expecting an "AI"-addicted developer to file a bug is like expecting an MSNBC or Fox News viewer to attend a town meeting.

The goal of "AI" products is to foster laziness, dependency, and isolation in their users.

Expecting these users to take any sort of action outside of further communication with their LLM chatbots does not square with the social function of these products.

Edit (response to the guy/LLM below me):

Hackernews comments written by fearmongering LLM idiots will tell me to "keep an open mind" about dogshit LLM chatbots until the day I die.

LLM technology is garbage.

If these tools are changing the world, they're only doing so by:

1. Dramatically facilitating the promulgation of idiotic delusions

2. Making enterprise software far, far more vulnerable than it was even in the recent past


this is a lazy take. all software has bugs and defects.

part of what we do, as developers, is to learn. to have an open mind to new tools and technologies.

these tools are… different, they’re changing the world (fast), and worth trying to understand. your mental rigidity to doing things “the right way” will hold you back and limit your growth. the world is changing. are you?


Those tools are massively overhyped and hemorrhaging money by the second. Such a shame so many people are so blind as to not be able to take things with some realism and an unbiased POV. They're great, yeah, they help for a lot of things, some people really "vibe" with that kind of workflow, good for them.

Every time you "prompt" and you "vibe" you're not "changing with the world"; you're using copious amounts of energy on very expensive hardware that you would never, in your lifetime, be able to use if it wasn't backed by trillions in VC funding. Don't believe me? Try to match the performance of a current model with local hardware, and report back with how much that costs in hardware and energy.

They're all in stage A of enshittification, the bait phase. You're willingly making yourself reliant on a tool that will eventually be unaffordable for any individual, and only affordable for big orgs.

If the job of a developer is to "learn, and have an open mind to new tools and technologies", and "my mental rigidity to doing things "the right way" will hold me back and limit my growth", then I don't want to be a developer. Because one thing is to experiment, and another is to, pardon the expression, suck off any new technology as the new epitome of everything. I don't want to be a "developer" with no judgment. Call me an engineer instead: I do things "the right way", and I don't fall prey to fashion under the guise of "growth".


Attending council meetings as a citizen observer is a huge waste of your time. The council already knows how it’s going to vote. The whole public-facing legislative process is community theater.

Sounds like malware.

It's only "objective" if you accept the beneficiaries of those donations as "objectively" benign.

I don't fully agree with the gp's statement - Musk is at least a little worse than most - but Gates in particular is a terrible counter-example. Especially in light of recent document releases.


(I typed out a reply to the above but the gp got flagged into oblivion before I hit submit so I've copy pasted it into the only non-reactionary descendant thread for pastry posterity)

Reply to the original now-flagged comment:

---

This.

I might reword your statement to "musk isn't notably worse than...", & I will say twitter has significantly declined in many ways since he took over - both the software quality (many things no longer work - especially e.g. search - & people just frustratingly accept it because broken window theory I guess) & also many of the new features being objectively horrific (like Grok generating CPM on demand without ramifications).

However at its core Twitter is still the same Twitter it always was in terms of the toxic but politically engaged & zeitgeist-relevant live community discussion that takes place there. Reddit may rival it within some narrow selective niches but there's nothing else giving us what Twitter is giving us in terms of being connected to what is happening in international political culture. On both sides of the spectrum: conservative discourse is a lot more broad & active on Twitter than on Truth Social or similar, & outside of weird insular tankie Discord or Matrix servers, Twitter is also where it's at for leftist discourse; Bsky & Mastodon are both deserts.


I wouldn't do anything to "correct" your guide - I think it is "correct" as is. This comment is great for its informational content but I'd consider it an addendum, not an erratum.

If you like, it might be nice to include a section on historical and/or niche browsers that lack some of the elements this guide describes - e.g. Dillo, a modern browser that supports HTML4 & doesn't support Javascript. But your guide should (imho) centrally focus on the common expectation of how popular browsers work.


Apart from the apparent comparative ease of creation relative to GUIs (I suspect Electron apps may be easier than TUIs), I think the main benefits from a user perspective seem to be down to cultural factors & convention:

- TUIs tend to be faster & easier to use for CLI users than GUI apps: you get the discoverability of a GUI without the bloated extras you don't need, the mouse-heavy interaction patterns & the latency.

- keybindings are consistent & predictable across apps: once you know one you're comfortable everywhere. GUI apps are highly inconsistent here, if they even have keybindings

- the more limited widget options bring more consistency - GUI widgets can be all sorts of unpredictable & exotic

- anecdotally they just seem higher quality


For that matter, with modern terminals, you can still do mouse interactivity as an option. I think that working over an SSH terminal is pretty nice in and of itself even if you can self-host a web application.

I've almost always got my terminal app open anyway; in the case of VS Code, I don't even need to switch to another app to use it.

