Hacker News | mesrik's comments

Also, RPKI has been available for a long time already.

Considering how the routing table size has been increasing and the need for IPv6, nobody should be running global routing any more with gear that doesn't support RPKI, nor without registering their routing policies and announcements with the RIR they operate under.

https://en.wikipedia.org/wiki/Resource_Public_Key_Infrastruc...
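For anyone who hasn't looked at what RPKI actually gives you: the end result of validation is just a list of validated ROA payloads, each one saying "this origin AS may announce this prefix, down to this maximum prefix length". If I remember right, NLnet Labs' Routinator can dump that list roughly like this (assuming it's installed and its trust anchors are initialised; this is a sketch, not a full recipe):

  # each output row is essentially (origin ASN, prefix, max length);
  # an announcement whose origin or length doesn't match any row is RPKI-invalid
  $ routinator vrps | head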


Many v4 prefixes in the ARIN region are legacy and don't support RPKI unless you sign the registration agreement. I have a legacy prefix and may eventually be forced to sign up.


>or even by normal load from someone deciding to split a /8 prefix into /24's

If that kind of thing happened directly from the load of 25 added routes, it's quite hard to believe.

  # the 10/8 prefix here is only to show how to get the number of new routes added.

  $ sipcalc -n 24 10.0.0.0/8 | grep -c Network   
  25
  $
BGP peering routing policies have, for good reason, been constructed so that they expect advertisements to match an "exact accept" prefix-list with that /8 prefix, because that is what's expected when peering is agreed, even if many don't state it explicitly. Following this best practice keeps the internet routing table from being filled with superfluous routes.

But anyway, a sudden change from a /8 to 25 x /24 without first notifying your peers and giving them time to change that "exact accept;" to "orlonger accept;" is quite surely a footgun if you don't know the common principles of network management. But usually the blast radius of that kind of screwup is mostly local, limited to that /8 prefix.
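For those who haven't touched this, here is a minimal Junos-style sketch of what such an import policy looks like; the policy and term names are illustrative only, and 10/8 is just the example prefix from above:

  policy-statement FROM-PEER-EXAMPLE {
      term agreed-prefixes {
          from {
              /* change "exact" to "orlonger" to also accept the de-aggregated /24s */
              route-filter 10.0.0.0/8 exact;
          }
          then accept;
      }
      term everything-else {
          then reject;
      }
  }

With "exact" the peer's sudden flood of /24s simply falls through to the reject term; with "orlonger" every one of them would be accepted into the table.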

I'm not sure, though, how that could be technically avoided in the BGP protocol or in router control-plane (router OS config) design. Policy filters and the best practices for using them have been set for good reason, not just to irritate people and make things harder than they need to be. We certainly did not do that while I was still working.

Right, something else that could happen with that kind of sudden change: if the peer also had other peers which had "orlonger" in place instead, traffic would then switch to those, which could have side effects like saturated links, slowness or even increased costs. Too bad, and it may happen. But the principle is to communicate your routing changes in good time before you actually make them. That will prevent most problems of this kind from ever happening to you.


Oh, my bad. How did I not notice my mistake right away? That 25 is grossly wrong; I should have checked before using it. The correct line to get the subnets is

  $ sipcalc -s 24 10.0.0.0/8 | grep -c Network
  65536
Which of course increases the global routing table size significantly. I apologise for the mistake, which I should have noticed before posting.
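A quicker sanity check, without sipcalc, is just the arithmetic: splitting a /8 into /24s gives 2^(24-8) networks:

  $ echo $((2**(24-8)))
  65536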

Everything else I wrote about changing prefix advertisements stands. You should, and need to, communicate your advertisement changes to your peers in good time and give them time to make changes.


>Using "easy listening" as a pejorative has always baffled me. Why does music need to be difficult?

Yes, I agree with you, it shouldn't and doesn't need to be.

But some things like music, be it jazz or something else, aren't always just a matter of listening; they're also a way of establishing oneself, a way of living or pursuing life, a way of seeing oneself and communicating that to others. I'm not into this or studying it, but it's a known behaviour pattern and you'll find studies if you'd like to read more about it.

Right, some jazz aficionados tend to be like hipsters, who despise anything but what they themselves grok and treat it as unorthodox. It's a way of establishing themselves and having a reason to keep themselves different, at least a bit better than others. I'm not claiming everybody is like that, but I've certainly met a few who are quick to classify someone by the things they like.

I find myself liking West Coast jazz bands and artists' performances more the older I get. And if I'm not completely wrong, it might be a more common trend; their share of radio airplay has increased over the past ten or so years, at least where I live.


Well, that's about what happens in a sauna with an electric stove.

In Finland we do it every day and have done so for decades already.

For those who may not know, electric sauna stoves have been in common use for about fifty years, at least in urban environments. Stoves typically have three heating elements (resistor coils), one per phase of the 400 V three-phase supply, with a power draw of commonly 6-8 kW in small home stoves and 2-3 times that in swimming hall sauna stoves.

While sitting on the topmost sauna benches bathing, we throw fresh water from a bucket with a sauna ladle (saunakauha) onto the stove(s), anything from small drips to a pint, trying to spread it out a little. This is to get steam and make the pleasant, relaxing 'löyly', as we call it.

The stove is usually heated for an hour or so before bathing starts, to get the temperature somewhere around 70-100°C (158-212°F).

It's not advisable to have the stove showing red-hot glowing elements peeking out from behind the stones, but it does happen if the stones were not laid properly. Even if water gets directly onto the elements, they will not break or take any damage, as they are intentionally made to withstand that.

So water boiling practically immediately does happen, and it's not particularly dangerous when it's done in circumstances where the equipment is made to withstand it; there's nothing miraculous about it. It really happens millions of times each day in Finland and other places where that kind of sauna culture is practised, both in people's private homes and in public swimming hall saunas alike.

I will be observing it next about 14 hours from this writing, as I'm going swimming as usual tomorrow morning at 6:00 am when the pool opens, and then likewise twice more (Wed, Fri). Also once more on Thursday, in the evening sauna reservation slot I've got in the flat where I live.

There is quite a good English page about the Finnish sauna on Wikipedia, but to get a glimpse of what modern saunas and stoves look like, the web pages of Harvia, a long-time stove manufacturer, give some sense of what I'm writing about.

- https://www.harvia.com/en/


It’s a good point but there might be a pretty big difference in force because the ladled pint of water is not contained on any axis. A pint of water in a cup, with up as the only exit, subjected to the full current of a 3 phase 480v circuit is probably going to generate a good size jet of steam straight up.


Yeah, it's true that water thrown onto the stove isn't contained by much of anything but the bathing room. Some of the water will of course flow a bit deeper between the stones, but there is plenty of room for it to expand when it boils into steam.

Some firm hissing and minor clanking noise from the stones is normal, and even a bit sharper noise when a stone cracks is what using water on the stove causes once the stones get old and heavily used. The stove should be cleaned periodically when it's cold, depending on how much it's been used; some stones, or even all of them, need replacing if it's been a long time and sand has accumulated on the stove's bottom grill or plate, whatever it has to keep the stones from falling through. In family houses cleaning happens perhaps once a year; public saunas open 6 am to 8 pm, 300-plus days a year, will do stove maintenance every month or every other month.

And yes, getting a good amount of steam is of course the whole goal in this kind of sauna use and what we prefer. In some other places where they have begun to call it a sauna too, they may not even allow water to be used for anything but drinking, and usually they don't warm that 'sauna' up as hot as we tend to do; or if they do, it's more of a Turkish bath type then.


That's a good question.

During the 2024 Summer Olympics, at my then employer, whose DNS and core network I was still managing, I returned from summer holiday and was told by the helpdesk that our users at different locations on campus were not able to open the national TV broadcaster's streaming service and view the games.

I found out by asking a few of these users that they got denied with a claim that they were in the UK and that the streaming service was not allowed abroad. The TV broadcaster told me, once I got a reply from someone who knew anything about the matter, that they use the MaxMind GeoIP service. So I went to test a few addresses on the MaxMind debug page, and it clearly showed that many addresses from around 20 subnets of our /16 IPv4 CIDR block were reporting the same.

So I sent email to MaxMind support asking why, and tried to find out what means they use to check where each network is located and populate their GeoIP DB, which clients then either mirror or query remotely from their service.

After a few emails with their support it turned out that they did not use the RIPE (RIR) database at all, as the RIPE terms of use don't allow using RIR information for commercial purposes. So MaxMind apparently did not use WHOIS (RDAP) location information, and the wrong data did not get corrected from the LOC records our DNS had either.

I never got any explanation of how they figure out where an IP or CIDR block is being used. Reading between the lines, I assumed it's perhaps some kind of trade secret they don't like to talk about. Maybe it's using mobile devices' location services or the like, but with the amount VPNs are being used these days, that could lead them to push bogus information into the database service they then sell and naive customers trust <eh>.

But what surprised me most was how easy it was to update the information: basically, just by communicating clearly and writing a polite, convincing message, they seemed to take the information pretty much at face value, plus the fact that I was sending my messages from the DNS SOA RNAME address.
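For what it's worth, both of those things are trivial to look at yourself with dig; here's roughly what I mean, with a made-up example.net and made-up coordinates standing in for our real zone:

  # RFC 1876 LOC record, the DNS way of publishing coordinates for a name
  $ dig +short LOC example.net
  60 10 10.000 N 24 56 20.000 E 10.00m 1m 10000m 10m

  # the SOA RNAME (second field) is the zone contact, with the first "." read as "@"
  $ dig +short SOA example.net
  ns1.example.net. hostmaster.example.net. 2024010101 7200 3600 1209600 3600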

But if GeoIP data providers don't use that, then who or what services do? That I still have no idea about.


These days RFC8805[0] is pretty widely supported. But as far as I understand, it's not entirely trusted and geolocation providers will still override that data if it doesn't match traceroutes and whatever other sources they use.

[0] https://datatracker.ietf.org/doc/html/rfc8805


A bit late to reply, as it's been so long (10 h) since I posted my comment. But just for the record, here I go.

After reading RFC 8805, here is what it says about the situation at the time of publishing, August 2020.

"8. Finding Self-Published IP Geolocation Feeds" and subsequent

"The issue of finding, and later verifying, geolocation feeds is not formally specified in this document. At this time, only ad hoc feed discovery and verification has a modicum of established practice (see below); discussion of other mechanisms has been removed for clarity."

and subsequently

"8.1. Ad Hoc 'Well-Known' URIs

To date, geolocation feeds have been shared informally in the form of HTTPS URIs exchanged in email threads. Three example URIs ([GEO_IETF], [GEO_RIPE_NCC], and [GEO_ICANN]) describe networks that change locations periodically, the operators and operational practices of which are well known within their respective technical communities."
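For reference, the feed format itself is nothing more than CSV lines of prefix, country, region, city and postal code served over HTTPS; something like the following, using documentation prefixes and my rough location purely as illustrative values:

  # prefix,alpha2code,region,city,postal_code
  192.0.2.0/24,FI,FI-18,Helsinki,
  2001:db8:100::/48,FI,FI-18,Helsinki,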

I also spent a moment trying to figure out what I could find about its adoption and use, and didn't find much: some blog posts, articles and comments asking whether Amazon AWS or Microsoft Azure support it, and the answers were pretty much nope, they don't, at least not as of writing last year and this year.

Thus I'm concluding it's unlikely to be any major source of location information for GeoIP providers like MaxMind. Nope, it's too marginal a source for them to spend time on such a little-used spec yet.


They could get a rough estimate of an IP location using traceroute from many different known locations. Very rough but it’s a starting point.

For some cases, they might just lookup who owns that IP range and put their address as the IP location.


Yes, traceroute is something that gives a rough estimate of where an IP could be, perhaps up to the level of the ISP hosting it, but traceroute isn't usually allowed to pass firewalls and seldom reaches the target IP on the networks where clients really are.

One possibility is that BGP-advertised and known information, like what https://www.cidr-report.org provides, could be used. But like I wrote, commercial GeoIP data providers are not allowed to use WHOIS information from RIR registries; the ToS generally prevent it from being collected and resold, which is why MaxMind told me they don't use it.

Thus the location information I had updated in our RIPE DB records, or any other information there, was not used by MaxMind. Or at least that's what they claim; true or not I don't know, but that's what they tell you if you ask them.

Also, apparently they did not use the LOC records from the organization's domain where I maintained DNS LOC records either. And I got no answer as to why, nor what they use as their sources of information; it's most likely some kind of trade secret of theirs.


>Unfortunately for HP, its workstations (the ones OP acquired) weren't nearly as popular with universities and developers as Sun Microsystems', so you tended to find HP-UX in commercial production—larger servers, more workload, but smaller numbers

Agreed; at the university where I worked, the cost of HP systems was the major reason the Computing Center purchased Sun, though we also had stray discount-priced units from almost all vendors.

We did have one HP 3000 running MPE for the library's VTLS system for quite a long time. I can't remember its exact model any more, but it was first an old system filling a 160 cm-high rack, later replaced with a smaller 3000-series box about the size of a 9000/E35 (a thick and very heavy PC). I did not manage it, but I helped its sysadmin with his 9-track autoloader issues a couple of times. I would certainly have recycled that tape unit for another use, but it was HP-IB (IEEE 488 / GPIB) connected, like the whole rack full of daisy-chained disks, which were easy to believe had not been cheap. Too bad it was so hard to get a GPIB adapter working with other systems. The terminals used with MPE, with their local edit buffer, were weird, as was the HP Roman character set they used. All so well built that it was a shame to let them go when VTLS was retired about 30 years ago.

The maths department did have better funding and kept a few HP-UX machines running a long time. The only HP-UX we had at the CC was a C160 workstation running the OpenView NMS, but that's it.

Yes, and on the commercial side (a telco vendor) where I worked, the customer demanded HP and there were very few Sun servers; Sun was only used if and when software was not available for HP-UX at all. What I recall is that Ericsson switching systems tended to come with Sun/Solaris and the Lucent 5ESS with HP/HP-UX at that time.

A friend of mine went to some conference in SF, I don't recall the year. But he came back with HP-branded sunglasses, which HP gave to everyone visiting their booth, telling them "Remember, don't look at the Sun" :D


There were many early stories about Scott McNealy and his Sun crew going into competitive situations against Apollo, ComputerVision, DEC, HP, Intergraph, Masscomp, SGI, Symbolics, Tektronix—whoever, really, and there were a lot of whoevers in those days. Competition would argue: "Ours are clearly superior!!" and give a good showing of that. Better networking, display resolution, realtime responsiveness, app performance, rendering speed—whatever metric.

And then Sun would hit back: "Yeah, maybe a smidge better... Not saying it is, but maybe, in an ideal light. On the other hand, with Sun, we cost a lot less. That means you can get 3 or 4 of your engineers empowered with a world-class workstation for every engineer you could with <competitor>." Boom. Those economics were compelling.

It also helped that in those days, Sun workstations became the object of desire for a lot of young developers and engineers, myself included. Sun styled itself into the "it" product.


The Berkeley Automounter Suite of Utilities, also known as am-utils, commonly did similar arch/platform local and remote mount tweaks quite easily.

https://www.am-utils.org/

The am-utils "amd" known as its running process current use I don't have much to say as I've not much seen it as at least Linux distros have had autofs-tools quite long time. But -90 something am-utils was the thing we mostly used.

Adding: oh, that made me remember we also had a user-mode NFS daemon back then, which allowed re-exporting remote mounts. With smaller disks and always looking for somewhere to get more space, if only temporary storage, that was a great help at times. The current kernel-based NFS doesn't support it any more.


IMHO, HP-UX had hands down the best-written man pages I've ever seen on any UNIX, commercial or free. And I've worked with quite many.

All the man pages were well written, nicely formatted, easy to read, and almost all came with often valuable examples giving a quick enough understanding to check usage. That is absolutely the thing I've missed on other *nix systems since.

There are too many things that were done so nicely and made HP-UX pleasant to maintain to try to remember and list them all. But unfortunately the shell environment was no match for the convenience of the GNU tools Linux had from the beginning, that is, without making the effort to install them (read: compile from source, for quite a long time) on HP-UX, if that was allowed. At the university computing center that was no problem, but on the telco side it was a big no-no -- not without getting the product owner's permission first :/

But just as an example, Ignite-UX was one of my favourites on HP-UX. The simplicity of making, with one simple command and a few options, a bootable DAT tape that could then be used either to recover a whole running, fully functional system, or to clone a developed system first to the staging lab and then up to production with ease, was a great time saver in major upgrades and migrations. None of the Linux bare-metal backup systems I've tested have been able to recover exactly the same disk layouts; usually the LVM part is poorly done. As have VMware P2V migration tools too, btw.

The Linux LVM, which Sistina did first before Red Hat bought them, implements quite exactly what HP-UX had already had for some time by then.
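That shows even in the command set; the basic workflow on Linux LVM is, to my eye, almost word for word what we typed on HP-UX, modulo device naming. A minimal Linux sketch, with purely illustrative device, group and size values:

  # create a physical volume, a volume group and a logical volume, then grow it later
  pvcreate /dev/sdb
  vgcreate vg01 /dev/sdb
  lvcreate -L 10G -n lvol1 vg01
  lvextend -L +5G /dev/vg01/lvol1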


I agree that LVM in HP-UX was far ahead of Linux back in the day. To be fair some of those advanced features in HP-UX LVM required an additional license (eg: mirroring required Enterprise Operating Environment). I haven't touched HP-UX in like 10 years however.


That is true, SW licenses were a major nuisance, as they usually are. Not just where to get one in time, but keeping track of them and making sure the proof of purchase was not lost before deployment and got included in the final delivery. HP product codes and the plague of product renaming at major version changes were not exactly my favourite part of the work!

Many HP-UX boxen (servers) came with a default (interactive) multiuser OS license. The product differentiation HP sales loved left workstations license-castrated: they came with only a two-user license.

The first time, I had no clue about this and was wondering why some odd network management software I was installing on a server did not restart properly, which caused head scratching. Then I found the logs stating that our license was not valid, though it had been confirmed valid in another test install.

An HP support guy I knew and saw later told me that I probably had to install the optional two-user package, and then the software would start. Oh great, and so it was. But what the heck: that two-user license only limited things to two simultaneous serial-line users, and only the system console was serial at that time; everyone else logged in via the network. To be sure, I made the PM check whether we were still within the license because of that. He told me later: yep, no problem there, just get it done and we're ready to deploy it to the site.


I remember the hours gcc needed to compile itself on those HP servers. We needed it for all the programs that would not compile with HP's cc. We also installed some GNU userland utilities because, as you wrote, they were better than the ones in HP-UX. Those were the years around 1990.


I did some HP-UX in the late '80s: migration of servers across the country for a courier company, from NCR Towers to HP servers running HP-UX (sorry, don't recall the models offhand).

Had fun porting software across, including a radio system that couldn't be tested fully except in the field (where it worked the first time, which was amazing). Had many good chats with HP engineers back then (we did a large purchase as a global company), and one thing I still recall was early editions of HP-UX having an error code of 8008, until somebody in senior management at HP saw it one time (apparently no customer had ever complained about it).

I liked HP-UX, having previously worked on IBM RT systems running AIX, as well as NCR Towers with their more vanilla System V. Though I did have SMIT with AIX and SAM with HP-UX for those manual-saving moments of ease to fall back on. My favourite flavour of Unix of that time would be the Pyramid systems' dual-universe OSx: you could have a BSD or an AT&T environment at once, and use both flavours in scripts by prefixing a command with bsd or att to run it. Don't recall offhand how it handled TERMCAP/TERMINFO (that was always an area of fun back then).

Fun times, in the days in which O'Reilly and magazines like Byte or Unix World were the internet, along with expensive training courses and manuals that you would use, thumbing every page of the multi-tomed encyclopedic stack they came in.

The best C platform for development that I used in that era was, hands down, the VAX under DCL; the profilers etc. were a pure joy.


> I liked HP-UX having previously worked on IBM RT systems running AIX, as well as NCR towers with there more vanilla System V.

There's very little on the internet about those "NCR Towers."

> 1987: https://www.techmonitor.ai/hardware/ncr_marries_its_tower_un...: "Despite abandoning its effort to implement Unix on its NCR 32 chip set, NCR Corp did not abandon its ambition to bring Unix into the mainstream of its mainframe product offerings, and the company yesterday launched a facility whereby its top-end multiprocessor Series 9800 fault-tolerant mainframes can be used as servers to a network of 68020-based Tower Unix supermicros."

> 1988: https://www.techmonitor.ai/hardware/ncr_renews_its_tower_uni...: "When you sell as many machines as NCR does with the Tower, you can’t rush to incorporate a new chip as soon as it arrives because there simply aren’t enough chips to meet your needs. Accordingly the new Tower models use the 25MHz 68020 rather than the 68030."


Alas, I have no documentation on those NCR Towers, nor much I could add (though very robust kit, as I never had any hardware issue with them, even in a warehouse/distribution hub, decommissioning one with over an inch of diesel soot caked inside and thinking I needed hazard pay). Alas, I lost a lot of my manuals in a move, along with old systems, a couple of decades ago. All I have left is some IBM stuff, well duplicated across the net; IBM is good with their online documentation. Just wish I still had the ICL and Honeywell mainframe manuals I had.


Yeah, HP's cc was "not technically a C compiler" - the only supported use of it was to compile a couple of stub files and link the kernel, on kernel configuration changes. (This led to a bunch of work in making gcc bootstrap from cc, even on top of HP/UX weird ABI, something involving function pointers being longer than other pointers IIRC?)


I think that's the bundled version. There's a 'proper' C and C++ development environment you can install if you have the codeword for the install CD.


And documentation, apparently no longer to be found on public HP sites, after all their reboots as companies.

Occasionally I find some stuff via search engine, mostly random.


HP-UX general support seems to be EOL'd by the end of this year. Extended support, apparently very pricey, will last till 2028.

It would be nice if anyone who still has contacts could ask whether HPE would be willing to release at least parts of HP-UX, like the documentation, and let archive.org take them, so we could occasionally check things as a reference for how HP-UX was.

It would be a shame if all the work they put into those documents were lost and unavailable to the general public later on.


Could it be possible to copy the man pages directly from a running distribution? I'm sure that's not allowed, but if it's otherwise disappearing forever...


Sure it is, if you have the installation disks. Those are bog-standard ISO 9660 with the Rock Ridge overlay extension, which is just a hidden file in the CD's top directory mapping those silly uppercase ISO naming conventions with file versions, and Linux should be able to mount them without problems.

I don't remember any more whether the man files were preformatted and .Z-compressed, or whether the troff sources and the "an" macro package were there too. Commercial Unices did have a bad habit of not providing sources, so that could be the case.

But if someone has the CDs, then it's not too hard to check, I believe. The installation files could be packed somehow, like compressed with cpio or tar inside; that's what I now think they would have been. But I can't remember for sure, it's a bit over 25 years since I last worked with HP-UX.
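If someone does have media and wants to check, a rough sketch of what I'd try on Linux; the image name and paths are of course placeholders, not the real layout:

  # loop-mount the (hypothetical) install CD image read-only and look for man pages
  $ sudo mount -o loop,ro hpux-core.iso /mnt
  $ find /mnt -iname 'man*' -type d
  $ find /mnt -iname '*.1*' | head
  # if they turn out to be compress(1)-packed preformatted pages, something like
  $ zcat /mnt/path/to/ls.1.Z | less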

And if I remember correctly, HP did also ship some manuals on CDs. I have some kind of memory of seeing disks like that, but I never used them; we had paper manuals back then, which were then sent to the customer as part of our product. Nor have I any idea what format those documents or whole documentation CDs would be in. PostScript or PDF if we're lucky, but it could be some proprietary format in the worst case.


I very rarely read man pages (similarly gnu info) on Linux nowadays, whoever they're written for isn't me.

Superior alternatives:

* tldr/tealdeer - usually just a pile of typical usage examples, almost always covers what I want

* jfgi because surely someone has tried to do this before and asked about it on an ancient forum

* llms - regurgitating the info from above, possibly with the bonus of letting it try a script in a sandbox and then entering an error-confusion loop

* source - documentation can be wrong or incomplete, but the source never lies


>As someone who deals in breach data this is a simple regex to strip out.

Sure it is, but at least you get, later, post-leak, a slight chance to find out where the leak originated.

Data stealers seldom strip out the +extension part before selling the data or otherwise dumping it somewhere. And as it's passed on, you get to see the address exactly as you gave it to the party that had the leak. The reason sellers don't strip it is perhaps that they sell by the number of unique addresses, and since +extension usage is quite rare, they make more money by not stripping it off.
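The parent is right that stripping is trivial; a one-liner like this (GNU sed, addresses purely illustrative) is all it takes, which makes it all the more telling that the dumps usually still have the tags intact:

  $ echo 'me+shopname@example.net' | sed -E 's/\+[^@]*@/@/'
  me@example.net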

Information about where it leaked can be very useful to pass to the leaking party, at least up until the point they have announced they know about the compromise. I've done that since the turn of the century more times than I can count, and quite many times I've been the first to let them know they had a problem there.

And sure, I've received thank-you emails for giving them an early heads-up about the issue.


Hah, what a coincidence this post is.

Just last week I had to open, for the first time, my alarm clock with a green 7-segment display, because I accidentally dropped it while vacuuming and the antenna cord broke, as it was so firmly wound under a nail holding a picture frame. While it was open I cleaned the interior of dust and used greasy PRF spray to lubricate the pots, switches and tuning wheel. If I recall correctly it did have that LM8560 chip in it, and with its display the inside looked almost exactly like the one in the article.

The label on the bottom claims:

  -----------------------------
     Luxor CR 9016
  NOKIA Consumer Electronics
   International S.A
  (FI)(N)(S)[x] 230V ~ 50Hz
              Battery 9 V
  MADE FOR NOKIA IN CHINA
  -----------------------------
And another, smaller sticker:

  SERIAL NO.
  9302-00106
I bought it in '85 before Christmas, because my then girlfriend told me that the alarm clock I had built myself, using a standalone clock module purchased from a local electronics component store, was too ugly for us and had to go. Sure, I took the old one to the summer cottage, and once I saw this better-looking one I bought it to make her happy. What wouldn't a young man do to make his fiancée-to-be happy, right?

Q: But why is the device branded Luxor and made for NOKIA? A: Because NOKIA had, a bit earlier that year (1985), bought the Swedish consumer electronics maker Luxor. And I guess they had not yet had time to redo the chassis with NOKIA printed on it, and this was still a product transition period.

NOKIA at that time was still making TV sets as well, and two years later it brought out its first completely new way of implementing analog TV using digital processing chips, which allowed quite nice features like PIP, which was a great help for making VHS recordings without ads. I had one of those TV sets (the M model) and used it for about 10 years.

But that alarm clock radio from '85 is still going strong, in good shape, and it definitely was a good purchase about 40 years ago.

e: Sorry about the formatting, I tried to find out how to format it literally, but couldn't. OK, good enough now.

