Sun Rays were so good. Being able to walk over to someone else's desk and say "hey, take a look at this" and swap your card for theirs and instantly have your desktop was such a great user experience.
Also enjoyed the keyboards (with control where caps lock "normally" is)...
Maybe it's me, but I think in the worst-case scenario (criminal liability), a jury will be more sympathetic to a person (acting in good faith) who had an issue because of the car manufacturer's provided system than they would be to a person who installed a DIY/open source self-driving system.
And I say this as someone who considered trying out the project.
There's a guy who makes them in his garage. They're not really conceptually hard to make as such, they're just fiddly, delicate, labour-intensive and mostly replaced by astoundingly cheaper and often better (outside of a few niches) solid state options.
If there were some kind of interdiction on silicon (an evil genie or some kind of Butlerian Jihad perhaps?), the market would remember and/or rediscover the thermoelectric effect and throw money/postapocalyptic bartered goods at glassblowers pretty sharpish.
If that status continued, I'm sure we'd see developments in that space in terms of miniaturisation, robustness, efficiency, performance, etc., that would seem as improbable to us as a modern CPU would seem to someone in the no-silicon timeline. You may never get to "most of a teraflops in your pocket, runs all day on 4000mAh and costs three figures" but you could still do a meaningfully large amount of computation with valves.
Savant-tier, obsessive, dedicates his life to it "guy" does it in his garage over a period of how many years, and has succeeded to what point yet? Has he managed even a single little 8-bit or even 4-bit cpu? I'm cheering that guy on, you know, but he's hardly cranking out the next-gen GPUs.
>the market would remember
Markets don't remember squat. The market might try to re-discover, but this shit's path dependent. Re-discovery isn't guaranteed, and it's even less likely when a civilization that is desperate to have previously-manufacturable technology can't afford to dump trillions of dollars of research into it because it's also a poor civilization due to its inability to manufacture these things.
You don't need trillions of dollars to start making tubes again. And it wouldn't be that one guy doing it for funsies, would it? If the question was "can one hobbyist bootstrap everything on his own" then I would agree. Maybe you completely lose even the insight that a small electric current can be used to switch or modulate a larger one. But if you're also losing mid-high-school physics knowledge, that's a different issue.
As I said, you probably won't ever get to where we are now with the technology, but then again probably 99.999% of computing power is wasted on gimmicks and inefficiency. Probably more these days. You could certainly run a vaguely modern society on only electromechanical and thermionic gear - you have power switching with things like thyratrons, obviously radios, and there were computers made that way, such as the Harwell WITCH in 1952.
Maybe you don't get 4K AI video generation or petabyte-scale advertising analytics but you could have quite a lot.
Looking at the Ryzen 7 9800X running at 5.2 GHz, if you chopped off 99.999% of that, you'd get a 52 kHz CPU, and the original 6.6 gigaflops would become 66 kiloflops.
For reference, the original Intel 4004 CPU from 1971 ran at 740 kHz, so 52 kHz isn't even enough compute to do a secure TLS web connection without an excessively long wait. The 4004 did not do floating point, however, and it wouldn't be until between the 486 (1989) and the Pentium (1993) that we saw 5-10 MFLOPS of performance.
Hmm... I think the 9800X should be able to do at least 32 FLOPS per cycle per core, so across its 8 cores roughly 1.3 TFLOPS is the ceiling for the CPU. 1/100000 of that leaves you... about 13 MFLOPS.
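The scaling above is easy to sanity-check with shell arithmetic. The 8-core and 32-FLOPS-per-cycle-per-core figures are the assumptions from the comment (a peak-throughput estimate), not measured numbers:

```shell
# Back-of-envelope: scale a Ryzen 7 9800X down by 1/100000.
# Assumes 8 cores and 32 FLOPS/cycle/core (peak estimate, not measured).
clock_hz=5200000000
peak_flops=$((8 * 32 * clock_hz))                    # ~1.33e12 FLOPS peak
echo "scaled clock: $((clock_hz / 100000)) Hz"       # prints 52000 Hz
echo "scaled peak:  $((peak_flops / 100000)) FLOPS"  # prints 13312000 FLOPS
```

So the 1/100000 machine lands at 52 kHz and ~13 MFLOPS under those assumptions.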
99.999 may be an ass-pull of a figure, but I was thinking more in terms of having whole datacentres screaming along doing crypto, billions of cat videos, Big Data on "this guy bought a dishwasher, give him more dishwasher adverts", spinning up a whole virtual server to compile a million-line codebase on every change, and AI services for pictures of a chipmunk wearing sunglasses. There's a good chunk of computation that we as a society could just go without. I know of embedded systems that run at hundreds of MHz and could be replaced by no CPU at all and still fulfil the main task to some extent. Because early models indeed used no CPU. Many fewer functions, but they still fundamentally worked.
Many things we now take for granted would indeed be impossible. I suppose the good news is that in some electropunk timeline where everyone had to use tubes, your TLS connection might not be practical, but the NSA datacentre would be even less practical. On the other hand, there'd be huge pressure on efficiency in code and hardware use. Just before transistorisation, amazing things were done with tubes or electromechanically, and if that had been at the forefront of research for the last 70 years, who knows what the state of the art would look like. Strowger switches would look like Duplo.
Probably there would still be a lot of physical paperwork, though!
Comparisons to old technology are just something I do for fun, don't read too much into it. :)
Fun fact: A USB-C to HDMI dongle has more computing power than the computer that took us to the moon.
As far as the NSA being even less practical, they're among the few who have the staff that could eke every last cycle of performance out of what remained. Maybe the Utah datacenter wouldn't work, but Room 641A long predates that.
I mount my nfs shares like this:
sudo mount -t nfs -o nolocks -o resvport 192.168.1.1:/tank/data /mnt/data
-o nolocks: disables file locking on the mounted share. Useful if the NFS server or client does not support locking, or if there are issues with lock daemons. On macOS this is often necessary because lockd can be flaky.

-o resvport: tells the NFS client to use a reserved source port (<1024) for the connection. Some NFS servers (some Linux configurations, or *BSDs with stricter security) only accept requests from clients on reserved ports, since only root can bind them - a weak form of client authentication.
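If you want the mount to persist, the same options can go in /etc/fstab (on Linux this is read at boot; macOS feeds fstab NFS entries through automountd, so behaviour differs slightly). The server address and paths here are just the ones from the example above. Note the two -o flags can also be written as a single comma-separated list:

```shell
# /etc/fstab equivalent of the mount above
# (fields: device  mountpoint  type  options  dump  pass):
# 192.168.1.1:/tank/data  /mnt/data  nfs  nolocks,resvport  0  0

# Or on the command line, with the options combined:
sudo mount -t nfs -o nolocks,resvport 192.168.1.1:/tank/data /mnt/data
```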
I did JFK-EWR coming back from HND one time. Not the only option but probably the best, all things considered. That's life in the fast-paced, slam-bang, laugh-in-the-face-of-death world of non-revving.
[AI driven] data center power consumption is real and as of right now, it seems like other consumers are subsidizing it.