What’s the maximum range to your phone to get notifications? I’ve been trying to cut back on my reflex to look at my phone every few minutes. It’d be great if I could keep my phone on a charging stand and be able to walk around my house and still get notifications.
Exactly the killer use-case for pebble! It's "blue-toothy" range, so it'll mostly work in adjacent rooms but might have difficulty going diagonally upstairs v. downstairs, or ranging too far outside.
IIRC, pebble had a "vibrate on BT-loss", which could remind you to go retrieve the phone when ranging outside to rake leaves (or forgetting your phone in a restaurant or something).
I think Eric posted about this, and it was an impressive distance. Obviously YMMV based on the size of your house and how thick the walls are, but my old Pebble worked in much of my house and I would expect that as BT has gotten better (on both the phone and watch) in the last decade, the new versions will have even more range.
also probably depends on the building you live in.
not trying to start a flame-war, but i can imagine that you get quite some range in the US, if you live in one of those cardboard-inner-walls houses.
in the 30cm thick solid wall apartment i live in my pebble loses connection the next room over, i almost need line-of-sight for it to work. working at my desk, get up, walk 5 meters to the bathroom, watch loses connection.
maybe my smartphone has a weak bluetooth receiver, compared to other models, who knows...
Huh interesting, those have new BT guts, so should have as good of performance as any. I guess 30cm thick solid walls are not common enough for BT to be designed to go through them?
i think it's the by now internet-meme worthy difference between walls in the US compared to most of Europe. i've never lived somewhere which didn't have thick brick or concrete walls. 30cm was a bit high, more like 20cm.
i've seen tons of americans making holes in their walls by punching or falling into them. could never relate myself, i'd have a broken hand or concussion :D
my phone is not very powerful, maybe that's a factor.
It’s also running virtualized in a lot of cars! Although I’ve seen more and more US car companies switching from QNX to Linux. Chinese car companies I’ve worked with all use Linux instead of QNX, so perhaps that is the future.
I would open a new bug for each of those questions and say “we will evaluate this after the MVP is implemented”. Give the person credit in the bug description. That will usually satisfy their concerns. Set the priority on the bugs to low and I’ll never even have to look at them again, unless one of them actually becomes a problem.
There is never a need to store a PIN in the database; store it in temporary storage like Redis. Set the TTL to the expiration date. You can hash it if needed, but I’m less concerned that someone hacks into your Redis instance and steals your PINs from the last 10 minutes, because everything else is gone.
There should never be a need to return a PIN to the client. You’ve already texted/emailed it to them. They are going to send it back to you. You will check it against your temporary storage, verify or reject, and delete it immediately after.
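A minimal sketch of that flow, assuming redis-py and a hashed PIN; the key prefix, TTL, and function names below are made up for illustration:

```python
import hashlib
import secrets

import redis

r = redis.Redis()

def issue_pin(user_id: str, ttl_seconds: int = 600) -> str:
    """Generate a 6-digit PIN, store only its hash with a TTL, return the PIN."""
    pin = f"{secrets.randbelow(1_000_000):06d}"
    digest = hashlib.sha256(pin.encode()).hexdigest()
    r.set(f"login:pin:{user_id}", digest, ex=ttl_seconds)  # expires on its own
    return pin  # text/email this; never return it from the verify endpoint

def verify_pin(user_id: str, submitted: str) -> bool:
    """Check the submitted PIN against temporary storage and delete it right after."""
    key = f"login:pin:{user_id}"
    stored = r.get(key)
    r.delete(key)  # one attempt only; the PIN is gone after this call
    if stored is None:
        return False
    candidate = hashlib.sha256(submitted.encode()).hexdigest()
    return secrets.compare_digest(stored.decode(), candidate)
```

Deleting on every attempt is a design choice here; you could instead delete only on success and let the TTL clean up the rest.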
That's where VPN obfuscation is the play, imo. A lot of people nowadays are leaving streaming platforms or watch YT on smart TVs, so it does have a place. You can always exclude a device from the VPN coverage too.
Obfuscation only protects you from your own ISP messing with VPN connections. Streaming services (etc.) can't see what protocol you're using between yourself and the VPN in any case, they just see the VPN's exit IP address. Which is likely on their list of known VPN IPs.
If you start countering geolocation blocking with VPS rental and VLESS/V2Ray etc., then it's still good to obfuscate at the endpoint. Passing VPN traffic off as something else is good policy wherever your tunnel goes.
Having used both extensively, Geizhals doesn't hold a candle to McMaster. McMaster is, bar none, the single best e-commerce website I've ever used (if you already know what you're looking for, and definitely still top shelf if you don't).
But McMaster and e.g. Amazon are optimizing for different things. McMaster knows its clientele isn't going shopping, they're solving problems. As such, McMaster focuses on helping you solve your problem and get back to work. Amazon, on the other hand, is focused on just selling you "as much 'anything' as possible" and wants you to spend as much time there as possible in the hopes that you'll stumble on an impulse buy.
We do! It recognises the 'Do Not Track' setting being sent, at least on FF for me. I get a very small popup telling me it's respecting this setting, and no request for accepting anything else:
> "Do not Track"-Modus erkannt! Es werden nur technisch notwendige Cookies verwended. [Datenschutzerkl"arung](...)
Lovely. If only the rest of the web looked like this.
Wow. SubC’s software engineering needs some work. They thought the camera’s file system was unencrypted, when it was encrypted. They didn’t know where the keys were to decrypt it. It turned out the key was written unencrypted to a UFS storage device. There was a file written to /mnt/nas/Stills, which indicates that the camera was writing to a remote file system that wasn’t mounted.
> They thought the camera’s file system was unencrypted, when it was encrypted.
Unfortunately this situation is likely to get more common in the future as the "security" crowd keep pushing for encryption-by-default with no regard to whether the user wants or is even aware of it.
Encryption is always a tradeoff; it trades the possibility of unauthorised access for the possibility of even the owner losing access permanently. IMHO this tradeoff needs careful consideration and not blind application.
This is why I always shake my head when the Reddit armchair security experts say "The data wasn't even encrypted!? Amateur hour!" in response to some PII leak.
Sure, sure buddy, I'll encrypt all of my PII data so nobody can access it... including the web application server.
Okay, fine, I'll decrypt it on the fly with a key in some API server... now the web server has unencrypted access to it, which sounds bad, but that's literally the only way it can process and serve the data to users in a meaningful way! Now if someone hacks the web app server -- the common scenario -- then the attacker has unencrypted access!
I can encrypt the database, but at what layer? Storage? Cloud storage is already encrypted! Backups? Yeah, sure, but then what happens in a disaster? Who's got the keys? Are they contactable at 3am?
Etc, etc...
It's not only not as simple as ticking an "encrypted: yes" checkbox, it's maximally difficult, with a very direct tradeoff between accessibility and protection. The sole purpose of encrypting data is to prevent access!
Server stores encrypted blobs. Server doesn't have the keys.
Entire application is on the client, and just downloads and decrypts what it needs.
Obviously your entire application stack needs to be developed with that approach in mind, and some things like 'make a hyperlink to share this' get much more complex.
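A rough sketch of the "server only sees encrypted blobs" idea, using the `cryptography` package's Fernet recipe as a stand-in for whatever scheme a real application would use; the key never leaves the client:

```python
from cryptography.fernet import Fernet

# Client side: the key is generated/derived and stored locally,
# never sent to the server.
client_key = Fernet.generate_key()
f = Fernet(client_key)

blob = f.encrypt(b'{"note": "my private data"}')  # upload this opaque blob

# ... the server stores and returns `blob` without being able to read it ...

plaintext = f.decrypt(blob)  # client downloads the blob and decrypts it
print(plaintext)
```

This is why things like shareable hyperlinks get harder: any other reader needs some way to obtain the key out of band.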
Re: encrypting data that would be served via web server: why would anyone bother to encrypt data meant to be shared externally worldwide? It makes no sense to begin with…
This has already happened to Windows users when BitLocker disk encryption is enabled by default and they do something that causes the encryption key to be lost.
You can have the key saved in your Microsoft account.
> They thought the camera’s file system was unencrypted, when it was encrypted.
Willing to bet plenty of HN readers are unaware of encryption going on at lower layers of the tech stack than they normally work with.
For example, most hard drives encrypt all data, even when not commanded to, as a way to do 'data whitening' (i.e. making sure there are roughly equal numbers of 0s and 1s in the data stream and not some pattern which might throw off tracking).
The encryption key is simply stored elsewhere on the drive, in NVRAM, or in the firmware.
But it means if you extract the physical magnetic surface and read it with the right microscope, you might well find the data encrypted with no available key.
Scrambling and encryption are two different things. Scrambling is very easy to do at line rates. Encryption not so much.
Ethernet is a good example. It has the same problem where long runs of 0s or 1s can cause clock recovery problems. The solution as clock rates have increased is to just run all the data through a scrambler driven by a simple linear feedback shift register (LFSR).
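A toy sketch of that kind of additive scrambler: the data is XORed with the LFSR's output stream, and running the same LFSR again descrambles it. The 7-bit polynomial here is just for illustration, not what any particular PHY or drive actually uses:

```python
def lfsr_stream(seed: int, nbits: int):
    """Yield `nbits` pseudo-random bits from a 7-bit LFSR (taps at bits 7 and 6)."""
    state = seed & 0x7F
    for _ in range(nbits):
        yield state & 1
        feedback = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | feedback) & 0x7F

def scramble(data: bytes, seed: int = 0x5A) -> bytes:
    """XOR the data with the LFSR keystream; the same call descrambles."""
    bits = lfsr_stream(seed, len(data) * 8)
    out = bytearray()
    for byte in data:
        mask = 0
        for i in range(8):
            mask |= next(bits) << i
        out.append(byte ^ mask)
    return bytes(out)

payload = b"\x00" * 8              # a long run of zeros
wire = scramble(payload)           # no long constant run on the "wire"
assert scramble(wire) == payload   # applying the same scrambler restores the data
```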
If you're talking about the SED (self-encrypting drive) feature, no, it isn't widespread, since it's regarded as an "enterprise" feature and only available in a minority of drives (HDD or SSD alike).
Client or OEM variants of the same drives (otherwise identical) lack the SED option most of the time and don't encrypt data by default.
"Active" with those systems just means the encryption key is now user-supplied instead of being stored on the controller/drive. The actual encryption is always active; which makes sense, if anything it means you have one less configuration to test.
If I learned one thing about SSD firmware/controllers, it's to be sure of nothing. Especially when the market is flooded with cheap controllers that can barely keep up with line speeds, I'm very doubtful that they're unconditionally encrypting at-rest data.
This is line coding, often used on wired connections. But reading a hard drive trace isn't quite a wired connection, so the trade-offs are different.
Most notably, with line coding that uses positive and negative voltages it is quite important for the average voltage to be zero, to avoid building up a charge difference.
Whitening can often be used if the downside to an imbalance or long runs is much lower. Notably in RF this is often about avoiding harmonics, with a little bit of symbol timing advantage thrown in.
Whitening doesn't really require encryption though. Weak cipher streams XORed into the data work fine. Even a repeated 256-bit string is quite alright.
Whitening using any non-trivial encryption key seems weird to me. AES with a key equal to the current offset, in ECB mode, already feels over-engineered.
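For what it's worth, the repeated-pattern whitening mentioned above is only a couple of lines; the 32-byte pattern below is an arbitrary value, not anything a real drive uses:

```python
import itertools

PATTERN = bytes.fromhex(
    "9b71d224bd62f3785d96d46ad3ea3d73"
    "319bfbc2890caadae2dff72519673ca7"
)  # arbitrary fixed 256-bit value

def whiten(data: bytes) -> bytes:
    """XOR the data against the repeating fixed pattern; self-inverse, not encryption."""
    return bytes(b ^ p for b, p in zip(data, itertools.cycle(PATTERN)))

sector = b"\x00" * 64
assert whiten(whiten(sector)) == sector  # applying it twice restores the data
```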
> Whitening using any non trivial encryption key seems weird to me.
It's because there was an era when drives were expected to be able to do 'hardware' encryption with a user-provided key, so reusing that hardware to also do whitening even if the user didn't provide a key was very convenient.
Plus you get all the other benefits, i.e. a single SCSI command can 'secure erase' the whole disk in milliseconds by simply changing the stored key.
That is such a typical bug report to a large company. A user who spent a lot of time debugging and finding the root cause of an issue, and a few faceless peons at the large company spending a few minutes on it, realizing it’s not a priority, and abandoning it.
Not really. There wasn’t a true patch attempt submitted, as far as I can see. There was some helpful info about how commenting out a couple lines could work around the issue, but doing a real engineering evaluation to check spec compliance and make sure it’s all covered in the Bluetooth testing infrastructure is a much bigger task.
And not a small bug either. This large an interoperability issue and it takes a nerd not in the employ of Google to fix their shit? This is why Apple's vertical integration makes it one of the richest companies in the world. Google's only up there because of their success in that one business of theirs.
What does Apple being Apple have to do with Google not paying somebody to work on getting AirPods, which presumably should conform to some Bluetooth spec, to work on Android?
>>...due to a bug in the Android Bluetooth implementation.
The issue can be resolved because an Android bug can be debugged by a contributor. A similar issue can't even be analyzed from the Apple side by anyone but an Apple employee.
We are assuming there are bugs in iOS, but their closed-source nature can mislead people into believing there aren't. Then, yes, their vertical integration makes them rich, which in this case is bad for users, in the guise of being good.
We'll probably end up with the doors from Philip K. Dick's Ubik that charge you money to open and threaten to sue you if you try to force it open without paying.
Just wait, Sam Altman will give us robots with people personalities and we’ll have Marvin. Elon will then give us a psychotic Nazi internet-edgelord personality and install it as the default in an OTA update to Teslas.