It's crazy that I'm paying for IntelliJ yet can't hook up my own LLM to it. JetBrains wants to be a middleman instead of letting me use my own tools. I won't and can't use your hosted LLMs, period.
I run a separate Mac Mini that holds the full iCloud Photos library on a massive external drive, set to "Download Originals". I then rsync that filesystem to a separate Linux box. This works, but you must never disconnect the external drive.
I don't have a solution for iCloud Drive, as there wasn't a keep offline setting last time I checked. So use it only ephemerally.
Arq [1] has an option to "materialize" dataless files, basically forcing them to be locally available. The only issue is that if a large file keeps getting pushed off the device, you can burn a lot of bandwidth re-downloading it over and over again.
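For one-off cases you can also do this from the command line: recent macOS versions ship a brctl utility that, if I remember right, can force an iCloud file to download and evict it again afterwards. A rough sketch (the path is just a placeholder, and the subcommands may differ between OS versions):

    # Force a dataless iCloud Drive file to be fully downloaded ("materialized")
    brctl download ~/Library/Mobile\ Documents/com~apple~CloudDocs/some-big-file.zip

    # Evict it again later to free up local space
    brctl evict ~/Library/Mobile\ Documents/com~apple~CloudDocs/some-big-file.zip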
At least as of Sequoia, turning off the Settings > iCloud > Drive > Optimize Mac Storage option keeps iCloud Drive files stored locally. Likewise, right-clicking any iCloud Drive file in the Finder offers a Keep Downloaded option. Since I only minimally use iCloud Drive, in the past (on older OS versions) I also had Hazel make copies of iCloud Drive files so they were certain to end up in backups.
Time Machine backups to a Samba share on the Linux box would get you both the Photos library database and the iCloud Drive contents. It also means you don't need to bother with the external drive.
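A minimal sketch of what the Linux side could look like, assuming Samba built with the vfs_fruit module (share name, path and user are made up):

    # /etc/samba/smb.conf (fragment)
    [timemachine]
       path = /srv/timemachine
       valid users = backupuser
       read only = no
       # Apple compatibility bits so macOS treats the share as a Time Machine target
       vfs objects = catia fruit streams_xattr
       fruit:time machine = yes
       fruit:time machine max size = 1T

On the Mac you'd then pick the share in System Settings > Time Machine, or point it there with something like sudo tmutil setdestination smb://backupuser@linuxbox/timemachine (again, hostname and names are placeholders).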
There is a keep-all-files-offline setting for iCloud Drive (turn off "Optimize Mac Storage" in System Settings).
I'm not familiar with the "Photos Library.app", but I have an M4 Mini with my photos in a Photos Library. I'd love to know your script to rsync the photos onto a separate drive/directory.
(Note: tested with the Homebrew rsync; IIRC the rsync that ships with macOS is outdated.)
Somewhere in the library's directory structure is an originals/ folder which holds all the actual files.
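The core of it is just a plain rsync of the library package over SSH; roughly something like this (paths and hostname are placeholders, adjust for your own setup):

    # Mirror the whole Photos library package to the Linux box.
    # -a          archive mode: permissions, times, symlinks, etc.
    # -X          preserve extended attributes (needs a recent rsync, hence Homebrew)
    # --delete    remove files on the destination that no longer exist locally
    # Do a --dry-run first to see what it would actually do.
    /opt/homebrew/bin/rsync -aX --delete --partial --progress \
        "/Volumes/PhotosDrive/Photos Library.photoslibrary/" \
        user@linuxbox:/backups/photos-library/

The trailing slash on the source matters: it copies the contents of the package into the destination directory instead of nesting another folder inside it.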
Note that this is only a last-resort backup. Restoring the library as a whole requires a Mac with a compatible OS version. Restoring without a Mac would only get you the originals, i.e. the out-of-camera files (jpg, heic, raw), with none of the edits or metadata changes from Apple Photos applied (Apple Photos doesn't touch the EXIF data). You'd probably also lose the video part of all Live Photos, as the video clips are stored as separate files and aren't part of the .heic files. They're there, just not very usable.
An alternative to this workflow is to export all photos (with edits applied) from the Photos app, but honestly I'm not sure whether that even works, or how long it would take for multi-TB libraries.
If a scientist had completely made up their references 10 years ago, they'd have been called a fraudster. That's not just dishonesty but outright academic fraud.
If a scientist does it now, they just blame it on AI. But the consequences should remain the same. This is not an honest mistake.
People who do this, even once, should be banned for life. They put their name on the thing. But just like with plagiarism, falsifying data and academic cheating, somehow a large subset of people thinks it's okay to cheat and lie, and another subset gives them chance after chance to misbehave as if they were children. But these are adults, and anyone doing this simply lacks morals and will never improve.
And yes, I've published in academia and I've never cheated or plagiarized in my life. That should not be a drawback.
Yes, because Stack Overflow is horrible to monitor.
I work on an open source project at $DAYJOB, and as a team it's much easier dealing with a forum or bug report than tracking SO questions.
You can't set up email notifications for watched tags [1], as SO doesn't take that use case seriously. There's no way to see which questions are read or unread, or to mark them as read (personally or as a team).
To make matters worse, SO tags are often applied incorrectly, so you briefly see lots of unrelated or low-quality questions. Those get removed eventually, but they still show up initially.
And lastly, I think SO has said that, in general, they don't want to devolve into a vendor support forum. That's understandable, given that most companies just post boilerplate answers and aren't really helpful. But as a company it makes things harder, because moderation is completely out of your control.
Don't get me wrong, I often apply it myself and refactor code into smaller functions. But readability, understandability and ease of maintenance come first.
Juniors especially will apply such rules overzealously, bending over backwards to stay within an arbitrary limit while not really understanding why.
Frustratingly, their lack of experience makes it impossible to discuss such dogmatic rules in any kind of nuanced way, while they energetically push for stricter linter rules etc.
I've tried, and one even argued there's never, ever a reason to go over N lines, because their book is "best practice" and says so, and you should never deviate from "best practice", so that's that. I'm not making this up!
That said, I'm always open to discuss the pros and cons for specific cases, and I do agree the default should be to lean towards smaller functions in 90% of cases.