There's so much content and so many opinions on "meetings" that are polarized and come from a very narrow viewpoint. This particular POV seems reasonably critical of meetings but also recognizes that they are often warranted and can be carried out effectively.
I was happy to see "rallying the troops" as an acceptable reason to have a big meeting, because in my opinion it can be a good reason for a big meeting to exist (even though I'm sure many people disagree.)
I’m ok with age being used as a partial proxy for experience when we’re talking about highly specialized roles with massive implications like the ones that DOGE staffers were dropped into.
> $191k USAGM broadcasting contract for “broadcast operations and maintenance in Ethiopia, Africa”
USAGM's mission is to promote the USA's diplomatic interests in parts of the world with little or no press freedom. The whole thing was cut by Trump's executive order to the maximum extent possible.
Because of that order, it's not even a "not specialised" role, it's not a role.
Whether USAGM should be cut should have been the choice of Congress rather than the executive, but that's a different question entirely.
> Botswana MI curriculum
What's "MI"? Mission-Influenced? That sounds like a plausible amount to spend on a curriculum about Botswana for the benefit of the State Department, let alone in Botswana on anything.
And if it is in Botswana, you have to then actually ask "what is this mission, and is this in the interests of the USA taxpayer?", which needs specialists.
> And if it is in Botswana, you have to then actually ask "what is this mission, and is this in the interests of the USA taxpayer?", which needs specialists.
Specialists in what? Asking whether something is in the interests of the taxpayer? Data analysis? If so, then such specialists would have to be found in an independent organization without conflicts of interest for any specific mission, aligned with the interests of the taxpayer, and they would need to be recruited from the part of the political spectrum that cares about waste in government. In other words, you'd need a group that looks like DOGE.
> Asking whether something is in the interests of the taxpayer?
Yes.
Because they need to:
(1) understand the answer, and not mistake terms of art for things they sound like to normal people. For example, to use Musk's ideology, this would be things like mistaking a study of "transgenic rats" or "trans fatty acids" for something about gender.
In the case of `$1.3M State Dept. education contract for “Botswana MI curriculum”`, you've still not said if you reckon this is in or about Botswana, and you've still not said what "MI" is; you've taken something that you think "obviously" sounds bad (or why else would you have quoted it?) without having thought too hard. I tried searching: the sidebar was an AI summary of (and linking to) this thread that made claims not supported by anything anyone here has actually said, and only one of the four(!) real links even got me a page with the string "Botswana MI curriculum" on it, which linked to X.com, which also didn't explain what that was.
What you've done here is treat it as an applause light, not considered anything about taxpayer interests. Applause lights can be done by an AI, taxpayer interests cannot.
(2) for all items, including those that sound good when you do know what they mean, be able to tell if they actually did what they said they did rather than pocketing the money.
(3) even when they did the thing, determine if they're any good at doing the thing or if they're a bunch of well-meaning idiots.
For (2) and (3), I'm mainly thinking of the UK with this, with PPE bought for the pandemic that wasn't fit for purpose.
(4) have security clearance to know about clandestine missions, so that you don't cut the expenses which the government has deliberately given a bland and/or politically correct title so that nobody complains about the clandestine mission, even though the money is being spent on nothing at all like whatever the line-item says, especially once what is and isn't "politically correct" gets inverted.
> In other words, you'd need a group that looks like DOGE.
No, you'd get something a lot more competent. And boring.
If you look at those titles and assume that they could be cut, without any more information, you are not a serious person and do not deserve to have any budgetary authority anywhere.
At least bother to come up with some reason they should be cut. But you can't even seem to put that into words.
Apparently "not a serious person" is the new insult of choice with you guys, huh. What a ridiculous reply.
Of course they should be cut. The slogan of the winning party for the last decade was America First. They ran on that platform. Broadcasting and teaching on a different continent isn't putting America First. There's your reason.
The insistence on not understanding obvious stuff is such a tiresome attribute.
> Broadcasting and teaching on a different continent isn't putting America First. There's your reason.
You think advertising doesn't work?
$191k/year to promote American interests in Ethiopia may or may not be value for money to the American taxpayer (I honestly don't care because I'm not one), but to think it can't be value for money is to claim that the primary business model of half the American tech giants — and also the business model of X.com, which isn't a giant but is the property of DOGE's most famous figure-head — is fake.
The US diplomatic agencies, which include USAGM which ran this station, have the business of promoting American interests across the world.
It sells (advertises) the USA's preferences to Ethiopia. Preferences such as "do not interfere with shipping things up the Red Sea or we'll do to you what we did to the Houthis in Yemen". Or preferences like "open your markets to what our businesses want to sell to you". Or, historically, "human rights are in everyone's best interest, you should do more of that because it will make you rich and then you can afford more of our stuff".
Stuff like that.
But to repeat: As I neither know nor care about the national interests of the USA in Ethiopia, I do not say this should or should not be funded — all I say is that this kind of thing *must be considered when deciding if it is or isn't good value, you cannot possibly know a priori just from the title alone*.
The willingness to think you understand and can have an informed opinion on something neither you nor I nor a twentysomething engineer from Tesla know anything about is just as tiresome.
I’m only arguing that there are complex reasons why some of these programs exist and it requires experience and perspective to uncover that and make informed decisions.
I could see Sora having a significant negative impact on short form video products like TikTok if they don't quickly and accurately find a way to categorize its use. A steady stream of AI generated video content hurts the value prop of short form video in more than one way… It quickly desensitizes you and takes away the surprise that drives consumption of a lot of content. It also of course leaves you feeling like you can't trust anything you see.
Do people on the dopamine drip really care how real their content is? Tons and tons of it is staged or modified anyway. I'm not sure there's anything Real™ on TikTok.
I think a lot of them actually do. It's easy to see TikTok users as mindless consumers, but the more you consume the more you develop a taste for unique content. Over the past few years the content that seems to truly do well at a global scale very often has markers of authenticity. Once something becomes easy to produce it becomes commonplace and you become sick of it quickly.
Thought the same. The human-generated content is just as brainless as the AI-generated slop. People who watched the former will also watch the latter. This will not change a lot, I think.
I mean, this is basically already status quo for YouTube Shorts. Tons and tons of shorts are AI-voice over either AI video or stock video covering some pithy thing in no actual depth, just piggybacking off of trending topics. And TikTok has had the same sort of content for even longer.
The "value" of short video content is already somewhat of a poor value proposition for this and other reasons. It lets you just obliterate time which can be handy in certain situations, but it also ruins your attention span.
As an Airbnb host I can just as quickly tell you stories of exploitative guests who are chronic abusers of the system, attempting to get refunds by threatening narratives like this that they know have the potential to get sympathy and traction with Airbnb or on social media. In almost all of these cases it ends up being one person's word vs. another's. An accusation is far from proof, but hosts most often stand to lose.
Of course, everyone comes from their own particular point of view and/or bias.
I'm a host. The POV I see this from is that of someone who pays close attention to the market and the changing perception of short term rentals. I've read far enough beyond the headlines to know that these accusations are very often not what they seem, and that this narrative is being blown way out of proportion considering how infrequently it actually happens.
The POV a sub-segment of NYT readers see this from is one of being righteous about short-term rentals (in theory at least.)
The POV of writers and editors at NYT is to respond to their readers' preferences.
It definitely depends on the nature of your work, but the notion of having a channel I need to check hourly makes me ill. If I’m needed I should get a notification, and if I’m involved in an active discussion, I’m there. Otherwise I’ll catch up on a daily basis.
This is like saying that self-driving cars won't ever become a thing because someone behind the wheel needs to be to blame. The article cites AI systems that the FDA has already cleared to operate without a physician's validation.
> This is like saying that self-driving cars won't ever become a thing because someone behind the wheel needs to be to blame.
Which is literally the case so far. No manufacturer has shown any willingness to take on the liability of self driving at any scale to date. Waymo has what? 700 cars on the road with the finances and lawyers of Google backing it.
Let me know when the bean counters sign off on fleets in the millions of vehicles.
Yes and I would swear that 1700 of those 2000 must be in Westwood (near UCLA in Los Angeles). I was stopped for a couple minutes waiting for a friend to come out and I counted 7 Waymos driving past me in 60 seconds. Truth be told they seemed to be driving better than the meatbags around them.
You also have Mercedes taking responsibility for their traffic-jam-on-highways autopilot. But yeah. It's those two examples so far (not sure what exactly the state of Tesla is. But.. yeah, not going to spend the time to find out either)
I'm curious how many people would want a second opinion (from a human) if they're presented with a bad discovery from a radiological exam and are then told it was fully automated.
I have to admit if my life were on the line I might be that Karen.
Ah, you're right. Something else I'm curious about with these systems is how they'll affect difficulty level. If AI handles the majority of easy cases, and radiologists are already at capacity, will they crack if the only cases they evaluate are now moderately to extraordinarily difficult?
Let's look at mammography, since that is one of the easier imaging exams to evaluate. Studies have shown that AI can successfully identify more than 50% of cases as "normal" that do not require a human to view the case. If a group started using that, the number of interpreted cases would drop by half, and the proportion of abnormal cases in what remains would roughly double.
Generalizing to CT of the abdomen and pelvis and other studies, assuming AI can identify a sub-population of normal scans that do not have to be seen by a radiologist, the volume of work will decline. However, the percentage of complicated cases will go up. Easy, normal cases will not be supplementing radiologist income the way they have in the past.
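To make that arithmetic concrete, here's a minimal back-of-envelope sketch; the daily volume, abnormal rate, and triage fraction below are illustrative assumptions, not figures from any particular study.

```python
# Back-of-envelope sketch only; every number below is an illustrative assumption,
# not data from any particular study.

daily_cases = 200        # hypothetical daily mammography volume for a group
abnormal_rate = 0.10     # assumed fraction of studies with findings
ai_cleared_share = 0.50  # the ~50% of total volume AI flags as clearly normal

abnormal = daily_cases * abnormal_rate             # 20 studies with findings
cleared_by_ai = daily_cases * ai_cleared_share     # 100 studies never reach a human
read_by_radiologist = daily_cases - cleared_by_ai  # 100 studies remain on the worklist

print(f"Worklist: {read_by_radiologist:.0f} of {daily_cases} studies")
print(f"Abnormal share of worklist: {abnormal / read_by_radiologist:.0%} "
      f"(vs {abnormal_rate:.0%} before triage)")
```

The worklist halves, but every abnormal case is still in it, so the reading that's left is harder on average.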
Of course, all this depends upon who owns the AI identifying normal studies. Certainly, hospitals or even PACS companies would love to own that and generate the income from interpreting the normal studies. AI software has been slow to be adopted, largely because cases still have to be seen by a radiologist, and the malpractice issue has not been resolved. Expect rapid changes in the field once malpractice solutions exist.
From my experience the best person to read these images is the medical imaging expert. The doctor who treats the underlying issue is qualified but it's not their core competence. They'll check of course but I don't think they generally have a strong basis to override the imaging expert.
If it's something serious enough a patient getting bad news will probably want a second opinion no matter who gave them the first one.
I'm willing to bet everyone here has a relative or friend who at some point got a false negative from a doctor, just like everyone knows drivers who have caused accidents. The core problem is how to go about centralizing liability... or not.
But since we don't know where those false negatives are, we want radiologists.
I remember a funny question that my non-technical colleagues asked me during the presentation of some ML predictions. They asked me, “How wrong is this prediction?” And I replied that if I knew, I would have made the prediction correct. Errors are estimated on a test data set, either overall or broken down by groups.
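A minimal sketch of what "estimated on a test set, overall or broken down by groups" looks like in practice, assuming a pandas data frame whose columns and values are made up for illustration:

```python
import pandas as pd

# Hypothetical held-out test set; column names and values are illustrative only.
test = pd.DataFrame({
    "group":  ["A", "A", "B", "B", "B", "C"],
    "y_true": [1, 0, 1, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 0, 1],
})

test["error"] = (test["y_true"] != test["y_pred"]).astype(int)

# Overall error rate estimated on the test set.
print("overall error rate:", test["error"].mean())

# Error rate broken down by group. For any single new prediction we can only
# quote the estimated rate for its group, not whether that one prediction is wrong.
print(test.groupby("group")["error"].mean())
```

That's the sense in which "how wrong is this prediction?" has no answer for an individual case: you only ever get an error rate for a population it belongs to.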
Technological advances have supported medical professionals so far, but not replaced them: they have allowed medical professionals to do more, and to do it better.
That's horrific. You pay insurance to have ChatGPT make the diagnosis. But you still need to pay out of pocket anyway. Because of that, I am 100% confident this will become reality. It is too good to pass up.
Early intervention is generally significantly cheaper, so insurers have an interest in doing sufficiently good diagnosis to avoid unnecessary late and costly interventions.
I think a problem here is the sycophantic nature of these models. If I'm a hypochondriac, and I have some new onset symptoms, and I prompt some LLM about what I'm feeling and what I suspect, I worry it'll likely positively reinforce a diagnosis I'm seeking.
I mean, we already have deductibles and out-of-pocket maximums. If anything, this kind of policy could align with that because it's prophylactic. We can ensure we maximize the amount we retrieve from you before care kicks in this way. Yeah, it tracks.
It sounds fairly reasonable to me to have to pay to get a second opinion for a negative finding on a screening. (That's off-axis from whether an AI should be able to provide the initial negative finding.)
If we don't allow this, I think we're more likely to find that the initial screening will be denied as not medically indicated than we are to find insurance companies covering two screenings when the first is negative. And I think we're better off with the increased routine screenings for a lot of conditions.
The FDA can clear whatever they want. A malpractice lawyer WILL sue and WILL win whenever an AI mistake slips through and no human was in the loop to fix the issue.
It's the same way that we can save time and money if we just don't wash our hands when cooking food. Sure, it's true. But someone WILL get sick and we WILL get in trouble for it.
What's the difference in the lawsuit scenario if a doctor messes up? If the AI is the same or better error rate than a human, then insurance for it should be cheaper. If there's no regulatory blocks, then I don't see how it doesn't ultimately just become a cost comparison.
> What's the difference in the lawsuit scenario if a doctor messes up?
Scale. Doctors and taxi drivers represent several points of limited liability, whereas an AI would be treating (and thus liable for) all patients. If a hospital treats one hundred patients with ten doctors, and one doctor is negligent, then his patients might sue him; some patients seeing other doctors might sue the hospital if they see his hiring as indicative of broader institutional neglect, but they’d have to prove this in a lawsuit. If this happened with a software-based classifier being used at every major hospital, you’re talking about a class action lawsuit including every possible person who was ever misdiagnosed by the software; it’s a much more obvious candidate for a class action because the software company has more money and it was the same thing happening every time, whereas a doctor’s neglect or incompetence is not necessarily indicative of broader neglect or incompetence at an institutional level.
> If there's no regulatory blocks, then I don't see how it doesn't ultimately just become a cost comparison.
To make a fair comparison you’d have to look at how many more people are getting successful interventions due to the AI decreasing the cost of diagnosis.
> What's the difference in the lawsuit scenario if a doctor messes up? If the AI is the same or better error rate than a human, then insurance for it should be cheaper
The doctor's malpractice insurance kicks in, but realistically you become uninsurable after that.
Yeah, but at some point the technology will be sufficient and it will be cheaper to pay the rare $2 million malpractice suit than a team of $500,000/yr radiologists.
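A rough back-of-envelope version of that trade-off; the team size, suit frequency, and license fee below are made-up assumptions, with the $2M suit and $500k salary taken from the comment above as placeholders.

```python
# Illustrative cost comparison only; team size, suit rate, and license fee are
# assumptions, with the $2M suit and $500k salary taken from the comment above.

radiologists = 10
salary = 500_000                # $/yr per radiologist
team_cost = radiologists * salary

suits_per_year = 2              # assumed rate of successful malpractice claims
cost_per_suit = 2_000_000       # assumed average payout per suit
ai_license = 300_000            # assumed annual cost of the AI system itself

ai_cost = suits_per_year * cost_per_suit + ai_license

print(f"Human team: ${team_cost:,}/yr   AI + suits: ${ai_cost:,}/yr")
```

Under those made-up numbers the AI path wins ($4.3M vs $5M a year); bump the suit rate to five a year and it doesn't, which is why the malpractice question dominates the economics.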
This is essentially what's happened with airliners.
Planes can land themselves with zero human intervention in all kinds of weather conditions and operating environments. In fact, there was a documentary where the plane landed so precisely that you could hear the tires hitting the runway centerline markings as it landed and then taxied.
Yet we STILL have pilots as a "last line of defense" in case something goes wrong.
No - planes cannot "land themselves with zero human intervention" (...). A CAT III autoland on commercial airliners requires a ton of manual setting of systems and certificated aircraft and runways in order to "land themselves" [0][1].
I'm not fully up to speed on the Autonomi / Garmin Autoland implementation found today on Cirrus and other aircraft -- but it's not for "everyday" use for landings.
Not only that but they are even less capable of taking off on their own (see the work done by Airbus' ATTOL project [0] on what some of the more recent successes are).
So I'm not sure what "planes can land on their own" gets us anyway even if autopilot on modern airliners can do an awful lot on their own (including following flight plans in ways that are more advanced than before).
The Garmin Autoland basically announces "my pilot is incapacitated and the plane is going to land itself at <insert a nearby runway>" without asking for landing clearance (which is very cool in and of itself but nowhere near what anyone would consider autonomous).
Taking off on their own is one thing. Being able to properly handle a high-speed abort is another, given that is one of the most dangerous emergency procedures in aviation.
Having flown military jets . . . I'm thankful I only ever had to high-speed abort in the simulator. It's sporty, even with a tailhook and long-field arresting gear. The nightmare scenario was a dual high-speed abort during a formation takeoff. First one to the arresting gear loses, and has to pass it up for the one behind.
There's no other regime of flight where you're asking the aircraft to go from "I want to do this" to "I want to do the exact opposite of that" in a matter of seconds, and the physics is not in your favor.
How's that not autonomous?
The landing is fully automated.
The clearance/talking isn't, but we know that's about the easiest part to automate; it's just that the incentives aren't quite there.
It's not autonomous because it is rote automation.
It does not have logic to deal with unforeseen situations (with some exceptions of handling collision avoidance advisories). Automating ATC, clearance, etc, is also not currently realistic (let alone "the easiest part") because ATC doesn't know what an airliner's constraints may be in terms of fuel capacity, company procedures for the aircraft, etc, so it can't just remotely instruct it to say "fly this route / hold for this long / etc".
Heck, even the current autolands need the pilot to control the aircraft when the speed drops low enough that the rudder is no longer effective because the nose gear is usually not autopilot-controllable (which is a TIL for me). So that means the aircraft can't vacate the runway, let alone taxi to the gate.
I think airliners and modern autopilot and flight computers are amazing systems but they are just not "autonomous" by any stretch.
Edit: oh, sorry, maybe you were only asking about the Garmin Autoland not being autonomous, not airliner autoland. Most of this still applies, though.
There's still a human in the loop with Garmin Autoland -- someone has to press the button. If you're flying solo and become incapacitated, the plane isn't going to land itself.
One difference there would be that the cost of the pilots is tiny vs the rest that goes into a flight. But I would bet that the cost of the doctor is a bigger % of the process of getting an x-ray.
They have settled out of court in every single case. None has gone to trial. This suggests that the company is afraid not only of the amount of damages that could be awarded by a jury, but also legal precedent that holds them or other manufacturers liable for injuries caused by FSD failures.
At the end of the day, there's a decision that needs to be made, and decisions have consequences. And in our current society, there is only one way we know of to make sure that the decision is taken with sufficient humanity: by making a human responsible for that decision.
Medicine does not work like traffic. There is no reason for a human to care whether the other car is being driven by a machine.
Medicine is existential. The job of a doctor is not to look at data, give a diagnosis and leave. A crucial function of practicing doctors is communication and human interaction with their patients.
When your life is on the line (and frankly, even if it isn't), you do not want to talk to an LLM. At minimum you expect that another human can explain to you what is wrong with you and what options there are for you.
There's some sort of category error here. Not every doctor is that type of doctor. A radiologist could be a remote interpretation service staffed by humans or by AI, just as sending off blood for a blood test is done in a laboratory.
> There is no reason for a human to care whether the other car is being driven by a machine.
What? If I don't trust the machine or the software running it, absolutely I do, if I have to share the road with that car, as its mistakes are quite capable of killing me.
(Yes, I can die in other accidents too. But saying "there's no reason for me to care if the cars around me are filled with people sleeping while FSD tries to solve driving" is not accurate.)
You know, for most humans, empathy is a thing; all the more so when facing known or suspected health situations. Good on those who have transcended that need. I guess.
Non-blinding headlights already exist. Modern projection headlights can map where the light ends up on the road to illuminate your path while avoiding oncoming traffic. It just isn't widely adopted (in the US at least) as of yet.
It is here and sucks on curvy roads. My commute is down a mountain canyon, and if I'm on the outside of a curve (turning left) the oncoming traffic does not detect my headlights and I'm blinded for the entire curve. I want them banned. How hard is switching between high and low beams?
We're not talking about auto high-beams. We're talking about headlights that mask out a portion (of even the normal beam) based on where other cars are.
> The recognizing other cars part of those systems is… not great. (yet? hopefully.)
Or bicyclists or pedestrians. We have all of automotive history to demonstrate that blinding others isn't necessary for driving, not even for comfort-level safety gains.
I don't know how he'd be deciding which oncoming cars are equipped with this feature, as it's still uncommon. And he said " How hard is switching between high and low beams?" which seems to be more talking about auto high beams.
Better -something- that's trying to mask low beams than the alternative (nothing).
> I don't know how he'd be deciding which oncoming cars are equipped with this feature, as it's still uncommon.
The technology is required on some types of headlights (which you can recognise), because…
> Better -something- that's trying to mask low beams than the alternative (nothing).
…they also made low beams notably brighter and reach further (= extended the angular output). The alternative isn't nothing, it's less bright low beams.
Adaptive headlights have only been approved for use in the US for ~3 years. They were sold in cars in the US before that, but the adaptive function was disabled.
> On country roads, it’s extremely valuable for keeping the shoulder lit up with high beams to see things like deer and bicycles.
It is my experience that bicyclists and pedestrians aren't partial to the endless passing vehicles that are blinding them. Seeing is part of how they keep out of drivers' way. I disagree that we should ruin their vision just so drivers can see them even more than they used to.
> Modern projection headlights can map where the light ends up on the road to illuminate your path while avoiding oncoming traffic.
Ask any EU trucker about this and they will curse you out with the most creative expletives you have heard in your life. At least the existing systems are apparently hot garbage, especially on highways where some oncoming truck headlights might be hidden by the median yet you can still blind the trucker themselves (since they're higher up).