You know, when I first heard about deepfake videos, I thought they would be politically weaponized. After thinking on it more, though: we have had Photoshop for 30 years already! We see photoshopped images all the time, and while some people can be fooled, many others remain skeptical of an image and try to verify it hasn't been altered. I don't think photoshopping has really been a big problem yet, which makes me think deepfakes won't be one either, since they are fundamentally the same kind of deception, just in video form.
All of these things make it easier to mass-produce bullshit at low cost.
I'm pretty sure I know people who have been convinced by meme quotes: a headshot of a politician they don't like, with a quote overlaid that they never said. People are outraged! And never bother to inspect the source.
Anything that makes it easier to lie about what someone said or did, or makes such a lie harder to disprove... they're all politically weaponized already.
Isn't the drunk Pelosi video literally just slowed-down footage? If anything, it proves that we don't really need these insanely advanced machine learning techniques to make bullshit. Something as basic as slowing a video down 20% will do just fine in misleading people.
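For reference, that kind of edit is about one command's worth of work. A minimal sketch (assuming ffmpeg is installed; the filenames and the exact 0.8x factor are placeholders, not a claim about how the real clip was made):

```python
# Slow a clip to ~80% speed, roughly the manipulation attributed to the
# Pelosi video. Assumes ffmpeg is on PATH; file names are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    # setpts=1.25*PTS stretches the video timestamps (0.8x playback speed);
    # atempo=0.8 slows the audio to match without shifting its pitch.
    "-filter_complex", "[0:v]setpts=1.25*PTS[v];[0:a]atempo=0.8[a]",
    "-map", "[v]", "-map", "[a]",
    "slowed.mp4",
], check=True)
```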
The thing is, deepfakes don't really make this easier. They are a lot of work to produce and ultimately aren't a whole lot more effective than a good old-fashioned photoshop or a crappy meme.
They're only a lot of work as long as the tooling is difficult to use, inaccessible, and produces output that doesn't look authentic.
30 years ago that might have been the case with doctoring images; now practically everyone has a personal computer on which they can install Photoshop or some similar tool.
The research demonstrations I've seen are sufficiently terrifying. I believe we'll have something like http://www.xtranormal.com/ for major political figures within two years, producing deepfakes realistic enough that several of my relatives will be tricked by them. Do you not know people who will be fooled?
People's eyes and ears may be fooled by a video, but as this capability becomes widespread, which it certainly will, I'm not so sure that many people will be deceived in the long run.
Technology is evolving, but not in a vacuum; society's reactions evolve in response. Today many people interpret video as "evidence," but those same people can interpret a photo as a "claim," or perhaps some lower-confidence form of indication. Before photo manipulation was commonly known, I think photos were in a similar place as video: more trusted. Based on history, it's reasonable to expect video to follow a similar trajectory to photos, becoming less trusted in situations where it matters.
So, what happens when media types that were previously trusted as evidence become less trusted? The same things that happened with print, audio, and photos. Viewers will evaluate external cues such as the reputation of the publisher and corroborating evidence. The leading indicators that we should suspect deception will likely be similar: for example, how far the behavior depicted diverges from expectations, how contentious the surrounding context is, and whether parties exist with an interest in creating such a deception.
This effect already happens with manipulation of intent through tricky video editing, for example deleting the rest of a reply to a question, or even swapping in an alternate question. Over the last decade I'd say the typical person has become far more aware that this is possible.
So, in the near term there may be some successful deception, but in the long term I expect the potential value of creating such deceptions will diminish and we'll arrive at a new "normal" much like we have now. The biggest long-term impact may be false claims of "doctored video!" from those who were actually caught on video doing something they didn't want seen by others. But as we already see now, those predisposed to believe whatever is shown is false will search for indications it's doctored. Those predisposed to believe whatever is shown is true will search for indications it's just more confirmation of what they already suspected. Either way, the existing reputation of the person shown, the distribution source, and the pre-existing knowledge of viewers will likely be more determinative than the media itself.
What you're saying is that video will cease to be a useful tool for exposing flawed-yet-entrenched viewpoints for what they are. If you have any idea of the role expository media has played in civil rights and anti-war efforts, this should terrify you.
Being aware that something might be fake and actually not being influenced by it are not the same thing, though. Particularly when it reinforces an existing belief; but it already starts when being a little gullible provides more entertainment value than being sceptical.
It's hard to predict actual effects. I don't think anyone could have foreseen that the primary use of still-image editing for manipulation would be not the perfect crime of an elaborate fake, but a barrage of provocatively simple memes that don't even pretend to care about believability. The act of sharing is the message.
I'm not sure that's quite true, as video is more likely to be cited or redistributed by journalists as a primary source, for example, where memes would not be. The age of credible video is at an end.
They are not a lot of work to produce, unless you're a GPU. A sophisticated audience might not be impressed by video alone and might seek corroboration from witnesses, the date and time of the alleged event, etc., but a well-timed lie is often enough to swing an election or trigger a political crisis.
Yes, but this could be radically different in 3 years, which is less than one election cycle, let alone the time it takes for society to iterate towards a solution. This tech is moving much faster than society generally acclimates to things.
Do these transformations leave any discrepancy or signature in the video or audio that would be detectable by a machine? (Even tiny, tiny discrepancies might work.) Someone could make a browser plugin to alert the user when video/audio has a good chance of being fake.
If one piece of software can identify it as fake, another can be trained until that isn't the case anymore. This approach is actually in use; search for "Generative Adversarial Network" for more info and background.
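To make the arms race concrete, here's a toy sketch of that adversarial loop in PyTorch (1-D toy data; every name, size, and hyperparameter is illustrative, nothing like a real deepfake pipeline):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # the faker
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # the detector
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "authentic" samples
    fake = G(torch.randn(64, 8))           # fakes generated from noise

    # Detector step: learn to label real samples 1 and fakes 0.
    d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: adjust the faker until the detector calls its output real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Any detector you ship becomes a training signal for the next generation of fakes, which is exactly why a browser plugin wouldn't stay ahead for long.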
> I'm pretty sure I know people who have been convinced by meme quotes: a headshot of a politician they don't like, with a quote overlaid that they never said. People are outraged! And never bother to inspect the source.
I really wonder which type of meme has the most influence on average, straightforward and outright lies like the one you've noted, or the more subtle, subversive social commentary style. I'm a big fan of the latter, I think they're very interesting and underappreciated.
For example, this one - nothing more than a simple screenshot of Twitter, but to me this seems very persuasive: http://magaimg.net/img/80rb.jpg
All of these are from t_d, so obviously one-sided; I'm sure a similarly impressive collection from the other perspective could easily be assembled. And it's not that uncommon to encounter otherwise intelligent people who have obviously had their beliefs shaped by those memes.
I remember thinking that computer networks would connect people and make the world a better place.
I'm now just about ready to unplug the whole thing and launch it at the sun.
To your point, there are people who are persuaded by assertions.
I personally find the belief gets even more entrenched when people think they have seen the evidence with their own eyes: when they see a doctored photo, a clip taken out of context, etc.
It is pretty hard to be optimistic about this whole mess sometimes, but then on the other hand, just as negativity and hate spread so quickly, might it be possible for positivity and love to do the same some day? I think so.
Oh ya I love that picture, was trying to find it not that long ago with no luck. It does a brilliant job communicating how powerful propaganda can be.
Yeah, the bigger problem is taking video footage of a politician and then going frame by frame to find the most unflattering possible depiction of them (usually right after a cough or a sneeze) so you can use it to "support" your trash clickbait headline. No deepfakes needed: you can make anyone look like a raving lunatic if you take the frame right before a sneeze.
The problem with deepfakes is a hostile nation taking over your phone/FaceTime calls and sounding exactly like your own parents. If they can do it in real time, how can you tell? A bad actor could get you to do some really bad things.
That is an interesting point, but this already happens on voice calls. My grandma got a call a while ago from someone claiming to be my brother, saying he needed money because he was in a South American jail or something. Luckily she's still pretty sharp, so she hung up and called my brother (he was not in South America), and the jig was up. She was pretty shaken up, though. A video call would be more convincing, but that's an incremental, not fundamental, difference.
My mother received a similar call, telling her that I'd had an accident.
If you're thinking such evil people should be in jail, don't worry, they are! They're the ones in South American prisons, using burners or stolen phones.
There's nothing in that link that contradicts what I wrote. My source was an official police alert. I lost the reference, but here is another one from a newspaper (in Spanish):
Maybe PGP or some other form of cryptographic signing will become (more) mainstream as a result of this. Or at the very least a secret word that families can share amongst each other to verify identities.
Cryptographic signing just moves the problem from the authenticity of the document to the authenticity of the key.
That can be very useful when you only have to establish trust once, but that's not really the problem described here. The secret word is probably more useful in being so simple; however, it still has to be established beforehand.
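For what it's worth, here's a minimal sketch of the sign/verify flow using the Python cryptography package (the library and the message are just for illustration; PGP is conceptually the same). It also shows the point above: everything hinges on the verifier already trusting public_key.

```python
# Minimal Ed25519 sign/verify sketch using the "cryptography" package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # distributing this securely is the hard part

message = b"It's really me, please wire the bail money."
signature = private_key.sign(message)

try:
    # Raises InvalidSignature if the message or signature was altered.
    public_key.verify(signature, message)
    print("signature valid")
except InvalidSignature:
    print("forged or tampered")
```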
Photoshopping is done by hand and generally has mistakes. Good photoshops are still believed.
Videos are being altered by machines. They’re being optimized for natural looking results. It’s harder to notice small mistakes when frames are going by at 24FPS vs poring over a static image for 30 seconds until you finally notice the one region with mismatched shadows or odd clipping.
I've literally never been able to identify a photoshopped image (except for immediately obvious work) without someone first pointing it out. I have a feeling that the percentage of the population that can spot edited images is in the single digits.
As someone who usually spots the Photoshop, I have some trouble assessing the danger of such techniques, so thanks for your input. I feel that someone who sees "through" a doctored piece of information will not even realize the power it may have had over others who might not participate in the conversation but end up part of the bubble nonetheless. By the time a fake has been debunked, it's too late. But what's the alternative, censorship?
Not censorship, context. Censorship is disrespectful of the reader; it assumes the authority knows better. Context is respectful; it assumes you will make the best decision (for you) when you have all the relevant information.
So don’t delete the fake video. Put a big red exclamation mark next to it that says “this video has been substantially manipulated. Contents may not be genuine.”
Also: while viewers and producers both deserve the same respect, producers can forfeit theirs by consistently failing to respect their viewers. A consistent pattern of intentional deception should earn a shadow ban.
Mostly, people seem to be sceptical of things they already don't believe. If someone repeats something you believe, how much research are you going to do? So photoshopped images that reinforce your beliefs slip by, and the ones that challenge you, you catch. Or worse, the ones that challenge you get labeled as photoshopped regardless of their provenance.
You don't think that photoshopping has really been a problem and you think people remain skeptical?
I guess you could think that, if you believe the epidemics around male and female body image, body dysmorphia, self-harm, and anxiety, and the use of celebrities to sell products, aren't connected to it. I've found exactly the opposite.
I love photography and I am utterly unable to talk to non-photographers or convince them about what happens in the production of most images they see in most forms of commercial media.
It goes something like this:
"Hey ACowAdonis, how much of that photo do you think was retouched?"
*looks at photo*
"All of it".
"All of it? What do you mean?"
"I mean all of it."
"But that's Reese Witherspoon! (or insert popular celebrity here)"
"Yep, and you can see how her eyes have been adjusted, her skins been adjusted, they've changed the shape of her arm, taken a few pounds off the mid section, increased the boob size, changed the colour of her hair...and i'm pretty sure that's not her hand".
"Nah, you crazy..."
"You want crazy...pretty much every photo in every fashion magazine and every media item involving that celebrity has been adjusted to a similar extent"
The best way to fool someone isn't to do an indistinguishable Photoshop job. It's to do a passable-enough fake of something the person wanted to believe anyway.
Here's how it will work: someone will make a deepfake of a political opponent and then publish it, using a dummy account, on a forum where like-minded people gather.
Other dummy accounts will take the deepfake and start building a narrative around it, sending chain emails to their real-world contacts.
Those real-world contacts will start passing around the deepfake chain mail they were sent.
Some of these emails will take the deepfake as true, some will talk of it as being a funny parody, but "funny because it's true" anyway.
Major news organizations can now address the issue as news, because people are passing it around. Maybe it will be something like: "Well Bob, I think the X have a real image problem on their hands, whether the video is true or not..." "You don't mean to say you think it's true!?" "I didn't say that, Bob. I'm frankly not qualified to judge, and I haven't done any research. What I'm worried about here is that there is a perception that it is true, or that even if it is not exactly true in this particular instance, it might be true, and that is what I mean by a real image problem."
The REAL issue here will not be the fake videos themselves. They will cause many messes, but the real issue is an acceleration of what we see today: a loss of trust in information, and in particular in the established media. More societal rift; it becomes easy to dismiss any negative news about your favourite politician/rapper/... as a fake video; more difficult court cases, even where there's video evidence... Terrifying.
This scepticism is itself a problem. There's a whole branch of philosophy that claims objective truth is impossible to know. With deepfakes that's even more true, and it might drive a lot of people into despair and apathy.
We'll still have provenance and trusted organizations, which is what we mainly use to verify important and easily faked things like written words, photos, and videos that might be taken from a different context than their descriptions claim. There are other techniques people have developed to verify things too, like the group recitals that transmitted the Old Testament orally for generations: you couldn't just edit it and repeat a fake version, because the change would conflict with the consensus.
Society survived before videos and photos, when all information was easily edited. I think we'll be fine. Maybe we're just in a brief decade or two where we became complacent, believing all videos were real without taking any of the care we used to take with grainy films of alien autopsies or spoken testimony from people who claimed to have seen Bigfoot.
> We'll still have provenance and trusted organizations
Oh boy, are you in for a rude awakening. It doesn't matter what smart people believe if enough stupid people are convinced of something else. You're using examples from times of very low information distribution to form expectations about the opposite condition, which is already problematic, and ignoring all the nonsensical superstition that used to be the norm.
When I was growing up, I remember a minor local mania over a supposed miracle at a religious shrine, which became a summer sensation. People were chartering tour buses to say prayers and hoping to witness a miracle themselves. Right now in the US we have a community of people who have been enthusiastically chanting at political rallies about locking their opponents up for the last 3 years, without any apparent care for evidence or factual basis. Obviously political rallies are known for their hyperbole, but at some point you have to feed the beast.
Oh, I've got no hope for the majority of people. They're a lost cause - they still believe in religions! They don't need videos to convince them because even rumors will do. I'm thinking of at least casually critical people or courts, those who have some interest in what's true.
I don't think that fakes (photoshop, deepfakes, whatever) need to be absolutely believable to be effective. The long-game purpose is to erode trust in institutions, media, politicians, etc. Fakes accomplish this goal by being just believable and just frequent enough that more and more people start deciding to believe whatever it is they want to believe because "who knows what the real truth is!".
Yes, I realize that, and photoshopping got a bunch of press for being used on modelling pictures to "enhance" the models before putting them in magazines. These deepfakes will probably be used for some similar things as well. My point was that deepfakes aren't anything new, and we already have the tools to analyze them. Those tools just aren't computer programs; they are people posting videos on YouTube going pixel by pixel to show how a certain photo was doctored. After all, a video is just a series of photographs.
I think writing computer programs designed to spot these deepfake videos would be very helpful as the volume of doctored videos increases, but this isn't some disruptive technology (at least for people trying to deceive others).
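Programs along those lines already exist for still images. Here's a rough sketch of one classic pixel-level check, error-level analysis, using Pillow (purely illustrative; real forensic tools are far more involved):

```python
# Error-level analysis: resave a JPEG and diff it against the original.
# Pasted-in or retouched regions often recompress differently, so they
# show up as brighter patches in the difference image.
import io
from PIL import Image, ImageChops

def error_level(path, quality=90):
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)  # one extra compression pass
    resaved = Image.open(buf)
    return ImageChops.difference(original, resaved)

error_level("suspect.jpg").save("ela.png")  # inspect bright regions by eye
```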
You're kidding yourself. A large majority of Americans still believe Sarah Palin said she could see Russia from her house, and that was SNL satire. It will be very difficult to undo the damage done by a convincing and well-timed deep fake. Especially a fake that people want to believe.
I feel like we just need some comedians to make a bunch of entertaining, realistic, but labelled-fake content with deepfakes. The production cost of a show where each actor is deepfaked into a world leader or dead historical figure or whatnot is not a significant hurdle.
I agree; it was fake news articles on Facebook that spread false information, but it took desperately ignorant people believing it for the consequent chaos to ensue. So while deepfake videos are scary, I think what we really have to worry about is the deep ignorance of the voters.
Isn't the deep ignorance of the voters the entire point of a deepfake? It's knowingly a fake; by creating it, you are already trying to pull one over on someone. When the president of the US can say "I never said that" even with video/audio evidence of him saying it, while also screaming about deepfakes, the slippery slope is being greased. It's not hard to believe that viewers who only get information from a single source will fall for it. Even if they hear arguments that it is fake, they will not research it on their own, because their single source is never wrong.
From my observation, a lot of educated and less-educated people don't really care if something is fake, as long as it confirms their opinions. They don't want to let go of their "facts" even though they know they are wrong.
Writers have been able to write nonsense for a long time... and photo manipulation we've gotten quite used to. All we do is add video to the category of things that might be lies, and so need independent verification.
Skepticism is good and healthy, and verification in the age of Google isn't that hard.
You can trust that if the NY Times or CBS publishes a video, they verified its authenticity, or else will be publishing a big retraction within a few days that will also make the news because it's so rare.
Whether your uncle sends you a random photo or a video of a politician that seems too exaggerated, weird, or unbelievable... you assume it might be manipulated, as you already do now. Making Nancy Pelosi seem drunk didn't take a deepfake, just slowing the video down.
It's not any kind of big change. Just applying the same skepticism we already automatically apply to so many other things.
> You can trust that if the NY Times or CBS publishes a video, they verified its authenticity, or else will be publishing a big retraction within a few days that will also make the news because it's so rare.
This may be false, or become false, given the political motivation to seriously damage the "other side of the aisle."
In fact, oftentimes you don't even need to lie to skew the "truth." Cherry-picking facts, or even just highlighting certain facts over others, plus an optional bit of extrapolation or subtle misinterpretation, is often enough to fit whatever narrative you want to push.
> and verification in the age of Google isn't that hard.
It's hard because publications often parrot each other. You walk away confident in your "verified" truth due to the echo-chamber effect, which might be worse than not verifying at all.
> You can trust that if the NY Times or CBS publishes a video...
I can’t. Again, you don’t need to make factual mistakes to push an agenda.
I remember there was some oil-company-backed anti-Tesla propaganda image a while ago showing the "environmental disaster a lithium mine creates," which went viral for a bit. It was, I think, actually a tar sands mine.
There's no way deepfake videos won't make the propaganda situation worse, at least for a while.
People know about photo manipulations and get suspicious because we've had Photoshop for thirty years (and analog photo manipulation even longer) and see them all the time. This wasn't always true. When photography was new, manipulations that wouldn't fool anyone today were taken as proof by many people. See for example https://en.wikipedia.org/wiki/Cottingley_Fairies
Photoshop has been used for years to successfully fool millions of men and women who consume magazines showing people with smooth skin and sexy bodies.
This is exactly what I wanted to post. People are becoming insecure about their bodies because of fake images of famous people. It creates high expectations that can never be met in real life and seriously ruins lives.
I think the problem is that if we can't trust video, then there is really nothing visual left we can trust. Until now you could at least trust video recordings to some degree. Not sure if that's a good thing or not.
We’ve had photo manipulation without the need of a darkroom or skilled optical retouchers for 30 years, but weaponized photo manipulation has been a thing since Stalin’s censors airbrushed out Trotsky a century ago.
Yes, this is true. However, good-quality photo editing became relatively cheap and convincing only in more recent years. Now you don't need to be a nation-state to convincingly pull something like that off; you can just be some guy in a basement with $1000 worth of computing gear and experience using Photoshop.