Larry: "I have seen "news" videos that portrayed untrue events as if they were real. I had to search for corroborating articles to learn that the vid I was watching was fake. There are numerous videos of deer on a trampoline, a bear waking up a guy napping in his yard, "moose" pushing a police car off the roadway; all created with AI and all very believable. When people accuse someone of something and that person says "Show me the video", meaning, show me proof, that no longer applies. Even video evidence is now suspect. How much damage is this kind of thing likely to inflict?"
Google AI (GAI): The shift toward hyper-realistic AI-generated media is creating a "crisis of knowing" that extends far beyond harmless animal videos. Experts warn of a "post-evidentiary" world where the traditional belief that "seeing is believing" is fundamentally broken, leading to several layers of societal damage.
1. The "Liar’s Dividend" in the Legal System
The most profound damage is the "Liar’s Dividend": because deepfakes exist, people can now dismiss real evidence by simply claiming it was created by AI.
Admissibility Crisis: Video, once the "gold standard" of proof, is being treated with increasing skepticism by judges and juries.
False Confessions: Studies show that when individuals are shown hyper-realistic (but fake) video evidence of themselves committing a "crime" in a lab setting, a high percentage will actually internalize the guilt and sign a false confession.
Evidence Fabrication: There are documented cases in 2025 and 2026 of individuals being jailed based on fabricated AI text messages or photos used in domestic violence and burglary cases.
2. Erosion of Public Trust and "Epistemic Agency"
As of late 2025, surveys show that 70% of people struggle to trust any online information because they cannot tell if it is AI-generated.
Emotional Manipulation: AI videos of "wholesome" or "outrageous" events (like your moose or bear examples) are often designed to trigger strong emotional responses, which bypasses critical thinking and makes the false event feel "true" in the viewer's memory.
Strategic Distrust: Even when a fake video is debunked, the initial "emotional stain" often remains, permanently coloring a person's perception of a public figure or institution.
The damage is also measured in direct financial and psychological costs:
Financial Fraud: Fraud losses driven by AI-generated social engineering (like voice-cloning or deepfake video calls) are projected to reach $40 billion by 2027.
Reputational Ruin: There are real-world examples where individuals, such as a high school principal in 2024, had their reputations destroyed by deepfake audio of them making offensive remarks. Even after being cleared months later, the reputational damage was irreversible.
Non-consensual Content: A staggering 96% of the most popular deepfakes are pornographic, used primarily to harass and violate the privacy of women.
Summary of Damage
Legal: Genuine evidence is dismissed; innocent people are pressured into false confessions.
Democratic: Mass-produced propaganda erodes trust in elections and public health.
Economic: Massive increase in sophisticated fraud and identity theft.
Psychological: Constant skepticism leads to "moral panic" and social fragmentation.
Larry: "I have read that it is possible to use AI to create an entire video of someone saying or doing things they have not said or done. For instance, AI could be used to make a video of someone confessing to a crime. Is this true?"
(GAI): Yes, it is entirely possible to use AI to create a video of someone confessing to a crime they did not commit. This technology, known as a deepfake, has advanced to the point where even experts struggle to distinguish fabricated video from reality.
How AI Fabricates "Truth"
Modern AI systems use generative adversarial networks (GANs) and related generative models to map a person's face and clone their voice from only a short sample of real footage or audio; a toy sketch of the adversarial idea follows the examples below.
Hyper-Realistic Admissions: AI can generate a video of a person speaking new words—such as a confession—with perfectly synced lip movements and emotional facial expressions.
Real-World Examples: In Myanmar, a detained former minister appeared on TV "confessing" to bribery; journalists immediately suspected it was a deepfake due to subtle audio glitches. Similarly, researchers have created fictional "confession" videos for studies that successfully fooled participants into believing the person was guilty.
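To make the adversarial idea concrete, here is a minimal toy sketch in Python using PyTorch. This is an illustrative assumption, not any real deepfake pipeline: the model sizes, names, and synthetic "real" data are invented for the example. A generator and a discriminator push against each other until the generated samples become hard to tell from the real ones, which is the same pressure that makes deepfake faces and voices steadily more convincing.

    # Toy sketch of the adversarial (GAN) training idea -- illustrative only,
    # not a real face- or voice-synthesis system. Assumes PyTorch is installed.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 32
    gen = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
    disc = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    real_data = torch.randn(512, data_dim) + 3.0  # stand-in for "real" samples

    for step in range(200):
        # Discriminator: learn to score real samples high and generated ones low.
        real = real_data[torch.randint(0, real_data.size(0), (64,))]
        fake = gen(torch.randn(64, latent_dim)).detach()
        d_loss = (loss_fn(disc(real), torch.ones(64, 1))
                  + loss_fn(disc(fake), torch.zeros(64, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: learn to produce samples the discriminator scores as "real".
        fake = gen(torch.randn(64, latent_dim))
        g_loss = loss_fn(disc(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Production systems apply this same tug-of-war at vastly larger scale to images of a specific face or recordings of a specific voice, which is why the output keeps getting harder to flag by eye.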
The Damage to the Legal System
The ability to fabricate confessions creates several severe risks in the justice system:
False Confessions through Coercion: Police could potentially use deepfaked "evidence"—such as a video of an accomplice confessing and naming the suspect—to trick a suspect into believing the case against them is insurmountable, leading them to sign a real (but false) confession.
Infiltration of the Courtroom: There are already documented cases where deepfake videos were submitted as authentic testimony. In a 2025 California case, a judge threw out a lawsuit after detecting subtle "glitches" in a video that turned out to be AI-generated.
The "Liar’s Dividend": Because deepfakes exist, defendants can now dismiss real incriminating videos by simply claiming they are AI-generated, creating a new challenge for prosecutors to prove authenticity.
Is it Illegal?
While the act of creating synthetic content is not always illegal, using it to commit fraud, extortion, or defamation is a crime. Several states, including California and Alabama, have recently enacted laws specifically targeting the non-consensual creation or dissemination of deepfakes.
(GAI): "Would you like to know about the specific "tells" experts use to spot these fake videos, or how blockchain is being used to prove a video is real?"
Larry: "The specific tells first."
(GAI): As of 2026, AI video generators like Sora 2 and Veo 3 have become "cinematic," but they still struggle with the complex physics of the human body and the environment. Experts use a mix of visual, audio, and technical "tells" to identify these fabrications.
1. Visual "Tells" (The Face and Body)
The most common failures occur where the AI must blend a synthetic face with a real background or handle complex movements:
Edge and Adhesion Issues: Watch the jawline, ears, and hairline. When a person turns their head to a full profile, the AI often "glitches," causing the face to blur or detach from the neck.
The "Tongue Test": Many deepfake models still fail to render a realistic tongue. If the person opens their mouth wide (as in a shout or a sob), look for a missing tongue or teeth that appear as a single, blurry white block.
Unnatural Blinking: Real humans blink spontaneously every 2–10 seconds. AI often produces robotic, rhythmic blinking or no blinking at all; a rough blink-rate check is sketched just after this list.
Micro-expression Mismatch: Genuine emotions involve tiny muscle twitches around the eyes and forehead. Deepfakes often have "dead eyes" where the mouth is smiling or crying, but the upper face remains waxy and still.
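As an illustration of how the blinking "tell" can be checked roughly, here is a hedged Python sketch. It assumes OpenCV and MediaPipe are installed; the file name suspect_clip.mp4, the landmark indices, and the 0.2 threshold are illustrative assumptions, not a validated detector.

    # Rough blink-rate estimate -- a sketch, not a proven deepfake detector.
    # Assumes OpenCV and MediaPipe are installed; the indices below are the
    # commonly used MediaPipe FaceMesh points around one eye.
    import cv2
    import mediapipe as mp

    LEFT_EYE = [33, 160, 158, 133, 153, 144]   # corner, top, top, corner, bottom, bottom
    EAR_THRESHOLD = 0.2                        # below this, treat the eye as closed

    def eye_aspect_ratio(pts):
        # Ratio of eye height to eye width; it drops sharply during a blink.
        vertical = abs(pts[1][1] - pts[5][1]) + abs(pts[2][1] - pts[4][1])
        horizontal = abs(pts[0][0] - pts[3][0])
        return vertical / (2.0 * horizontal + 1e-9)

    cap = cv2.VideoCapture("suspect_clip.mp4")   # hypothetical file name
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks, closed, frames = 0, False, 0

    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames += 1
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue
            lm = result.multi_face_landmarks[0].landmark
            ear = eye_aspect_ratio([(lm[i].x, lm[i].y) for i in LEFT_EYE])
            if ear < EAR_THRESHOLD and not closed:
                blinks, closed = blinks + 1, True
            elif ear >= EAR_THRESHOLD:
                closed = False

    cap.release()
    minutes = frames / fps / 60.0
    print(f"{blinks} blinks over {minutes:.1f} min "
          f"(humans typically blink roughly 6-30 times per minute)")

A clip of a talking head that reports zero blinks, or a metronome-steady blink every few seconds, is worth a closer look; a normal-looking rate proves nothing on its own.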
2. Physics and Interaction "Tells"
AI struggles with "cause-and-effect" and how objects interact in 3D space:
Occlusion (The Hand Test): If you suspect a live video call is a deepfake, ask the person to wave their hand in front of their face. Current technology often creates "ghosting" where the hand and face melt together or the hand disappears momentarily.
Gravity and Momentum: Watch for hair or clothing that moves as a solid mass rather than individual strands, or objects that "float" slightly above surfaces rather than resting on them.
Lighting Inconsistencies: Look at reflections in the eyes or on glasses. AI often fails to match the direction of the light on the person's face with the shadows cast by objects in the background.
3. Audio and Narrative "Tells"
Breath and Cadence: AI-cloned voices often lack natural breathing patterns or insert breaths at linguistically "wrong" moments. Listen for a "robotic" rhythm, especially toward the end of long sentences.
Lip-Sync Drift: Over longer clips (more than 10–15 seconds), the audio and video often begin to "drift" out of sync, even by a few tens of milliseconds, an offset the human brain is wired to find unsettling.
4. Technical and Forensic "Tells"
For high-stakes videos like legal evidence, experts use forensic tools:
Metadata Analysis: Right-click and check "Properties" or "Get Info." AI-generated files often lack standard camera metadata (like ISO or lens type) or carry timestamps from editing software that don't match the purported event; a command-line probe is sketched after this list.
Reverse Search: Use Google Lens or TinEye to see if the video frames appear in other contexts. Many "new" viral videos are actually AI-altered versions of old, real footage.
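Here is a minimal sketch of that kind of metadata triage, assuming ffprobe (part of FFmpeg) is installed and on the PATH. The tag names checked are common camera and phone fields; a missing tag does not prove a video is AI-generated, and a present tag can be forged, so treat the output as a hint for further investigation rather than proof.

    # Quick metadata probe -- a sketch, assuming ffprobe (FFmpeg) is installed.
    # Run as: python probe_metadata.py suspect_clip.mp4
    import json
    import subprocess
    import sys

    def probe(path):
        # Ask ffprobe for container and stream information as JSON.
        out = subprocess.run(
            ["ffprobe", "-v", "quiet", "-print_format", "json",
             "-show_format", "-show_streams", path],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    info = probe(sys.argv[1])
    fmt_tags = info.get("format", {}).get("tags", {})

    print("Container tags:")
    for key in ("creation_time", "encoder",
                "com.apple.quicktime.make", "com.apple.quicktime.model"):
        print(f"  {key}: {fmt_tags.get(key, '<missing>')}")

    for stream in info.get("streams", []):
        if stream.get("codec_type") == "video":
            print("Video stream:", stream.get("codec_name"),
                  stream.get("width"), "x", stream.get("height"),
                  "@", stream.get("avg_frame_rate"), "fps")

A genuine phone or camera recording usually carries a creation time and device tags, while a file exported from a generator or an editor often shows only an encoder string; either way, the metadata is a starting point for questions, not a verdict.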
