The #EU wants #AI companies to label #deepfakes: Europe begins work on a code of conduct so that deepfake texts, images and videos are recognizable | Wired en Español. 07.11.2025 #dossierUE https://links.uv.es/TtLvL58
Denmark seems to be pursuing the idea of protecting people from "AI" deep fakes by addressing people's image as part of copyright law. [1]
I applaud the idea of doing something about this, and this is a better approach than none at all, but it's not quite how I would pursue it.
For one thing, there are many situations in which people are innocently captured in images, and this will create complications for those cases. And for another, I don't think it's powerful enough to address the real problem.
The "MOO" community (MOO is MUD, Object-Oriented, and MUD is Multiple-User Dungeon, and Dungeon was one of the first text-based interactive fiction games, also called Zork), this came up a long time ago. Ironically, since MOO is entirely text, images were not involved. But there was still the issue of appropriating people's view of themselves for ill purposes, and this was richly discussed.
MOO, which had its greatest popularity in the 1990s, before Second Life overshadowed it, functioned as a kind of textual sketch of things to come. It offered only a coarse level of detail because its technical layer didn't allow for elaborate rendering, but that forced the focus onto the social aspect rather than the technology. Modern systems purport to capture reality, but they often get so side-tracked on making things photo-real, visceral experiences that they give short shrift to the full complexity of human social interaction. So they're still catching up to some of the social issues MOO explored decades ago.
Julian Dibbell explores some of these issues in his fascinating book "My Tiny Life: Crime and Passion in a Virtual World", which you can and should buy if you can afford to, but which the author arranged to be freely downloadable as a PDF for those who cannot [2]. Its centerpiece is "A Rape in Cyberspace". Originally published in The Village Voice and later adapted for the book, this story is, in the author's words, "a True Account of the Case of the Infamous Mr. Bungle, and of the Author's Journey, in Consequence Thereof, to the Heart of a Half-Real World Called LambdaMOO".
This story will not tell you how to understand what Denmark is doing, but I think it informs my way of thinking about this issue.
At their core, both situations--the issue in cyberspace and the modern issues in the real world--are not "infringements" (in the sense copyright law uses the word) but "violations" of a person's sense of self.
Some people will point to rape as a matter of physical violation, but just as others are quick to say it's not a crime of sex but a crime of violence, I would similarly say it's a crime of violation, of taking control of a person's sense of self. And that's what it has in common with these other matters, like grabbing someone's image.
We don't presently have a standard for this, and like many matters of human endeavor, there is extraordinary nuance. Fair use, one might say. Certainly parody is one place where people don't have complete say. The sitting President wants to go after critics for disparaging his good name, for example, and ordinarily the disparaging of someone's good name might be seen as a violation, but in certain realms of public discourse, especially for public figures, we allow and insist on it.
That's partly because underlying the issue of violation is the issue of power. The law, at its core, is really about protecting those powerless to protect themselves. So, for example, while it might be a violation to appropriate the good work of an actor who's just struggling to eat, selling their image royalty-free, appropriating the name of a politician who can cut the food supply of millions with the stroke of a pen is not exactly exerting power over them, certainly not unconditionally dominating power.
So we should be careful in our understanding of good law: it seeks not a bright line of pain that itself becomes a weapon, but rather just an ability to tip power balances back toward the middle, making the world a more even contest among people who are born into different levels of power and who cannot therefore fairly be expected to solve their own problems.
I've swept through many issues here, but I have decades of thought underlying my reaction to Denmark's idea, informed by the lucky accident that I was there at the time LambdaMOO sketched the future.
I sometimes note in conversations with people for whom a topic is new and hypothetical that they will say "I wonder what would happen if..." and I reply in the past tense saying "Oh, this is what happened." Because I don't have to speculate. I saw it. It mightn't happen reliably that way again. Many possibilities were in play. But even those were tangibly close to my experience. I have rich, detailed thought because I lived at least one version of it. Just wanted to share that.
[1] https://www.weforum.org/stories/2025/07/deepfake-legislation-denmark-digital-id/
[2] https://epdf.pub/my-tiny-life-crime-and-passion-in-a-virtual-world.html
#AI #IP #Law #Crime #Copyright #DeepFake #DeepFakes #violation #rape
As someone who studied Mass Communication in college, I am absolutely shocked by how little media coverage there has been of the AI slop video published on social media by the U.S. president... in which he puts on a crown, gets on a jet & proceeds to drop massive quantities of fecal matter on protestors at the No Kings Day protests.
Had this happened 10 years ago it would have been front page news everywhere.
Are we so desensitized by his undignified behavior + AI slop that nothing shocks people anymore?
@_elena I heard about it in European news. But my spontaneous thought was: he provokes with his shit to become number one in the news and make people forget what is much more important: reporting on the protests, the millions in the streets against him. There's nothing better to make a #narcissist angry: don't report on every one of his farts. Report on resisting him, also in #socialMedia. Don't become an #echoChamber of #fascism and #fakes.
Scammers are among the top political ad spenders on Meta's platforms, using deepfake videos of American politicians — including President Donald Trump — to promote fake government benefits. https://www.japantimes.co.jp/news/2025/10/02/world/politics/us-deepfake-political-ads-meta/?utm_medium=Social&utm_source=mastodon #worldnews #politics #meta #misinformation #deepfakes #socialmedia #advertising #us
Non-consensual Sexualization Tools (#NSTs) are apps and websites that generate sexualizing #Deepfakes of real people – without their consent. These tools, known as #NudifyApps, depict people fully naked or in underwear or swimsuits.
Help us find NSTs! If you see apps, websites, or accounts that create or distribute sexualizing deepfakes, please report them to us: https://algorithmwatch.org/de/lasst-uns-deepfake-apps-gemeinsam-stoppen/
North Korean hackers have used ChatGPT to help forge a deepfake identification document for use in a phishing attack targeting South Korea, according to cybersecurity researchers. https://www.japantimes.co.jp/news/2025/09/15/asia-pacific/crime-legal/north-korean-hackers-chatgpt-deepfake/?utm_medium=Social&utm_source=mastodon #asiapacific #crimelegal #northkorea #hacking #ai #chatgpt #deepfakes #southkorea
Denmark 🇩🇰 just passed a groundbreaking law: citizens now own the copyright to their own face, voice, and body.
This is a major win against AI deepfakes and unauthorized digital identity use. The digital self is officially personal property! 💥
#Denmark #Law #DigitalRights #Deepfakes #AIethics #Privacy #PrivacyMatters #AIRegulation #AI #ArtificialIntelligence #DataOwnership #DigitalIdentity #AIforGood #FacialRecognition #Copyright #EthicalAI #TechNews #IdentityTheft #PersonalData #LegalTech
Content creators, often based in South Asia, are churning out AI-generated posts for money, targeting Westerners' emotional reactions to the Holocaust. https://www.japantimes.co.jp/news/2025/07/15/world/society/holocaust-ai-fakes-alarm/?utm_medium=Social&utm_source=mastodon #worldnews #society #holocaust #ai #deepfakes #socialmedia #misinformation #wwii #genocide
Denmark's solution to the problem of deepfakes is to let people copyright their own features. While the department of culture still needs to submit a proposal to amend existing copyright law, it has already secured cross-party support. “In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI,” Jakob Engel-Schmidt, Danish culture minister, told The Guardian. Here's more from @Techcrunch.
#AI #ArtificialIntelligence #Deepfakes #Copyright #CopyrightLaw #Tech #Technology
「 The Danish government is to clamp down on the creation and dissemination of AI-generated deepfakes by changing copyright law to ensure that everybody has the right to their own body, facial features and voice 」
Parents of schoolchildren were outraged after the latest news that teachers were allegedly taking indecent images of young girls at their schools and sharing them in an online group chat of about 10 teachers. https://www.japantimes.co.jp/news/2025/06/27/japan/crime-legal/public-outrage-teachers-upskirt/?utm_medium=Social&utm_source=mastodon #japan #crimelegal #japanesepolice #nagoya #yokohama #children #schools #sexcrimes #childporn #deepfakes
Days after the Philippine Senate declined to launch the impeachment trial of the country's vice president, two interviews with Filipinos arguing for and against the move went viral. Neither was real. https://www.japantimes.co.jp/news/2025/06/26/asia-pacific/politics/ai-sara-duterte-impeachment-philippines/?utm_medium=Social&utm_source=mastodon #asiapacific #politics #ai #deepfakes #misinformation #saraduterte #philippines