Commons talk:Project scope
Certain content is excluded from Commons
This is confusing: "Certain content is excluded from Commons … Files that contain nothing educational other than raw text. Purely textual material such as plain-text versions of recipes, lists of instructions, poetry, fiction, quotations, dictionary definitions, lesson plans or classroom material, and the like are better hosted elsewhere, for example at Wikibooks, Wikiquote, Wiktionary, Wikiversity or Wikisource." We are required to host the original document at Commons to be used in the other projects, so why are we saying they must be deleted? It sounds as though newspaper articles and books without illustrations must be deleted because they are raw text. RAN (talk) 04:13, 9 December 2025 (UTC)
- Not sure how to word it better, and it might be made clearer that there are certain exceptions (copies of legitimately published books or peer-reviewed academic papers, for example). It's basically meant to say, "No, you don't get to use Commons as a way to publish your original writing just because it is arguably educational," and "No, you don't get to write your own divergent version of a Wikipedia article and publish it here," etc. - Jmabel ! talk 04:42, 9 December 2025 (UTC)
- Source documents may be in scope; content created by Wikimedia users is generally not. The overall intent is that Commons uploads shouldn't be used to bypass the wiki editing process (e.g. writing an encyclopedia article and publishing it to Commons as a PDF), or as a back-door means of publishing content which would otherwise not be in scope on any Wikimedia project (like works of fiction). We adjusted this wording a few years ago at /Archive 2#Proposed change in wording; if you can come up with a better way to explain the distinction, we'd be interested to hear it. Omphalographer (talk) 04:55, 9 December 2025 (UTC)
Proposed change: excluding images that do not comply with COM:AIP from COM:INUSE rules
I'd like to point out a potential conflict between current COM:INUSE rules and the proposed COM:AIP guideline. Specifically, certain images that are currently 'in use' on other projects may fail to meet the criteria set forth in the COM:AIP proposal, creating a contradiction in our deletion process.
Therefore, I propose appending the following rule to the COM:INUSE section to exclude those images once that guideline is ratified.
- Images of people created by Generative AI that do not comply with the relevant guideline.
0x0a (talk) 09:51, 29 December 2025 (UTC)
- Is scope the right place or should that note be on COM:DIGNITY? GPSLeo (talk) 10:47, 29 December 2025 (UTC)
Oppose COM:INUSE applies regardless of the production method. Furthermore, COM:DIGNITY does not say anything about, for example, neutral depictions of ancient famous people. COM:AIP does not overrule COM:INUSE. The policy section that needs a change is COM:NOTCENSORED. Prototyperspective (talk) 10:08, 4 January 2026 (UTC)
Strong support - Of course, that's the whole point and likely main purpose of COM:AIP. Regards, Grand-Duc (talk) 23:43, 6 January 2026 (UTC)
- You need to change COM:NOTCENSORED then, even more clearly, because then Commons makes decisions for other projects, whereas so far one of the most important principles of Commons has been that if a file is used in a Wikimedia project, it's considered within scope and Commons users don't make editorial decisions for other projects. Prototyperspective (talk) 00:22, 7 January 2026 (UTC)
- Nope. Stating and enforcing that some kind of media doesn't fall within the hosting purview, the scope, of Commons is not censorship. NOTCENSORED already has a fitting line: "However, the statement "Wikimedia Commons is not censored" is not a valid argument for keeping a file that falls outside the normal permitted Wikimedia Commons scope." AIP only clarifies that AI-generated media of real people does fall "outside the normal permitted Wikimedia Commons scope". Regards, Grand-Duc (talk) 01:10, 7 January 2026 (UTC)
- The line you cited is not about what I wrote about above. I'm not arguing files should be kept because Commons is not censored. I was saying one can't have a title "Wikimedia Commons is not censored" and then have lots of policies under which people carry out extensive, indiscriminate deletion of swaths of objectionable content. Thus the section title needs to be changed for accuracy [in my view] so as to not be false or misleading [in my view].
Maybe we have different understandings of some concepts or terms, which is not a large problem; people can sometimes disagree. I was reading the recommendable Wikipedia article about the subject, so I was relating to the definition in that article's lead. Whether or how that policy page is changed is not the main subject here though – as said, Commons so far has had a policy pillar that basically said Commons users don't get to editorialize other Wikimedia projects and that files in use in other projects are kept. This policy is important for many reasons, for example because if it's not upheld, users from other projects may stop uploading files here and instead upload them locally, because they can no longer feel sure that the files will remain here. - This principle and policy is very important, and whether or not it remains standing relates to COM:NOTCENSORED, as that's a further step beyond deleting a whole type of content; the practice of users making this or that specific exception to it – exceptions for media types/contents, not principles like e.g. low quality – starting with the AIP one, is of major relevance. Prototyperspective (talk) 01:45, 7 January 2026 (UTC)
- Well, in times of single-user login across all projects (and file renaming tools that make your account edit in lots of individual projects), the premise of a different set of Commons users who "don't get to editorialize other Wikimedia projects" is, in my opinion, flawed anyway. We're all Wikimedians with standing authorisation to participate in any project. Regards, Grand-Duc (talk) 02:41, 7 January 2026 (UTC)
- +1. It should not be controversial to suggest that content on any Wikimedia project - including files on Commons - should meet a certain basic level of educational value and factual accuracy. AI-generated images frequently fail to meet this standard - particularly ones which appear to be a factual depiction of something, but which are actually not. Omphalographer (talk) 00:12, 23 February 2026 (UTC)
- Paintings also frequently fail to meet various standards but we still have paintings. Whether or not this or that applies often doesn't matter that much because nobody is arguing we should indiscriminately keep all of them and DRs are always an option.
particularly ones which appear to be a factual depiction of something, but which are actually not
agree ("appear to be" I think means 'claim to be or are presented as if…'). Prototyperspective (talk) 00:19, 23 February 2026 (UTC)- We do actually delete paintings and other pieces of artwork as out of scope on a regular basis, particularly when it's amateur artwork created by users. (See e.g.Commons:Deletion requests/Files uploaded by Brandymodel, Commons:Deletion requests/Files uploaded by Sravanthi kokkula, Commons:Deletion requests/Files uploaded by Jonathan Garrett, etc.) It's not the exception you think it is. Omphalographer (talk) 00:51, 23 February 2026 (UTC)
- I was illustrating that just because some type of media in your opinion often fails some standards doesn't mean it always does. From watching hundreds of art images and creating categories for user-made art etc., I know well that paintings are only rarely deleted, and when they are, it's useless ones. But let's take another example:
- It should not be controversial to suggest that content on any Wikimedia project - including files on Commons - should meet a certain basic level of educational value and factual accuracy. Amateur photos frequently fail to meet this standard. [insert missing reasoning here] Additionally, artistic files which appear to be a factual depiction of something, but which are actually not are not useful. Prototyperspective (talk) 01:34, 23 February 2026 (UTC)
- Ok explain how Mr.Besya (talk) 01:16, 23 February 2026 (UTC)
Support per Grand-Duc. It's moon (talk) 01:51, 7 January 2026 (UTC)
- I think this is premature. What if we want to modify the current AIP proposal and then pass it? Won't we have to re-do this vote after that? (And conversely, aren't we going to be biased towards the binary choice of accepting or rejecting it, hindering the possibility of a potentially better, modified proposal?) whym (talk) 10:44, 7 January 2026 (UTC)
- I find this a tricky question. I've argued in the past that there's room for Commons' users to evaluate what this guideline refers to (ambiguously) as "legitimately in use". Given the thorny issues addressed by the ai images of people guideline, maybe it's reasonable to ask whether the in-use project in question has relevant guidelines for such images at all, and consider the use legitimate if there's demonstrable consensus that such images are permissible. It's unfortunate but not unrealistic to think that some projects may willingly embrace slop, but I'd feel more comfortable if that were demonstrated first. There's an old argument about whether Commons should be in the habit of questioning any use at all, but we've seen in the past examples of inuse files being deleted for various non-copyright reasons. I don't know the right answer. — Rhododendrites talk | 21:21, 22 February 2026 (UTC)
- "some projects may willingly embrace slop" Why "slop"? A project may be open to having some AI images of identifiable people such as long-dead people and that's not something indiscriminately unreasonable. Prototyperspective (talk) 21:25, 22 February 2026 (UTC)
- Using an extreme position for the sake of argument. Some projects may embrace all (or nearly all) AI-generated content, and we can't control that. Others may have more nuanced rules. — Rhododendrites talk | 14:19, 24 February 2026 (UTC)
- Has anybody ever pulled together that information, on the stances that different projects take on AI content? Outside of enwiki it's never been clear to me whether INUSE cases of AI images are a sign that the wiki endorses them, or just hasn't noticed them yet. Belbury (talk) 18:50, 28 February 2026 (UTC)
- Inclusion of an AI image does not imply or require the wiki to endorse it – it may mean that they don't reject them all on that basis. One can let relevant editors of the project know about the image, which is especially reasonable when it's an article with few views/watchers. This can be done by pinging article authors and/or making a talk page post or by asking about it at a discussion site of that platform. Commons so far hasn't really interfered with editorial decisions of other projects and I don't think doing so without at least sufficiently involving relevant participants of these projects in an inclusive manner is a good road to take. Prototyperspective (talk) 18:58, 28 February 2026 (UTC)
- It might be good to have a table somewhere with an overview for all projects, like we have a table for FoP regulations for all countries. I know that ruwiki also doesn't accept them: there's no explicit guideline but I recently asked about AI images on the ruwiki equivalent of the Village Pump, which started a long discussion with a pretty clear consensus against AI images (with maybe only 1-2 outliers who saw some potential use cases, but everyone else disagreed with them even on those use cases). Nakonana (talk) 10:44, 1 March 2026 (UTC)
Support per proposal. Redmin (talk) 12:43, 23 February 2026 (UTC)
Oppose This proposal targets AI slop, but is missing the core issue. Granted, the core issue was hardly ever a problem before AI slop because nobody was fooled by a painting and convincing CGI was time consuming.
If any project wants to create/use a painting, cartoon or w:cosplay depiction of an ancient famous person and the media is clearly identifiable as such without reading the description, meh. That's their business. May work if done tastefully, or for a children's history book or something. The infobox image for w:Cleopatra is a sculpture. That's fine. And if some project would wish to use AI-generated photorealistic Cleopatra in front of a server rack, meh. It's obviously not real.
Plausible photorealistic hallucinations on the other hand? No thank you. Those don't just fail COM:EDUCATIONAL, they actively corrupt knowledge. This is true regardless of whether a person is depicted or not. Imagine an AI-generated image of the w:Kallanai Dam as it looked right after it was built about 1900 years ago; you'd face the same problems. @0x0a, would you mind creating a proposal that targets all media, AI-generated or not, regardless of what it depicts, that is misleading and actively corrupting knowledge? They m:vanished. - Alexis Jazz ping plz 15:12, 25 February 2026 (UTC)
May work if done tastefully, or for a children's history book or something. The infobox image for w:Cleopatra is a sculpture. That's fine. And if some project would wish to use AI-generated photorealistic Cleopatra in front of a server rack, meh. It's obviously not real.
Exactly.
Plausible photorealistic hallucinations on the other hand?
Hallucinations are obviously not useful. It gets more useful when carefully prompted to look exactly like one wanted to, especially if e.g. forensic studies or lots of sculptures from the time are available to use for that. A plausible photo-realistic image of Cleopatra is obviously not real since there were no photo cameras back then. The production method could also be in the file title and caption. Please read COM:EDUCATIONAL and see educational documentaries and podcasts that already show nonreal imagery of ancient people to better understand that this is not just a realistic educational use-case but an already-real/realized educational use-case. Prototyperspective (talk) 13:15, 26 February 2026 (UTC)
- Prototyperspective, there are various sculptures of Cleopatra, and big AI has no doubt trained on them. When I asked ChatGPT, it gave me two versions of Cleopatra and asked me to select the best one. In the other one, her head was closer to a sphere.
It gets more useful when carefully prompted to look exactly like one wanted to
When I asked Google's banana for photorealistic Cleopatra, it drew a Fortnite character. When I explained that's not photorealism, it added some details and shadows, making it look like a Fortnite character but with "RTX ON". Hallucinating is what AI does. Any detail you didn't describe is hallucinated. If you can describe it in sufficient detail to get almost-acceptable output, you can probably draw her yourself. Use an AI-generated sketch (or a sculpture) as an outline if you're having trouble with perspective.
The production method could also be in the file title and caption.
These are frequently lost when files are re-used. - Alexis Jazz ping plz 01:48, 27 February 2026 (UTC)
- Yes, describing in detail and/or using a sketch and/or using an input image(s) is needed to get a good quality output; never said anything else.
you can probably draw her yourself
1. false 2. speculation 3. it's not about whether one could but whether people a) did and b) licensed it in a compatible way. But most importantly it's false and irrelevant.
These are frequently lost when files are re-used.
so people think the image is a photo when it's a thousands-of-years-old person? Are there any other media on Commons that may get re-used in ways you don't like, say video of sexual intercourse, murder, anime, and other content that is available here more plentifully than the few educationally valuable images of a type that's already used in educational podcasts and documentaries? With educational, innocuous media censored, there is no way in 30 years we'll still be relatively free of censorship; COM:NOTCENSORED is already written inaccurately now. Prototyperspective (talk) 11:39, 27 February 2026 (UTC)
- Prototyperspective,
false
When you are forced to describe something in excruciating detail, drawing it yourself may really be easier. Even if you suck at drawing. At least humans understand context.
so people think the image is a photo when it's a thousands-of-years-old person?
Who knows what people think when provenance is lost? What if it's w:Elephant man? Do you expect teenagers to know when color photography was invented?
Are there any other media on Commons that may get re-used in ways you don't like, say video of sexual intercourse, murder, anime
This has nothing to do with what I like. If educational media of sexual intercourse gets distributed on Pornhub it doesn't suddenly start to corrupt knowledge. If media of a murder is posted on social media for clout and clicks it doesn't corrupt knowledge.
In such cases, corrupting knowledge requires malice. Adding false captions or context. Merely losing context doesn't corrupt knowledge. With AI slop, knowledge can easily be corrupted and no malice is required. Simply losing the context is enough. We delete an image, for whatever reason. A copy survives on Pinterest. No caption, no filename, no templates. Just the image. It gets copied to Fandom. Some local media or blog publishes it, and we copy it from them. Laundered. - Alexis Jazz ping plz 00:20, 1 March 2026 (UTC)
may really be easier
false in many or most cases
excruciating detail
if you don't know much about prompting I recommend not being very involved in enforcing your views onto the world. This is not how prompting works, it's not "excruciating", and it is in any case easier to do.
At least humans understand context.
same for this. If you don't understand how something works and is used, then please don't act like everybody has to be ruled by your restrictive rules. Understand it first and then bring nuanced, informed suggestions to a debate. Humans use AIs. Humans understand context. Hopefully this is clear enough.
Do you expect teenagers to know when color photography was invented?
this is absurd; they know it was not there thousands of years ago or 300 years ago.
If educational media of sexual intercourse gets distributed on Pornhub it doesn't suddenly start to corrupt knowledge. If media of a murder is posted on social media for clout and clicks it doesn't corrupt knowledge.
I was talking exclusively about files on Commons.
Simply losing the context is enough
people put the info into file titles and file descriptions and file captions where it's used; if they don't, then that's either a violation of policy or could be required.
With AI slop, knowledge can easily be corrupted and no malice is required.
With overly undifferentiated and heavy-handed knee-jerk reactions to a novel kind of media production method, people are corrupting free knowledge by dismantling core principles and rejecting a novel type of production that is increasingly used throughout society, particularly by people and organizations that are not in the top 1% of privilege and that use budgets efficiently. As a result, the free knowledge ecosystem doesn't get any of the large benefits and only experiences the downsides, further entrenching echo chambers and confirmation bias.
Some local media or blog publishes it, and we copy it from them. Laundered.
Not sure what you're talking about. For example, let's talk about works prompt-engineered by the uploaders. Also, the context is not lost, and such copied files are very rare and can still be deleted on the grounds of not being useful etc. Prototyperspective (talk) 12:22, 1 March 2026 (UTC)
Support per Grand-Duc. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 14:07, 26 February 2026 (UTC)
- Is there an example of an existing file that would be deleted due to this, or an already deleted file similarly? What does a borderline case that would not be deleted look like? Those examples will help us discuss more concretely. It appears that 0x0a (the proposer) vanished. Anyone else? whym (talk) 01:03, 28 February 2026 (UTC)
- One example would be File:Vladimir Putin with monkey (1173814355247235082).png. This image is currently in use on Wikibooks, but clearly violates COM:AIIP and is probably a COM:DIGNITY violation as well. Omphalographer (talk) 01:27, 28 February 2026 (UTC)
- We don't consider the dignity of the monkey. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 14:50, 28 February 2026 (UTC)
- I'm guessing the concern is the lack of the consent of Vladimir Putin. Thank you for the example. Politicians might warrant less protection, though. In Japan that's the case, at least for caricatures that are not too scandalous. whym (talk) 11:51, 5 March 2026 (UTC)
- Putin's dignity doesn't concern me. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 12:18, 5 March 2026 (UTC)
- As I noted in my close, Putin's dignity is not a concern for me either. Abzeronow (talk) 04:25, 7 March 2026 (UTC)
- Regardless of whether a specific politician warrants dignity based on personal preference, I believe having misinformation about politicians on Commons does not help the Wikimedia project. It's moon (talk) 05:46, 7 March 2026 (UTC)
- How is this image, which seems to be a caricature critical of that politician (their conduct), misinformation about a politician? Prototyperspective (talk) 13:20, 7 March 2026 (UTC)
- Human created caricatures often use exaggerated features and don't need proper descriptions or categorization to signal they are crafted for satire. AI deepfakes on the other hand tend to be hyperrealistic and introduce misinformation the moment they get reused outside of Commons. Furthermore AI slop is low-effort, can be mass-produced and often devalues artists' time and effort. I fail to see how that holds any educational value or fits within Commons:Scope. Some other examples - [1], [2], [3], [4]. It's moon (talk) 17:55, 7 March 2026 (UTC)
- These images can also use exaggerated features. And I'm more referring to images of this kind (featured in Caricature) where there is critical political content, not just mundane depictions of people with exaggerated facial features or similar which afaik isn't what caricatures refer to or require. The Putin example is clearly not a deepfake.
For people that are not long-dead, AI deepfakes are obviously a different and much more specific subject than what is discussed here (for long-dead people there are no photos available, so anything looking like a photo isn't viewed as one). The amount of effort or time required does not matter when the end result is of good quality and useful or, to get back to the subject, in actual use. Nevertheless, good-quality AI images with meaningful content need a lot of work for the idea/concept and the refining and prompting. The examples you linked are useful/notable because they were shared by relevant public figures, where the posting of these is the subject of some articles. Prototyperspective (talk) 22:06, 8 March 2026 (UTC)
Support per proposal. --ReneeWrites (talk) 15:33, 1 March 2026 (UTC)
Just a note that it looks like we had two opposite DRs closed today, relevant to this discussion. The Squirrel Conspiracy closed Commons:Deletion_requests/Files_in_Category:AI_artwork_of_historical_figures_by_Netha_Hussain as "AIP explicitly overrules INUSE" and Abzeronow closed Commons:Deletion_requests/Files_on_AI_art_caricatures_and_public_characters_in_AI_art as "AIP doesn't currently override the IN USE policy". :) This is not a challenge to either one, but a suggestion that perhaps they should be reopened until this discussion plays out. — Rhododendrites talk | 15:18, 1 March 2026 (UTC)
- Oh, so now these deletion requests are already closed; I intended to point them out here as current examples of why this needs clarification. Note that Commons:Deletion requests/Files in Category:AI artwork of historical figures by Netha Hussain are two DRs: the original one filed by me in April 2025, when I excluded any file that was in use at the time in order to honor COM:INUSE, and as we didn't have the AIP guideline back then; and the recent one by Dronebogus, who nominated the remaining files in the category "AI artwork of historical figures by Netha Hussain" although most of them were still in use at the point of nomination. An example of a file that was still quite widely in use before deletion was File:Alan Turing in watercolour.png, used in the Spanish Wikipedia and other language versions specifically as an example of AI-generated art, in appropriate articles. Although I'm very skeptical regarding AI-generated art and would consider most of it slop that should be deleted, I never had anything against using a small selection of such files for purposes such as articles about AI. The only argument against hosting the Turing image in this case is AIP, for it was in legitimate encyclopedic use otherwise. - My stance on the proposal is, I think,
Neutral, as I see a strong argument from both sides. On the one hand, I don't want heaps of AI-generated images of real people here, and would also frown upon projects that use them liberally in contexts that have nothing to do with AI. On the other hand, I think that the INUSE policy is very important and we basically should never overrule other projects, also from a practical point of view: If we start overruling other projects on a larger scale due to Commons-local policies such as AIP, they will start hosting more and more images locally in their projects, which defies one important purpose of Commons and its original goal, to serve as a common media repository for Wikimedia projects. Gestumblindi (talk) 16:38, 1 March 2026 (UTC)
- I completely agree, there has to be some consistency on how policy is enforced. It's moon (talk) 16:38, 1 March 2026 (UTC)
Support per proposal. If other projects don’t like it they can host this AI slop locally. In fact, why not just make this a speedy deletion rationale? --Dronebogus (talk) 20:17, 1 March 2026 (UTC)
- Some projects do not host files locally. m:List of Wikipedias having zero local media files, this would essentially impose that on them. Abzeronow (talk) 04:30, 2 March 2026 (UTC)
- We already force projects to host their own files by not allowing Fair use on Commons. I don't see where's the problem with that. It's moon (talk) 07:53, 4 March 2026 (UTC)
- For variable values of "we". Commons' mandate from WMF specifically does not allow us to host files on a "fair use" basis. This is simply not analogous. Even if we speak loosely of a "fair use file," there really is no such thing: a file may constitute fair use in a particular context but there is no such thing as a file that is "fair use" in and of itself. That is, there are contexts in which particular copyrighted material may be used, but this is a property of a specific use, not of the material in question. - Jmabel ! talk 19:07, 4 March 2026 (UTC)
- Yes, I can see that there is a conflict. It doesn't help that some of the same files were nominated in both DRs, so both TSC and I could only see half of the discussion around the affected files (consensus in mine was to keep, and Dronebogus's nomination felt like a test case to me). TSC's opinion on this matter may prevail via this current discussion, and I personally don't disagree much with TSC on AI. But COM:INUSE is an important policy, and behind it is a principle that Commons supports other projects, we don't dictate. Only for legal matters such as copyright and country-specific consent laws on photography do we use the power to delete. I can see a use case for a guideline or policy in which photorealistic depictions of real people should be treated the same as photography. I ruled the way I did because I followed the consensus and felt on balance that COM:INUSE at this moment in time outweighs COM:AIP, as we are still figuring out the exact contours of the guideline. But I will follow our community's wishes, and if the community says that AIP can overrule INUSE, then that is what I shall follow. Abzeronow (talk) 04:10, 2 March 2026 (UTC)
- To add a third comment-not-a-vote, while I think the AIP carve-out would need to be added to this page at some point, it's worth resolving the open questions at AIP first. For example, using Glamorous to see which AI-generated images of people are actually in use anywhere, I see a lot of historical figures. That was one of the areas of the guideline proposal that some folks wanted to exempt from the guideline. There wasn't enough discussion of it there to justify delaying promotion to guideline status as-is, but it is something worth resolving one way or the other. In other words, anyone who wants to exempt long-dead figures may want to get that proposal underway before this concludes. — Rhododendrites talk | 15:25, 2 March 2026 (UTC)
Weak oppose. This is already covered in COM:INUSE by "Files that are currently in use may still be subject to deletion for reasons beyond their scope, including but not limited to: * Files which are not free content; * Illegal content, such as child sexual abuse material; * Photographs of people that do not comply with the relevant guideline" (the last point links to COM:PEOPLE - personality rights - which is the same underlying principle for AIP according to the introduction of COM:AIP).
I say weak oppose because I'm not entirely opposed to adding a link to COM:AIP in that same bullet point as COM:PEOPLE (and additionally COM:AIP should be linked somewhere on COM:PEOPLE).
However, I'm seeing scope creep, which I strongly oppose, where the definition of COM:AIP seems to be expanding beyond the non-scope personality rights of COM:PEOPLE towards attempting to dictate educational COM:SCOPE itself. COM:INUSE clarifies that copyright and personality rights are non-scope restrictions, calling them "reasons beyond their scope", which may overrule a project's educational use of a file. My reading of the discussions that resulted in COM:AIP, and the text of COM:AIP itself, is that COM:AIP is a non-scope restriction, and is not making a determination of educational use or scope. -Consigned (talk) 20:14, 11 March 2026 (UTC)
- Coming back to this, I think my thoughts are a little more organized. I Support clarifying that, as an application of COM:DIGNITY (which already has an exception to COM:INUSE), COM:AIP does as well. It's just another example of how, from the text of COM:INUSE, "Files that are currently in use may still be subject to deletion for reasons beyond their scope". However, I Oppose the scope creep of applying COM:AIP to determine scope, e.g. to prevent inaccuracy when there is no COM:DIGNITY concern. COM:INUSE very clearly says that exceptions to it are for reasons "beyond their scope" - it should stay that way. -Consigned (talk) 22:01, 21 March 2026 (UTC)
Oppose: That's an editorial decision and should be up to each individual project to decide what to do. Commons should only enforce deletion when it is illegal to host the image for some reason, such as copyright, hate ideologies, etc.; the whole AI vs. non-AI debate is clearly beyond those extreme cases. Cambalachero (talk) 19:44, 13 March 2026 (UTC)
- @Cambalachero: Please understand that all image creation AIs on the market were trained on copyrighted images, and that their works are tainted by that. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 12:25, 14 March 2026 (UTC)
- I'm familiar with that argument, but IIRC the legal standing in the US is that AI-generated works are public domain, and no new laws or judicial rulings have changed that yet (and it's not guaranteed that they will). I also understand that what truly matters when discussing derivative works is the final result. For example: if I take a Superman comic book and copypaste a small part of his cape to retrieve the color and use it in a completely unrelated illustration, it wouldn't be a derivative work; and if I draw Superman myself, regardless of doing a 100% human-made work with no software or mechanical aids, it would be a derivative work. An argument may be made about the possession of such a large image database, but that wouldn't be a copyright-related argument, and thus better suited for a Reddit discussion than a Wikimedia Commons one. Cambalachero (talk) 00:58, 15 March 2026 (UTC)
but IIRC the legal standing in the US is that AI-generated works are public domain
— this might be different in other countries, and Commons needs to follow that too. Nakonana (talk) 13:03, 22 March 2026 (UTC)
- That also would not apply to files engineered by the uploader. This would affect files produced using AI tools where we either don't know where the prompter is located or the prompter is located in China or the UK (I think that's those countries). Prototyperspective (talk) 13:11, 22 March 2026 (UTC)
- It is true that generative AI output is in the public domain; however, the AI is often unable to give proper attribution to material it has been trained on, much of which is copyrighted or licensed under an attribution share-alike license. Wikimedia Commons cares about copyright even when a copyright owner does not. It's moon (talk) 02:27, 15 March 2026 (UTC)
- In any case, that's a discussion for another day. The discussion here is about taking an editorial decision for ALL projects, which is not what Commons should do except in the most extreme circumstances (and this one is not). Jeff derailed the discussion with the off-topic comment of the "AI images are actually copyrighted", and yes, I fell for it, but I realized my mistake. Cambalachero (talk) 03:48, 15 March 2026 (UTC)
- @Cambalachero: You brought up copyright. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 11:06, 15 March 2026 (UTC)
Support COM:AIP was always intended to override COM:INUSE. The Squirrel Conspiracy (talk) 01:02, 15 March 2026 (UTC)
- Even if that's the case, it's neither in the discussion proposal nor in the policy texts.
Additionally, I don't think core policy should be dropped this easily. Are we now adding lots of other individual exceptions for files of things many object to next, like, say, depictions of Muhammad or whatever? Prototyperspective (talk) 16:02, 16 March 2026 (UTC)
Proposal: clarify relationship between COM:INUSE and actual practice
Hello all,
I would like to propose a clarification to COM:INUSE, not primarily to change its intent, but to ensure that the written policy accurately reflects current practice.
A recent discussion, COM:INUSE not a suicide pact, demonstrates that, in practice, files that are in use on sister projects are still being deleted based on qualitative judgments (e.g. being considered “AI-generated nonsense”, “decorative”, or lacking meaningful educational value).
This appears to go beyond the current wording of COM:INUSE, which states that:
- files in use are considered in scope,
- even if they are of poor quality or appear to lack educational value,
- and that Commons does not override sister projects’ editorial judgment in such cases.
However, the actual application introduces an additional, implicit criterion: that the use must be considered legitimate or meaningful according to Commons standards, not just the project where the file is used.
This creates a discrepancy between:
- policy (strong deference to in-use status), and
- practice (independent Commons-side evaluation of that use).
I propose that COM:INUSE be clarified to explicitly reflect this, for example by adding wording such as:
Files that are in use on Wikimedia projects are generally considered within scope. However, such use may still be evaluated for legitimacy. Uses that are deemed purely decorative, non-meaningful, or otherwise not contributing to educational understanding may not qualify as valid “in use” for the purposes of this policy.
This is not about arguing for or against specific deletions or specific types of content.
It is about:
- transparency,
- consistency,
- and maintaining trust between Commons and sister projects.
At the moment, contributors—especially newer ones—are guided by a policy that suggests one standard, while encountering a different standard in practice.
This has already led to concrete consequences, such as:
- files being moved to local hosting on Wikibooks instead of Commons,
- adjustments to local guidance (see b:nl:MediaWiki:Uploadtext),
- and growing uncertainty about whether Commons will retain files that are actively used.
If Commons intends to apply a stricter interpretation of “in use”, that is entirely reasonable—but it should be clearly documented.
Otherwise, we risk creating a situation where Commons appears to defer to sister projects in theory while overriding them in practice.
Clarifying this would help align expectations and preserve the integrity of cross-project collaboration.
Kind regards, BeeBringer (talk) 13:49, 31 March 2026 (UTC)
- Having a consistent, robust policy is important. Deleting 3 or a handful of arguably low-quality files, in the subjective opinion of some contributors, among the over 130 million files on Commons is not important.
- Policy should be applied consistently, and things can be removed from use on the other projects; Commons does not editorialize, especially not without involving contributors to that project. INUSE doesn't mean bad files must be kept; it means some things need to be decided with the inclusion of, or on, the other projects. Prototyperspective (talk) 14:09, 31 March 2026 (UTC)
- Thank you, but I do not think this can be characterized as merely “3 or a handful of files” in any meaningful structural sense.
- There is an entire cross-wiki toolchain built around this exact situation. CommonsDelinker exists specifically to remove usages of files from Commons and other Wikimedia wikis after those files are deleted on Commons, so that pages on sister projects do not remain visibly broken. Meta describes this as a service brought to local wikis, and explicitly says that “file delinking” is the remedial action performed after a Commons file has been deleted due to policy infractions. It also maintains a searchable report of removals and renames. [1]
- Commons itself describes CommonsDelinker as a bot that “delinks deleted images from Commons and other Wikimedia wikis”. [2] On Meta’s small wiki toolkit page, CommonsDelinker is listed as removing links to files deleted at Commons, with a very large activity figure attached to it. [3]
- So my point is not that every deletion of an in-use file is common, nor that policy is routinely ignored in every case. My point is that cross-wiki removal of deleted Commons files is sufficiently routine and structurally expected that Wikimedia has built dedicated infrastructure for it. That makes it difficult to present these cases as if they are negligible edge cases with no practical significance.
- And this matters here because once Commons deletes such a file, the consequences are not theoretical:
- the file is automatically removed from pages on sister projects;
- local editors then have to notice that removal, decide whether to re-upload locally, replace it, or leave the page without it;
- and contributors on those projects may never have been meaningfully involved in the Commons-side judgment that triggered the removal in the first place.
- That is exactly why clarity is needed.
- If the actual practice is that Commons may sometimes override in use status based on an independent qualitative judgment, then that should be stated clearly. If instead the intention is that this should remain exceptional and coordinated with the affected projects, that should also be stated clearly.
- At the moment, the existence and function of CommonsDelinker show that the downstream effect on sister projects is real, systematic, and built into the workflow—not something too trivial to document transparently.
- Kind regards,
- BeeBringer (talk) 14:41, 31 March 2026 (UTC)
- Besides that, I would like to note that your characterization of this as “a handful of files” seems difficult to reconcile with prior discussions on your own talk page (User talk:Prototyperspective).
- From a brief review, there are multiple cases where files were explicitly kept due to COM:INUSE, even when concerns about quality or educational value were raised. For example:
- Given these cases, I find it difficult to understand the argument that such situations are negligible, as this appears to be something you have yourself encountered and engaged with in practice and that is only one user of so many.
- This suggests that whether COM:INUSE is upheld or overruled is a meaningful and recurring question in deletion discussions, not merely a rare or negligible exception.
- For that reason, I believe the question of consistency remains relevant and worth clarifying.
- Kind regards,
- BeeBringer (talk) 15:09, 31 March 2026 (UTC)
- Regarding CommonsDelinker: the uses could be on sandbox pages, talk pages, and user pages. That subjective concerns about quality were raised doesn't mean the policy shouldn't be adhered to.
the question of consistency remains relevant and worth clarifying
Policy application should be consistent. Only this way can it be safeguarded and stay robust through the future and not be subject to arbitrary exceptions when some people dislike this or that etc. Prototyperspective (talk) 15:13, 31 March 2026 (UTC)
- Thank you — I think we actually agree on the core point.
- I fully agree that policy should be applied consistently and not be subject to arbitrary or subjective exceptions.
- My concern is precisely that this does not appear to be how COM:INUSE is currently applied in practice. As seen in multiple deletion discussions, administrators and participants sometimes override “in use” status based on qualitative judgments (e.g. “decorative”, “AI slop”, etc.), while in other cases INUSE is treated as decisive.
- That creates uncertainty for contributors, because:
- the policy reads as a strong rule,
- but the outcomes do not always reflect that consistently.
- So the question becomes:
- should COM:INUSE be applied strictly in all cases, as you suggest, or
- is the current practice (where it can be overridden) the intended interpretation?
- In either case, I think we would benefit from making that explicit.
- If strict application is the goal, then deviations in deletion discussions may need to be addressed and corrected by administrators.
- If not, then the policy text should reflect the actual practice more clearly.
- Kind regards,
- BeeBringer (talk) 15:30, 31 March 2026 (UTC)
- The primary use of Commons:Delinker is when someone uploads a copyright-infringing work to Commons, which can't be kept on Commons or on many of the projects, which don't allow fair use. We need to stop deleting files that are in use. --Prosfilaes (talk) 03:20, 1 April 2026 (UTC)