Commons:Village pump


Shortcut: COM:VP

Welcome to the Village pump

This page is used for discussions of the operations, technical issues, and policies of Wikimedia Commons. Recent sections with no replies for 7 days and sections tagged with {{Section resolved|1=--~~~~}} may be archived; for old discussions, see the archives; the latest archive is Commons:Village pump/Archive/2023/12.

Please note:


  1. If you want to ask why unfree/non-commercial material is not allowed at Wikimedia Commons or if you want to suggest that allowing it would be a good thing, please do not comment here. It is probably pointless. One of Wikimedia Commons’ core principles is: "Only free content is allowed." This is a basic rule of the place, as inherent as the NPOV requirement on all Wikipedias.
  2. Have you read our FAQ?
  3. For changing the name of a file, see Commons:File renaming.
  4. Any answers you receive here are not legal advice and the responder cannot be held liable for them. If you have legal questions, we can try to help but our answers cannot replace those of a qualified professional (i.e. a lawyer).
  5. Your question will be answered here; please check back regularly. Please do not leave your email address or other contact information, as this page is widely visible across the internet and you are liable to receive spam.

# Title Comments Participants Last editor Last edit (UTC)
1 Special:UncategorizedCategories 14 5 Jmabel 2023-12-25 21:36
2 Random deletion of perfectly good files from Gallica 37 9 Rosenzweig 2023-12-20 13:01
3 AI images 196 25 Jmabel 2023-12-27 01:41
4 Image of the marble bust of Hannibal 8 5 Yann 2023-12-20 12:42
5 The possibilities of AI enhancement 10 9 Omphalographer 2023-12-27 02:09
6 Sanborn Fire Insurance Map upload project 14 4 Jeff G. 2023-12-22 12:15
7 Flags or insignia of non-state actors in conflicts 5 4 HyperGaruda 2023-12-20 20:18
8 Request for opinion on copyright status 3 3 Jeff G. 2023-12-20 22:22
9 Do stuffed animals… 2 2 Jeff G. 2023-12-21 03:09
10 Renewal of lost bot flag 4 3 Bjh21 2023-12-21 13:06
11 Help needed with Template:Philippines photographs taken on navbox 4 3 Auntof6 2023-12-21 20:30
12 Prompt template now available to record AI prompts 1 1 Sdkb 2023-12-21 16:15
13 Incorrect PNG previews of SVG files 4 2 Jeff G. 2023-12-23 10:38
14 Request translation for File:Baltic states territorial changes 1939-45 es.svg 5 3 Great Brightstar 2023-12-25 13:32
15 Deletion of Android 14's screenshot. 5 2 Randomdudewithinternet 2023-12-25 00:39
16 staff situation. 4 3 Jeff G. 2023-12-24 18:21
17 Searching for unreviewed licenses 2 2 HyperGaruda 2023-12-27 07:14
18 File:Israel's Genocidal Assault on the Gaza Ghetto (53289186330).jpg 3 2 Yann 2023-12-25 08:57
19 Google & Commons 3 3 Rosenzweig 2023-12-25 16:43
20 Category renaming (move) 10 4 An Errant Knight 2023-12-26 16:00
21 What's the name of this gesture ? 3 2 Simon Villeneuve 2023-12-25 22:07
22 Senkaku Copernicus Photo Sentinel-2A Photo 3 2 Artanisen 2023-12-27 00:52
23 Close request for category discussion 2 2 HyperGaruda 2023-12-27 07:07
24 Problems with Kit body universitario23e.png 1 1 IBryanDP 2023-12-27 00:10
25 Images in Category:Wyman-Gordon, Houstoun 3 2 Jmabel 2023-12-27 07:49
Broadwick St, Soho, London: a water pump with its handle removed commemorates Dr. John Snow's tracing of an 1854 cholera epidemic to the pump.
Centralized discussion
See also: Village pump/Proposals   ■ Archive

SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 1 day and sections whose most recent comment is older than 7 days.

November 26

Special:UncategorizedCategories

We now have 2,544 uncategorized (parentless) categories, down from about 8,000 at the beginning of September. At this point, most of the "low-hanging fruit" is taken care of. User:Billinghurst and I have done the bulk of the cleanup, although a few others have also helped to varying degrees. We could definitely use more help, most of which does not require an admin as such.

  • Most of the remaining listings are legitimate categories, with content, but lacking parent categories. They need parent categories and they need incoming interwiki links from any relevant Wikidata item.
    • A disproportionate number of these would best be handled by someone who knows Hungarian or Estonian.
  • Some categories just need to be turned into cat redirects ({{Cat redirect}}) and have their content moved accordingly (see the sketch after this list).
  • A few categories listed here will prove to be fine as they stand; the tool messed up and put them in the list because it didn't correctly understand that a template had correctly given them parent categories. Many of these are right near the front of the (alphabetical) list, and involve dates.
  • Some categories probably either call for obvious renaming or should be nominated for COM:CFD discussions.
  • Some empty categories (not a lot of those left, but new ones happen all the time) need to be deleted.
  • At the end of the alphabetical listing (5th and 6th page) are about 75 categories that have names in non-Latin alphabets. It would be great if people who read the relevant writing systems could help with these. Probably most of these are candidates for renaming.
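A minimal wikitext sketch of the cat-redirect case above, using hypothetical placeholder category names: on the superseded category page, replace the page text with
  {{Cat redirect|Drawings of water pumps}}
and then recategorize each member, changing
  [[Category:Water pump drawings]]
to
  [[Category:Drawings of water pumps]]
so that the content actually ends up in the target category rather than the redirect.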

Thanks in advance for any help you can give. - Jmabel ! talk 03:21, 26 November 2023 (UTC)

I'm a bit confused about something @Jmabel: I checked the page and some of the categories on there are, for example, Category:April 2016 in Bourgogne-Franche-Comté (through 2023), but these were created years ago in some instances and already had parent categories from the start. How do categories like that end up there? ReneeWrites (talk) 02:09, 29 November 2023 (UTC)
@ReneeWrites: Insufficient follow-through and patrolling, combined with out of control back end processes.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 02:48, 29 November 2023 (UTC)
@ReneeWrites: Actually, in this case this appears to be some sort of flaw in the software that creates the Special page. As I wrote a couple of days ago, "A few categories listed here will prove to be fine as they stand; the tool messed up and put them in the list because it didn't correctly understand that a template had correctly given them parent categories. Many of these are right near the front of the (alphabetical) list, and involve dates." It looks like today's run added a bunch of these false positives and that (unlike the previous bunch) they are more scattered through the list. I believe all of the 100+ files that use Template:Month by year in Bourgogne-Franche-Comté are on today's list; none of these were there three days earlier. That probably has something to do with User:Birdie's edits yesterday to Template:Month by year in Bourgogne-Franche-Comté; those are complicated enough that I have no idea what in particular might have confused the software. The categories still look fine from a normal user point of view, but the software that creates Special:UncategorizedCategories is somehow confused.
Other than that: we're a couple of hundred fixed or deleted categories closer to where we'd want to be, compared to a couple of days ago. - Jmabel ! talk 04:23, 29 November 2023 (UTC)
Server purges should fix this, but apparently they don't. Some categories that didn't appear last time after purging the cache have disappeared now, so I'm more confused as to what the problem could be, since IIRC the refresh time was after some pages were updated (it has problems when pages get all their categories from a template). There should probably be a Phabricator issue about this, although it's possible things will work fine once there are always just a small number of cats there, which seems increasingly feasible. Prototyperspective (talk) 12:35, 29 November 2023 (UTC)
@Jeff G., could you explain what "... out of control back end processes" means, so I can understand your comment? --Ooligan (talk) 16:54, 29 November 2023 (UTC)
@Ooligan: As I understand it, there are processes that run on WMF servers that run too long or get caught up in race conditions or whatever, and that get terminated after running too long. I think updating this special page may be one such process, sometimes. Certainly, updating the read / not read status of stuff on my watchlist seems that way, especially when using this new reply tool. Turning off the big orange bar before displaying my user talk page would be helpful, too. <end rant>   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 19:26, 29 November 2023 (UTC)
@Jeff G., thank you. --Ooligan (talk) 19:44, 29 November 2023 (UTC)
@Ooligan: You're welcome.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 20:11, 29 November 2023 (UTC)

Even with those 100 or so "Bourgogne-Franche-Comté" false positives, we are now down to 2079. Again, we could really use help from people who know languages with non-Latin scripts, all of which are grouped toward the end of the list. Also, Hungarian and Estonian, scattered throughout. - Jmabel ! talk 23:08, 2 December 2023 (UTC)

Now down to 1905, again including 100+ false positives. Still really need help from people who read Estonian, Hungarian, or languages with non-Latin scripts. - Jmabel ! talk 21:58, 7 December 2023 (UTC)

And now to 1701, again with the same number of false positives and still with the same need for help from people who read Estonian, Hungarian, or languages with non-Latin scripts. Those are probably now the languages for about half of the remaining categories. - Jmabel ! talk 00:23, 14 December 2023 (UTC)

Now 1471, with the same provisos and the same needs for help. - Jmabel ! talk 18:42, 19 December 2023 (UTC)

We are making major progress. As of today, we are down to 1031 (and seem to be rid of the false positives, so maybe the progress looks more dramatic than it is, but it's still nice). Only a few left in non-Latin alphabets. Still need a bunch of help with Estonian and Hungarian.

Thanks to whoever fixed the "false positives" thing. - Jmabel ! talk 21:36, 25 December 2023 (UTC)

December 04

Random deletion of perfectly good files from Gallica

There are literally tens to hundreds of thousands of books and files that overzealous editors will start in on deleting thanks to this apparently random edit of PD-GallicaScan by User:Rosenzweig.

A) I'd revert/undo it myself except I apparently lack the permissions to do so (?). Anyone know what's involved or where I sign up for those?
B) Can one of y'all undo it in the meantime?

Even if we are officially deprecating this template, first, the phrasing should reflect that almost all of the affected files are old and in the public domain and simply request that the license be changed, and second, whoever decided on this should be the one shunting over 1.4+ million pages to whatever they think the appropriate license is, not purging that many files from the service for no particularly good reason. Is there any evidence we even have files from Gallica that are so recent that PD is an issue? Gallica shouldn't be hosting most of those online itself. Are there any?
C) There should've been more discussion somewhere to link to before this change went through. That should be somewhere on the template's page or its talk page.
D) If the general response here is to pound sand and that it's great that we're deleting all these perfectly good files... well, for the several thousand of those files that I've been working with—from the 18th century for what it's worth—is there any product like HotCat where I can mass replace their PDs? Someone needs to be doing that with the new change and it can't be by going through 1.4 million files by hand.

— LlywelynII 00:10, 4 December 2023 (UTC)

@LlywelynII: Please see COM:VPC#Deprecate Template:PD-BNF and Template:PD-GallicaScan. You may also use {{Editrequest}} at Template talk:PD-GallicaScan.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 00:16, 4 December 2023 (UTC)
@Jeff G.: Thanks for the links but, if you're already that knowledgeable, do you know where I need to go to just get the permission added to my account to fix things like this on my own? (Obv not going to remove the change if there has been more discussion that Rz just forgot to link back to, but the wording could be much better and much better formatted and a link provided to that discussion. See also my long edit history and general trustworthiness.) — LlywelynII 04:47, 4 December 2023 (UTC)
@LlywelynII: You can post on COM:RFR, and if that lacks any section for the right you want, COM:AN or Commons:Administrators/Requests.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 12:32, 4 December 2023 (UTC)
Similarly, there are multiple thousands of images I've probably edited from Gallica that now need to preemptively get this fixed to avoid 'helpful' deletion. Are there any mass PD template-shifting add-ons or programs to make this reasonable work? — LlywelynII 04:52, 4 December 2023 (UTC)
@Rosenzweig: FYI.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 00:21, 4 December 2023 (UTC)
@LlywelynII: "Is there any evidence we even have files from Gallica that are so recent that PD is an issue?": Yes, there is. Gallica is hosting newspapers and magazines up to ca. the 1950s, where the BNF itself says they are "consultable en ligne" (viewable online), not "in the public domain". These contain texts and images by authors who died less than 70 years ago. Compare Commons:Deletion requests/Files in Category:Robert Fuzier, Commons:Deletion requests/Files in Category:Paul Poncet, and others from the same date. The wording says might be deleted, not will. If you think the specific wording of the deprecation needs to be changed, feel free to chip in at COM:VPC#Deprecate Template:PD-BNF and Template:PD-GallicaScan (as already mentioned by JeffG, thanks). Regards --Rosenzweig τ 00:40, 4 December 2023 (UTC)
My experience is that planting enormous THIS FILE MIGHT NOT BE IN THE PUBLIC DOMAIN AND MIGHT BE DELETED AT ANY MOMENT headers is just begging "helpful" "editors" to go around deleting absolutely everything they can, frequently with bots and without any care. If you've found a few newspapers, put the header on those items. At minimum, the template replacement should be rephrased to This template has been deprecated and should be replaced with ... like normal, instead of begging for people to go around destroying things for no particularly good reason.
If there's a separate location for feedback, remember to put it in your template edits or edit summaries to help fix problems like this. — LlywelynII 04:42, 4 December 2023 (UTC)
You're over-dramatizing this. Nowhere does it say that files might be deleted "at any moment". And I fail to see how you come to the conclusion that users might be deleting "absolutely everything they can" with bots (!). --Rosenzweig τ 05:46, 4 December 2023 (UTC)
I agree with Rosenzweig here. When over-broad mass-deletions come in as DRs, there is usually a quick "this is too many files with different issues for one DR", and it gets closed with no action.
There were similar issues with some of 's massive batch uploads. I continue to track his user talk page for what is DR'd. So far, I can't recall seeing any "obvious keeps" get DR'd, and most have been such obvious copyvios that in other circumstances they would likely have been speedied.
We should think about categories that will be useful in sorting this out. Besides any possible maintenance categories on the files themselves, we should probably put a new subcat Category:Gallica-related deletion requests somewhere under Category:Sorted deletion requests; I'm not sure where, but we'd want to make it easy for someone to track these.
By the way, is there any evidence that there have been bad deletions on this basis? - Jmabel ! talk 06:24, 4 December 2023 (UTC)
@Jmabel: See Category:Gallica-related deletion requests. --Rosenzweig τ 07:50, 4 December 2023 (UTC)
According to https://templatecount.toolforge.org/?lang=commons&name=PD-GallicaScan&namespace=10#bottom there are 1,403,251 transclusions. That's an impractical amount of work without bot help, and I'd suggest undoing the deprecation until a bot can, in the first instance, spot every extant use where other copyright explanations exist, and replace it with a suitable note explaining, briefly, what the template said outside of the PD declaration. In many cases, I believe this was just a specific variant of PD-scan. Adam Cuerden (talk) 07:12, 4 December 2023 (UTC)
Simply undoing the change "until a bot" can replace something (when will that be?) won't discourage people from uploading still protected Gallica scans like they are doing now, which will only make the problem worse. I don't think that's a good idea. And yes, bots will almost certainly be needed to help here. --Rosenzweig τ 07:18, 4 December 2023 (UTC)
I think over a million files literally losing a PD tag is a bigger problem. Files without a PD tag get put up for deletion. Prolific uploaders could be looking at thousands of files to review at once. Adam Cuerden (talk) 07:23, 4 December 2023 (UTC)
Why would prolific uploaders be looking at thousands of files to review at once? No one is requiring uploaders to do that and nothing in the template says anything even slightly along those lines either. Really, they can just ignore the change completely if they want to and literally nothing will happen. Except clear COPYVIO will be deleted going forward, but that's probably about it and doesn't involve anyone reviewing thousands of files. --Adamant1 (talk) 07:30, 4 December 2023 (UTC)
At this point, not a single one of those files has actually "lost a PD tag". The tag is still there. It's just deprecated. --Rosenzweig τ 07:50, 4 December 2023 (UTC)
I've changed the template a bit so that the collapsed section with the deprecated tag is now expanded by default. --Rosenzweig τ 07:59, 4 December 2023 (UTC)
Are you aware of User:AntiCompositeBot? If the tag isn't listed in a way that categorises an image as public domain, I'm pretty sure it will be automatically listed for deletion. Adam Cuerden (talk) 08:21, 4 December 2023 (UTC)
If that bot didn't list files (or rather mark them as missing a valid license tag) tagged with PD-GallicaScan before, it won't mark them now. Why should it? Nothing in the actual file pages has changed. --Rosenzweig τ 08:31, 4 December 2023 (UTC)
Missed a closing > there, but no matter. I believe it's down to PD-GallicaScan's inclusion on lists of valid PD tags, things that it could readily be removed from now. Adam Cuerden (talk) 08:39, 4 December 2023 (UTC)
Commons:Bots/Requests/AntiCompositeBot 4: “The bot uses a query [...] to find potentially eligible files that are in Category:Files_with_no_machine-readable_license, were uploaded in the last 1 month, and are not currently tagged for deletion. The restriction on upload time is to prevent the bot from tagging files that may have previously had a license or otherwise need human review.” --Rosenzweig τ 08:42, 4 December 2023 (UTC)

@Rosenzweig: thank you for creating Category:Gallica-related deletion requests. All six of those I see at this time look valid, or at least plausible, to me. Yes, it is too bad that we have been trusting that mere inclusion on this site meant things were in the public domain, but clearly it does not.

One further suggestion: when URAA is the basis for proposed deletion, probably they should be marked with {{deletionsort|URAA}} as well. Note that per Commons:Licensing, "A mere allegation that the URAA applies to a file cannot be the sole reason for deletion" (italics in the original). - Jmabel ! talk 20:10, 4 December 2023 (UTC)

Apparently you need to "subst" {{Deletionsort}} when adding. - Jmabel ! talk 20:14, 4 December 2023 (UTC)
@Jmabel: Try {{subst:deletionsort|URAA}}.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 06:46, 5 December 2023 (UTC)
  • Sorry if it was not clear that what Jeff G. says here was exactly the meaning of my last comment. - Jmabel ! talk 18:52, 5 December 2023 (UTC)
@Jmabel: I know about the whole URAA thing (and I'm not making "mere allegations" where the URAA is concerned; I check the country list at en:WP:Non-U.S. copyrights if I'm not sure). But frankly I wonder whether the URAA tracking category for deletion requests (and several other DR tracking categories) still makes sense at all. The FOP categories, yes: whenever South Africa actually does introduce FOP we can use the South Africa FOP category to restore files. But what's the point of the URAA tracking categories these days? The Supreme Court case (Golan v. Holder) was over a decade ago, the URAA was upheld. Does anyone still think this will change? --Rosenzweig τ 07:37, 5 December 2023 (UTC)
@Rosenzweig: I don't honestly know. I just know there has never been any overt decision to stop tracking them; feel free to push for such a decision. - Jmabel ! talk 18:53, 5 December 2023 (UTC)
@Rosenzweig Maybe we will have a solution to host that content again, as with WikiLivres in the past... In that case it is useful to know what was deleted due to the URAA while being PD in the country of origin, so it can be restored and transferred. Darwin Ahoy! 15:12, 11 December 2023 (UTC)
@DarwIn: Since the URAA was held to be legal, we need to have a conversation about deleting everything that allegedly violates it.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 19:22, 11 December 2023 (UTC)
@Jeff G. But we had it years ago, no? Wikilivres was in South Korea, if I recall correctly, so for some time it was used to host that content. I don't think we are still keeping any of them here at Commons, at least willingly. Darwin Ahoy! 20:19, 11 December 2023 (UTC)
@DarwIn: Per 17 USC 104A, I mean to remove the sentence "A mere allegation that the URAA applies to a file cannot be the sole reason for deletion." from COM:URAA, and all that entails.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 23:50, 11 December 2023 (UTC)
@Jeff G.: but that seems to me to be precisely to the point. You can't just say "I think this violates URAA". You have to say why it would have been in copyright in its home country in 1996, and why U.S. copyright would not have since expired. - Jmabel ! talk 01:40, 12 December 2023 (UTC)
@Jeff G.: The thing is that the URAA is tricky. The URAA date is 1996-01-01 for most countries, but different for others (Seychelles: 2015-04-26), and many countries had a copyright term of 70 years (pma) on the URAA date, but some had shorter terms (France had 58 years and 120 days), or had different terms for different types of material (Austria had separate terms for photographs). So it's basically as Jmabel says. --Rosenzweig τ 07:31, 12 December 2023 (UTC)
At least IMO the problem with the sentence is that it gives people an ability to argue that any DR based on the URAA is invalid "because allegations", even if the claim that the file violates copyright is valid. The same goes for the whole bit in Commons:URAA-restored copyrights saying "it was determined that the affected files would not be deleted en masse", BTW. For some reason people just leave out the "en masse" part of that and treat it like no image can be nominated for deletion because of the URAA, period, regardless of whether it's an "en masse" deletion request or not. So the wording could probably be clearer in both cases. --Adamant1 (talk) 17:14, 12 December 2023 (UTC)
The URAA issue must be discussed in a separate section, not this one, as the issue is very broad and affects a large proportion of images depicting contemporary sculptural public monuments, as well as a couple of monuments that are public domain in their respective no-commercial-FoP countries but NOT in the U.S. JWilz12345 (Talk|Contrib's.) 04:05, 20 December 2023 (UTC)

Most files from Gallica are NOT copyvios. I also noticed the giant sign advocating for deletion of perfectly good files that are scans of OBVIOUSLY hundreds of years old maps and manuscripts. Thanks for quickly removing that stupid sign. --Enyavar (talk) 18:59, 19 December 2023 (UTC)

You didn't really read the discussion, it seems. There is no "giant sign advocating for deletion of perfectly good files that are scans of OBVIOUSLY hundreds of years old maps and manuscripts", just a text saying that the file might not be in the public domain, and that if it is, a different license tag should be used. Only if it is found that it really is not in the public domain might it be nominated for deletion. --Rosenzweig τ 13:01, 20 December 2023 (UTC)

December 06

AI images

Category:Giovanna IV di Napoli is being overwhelmed with AI images that look to me like mediocre illustrations from children's books. I personally think that adding more than a couple of images (at most) like this to a category becomes a sheer liability, and I'd have no problem with saying "none at all, unless they are either in use on a sister project or about to be," but I don't know if we have consensus on that. I know there has been a bunch of back-and-forth on what is and is not acceptable by way of AI images, but I don't think I've seen a clear consensus on anything other than that we don't want a bunch of AI-generated quasi-porn.

I'm inclined to start a deletion request, but I thought it might be more useful to bring the discussion here where it would be seen by more people and we might get a more meaningful survey than a DR would draw. Pinging @Beaest as a courtesy; they uploaded these images and claim them as copyrighted "own work" for which they have given a CC-BY-SA 4.0 license. (Another question: is there any basis under either the law of Italy, where I presume from their talk page Beaest lives, or the U.S., where Commons is hosted, for a human to claim copyright on a work generated by Microsoft Bing? To the best of my knowledge, both countries have traditionally considered computer-generated art to be in the public domain, unless there is a great deal of control of the output by the person claiming copyright, far more than giving a prompt to Bing. Also, with no indication even of what prompt was given, these seem to me to be particularly lacking in any claim to being in Commons' "educational" scope.) - Jmabel ! talk 06:03, 6 December 2023 (UTC)

Within our current guideline they are definitely public domain. On the number of photos: they should at least be moved to a subcat, also to avoid accidental use as a real painting. GPSLeo (talk) 06:45, 6 December 2023 (UTC)
The images are interesting, but I kind of wonder what educational purpose they serve when they're likely to be historically inaccurate and that's probably not what Joanna of Naples looked like either, although the images clearly present themselves as both. But what exactly are we being educated about here, that AI can roughly recreate 15th-century art? Anyway, I'd put images like these along the same lines as "fake" flags and the like. Although more good-faith, they're still something that we probably shouldn't be screwing with because of the high chance of disinformation (intentional or not) and the lack of verifiability. Otherwise it's pure fan art, but being passed off as more legitimate (again, intentionally or not). And if someone searches for her on Google and these images come up, it probably won't be clear they were AI-generated. Regardless, why not just upload actual, original artwork of her at that point? I'm sure it would be PD. --Adamant1 (talk) 07:16, 6 December 2023 (UTC)
These are high-quality and they mainly portray Ferdinand II of Naples, where they seem to be quite realistic. Adamant1 already named one educational purpose, but there are also more. In contrast to that, I've yet to hear why hundreds of humorous professional-grade porn images by the same photographer, adding to an already vast collection, would be "realistically" educational – I'm okay with these, just not with them being categorized in cats about children's games, foods, or directly into "Pikachu" and so on. Please stop marginalizing AI art and maybe start worrying about things that are actually problematic on WMC. Concerning the license of these images, that should indeed probably be changed, and including the prompts would be much better but isn't and shouldn't be required. A clearly visible, prominent note about an image being an AI artwork is something I think is necessary too but is currently not required. However, these images also make it quite clear via the file description that they are AI-generated. Prototyperspective (talk) 12:14, 6 December 2023 (UTC)
"These are high-quality and they mainly portray Ferdinand II of Naples where they seem to be quite realistic." Maybe the images look realistic, but it's not a realistic portrayal of Ferdinand II of Naples by any means. The person in this image looks nothing like the one in this one. I don't really care if the art style looks real, but the people depicted in the images should at least be close to the actual historical figures. And there's nothing educational at all about an image that looks nothing like what it's supposed to be about. In no way am I trying to marginalize AI art by saying so, either. I'm simply saying that we shouldn't allow AI-generated images of real people that look nothing like them, since there's nothing educational about an image of Ferdinand II of Naples that looks absolutely nothing like him. I could really care less if someone wants to upload an AI-generated image of a dragon or whatever, though. I'd also probably be fine with these images being hosted on Commons if they were just uploaded as fantasy images of random people from the 15th century, since that's essentially what they are. --Adamant1 (talk) 15:09, 6 December 2023 (UTC)
Good points, and I agree; I guess where we differ is that a) I think they do look similar, albeit it could be better (quite good already for current AI generators with so few pics to train on, though), and b) that's not necessarily a reason for deletion or for objecting to these images entirely; it may be relevant to portraits, but when the focus of an image is a historical scene/event rather than the person, it matters much less what the person looks like.
(Moreover, prior artworks aren't photographs and are known to differ significantly from how the people actually looked.) Prototyperspective (talk) 15:20, 6 December 2023 (UTC)
I am replying in my native language, assuming you will use a translator; I would do the same (and I could not understand everything in your discussion, but I will try to answer anyway). The images I created with artificial intelligence are not limited to Ferdinand II of Naples and Joanna IV of Aragon, but since we are talking about them, I will start by saying that I derived their physical appearance from their real portraits, of which there are very few: for Ferdinand II of Aragon, a young man with a pronounced jaw, full lips, a straight nose, a short forehead, a raised head (and it did not give me the crooked mouth, I'm sorry), and flowing hair (his distinctive feature), as he appears on this medal. Of Joanna IV, on the other hand, we know that she was blonde and perhaps also had blue eyes. The setting is that of medieval Naples; it seems to me that some of the requested monuments (the Maschio Angioino and the medieval port) also came out fairly close. If the images are in the public domain, I don't think there is any reason to delete them, also because I will not create any more on these subjects. I uploaded more than one so that people can freely choose which to use. Beaest (talk) 12:47, 6 December 2023 (UTC)
The AI images look nothing like Ferdinand II of Naples or Joanna IV of Aragon whatsoever. Do you not see why that might be a problem when the claim being made here is that they are high-quality, quite realistic images of the historical figures? --Adamant1 (talk) 15:09, 6 December 2023 (UTC)
The images generated by Beaest are inaccurate and inexact: these images contain many errors in history and anatomy (for example, in one image of Naples there is a monument that did not yet exist, and some people have six fingers on one hand). In my opinion these images should be deleted. Fresh Blood (talk) 15:16, 6 December 2023 (UTC)
Create a DR for that specific image then, or modify the image to remove that part, or add Template:Factual accuracy to point that out, just like is commonly done for paleoart.
Things like extra fingers or an object that shouldn't be there can often be fixed easily, and it would be good if the uploader did that before uploading – one could also ask the uploader to do so. See here for how this usually can be fixed in a minute or so. @Beaest it would be nice if you could do that for the image(s) referred to by this user. Prototyperspective (talk) 15:24, 6 December 2023 (UTC)
@Prototyperspective I don't mind doing it but this is not good Fresh Blood (talk) 16:04, 6 December 2023 (UTC)
I'm inclined to agree. These images aren't actually of the subject, and so don't necessarily have any intrinsic historical or educational significance. It is as authentic and useful as if I grabbed a pencil and drew a picture of Paris. GMGtalk 15:41, 6 December 2023 (UTC)
Paris, 500 years ago, yes. And there are tons of drawings of cities. I value these images not for the people depicted in it but the ancient settings, the artistic aspects, and the way they were made. Prototyperspective (talk) 15:49, 6 December 2023 (UTC)
I'm not a very good artist. Always been on the more music side of things, and so I may be liable to do something silly like draw someone with six fingers as per above.
At the end of the day AI generated images are little more than fan fiction. Don't get me wrong, my novella about James T Kirk waking up as a woman should absolutely be published and people should love me for it. But it doesn't necessarily have a lasting historical and educational significance in terms of Commons. GMGtalk 16:18, 6 December 2023 (UTC)
@Adamant1 I don't understand how you can claim that those images look nothing like Joanna IV, when the only existing portrait (on Wikimedia) of this queen is a doodle devoid of any physiognomic connotation. Don't you think your judgment is slightly biased? Or have you perhaps mistaken posthumous portraits made two hundred years later, and mere attributions, for authentic portraits? As for Ferdinand II, who is the king of Naples and not the king of Spain, I have already said that I relied on the medal linked in the discussion, which is not public on Wikimedia. The resemblance is definitely there.
@Fresh Blood we have already noted on the Italian Wikipedia that artificial intelligence is imprecise, and that the same standard of judgment you are applying to it you do not apply to modern-era paintings depicting historical events of past centuries, which are full of anachronisms starting with the clothing and the monuments. Those are fine, but artificial intelligence is not? Wikimedia has different operating rules from Wikipedia.
@Prototyperspective I can do it starting tomorrow; for now I am busy. I will look into it carefully. Beaest (talk) 15:44, 6 December 2023 (UTC)
"How you can claim that those images look nothing like Joanna IV, when the only existing portrait (on Wikimedia) of this queen is a doodle devoid of any physiognomic connotation." By that logic I could upload an image of a potato with a face drawn on it, say it's Jesus Christ, put the image in Category:Jesus Christ, and you would be totally fine with that because it's not like historical paintings of Jesus Christ are accurate anyway. Right. I have to agree with Fresh Blood and GreenMeansGo here. The images should just be deleted. The suggestion that there's nothing wrong with the images because paintings of them at the time weren't 100% accurate either is totally ridiculous. --Adamant1 (talk) 16:00, 6 December 2023 (UTC)
Don't dance around the issue: you asserted something that no one is in a position to assert, given the absolute scarcity of portraits of Joanna IV and the very poor quality of the existing ones. You denied the resemblance perhaps without even checking the portraits, just because of your hostility towards artificial intelligence. Moreover, your logic doesn't hold: of Joanna we know that she was blonde, that she (apparently) had blue eyes, and that she wore the Catalan braid (coazzone). These elements were combined, and an effort was made to render them as plausibly as possible. If I had taken the image of a man dressed as a woman, then your argument would make sense. Beaest (talk) 16:16, 6 December 2023 (UTC)
"Just because of your hostility towards artificial intelligence." Not that you're doing anything other than deflecting from addressing what I said, but I actually have a Flickr account where I upload AI images, instead of trying to upload my made-up fan art to Commons when that's not what the site is for. This isn't a personal file host. --Adamant1 (talk) 16:38, 6 December 2023 (UTC)
As far as I know, Wikimedia accepts any type of image (with a suitable license) that has some instructive or representative purpose. If there are people who upload photos of their own genitals on the pretext of showing what the human body looks like, I wonder why I cannot upload AI-generated images to show how the AI works and how it responds to requests to depict certain historical figures, all the more so since I read in the Wikimedia guide that this is accepted. Nobody here wants to pass them off as real paintings, nobody has ever denied their imprecision; the fact is that this is how artificial intelligence works at the moment and those images still have an instructive purpose. Beaest (talk) 16:44, 6 December 2023 (UTC)
Images of people's genitals are deleted as out of scope all the time, so that's not the counter-argument you seem to think it is.
"Nobody here wants to pass them off as real paintings, nobody has ever denied their imprecision; the fact is that this is how artificial intelligence works at the moment and those images still have an instructive purpose." Then re-upload or rename/categorize them as images of random people from the 15th century instead of acting like they are of real people. I don't think anyone would care. I know I wouldn't. And I don't see why you would either, if this is purely about "how artificial intelligence works at the moment" or whatever, and not just you trying to pass them off as images of historical figures when that's not what they are. --Adamant1 (talk) 16:55, 6 December 2023 (UTC)
I am not trying to pass off anything at all; I have spent entire days cleaning Wikimedia of fake portraits of historical figures that were in fact portraits of unknown people and whose attributions had been made completely at random. The titles I gave are purely descriptive and reflect the purpose for which those images were generated. If I asked for Ferrandino of Aragon at the battle of Seminara, that is what it should be called. It could also be renamed by putting "artificial intelligence" in the title, but I am not going to rename all the images one by one. Moreover, I still see the obscene images, and they are even categorized, so they have not all been deleted. Beaest (talk) 17:32, 6 December 2023 (UTC)
Just because there's obscene material on Commons doesn't mean it's not being deleted. They aren't mutually exclusive. Regardless, you can say you aren't trying to pass off anything, but then you also named the files after historical figures, put them in categories for those historical figures, and tried to argue they were perfectly fine because images of them at the time weren't 100% accurate either. There's no reason you would have done any of those things, especially the last one, if you weren't trying to pass the images off as representing real historical figures. Anyway, I nominated the images for deletion. So I guess it's up to the community to decide if we should allow for these types of images or not. --Adamant1 (talk) 18:13, 6 December 2023 (UTC)
If you knew who I am and the historical research work I have done on Wikipedia, you would not even dream of making such an accusation. But I won't even waste time replying to you. My good faith is already EVIDENT from the mere creation of categories like "Ferrandino of Aragon in AI-generated images", which are meant to do anything but hide. Beaest (talk) 18:39, 6 December 2023 (UTC)
And it is "Wikipedia in Italian", @Beaest; I'm keeping up. Fresh Blood (talk) 16:01, 6 December 2023 (UTC)
The CC-BY-SA 4.0 license is pretty much copyright fraud. Trade (talk) 18:10, 6 December 2023 (UTC)

  Comment I nominated the images for deletion at Commons:Deletion requests/Files in Category:Giovanna IV di Napoli by Bing Image Creator. Trade makes a good point against hosting the images on top of everything else. --Adamant1 (talk) 18:15, 6 December 2023 (UTC)

Commons:Free media resources/fi: "(...) Please always double check if the media is really freely licensed and if it is useful for Wikimedia Commons projects." Since much of the AI stuff on Commons seems to fail the "useful" part, I would suggest deleting it, regardless of whether it's PD or not. Otherwise we drown in that stuff and contribute to the planet exploding - courtesy of carbon emissions from data centers. Alexpl (talk) 22:01, 6 December 2023 (UTC)
I explained, with very specific examples, how these images are useful. I also noted that these >600 images are not useful but they are kept anyway, which is fine. Nobody is arguing that we don't delete any AI art, just not the highest-quality, actually useful ones, please. Now even the carbon emissions of data centers seem to be a valid rationale for censorship here. Prototyperspective (talk) 22:13, 6 December 2023 (UTC)
@Prototyperspective: From what it sounds like, Beaest has already had problems with uploading these or similar images to Italian Wikipedia, and there's clearly opposition to them here. Plus it's not like people can't have Microsoft Image Creator or any other AI generator create the same exact images instantly and en masse if they want to. So what useful purpose do they serve if they can't be used in a Wikipedia article and anyone can instantly create thousands of versions of the same exact images in a matter of minutes if they wanted to? The images don't even illustrate a novel usage of the technology either. It's just generic, lower-quality versions of already existing paintings. Really, Beaest could have skipped all this by just uploading the originals. --Adamant1 (talk) 04:42, 7 December 2023 (UTC)
@Adamant1 I am replying only to refute a fallacious idea: artificial intelligence is not able to recreate the same image even if the person requesting it is the same and the prompt is exactly the same. The images are unique; if they are not saved they are lost, so no outside person will ever be able to obtain the same result simply by asking.
Now I would really like to know which originals you are talking about; show them to me. They do not exist. The AI openly declares that it does not draw on any existing painting, unless you are the one asking it for, for example, the David. Beaest (talk) 05:12, 7 December 2023 (UTC)
The word "unique" there is doing a lot of heavy lifting. I look at this the same as "limited edition" art prints or NFTs. In both those cases every image is supposedly "unique" but only contains minor iterations of a general base image. I don't think every image is "unique" just because they're slightly different, though. In fact, I can get pretty similar results to yours just by putting "15th century oil painting of young King Ferrandino of Aragon and Queen Giovannella greeting cheering people in front of a castle close up portrait" into Image Creator. So no offense, but the images you uploaded are extremely generic. That's kind of baked into AI artwork. All it does is paint by numbers based on preexisting artwork. And as to what artwork I'm referring to, there are plenty of images out there of knights riding horses in battle or kings and queens standing in a courtyard. But I'm sure you'll claim yours are different and more unique than those because you put "Ferrandino d'Aragona" in the file names. Anyway, I'd still like Prototyperspective to answer the question I asked them about how the images are useful, so I'd appreciate it if we left it at that so they can. --Adamant1 (talk) 05:38, 7 December 2023 (UTC)
"Ci sono un sacco di immagini di re, regine e cavalieri". Grazie. Stai parlando con una che ha passato anni a setacciare la rete cercando immagini che fossero simiglianti a questi personaggi storici, e non ne ho trovato praticamente nessuna. Quindi no, non faccio prima a caricare gli originali perché non esistono. E se anche trovassi il ritratto di una persona somigliante, non lo troveresti mai nella scena richiesta, mai coi vestiti richiesti, mai in compagnia delle persone richieste. Fai tutto facile perché non ti interessi di ricostruzione storica, evidentemente.
E non hai nemmeno capito il significato di unico. Le immagini sono uniche perché l'intelligenza non è in grado di ricrearle neppure se glielo chiedi. Avrai sempre un risultato diverso. Non c'entra nulla con l'edizione limitata che sono 500 copie uguali di uno stesso disegno. Qui ti dà una immagine soltanto e basta. L'unico modo per copiarla è condividerla.
E a ben vedere non mi ci porta nessuno a mettere il frutto di ora di lavoro, selezione e ritocchi a disposizione di tutti anonimamente. Meglio che il merito rimanga a me. Beaest (talk) 07:13, 7 December 2023 (UTC)Reply[reply]
I think I was pretty clear about it, but since you didn't seem to get my point the first time: I wasn't talking about limited edition prints where it's 500 copies of the same image, but limited edition prints where "new" editions are just variations on the same base image or theme with only minor differences. Anyway, I'd appreciate it if you dropped it like I asked you to, so Prototyperspective can have a chance to answer my question about how these images are useful. The endless side tangents about things no one disagrees with, and which have nothing to do with this, aren't really helpful. --Adamant1 (talk) 07:46, 7 December 2023 (UTC)
"Anyone can instantly create thousands of versions of the same exact images in a matter of minutes if they wanted to": an unsubstantiated, false claim. People claim all the time how easy it is to create AI art. Go ahead and create the same quality of images if that's so easy. Furthermore, one needs to know which things work. "The images don't even illustrate a novel usage of the technology either." They do, as I already explained. "Lower quality versions of already existing paintings": as far as I know that is not the case; the images used are on WMC, which aren't higher-quality paintings of these. "How the images are useful": I explained that already. Unlike in other DRs where images are kept for no apparent reason other than being allowed by current policy, I made three or more very specific examples of how they could be used in educational ways. I explained it multiple times already, but you always ignored it. This is more than enough: "I…use the image for my blog that talks about life in the middle ages or whatever or a Wikipedia article about AI art depicting historical figures and how current generators currently fail" Prototyperspective (talk) 09:06, 7 December 2023 (UTC)
"Go ahead and create the same quality of images if that's so easy." As I've pointed out once already, I pretty easily created images that were similar to the ones uploaded by Beaest in Microsoft Image Creator by using the prompt "15th century oil painting of young King Ferrandino of Aragon and Queen Giovannella greeting cheering people in front of a castle close up portrait." The guy in the picture in particular looks almost the same. So yes, it is easy to create the same quality of images. They aren't at all unique or hard to reproduce like both of you are acting.
"Use the image for my blog": any image can be used for a blog post about anything. I was hoping for something more specific and geared towards a general audience than vague assertions about how you think the image would be useful for your personal blog. Commons isn't a personal file host. Maybe the images could be used in a Wikipedia article about AI art, but it sounds like Beaest has already had trouble adding the images to the Italian-language Wikipedia. So sure, AI-generated images can be included in a Wikipedia article about AI artwork, but these specific images apparently aren't going to be used in any articles. --Adamant1 (talk) 09:23, 7 December 2023 (UTC)
@Adamant1 you are the same person who claimed that the images of Joanna IV did not resemble the portraits, when you did not even know which portraits exist. You are not reliable when it comes to judging the quality of the image. As the proverb says: to a teetotaler, all wines are the same. If you do not know the historical period, the protagonists and the details, there is no point in you trying to evaluate them. They will all seem equally valid to you, but they are not. You do not even recognize the facial features. Beaest (talk) 09:44, 7 December 2023 (UTC)
It does not matter. It's still nothing but an unsubstantiated claim. If that specific prompt works well, that is a nice find, and there were hardly any images of that kind here before. The ease of creation is not, or not necessarily, a factor. I explained a specific educational use-case and you did nothing to address it but misquote one of several in order to deride it. Stop marginalizing AI art and start adhering to Wikimedia Commons policies, please. Prototyperspective (talk) 10:19, 7 December 2023 (UTC)
So we can assert: there are no real AI-related policies. Based on current rules, Commons will become a massive dump for AI stuff: images, plus audio and video in the near future. Until the foundation runs out of money for hosting, that is. Bye bye. Alexpl (talk) 10:34, 7 December 2023 (UTC)
All of that is not true.
Moreover, my concerns about hosting unexpected inclusions in pages of non-amateur porn were dismissed – why does it suddenly matter in this case, when AI art is finally enabling us to have CC BY images for subjects for which there are nearly none in the public domain? Prototyperspective (talk) 11:17, 7 December 2023 (UTC)
Why delete porn? At least those guys usually don't pretend to be "old" or "original". Just keeping the stuff that scammers try to upload to Alamy and the like is kind of boring. Jokes aside: there have to be easily understandable rules for all AI stuff, other than PD or, worse, CC BY. But that will become pretty obvious soon enough. Alexpl (talk) 13:50, 7 December 2023 (UTC)
"It's still nothing but an unsubstantiated claim." Try the prompt yourself. Or I can always upload the image to Commons so we can compare them. I can guarantee that King Ferrandino has essentially the same look as he does in Beaest's images, though. This isn't magic. Bing Image Creator uses a base template that essentially every image of the same person and setting is based on, with minor variations depending on what prompts you add. That doesn't inherently change that images of King Ferrandino will probably look the same or extremely similar even if people add things to the prompt. More than likely, Beaest's images turned out how they did despite the extra descriptors, not because of them. --Adamant1 (talk) 11:06, 7 December 2023 (UTC)
@Adamant1 there is no need to guarantee anything; if you share the images here via a link, we can all verify their quality. Beaest (talk) 18:18, 7 December 2023 (UTC)
I don't know what you mean by "verify their quality" since that's not what I was talking about, but I'd rather not give you and Prototyperspective more fodder to argue about. So I think I'm good for now. Maybe I'll upload the image after things calm down a little and other people have a chance to give their opinion on the subsection, though. But I don't really have anything else to say about it at this point, especially considering the way you and Prototyperspective have been treating me over the whole thing. --Adamant1 (talk) 18:30, 7 December 2023 (UTC)
@Adamant1 someone who has nothing to hide would not hesitate to share these images, if only to prove their own claims.
P.S. I have not treated anyone badly and, above all, I have not disparaged other people's work, nor made contemptuous comments about images uploaded by other users by calling them children's drawings. Beaest (talk) 18:44, 7 December 2023 (UTC)
Whatever you say, Beaest. Why not drop it? --Adamant1 (talk) 18:55, 7 December 2023 (UTC)
@Adamant1 If you think they are not useful, then you could avoid making comments about the lack of resemblance or about other artistic details, which are your personal opinions. "Only minor differences" can be said by someone who has never used artificial intelligence and does not care about achieving perfection. Sometimes it takes hundreds of attempts, of changes of wording, and sometimes it takes a great deal of luck, before you get a satisfactory result. Here the scene is perfect, but the figure is not very similar. Here he looks very similar, but is dressed badly. Here everything fits, but he came out with six fingers, or his sword is missing. Sometimes you manage to obtain the absolute perfection of the requested image, and you can forget about it ever regenerating it, I won't say identically, but even just similarly! Minor differences indeed! Beaest (talk) 09:07, 7 December 2023 (UTC)

General discussion about AI on Commons

Let's move this discussion to the initial question: try to get a clear consensus on AI images on Commons, what is acceptable and what is not. My thoughts:

  1. About copyrights:
    1. Fact: Computer-generated art is in the public domain, but there may be exceptions.
    2. But there might be a problem: what was the source the AI tool used to make an image? Did it just copy it from another image (copyrighted or PD), combine copyrighted images and those that are in the public domain, or did it create it entirely by itself?   Question Has this been discussed before?
  2. Are AI images educational? My opinion:
    1. AI illustrations of historical persons, situations, locations and so on: only as illustrations in novels and other stories where imagination and fantasy are more important than facts and truthful details. Is this part of the "educational purpose" of Commons? I doubt it.
    2. We do not accept images/photographs of every contemporary artwork on Commons either, only of artworks by established artists, and there are perhaps other criteria, am I right? So why should we accept all AI images that look like art? Who can judge the quality of AI images?
    3. AI illustrations for other purposes, can be useful. I think of diagrams, theoretical models, explanations of how things work or how to make something (like in manuals, guides and handbooks), abstracted drawings of for instance tools and architectural elements, perhaps icons. (Note: this is not an exhaustive overview.)
  3. Recognizability: All AI illustrations should be clearly recognizable as such.
    1. I plead for a message in every file with an AI illustration, preferably by a template (see the sketch after this comment). I saw that nowadays in the Upload Wizard you have to tick a box to indicate that you are uploading an AI image. Perhaps a template can be linked to such a check mark.
    2. They should be all in a (sub) category of Category:AI-generated images.

--JopkeB (talk) 11:52, 7 December 2023 (UTC)Reply[reply]
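To make points 3.1 and 3.2 concrete, here is a minimal, hypothetical wikitext sketch of how the description page of an AI-generated upload could be tagged; the file details, generator and user name are invented purely for illustration, and which tags a future guideline would actually require is exactly the open question raised above. One plausible combination uses the existing {{PD-algorithm}} license tag plus a subcategory of Category:AI-generated images:

  == {{int:filedesc}} ==
  {{Information
   |description = {{en|1=AI-generated illustration (hypothetical example, shown only to demonstrate tagging)}}
   |source      = Generated with DALL-E by the uploader; the prompt is recorded here
   |date        = 2023-12-07
   |author      = [[User:ExampleUploader|ExampleUploader]]
  }}

  == {{int:license-header}} ==
  {{PD-algorithm}}

  [[Category:AI-generated images]]

In principle, the Upload Wizard checkbox mentioned in 3.1 could simply prefill such a license tag and category, which would cover both points at upload time.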

On point one, at least with Dall-E they trained it on a mix of public domain and licensed works, which is probably why it's really good at creating images that look like copyrighted characters. Regardless, it's probably best not to allow images created with Dall-E, since there's no way to confirm whether the original images were PD or not (I assume it would violate copyright to re-use images that are similar to pre-existing artwork or characters even if the AI-generated image itself isn't copyrightable). --Adamant1 (talk) 12:21, 7 December 2023 (UTC)Reply[reply]
  • 1.2 In general, AI generators create artwork based on training on the image output of all of humanity, and that is the case even if the image looks similar to some existing one; it wouldn't work any other way. And yes, it has been discussed before, if you count me bringing up that img2img issue twice and being ignored; I don't know if there's a further discussion about it elsewhere.
  • 2.1 WMC hosts a lot of art, and it can be educational in a myriad of ways, such as information about art styles, art movements, subjects depicted in the image, and so on. Is the educational purpose of Commons humorous/artistic nonamateur porn? I doubt it, but nevertheless these images have been kept over and over. There are many more ways they can be educationally useful, and I already described many specific ones, although we probably don't know all the potential educational use-cases. For example, an AI-generated hamburger may look useless, but since there's no other CCBY image of how advertisements depict fast-food burgers, it can be used to illustrate how ads portray them. Don't assume we can easily anticipate all the many use-cases; the specific ones I mentioned should be more than enough.
  • 2.2 Nobody is arguing we should accept all. Do you have a source for the claim that not all photographs of contemporary artwork are allowed? I thought none were allowed if the artwork is not CCBY and the image is taken in a way where you get a good view of the artwork.
  • 2.3 Please stop marginalizing AI art and treating it in a special way where suddenly our existing practices and policies don't matter anymore.
  • 3.1 Very much agree and suggested exactly this at the Upload Wizard improvements talk page.
  • 3.2 Very much agree. I worked a lot to implement this in contrast to the people complaining about it. I also asked that porn and nude people must be categorized into a subcat of Nude people but apparently only AI art is considered to require such maintenance.
Prototyperspective (talk) 12:27, 7 December 2023 (UTC)Reply[reply]
My above counterpoints have been largely ignored. --Prototyperspective (talk) 01:14, 13 December 2023 (UTC)Reply[reply]
No one ignored your counterpoints. You just repeatedly dismissed the responses as "some myth/misinfo/misunderstanding" when people did reply. You can't just refuse to listen or otherwise dismiss everything anyone says and then act like no one responded to you. --Adamant1 (talk) 01:58, 13 December 2023 (UTC)Reply[reply]
Please stop marginalizing AI art and treating it in a special way I mean... it kind of is special. The copyright issues alone are almost completely unlitigated. You can't very well expect people to plug their nose and treat it like a human with a camera, backed by a century of precedent regarding ownership of the work. GMGtalk 15:48, 7 December 2023 (UTC)Reply[reply]
I agree with JopkeB and his concern about the usefulness and accuracy of AI-generated content. AIs are tools, and like any tool, they can be used well or badly. Imho, inaccurate pictures should be deleted without any blame placed on the use of AI, but also without the fear of losing a masterpiece, adopting the same approach used for the several hundred pictures crowding the categories for 'actors' and 'actresses'. --Harlock81 (talk) 22:14, 7 December 2023 (UTC)Reply[reply]
Yes, Photoshop can also be used in problematic ways. Art is characteristically not accurate. Welcome to the 15th century, where nearly all art portrayed realistic scenes, and to the upcoming theocratic totalitarian/oligarchic enforcement of w:Realism (arts). Concerning categorization, I agree that they should not be categorized in misleading ways. For the same reason I don't consider it okay to categorize porn in children's games or foods cats just because the name of it is written on the body, which is currently being done. And several hundred pictures crowding the categories about 'actors' is also provably false. Prototyperspective (talk) 22:19, 7 December 2023 (UTC)Reply[reply]
I don't understand the reason for such a note when you can go and take a look: 1966 files in Category:Actors, 2400 files in Category:Actresses, most of them unused because they were uploaded just for promotional articles that were deleted some time ago. --Harlock81 (talk) 23:06, 7 December 2023 (UTC)Reply[reply]
How is that relevant to AI art – those are not AI-generated; I thought you were talking about AI images. That low-quality images crowd out more relevant and higher-quality images is exactly what I pointed out here earlier and suggested should be one of the top priorities to fix. I'm currently thinking about how to describe my suggested changes in a better way. Deleting AI art as useful as the above does not help that cause; it's just an additional problem. Prototyperspective (talk) 23:21, 7 December 2023 (UTC)Reply[reply]
I'm thinking about making a proposal so anyone who brings up porn in a non-porn discussion automatically gets a one-day block. AI on Commons is far too complex and discussable for it to be derailed by someone who brings up porn over and over again.--Prosfilaes (talk) 21:31, 8 December 2023 (UTC)Reply[reply]
Except that it's a tool where not even the people who created it know what it's doing or how it's doing it. We just need one jurisdiction in the entire world to say that AI art is derivative, and it nukes everything, because we have no way of knowing what work it's drawing from. Saying, as a number of people have said, that there is simply a bias against AI isn't really a response that addresses the underlying issues of what it is and how it works and how it's fundamentally different. GMGtalk 13:58, 8 December 2023 (UTC)Reply[reply]
@GreenMeansGo:   Done, please see COM:VPC#Court in Beijing China ruled AI pic copyrightable   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 14:08, 8 December 2023 (UTC)Reply[reply]
@Jeff G.: that decision doesn't say that AI art is derivative, it says it is (at least in some cases) copyrightable in China. - Jmabel ! talk 19:44, 8 December 2023 (UTC)Reply[reply]
@Jmabel: It's a reason to not accept such images, at least from China.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 19:56, 8 December 2023 (UTC)Reply[reply]
Or, more precisely, if they are from China they need to be free-licensed by the owner of the copyright. - Jmabel ! talk 20:03, 8 December 2023 (UTC)Reply[reply]
I respect your opinion immensely, but I think you may be missing the punchline there. If it's derivative anywhere, it's derivative everywhere so long as you don't know the source. GMGtalk 20:27, 8 December 2023 (UTC)Reply[reply]
@GreenMeansGo: Did I say otherwise? I was commenting on the fact that the Chinese case had nothing to do with derivative work; it was a ruling that it is possible under Chinese law to copyright the output of generative AI. - Jmabel ! talk 22:10, 8 December 2023 (UTC)Reply[reply]
It will be interesting to see how that works in practice or if other countries follow suit. My guess is that at least the United States won't, but who knows. --Adamant1 (talk) 22:14, 8 December 2023 (UTC)Reply[reply]
Sorry. I was replying to Jeff. Just trying to make the point that jurisdiction becomes meaningless when the scope is the entire web. GMGtalk 23:08, 8 December 2023 (UTC)Reply[reply]
we have no way of knowing what work it's drawing from That is irrelevant and just a talking point of AI art critics trying to sue some money out of the companies building the generators. They are built via training data of billions of images, just like everything you have seen with your own eyes makes up the experience you leverage whenever you create something. A lot of things you saw were copyrighted – does that mean you now have to pay a hundred thousand artists every time you draw something? That's just one example of what the implications of generative software based on training data are. I'm saying the anti-AI bias is here, not that the underlying issues are stacked against AI – it's only a couple of profit-oriented people and some sulking artists trying to make it appear as if it's derivative, when it works like other machine learning systems where such issues were never raised. Did you get some cents because ChatGPT trained on some text you put online without licensing it under CCBY yet? It does not work like that. It's people's heads being stuck in the Middle Ages. Prototyperspective (talk) 20:29, 8 December 2023 (UTC)Reply[reply]
I've generated plenty of images that have parts left over from originals that shouldn't be there. For instance watermarking in the corner, spots of the image that were clearly cropped from preexisting photographs and not computer generated, etc. etc. Stable Diffusion has also clearly generated either exact, or essentially exact, scenes from movies, almost to the point where I think it's a screenshot. So even if they are trained on billions of images, that doesn't mean it combines those images every time it generates a new one, or that it doesn't reuse source material. Say it was trained on millions of photographs of airplanes, but you ask it for a particular airplane flying over a specific area of land. It's not going to generate that image based on the millions of images having to do with other planes and locations. If I ask it for an image of a cult movie, it doesn't give me an image based on every other movie in existence. It will give me one from the four or five images it was trained on of that specific movie. And more importantly, it will probably be a derivative, because the AI just wasn't trained on enough images to come up with something original. --Adamant1 (talk) 20:41, 8 December 2023 (UTC)Reply[reply]
Those are not cropped from preexisting photographs. Should I start from the basics of how diffusion works, or can you educate yourself first please, instead of making grave decisions based on false opinions of how these things work? Those are not "parts left over from originals"; it's what the AI found to match the prompt, generated there based on its training data and the seed. Then ban those movie scenes; obviously if it has a lot of pictures of that scene and you precisely prompt it toward achieving that result, it may be able to generate something looking very similar. I've not seen any such image here and you have also not linked one. Some AI-generated art was already deleted based on copyright concerns, as a movie-scene-lookalike image would be. And that is fine. So where's the problem? And yes, exactly that's why a lot of AI fan art was deleted, proving how things are working very well. Should we now ban all image uploads because they could look like a movie scene? It's like talking to a wall. Prototyperspective (talk) 20:52, 8 December 2023 (UTC)Reply[reply]
You'll have to take my word for it. I'm not going to upload COPYVIO to Commons just to prove a point. But I know what I'm talking about, and just acting like I'm lying isn't a counterargument. No one who has any knowledge of the technology would dispute that it creates derivatives. But I can tell you right now I've had Dall-E create a 100% realistic image of Henry Cavill from Superman. Regardless, what's your explanation for there being watermarks and signatures in the images if they aren't there because it's copying preexisting photos? Or are you going to argue it just inserts watermarks and signatures into images randomly for no reason? --Adamant1 (talk) 21:10, 8 December 2023 (UTC)Reply[reply]
I thought you found it elsewhere. I didn't ask you to upload it here, but to link something from somebody else or upload it to a site like imgur if you made it yourself. If you're referring to the movie-scene lookalike or the leftover parts: I addressed that, and all you did was make strawman arguments like saying I claimed you lied; please read what I wrote. Unreadable watermarks not present in any single image are there because they train on 0.1 billion images or so with watermarks. Every second sentence is some myth/misinfo/misunderstanding. If there are ten thousand images of a specific movie scene, all with the same terms, and you enter these terms as a prompt, you may get an exceptionally rare lookalike, which we can and possibly should delete. Prototyperspective (talk) 21:25, 8 December 2023 (UTC)Reply[reply]

There are three fundamental issues with AI-created images, some of which JopkeB touched on above:

  • Accuracy: AI creations are not based on knowledge as human creations are; they cannot incorporate specific sources or discuss artistic decisions. There is zero guarantee that the output will be accurate, merely that it will superficially resemble the intended output. Yes, human illustrators are also fallible, but they are capable of sourcing, discussing, and editing their work. That allows for provenance of information to be tracked, like an error being the result of a similar error in a reference image. AI art programs can't tell you which of thousands of images that error came from. The result of this is that every AI image is a potential undetected hoax, with no ability to extend good faith as with human creators. Worse, as soon as the image is used anywhere that doesn't link back to the Commons file page, the AI origin of its creation will often be forgotten. Users and search engines treat Wikimedia projects as trustworthy, and will assume that these images are factual.
  • Scope: We only allow artworks when they have a specific historical or educational value - whether that is due to the artwork or creator being notable, being a good example of a particular style or technique, or so on. We do not allow personal artworks by non-notable creators that are out of scope; they are regularly deleted as F10 or at DR. Because AI works are not useful as educational or historical illustrations due to accuracy issues, they are no different than any other personal artworks.
  • Rights: While emerging law seems to treat AI works as PD, there are still rights issues:
    • It is possible (and common) for AI works to contain fragments of source images, including of the many nonfree images in their dataset. Unlike typical copyvios, these are almost impossible to trace: the copied work could be any of the images in the AI dataset, rather than an image of the same subject as copyvios generally are. (For example, a face in an AI image could be copied from any face in any image in its dataset.) Additionally, the copied portion may be only a portion of either original or AI work, making it more difficult to detect with automated methods.
    • Similarly, the AI may replicate images of copyrighted subjects, such as buildings in non-FOP countries or advertisements in the background of a photograph. The same concerns about detectability apply.
    • Most of these datasets include stolen images (including from Commons) that were used in violation of their copyright or licensing terms. We should not be encouraging the production of unethically produced AI images by hosting them.

The combination of these means that hosting of AI-produced images is fundamentally incompatible with the purpose of Commons, and we should not accept most AI-produced works. (There are of course exceptions – images to illustrate facets of AI art production, use of ethically-sourced AI to produce works like heraldry that inherently involve artistic interpretation of a specification – but these are specific exceptions to the general rule.) Pi.1415926535 (talk) 06:49, 8 December 2023 (UTC)Reply[reply]

The issue with stolen source material is debatable; at least, in a US court it has been stated that this is not a problem, at least in cases where the generated material doesn't compete with the original. Another point is that newer models will likely have addressed the problems with shady source data (by using PD material, licensed materials, or self-generated materials), so this issue is something which will solve itself sooner rather than later, also because of this. --Zache (talk) 07:01, 8 December 2023 (UTC)Reply[reply]
I tend to agree with Pi.1415926535: hosting of AI-produced images is fundamentally incompatible with the purpose of Commons, unless there is a clear statement of which sources have been used, which, as Zache wrote, there will be in the future. In that case AI images on Commons still have to comply with rules about scope and accuracy, and I hope with recognizability as pointed out above. JopkeB (talk) 07:59, 8 December 2023 (UTC)Reply[reply]
I think that there should not be differentiation between AI-generated and human-manipulated images in cases where historical accuracy matters and the images could rather straightforwardly be considered fakes. This includes how images are categorized etc. (i.e. categories containing images of real places and persons should not be flooded with fake images), and differentiation between real and generated images should also be made at the category level. --Zache (talk) 08:41, 8 December 2023 (UTC)Reply[reply]
@Zache: Do you mean "composite" images, where a human has combined elements of real photos, should be allowed? Alamy offers those, but it doesn't seem quite right. Alexpl (talk) 09:26, 8 December 2023 (UTC)Reply[reply]
I mean that last year I found badly photoshopped 1960s-style advertisements which had been added to Wikipedia articles, some with changed images and some with modified texts. When I saw them I felt that they should be deleted as fakes, and it was a bad thing that they were in the same categories as real scanned ads. In this context I don't think I would like AI-generated works that try to pass as genuine either, given that I didn't like the manually created human fakes. No matter how good they are. --Zache (talk) 09:37, 8 December 2023 (UTC)Reply[reply]
Strongly agree with all of this. I'd add that:
  • Most AI-generated images are uploaded "on spec", with no specific use case in mind. The vast majority of these images never go on to be used. Given the relatively low amount of effort required to generate these images, there's no reason for this to be the case; they should be generated and uploaded on an "as-needed" basis to satisfy specific requirements from projects.
  • Having AI-generated images on Commons is not harmless. The presence of AI-generated images can have the deleterious effect of discouraging uploaders from contributing freely licensed real images of subjects when an AI-generated version exists, or of leading editors to choose a synthetic image of a subject over a real one.
  • The justification that AI-generated images can be used as examples of AI-generated images is wildly overused. It cannot be used to justify the inclusion of every such image, or to justify uploading large sets of similar images.
Omphalographer (talk) 19:47, 8 December 2023 (UTC)Reply[reply]
they cannot incorporate specific sources or discuss artistic decisions […] human illustrators are also fallible, but they are capable of sourcing, discussing, and editing their work.
So false I don't even know where to begin. The images are not made by entering some sentence, generating some image and being done with it. You change the prompt, generate many times until you have an image you find suitable, and then you can also edit it through ordinary or unconventional means. Moreover, who ever said it has to be "accurate"? Again, since when are we in the totalitarian realism art genre era? Art is not always intended to be "accurate". Please see Category:Visual arts by genre as a starting point to learn more about what art is, what its subjects are, art styles and more. Why the heck do you think humans who generate AI art are not capable "of sourcing, discussing, and editing their work"?!
I won't even start to address all the other misinformed falsehoods in your comment because there are too many. Just one: AI art programs can't tell you which of thousands of images that error came from because it is not based on a few images; it's based on billions. It would not work otherwise. You don't even understand the fundamental basics of how these things work. It trains on billions of images and learns to associate terms with visual output like a specific object. If you make AI art, it's you who is trying to achieve a certain result that looks sufficiently good and/or realistic. Please don't make any decisions on such a fundamental misunderstanding of what these things are. This is getting really unconstructive. Hosting of AI-produced images is fundamentally compatible with the purpose of Commons and a great historic boost to the public domain, despite a few misinformed, biased editors who don't know much about it. Prototyperspective (talk) 20:40, 8 December 2023 (UTC)Reply[reply]
Art is not always intended to be accurate; art is usually not within scope of Commons, either. Of all the paintings in the world, we only accept the works by notable artists or that have historic value. Users uploading their own art, except for notable artists, is generally out of scope. In a few cases, art, AI or otherwise, is useful for showing something that we couldn't have a picture or a historical painting for, but I'm thinking Cthulhu, not a living person who had actual features. While historical paintings may be inaccurate, they at least show some reflection of how some notable person saw the subject, and maybe how a culture saw it.--Prosfilaes (talk) 21:42, 8 December 2023 (UTC)Reply[reply]
The number and types of files in (subcategories of) Category:Visual arts show that this is false. It may be especially useful for Cthulhu-type artistic illustrations, but it can also be useful for many other things, such as showing what an art style looks like.
I get that the pictures discussed here have some specific title, description, category and reference to an actual human. However, that does not mean the value of the image is limited to that. As I explained earlier, I value those images not for any historical figure depicted (it could be a random person) but for the ancient settings, the artistic aspects, and the way they were made, all of which is educational, and certainly more educational than many other DR-kept images. Such images don't necessarily show how a culture saw it, but they can be engineered/designed so that they are useful for illustrating how some culture, past or present, imagined things; it's also worth noting that the AI generators trained on a sizable fraction of human visual culture. Would it be better if those images were not titled in a way that suggests the focus/subject of the image is an artistic portrait-type portrayal of some actual historical person? I do think so. Prototyperspective (talk) 01:10, 13 December 2023 (UTC)Reply[reply]
@Prototyperspective: I think you could make an argument that specific images that display key moments or works in the history of AI-generated artwork have educational value. That's quickly lost when what's being uploaded has minimal value when it comes to actually showing where the technology is or was at certain phases of its development. To be more precise, not every single image created by an AI art generator has value with regard to where the technology is at the time. Otherwise you're just creating an arbitrary, de facto standard where anything created by an AI art generator, no matter what, is worthy of inclusion "because technology", which you'd have to agree would be ridiculous. "Well, I know it's just an image of a blue line, but AI. So..."
And it's not like that's even why these images were uploaded in the first place. You're acting like it was, after the fact, because you have no better argument. I'd look at this similarly to photographs taken with cameras, though. There are some photographs, taken by some cameras, that are notable for playing an important role in the medium somehow, be it the first photograph taken with a pinhole camera or an iconic photograph of an especially notable event. Does that mean every photograph is de facto educational or allowed "because cameras", though? No. And I'm sure people felt the same way when cameras were invented as they do about AI artwork now. But what about when the novelty wears off? Then 99% of this will be no better than shovelware. Just like 99% of photographs are now. --Adamant1 (talk) 01:52, 13 December 2023 (UTC)Reply[reply]
It makes no sense to reply to you. You ignore my points but strawman-like fully distort them and/or raise other points that I already addressed earlier such as implying – probably without thinking about, being actively engaged in, or being experienced with AI art – that they can only be educational for illustrating AI art capabilities but not other purposes. "because technology" is not what I wrote about, in fact that is what you are arguing – ban it because xy technology is used...what's next, anything created or modified with Photoshop? Prototyperspective (talk) 17:09, 13 December 2023 (UTC)Reply[reply]
Come on, Prototyperspective. I've said multiple times now that I have a Flickr account where I upload AI-generated artwork. Honestly, I wasn't even in support of a ban to begin with, but I would be now just so that totally unhinged, bad-faith people like you will shut up about it. Does anyone really want to deal with this kind of demented nonsense every time they nominate an image of artwork for deletion? Just make it so unused AI artwork can be speedy deleted and be done with it. Otherwise you're just asking for this kind of disturbed, angry badgering every time this comes up. --Adamant1 (talk) 18:24, 13 December 2023 (UTC)Reply[reply]
Yes, you've said that, but you haven't linked it, so it does not support your point. Once again you ignored my points. I'm fine with deleting large swathes of AI-generated media, but not high-quality, educationally valuable ones, and certainly not outright indiscriminately banning it early on. Do you really think saying multiple times now that I have a Flickr account where I upload AI-generated artwork addresses any of my points, or that saying that is a point in itself somehow? Prototyperspective (talk) 18:45, 13 December 2023 (UTC)Reply[reply]

Hosting of AI-produced images is fundamentally incompatible with the purpose of Commons ---- +1 sounds like a reasonable policy. Alexpl (talk) 09:26, 8 December 2023 (UTC)Reply[reply]

I would say that there are useful use cases for AI-generated images. Icons, placeholder and decoration images are good examples of useful use cases (i.e. cases where we do not need historical accuracy). One gray area is AI-enhanced / retouched images, where AI is used to improve the resolution and quality of a source image. However, by definition GANs, for example, generate content from thin air, and even if it looks credible it may not be accurate (and they also hallucinate). Should the guideline in these cases be that if one edits the image, the new image should be uploaded under a different filename, with a link to the original image in the image description (in addition to a note on how the new image was edited compared to the original)? --Zache (talk) 09:44, 8 December 2023 (UTC)Reply[reply]
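As a hedged illustration of the workflow Zache describes (the file names and user name are hypothetical, and whether a guideline would mandate exactly these templates is undecided), an AI-upscaled derivative uploaded under a new filename could credit the original and describe the changes roughly like this:

  {{Information
   |description = {{en|1=AI-upscaled version of [[:File:Example original photo.jpg]] (hypothetical example); resolution doubled, no content added or removed}}
   |source      = [[:File:Example original photo.jpg]]
   |author      = original author; AI upscaling by [[User:ExampleUploader|ExampleUploader]]
  }}
  {{Retouched picture|Resolution doubled with an AI upscaler; no other changes.}}

The existing {{Retouched picture}} notice is used here on the assumption that it is an acceptable place for the "how it was edited" note; a dedicated AI-enhancement tag would serve the same purpose if one were adopted.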

  • As I stated in a current DR[1], I think that, well-intentioned or not, it is basically all Commons:out of scope; unverifiable, and in the worst cases, misleading and harmful, not to mention potential copyright infringement issues. Using AI to upscale existing images or similar is one thing, creating fake "historical" or nature images from scratch is another. This practice should be entirely banned, in my opinion, unless maybe for limited use as joke images on user and talk pages, while clearly labelled as such. FunkMonk (talk) 11:02, 8 December 2023 (UTC)Reply[reply]
What steps are necessary to make an AI-ban a rule/policy on Commons? Alexpl (talk) 13:53, 8 December 2023 (UTC)Reply[reply]

@Alexpl: the main Commons project page on this topic is Commons:AI-generated media; the talk page of that project page appears to be the main place relevant policy has been discussed. There is probably a fair amount above that ought to be copy-pasted to that talk page, which would probably be the best place for this discussion to continue. - Jmabel ! talk 19:57, 8 December 2023 (UTC)Reply[reply]

It seems like there's at least a preliminary consensus to ban it. So if it were me, I'd wait a couple more days to make sure everyone who wants to has commented. Then have @JopkeB: summarize the main points and formally close it. Probably at least the main points should be copy-pasted to the talk page after that, but I don't see why it shouldn't stay here for a little longer and then be properly concluded by JopkeB when everyone is done discussing it. But I don't think the whole thing needs to be transferred over to the talk page, or really that it needs any more discussing after this once the main points are nailed down. The policy can be updated accordingly. --Adamant1 (talk) 20:13, 8 December 2023 (UTC)Reply[reply]
There is absolutely no preliminary consensus to ban it. The points made so far were addressed and refuted, and were largely misinfo with nothing backing them up and no reference to any WMC policy. My points have not been addressed; reasons and truth don't matter here, it's just headcounts of very few misinformed users in something like an echo chamber. Prototyperspective (talk) 20:45, 8 December 2023 (UTC)Reply[reply]
You refer to your claim that AI work is based on "the image output of all of humanity" and that AI uploads won't really be a problem under existing rules. I don't see what to address there - because we can't tell either of those things for sure. Neither can you.
We need a vote on the subject. The more formal, the better. Alexpl (talk) 21:39, 8 December 2023 (UTC)Reply[reply]
  1. AI generators are trained on billions of images from the Web (and elsewhere; possibly a little less). That is a fact. Source
  2. For deep learning, the software needs to train on large amounts of images. It would generate piles of lines and colors for a prompt if it were based on just a few images. That's just not how it works or could work.
  3. No specific policy quote why it would be a problem according to current rules has been provided.
Prototyperspective (talk) 01:21, 13 December 2023 (UTC)Reply[reply]
I forget the exact number, but I think it's something like 14 people who think it should be banned to 1 person who doesn't. You're really the only one who thinks it shouldn't be banned at this point. We don't need every single person in the conversation to agree for there to be a consensus. Nor does every single counterpoint need to be addressed for there to be one either. But that's why I said "preliminary consensus." Although I agree with Alexpl that there should probably be a vote, that doesn't mean there isn't a rough consensus to ban it in the meantime. I certainly haven't seen you or anyone else propose an alternative either. And if you're going to take issue with a ban, at least put forward something else. Otherwise you're just yelling into the void. I'd probably support something other than a ban myself if someone put it forward, but I just don't see an alternative at this point. --Adamant1 (talk) 21:57, 8 December 2023 (UTC)Reply[reply]
I don't see 14 to 1 at all.--Prosfilaes (talk) 22:02, 8 December 2023 (UTC)Reply[reply]
I don't see a preliminary consensus to ban. AI is certainly useful in some rare situations; I've mentioned Cthulhu above, FunkMonk mentioned "limited use as joke images on user and talk pages", Zache mentioned " Icons, placeholder and decoration images are good examples", and Omphalographer mentioned "examples of AI-generated images". I see a lot of people against an absolute ban, even if they're broadly against AI or strongly against certain uses of AI.--Prosfilaes (talk) 22:02, 8 December 2023 (UTC)Reply[reply]
@Prosfilaes: It's possible I miscounted. The conversation is pretty long and hard to follow. There's clearly more support for banning it than not, regardless of the exact numbers, though. And Prototyperspective seems to be the main, if not only, person objecting to it. So I think my point still stands. Banning it doesn't necessarily preclude exceptions either, although they would have to fit the guidelines, and there doesn't seem to be any way for AI art to do that. Except for maybe clear joke images used on user talk pages, but that's not what the conversation is about. Icons, placeholder and decoration images would inherently have the same problems as any other type of AI art, though. But it's not like we can't "ban" it and then make allowances for certain types of art once the technology and laws around it mature. It would still essentially be banned even if we allow for joke images on people's user pages. --Adamant1 (talk) 23:35, 8 December 2023 (UTC)Reply[reply]
Beaest was clearly a support for AI. Moreover, you take a lot of complex statements and assume support for a ban that wasn't on the table, when, in reality, the people in question might not support a preliminary ban at all. "That's not what the conversation is about"; well, the conversation was not about a preliminary ban, either, and people might prefer to set out those lines before banning.--Prosfilaes (talk) 10:30, 9 December 2023 (UTC)Reply[reply]
the conversation was not about a preliminary ban @Prosfilaes: I'm aware. Which is why I said there's preliminary support for a ban, not that the conversation is about a preliminary ban. I'm sure you get the difference. Also, Beaest hasn't participated in this part of the discussion, which is what I was talking about. Regardless, I have no problem with setting out those lines before banning it. That's why I agreed with Alexpl that there should be a vote on it, which is where I assume those things will be ironed out. Ultimately the best way to do it would be to have a multi-option vote based on the various options brought up here. Although there's still clearly more support for a ban than not either way. I just don't want to see the conversation continued as is on another talk page, where it will probably just fizzle out. But you're boxing ghosts. So I'd appreciate it if we ended it there. --Adamant1 (talk) 10:41, 9 December 2023 (UTC)Reply[reply]
I was surprised that the existence of Commons:AI-generated media and the talk page there wasn't pointed out earlier in this discussion... Gestumblindi (talk) 22:03, 8 December 2023 (UTC)Reply[reply]
That is a community issue, not something a subpanel of interested uploaders can decide. I scrolled down the discussion page and had to stop at "There is exactly zero benefit from trying to do original research on whether or not a image was AI generated without the author admitting so." and "Anybody wanting to reuse a notable fake (...) should be able to get that direct from Commons.". Quotes like those indicate to me that it is better to leave those people alone and conclude the matter in a less "restrictive" atmosphere. Alexpl (talk) 23:18, 8 December 2023 (UTC)Reply[reply]
Lol. Topic specific talk pages don't get much participation anyway. --Adamant1 (talk) 23:41, 8 December 2023 (UTC)Reply[reply]
Thanks a lot Jmabel, for your reference to Commons:AI-generated media. Perhaps I should have done more research before I started this section of the discussion. I agree with Gestumblindi, this should have been pointed out earlier in this discussion.
So what should we still discuss here?
  • Additions to and corrections of/on this page.
  Question Anything else?

Other remarks:
  •   Question What is the status of Commons:AI-generated media? This page does not have a parent category, it is not (yet?) an official Commons Guideline, so what is it? And how could we have found it?
  • I think a complete ban is not necessary. We have formulated several exceptions and rules. I'll include them in the summary (thanks for the trust to let me make a summary, Adamant1).
  • "composite" images on Commons are OK as long as:
  1. the images that were used to make the composite, permit it (so no images were used with a licence that is not allowed on Commons);
  2. these underlying images are mentioned in the file of the "composite" image, whether they are on Commons or not, whether they were made/generated by men, by IA or any other computer program; if that is not possible, then the image should not be on Commons;
  3. it has been made very clear in the file that this is not an original image and
  4. the licence of the "composite" image is the same as that of the underlying image with the most strict licence (so if one has a CC BY-SA 4.0 and another is PD, then the licence should be CC BY-SA 4.0).
  • On the Talk page of Commons:AI-generated media we should mention: a summary of the changes, the arguments for those changes based on this discussion and a link to this discussion.
JopkeB (talk) 11:31, 9 December 2023 (UTC)Reply[reply]
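As a purely illustrative, hedged wikitext sketch of rules 1–4 above (the file names, user name and licences are invented for the example, and the exact crediting template a guideline would require is not settled), the description page of such a "composite" might look like this:

  {{Information
   |description = {{en|1=Composite image assembled from the two files listed below (hypothetical example)}}
   |source      = Derived from [[:File:Example source A.jpg]] (CC BY-SA 4.0) and [[:File:Example source B.jpg]] (public domain)
   |author      = [[User:ExampleUploader|ExampleUploader]], after the authors of the source files
  }}

  == {{int:license-header}} ==
  {{Cc-by-sa-4.0}}

Per rule 4, the composite carries CC BY-SA 4.0 because that is the stricter of the two underlying licences; if either source image could not be named, then per rule 2 the composite would not belong on Commons at all.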
As there was - I think - no proposal to adopt Commons:AI-generated media as an official policy yet, I think - unless we want to use this opportunity to make it a policy, with all the changes deemed necessary -, we could add {{Essay}} which would categorize the page accordingly, like for example Commons:Patient images. - For precedents of AI-generated images that were kept or deleted, and the reasons for these decisions, Category:AI-generation related deletion requests is useful. As an admin, I processed some of them (deletions as well as kept ones). I try to follow a cautious approach, but always strictly apply COM:INUSE - if another project deems an AI image useful, it's not the place of Commons to judge on this. Gestumblindi (talk) 12:19, 9 December 2023 (UTC)Reply[reply]
I'm just not sure I'm comfortable with something like this and how it jives with COM:PCP. In the vast majority of cases it's going to be completely impossible to identify a definite source. That's not even getting into the weird inception stuff, where AI images are becoming so common that AI is using images from other AIs, or the fact that the sourcing may be hundreds or thousands of images over scores of jurisdictions all with different standards.
There's just too many unanswered questions for my taste. The discussion is probably moot regardless because we're liable to soon start getting actual court rulings and new laws all over the place, many of which may be completely counterintuitive against existing laws. But the ethos of Commons is very much to be cautious, and I don't know that we would regularly keep non-AI images in cases where sources cannot be identified. Yes, there is an argument that it's no different from me listening to a lot of Tom Morello and adopting elements of his style, but that's in no way a settled question, and at least to me, very much seems like personifying a computer program in a way that isn't necessarily supported by precedent out in the real world. GMGtalk 12:51, 9 December 2023 (UTC)Reply[reply]
This whole thing reminds me of how images from Flickr are handled sometimes. If I were to upload a clearly OOS image as a regular user, it would probably be deleted without much fanfare. If the image is imported from a random Flickr account, then suddenly there's a debate to be had about whether it's in scope or not, which most likely would lead to the image not being deleted. Same here. There's nothing that makes this image appropriate for Commons but for the fact that it was created by an AI image generator. 99% of the images in Category:AI-generated images including prompts wouldn't be appropriate if they were created by an amateur artist who uploaded them using their personal account, though. --Adamant1 (talk) 13:17, 9 December 2023 (UTC)Reply[reply]
Is anyone seriously going to argue this image would be in scope if it was uploaded by a random user after creating it in Blender? If the answer is yes then Commons:Project scope really needs an overhaul. But also, what makes that educational but for the technology used to create it? It's certainly not an accurate depiction of people in a 2000s nightclub as it's described. But now it will come up if someone wants an image of people in a 2000s nightclub "because AI." In fact, three of the top ten results for "2000s nightclub" are AI generated images. How is that at all good for the project? --Adamant1 (talk) 13:17, 9 December 2023 (UTC)Reply[reply]
It doesn't have a definite source, any more than if you handed the words to an artist to work from memory.--Prosfilaes (talk) 15:04, 9 December 2023 (UTC)Reply[reply]
@Adamant1 It isn't, but you saw Gestumblindi trying to reason why the image mentioned above in the LD should be kept. They couldn't tell which images' copyright may have been violated - so the stuff is "in". Edit: Even worse - that stuff was on Commons unchallenged for 6 months before even being categorized as AI work [2]. This has to be stopped - within a prescribed legal framework. Alexpl (talk) 15:22, 9 December 2023 (UTC)Reply[reply]
I have deleted AI images that were clearly derivative works of copyrighted works or characters, for example in Commons:Deletion requests/File:Silver skin man a.i.jpg and Commons:Deletion requests/File:Joker art dream a.i.jpg, but a mere presumption "it might be a derivative work from a work I don't know and can't name" is never enough, be it an image created by a human or by an AI. - By the way, COM:INUSE, which I mentioned above, is of course only applicable if there are no copyright concerns, such as a demonstrable derivative work. Gestumblindi (talk) 16:03, 9 December 2023 (UTC)Reply[reply]
@Gestumblindi: I assume that with a human you can at least ask the uploader. In which case they will either say no, leading to the image being kept, or say yes, or say they don't know, both of which I assume would lead to the image being deleted. It seems to be completely the opposite with AI artwork, though, where there is no way to find out whether an image is a derivative work to begin with, and it then gets treated as if it isn't, when in fact it might be. It sounds like I could upload two images, one AI-generated and one not, then say I don't know if either one is a derivative work, and the AI artwork wouldn't be deleted but the "normal" image would be. Or really I could even say the AI artwork is a derivative work, and it still wouldn't be deleted because I don't know the exact works it was trained on, whereas a "normal" image wouldn't get the same pass. --Adamant1 (talk) 16:21, 9 December 2023 (UTC)Reply[reply]
The problem of images derived from copyrighted works only concerns one category of modern subjects. If I ask it for portraits of people who have been dead for 500 years, specifying that I want an antique "oil painting" style, and the results that come out are paintings in a style that is nineteenth-century at the latest, and not later than that, I would say the copyright problem does not arise at all. It certainly did not draw its inspiration from comics, nor from photos.
Also, since I was mentioned: it is obvious that if a vote were held I would be in favour of the use of artificial intelligence, as would @Giammarco Ferrari
(I apologise again for writing in Italian, but it is not easy to use the translator from my phone.) Beaest (talk) 18:12, 9 December 2023 (UTC)Reply[reply]
You're confusing asking it for a portrait of a person who has been dead for 500 years with it generating an actual image of that person based on contemporary works. They aren't the same thing. There's plenty of recently created artwork of 15th-century knights riding horses in battle that images like the ones in Category:Ferrandino d'Aragona in immagini generate da intelligenza artificiale could have been based on. And we have no way of knowing if your images are based on them or not. It's totally ludicrous to act like an AI-generated image of anyone who died more than 100 years ago inherently has to be based on artwork created when they were alive anyway. If I ask Bing for an image of Jesus Christ, it will probably be based on artwork from the last 100 years, not Christ Pantocrator. But you'd probably act like there's no chance it's a derivative work anyway, because it's an AI-generated image of a historical figure. Which is exactly the double standard I'm talking about. The default is clearly to assume AI artwork isn't or can't be a derivative work even if there's zero evidence of that being the case, "because AI." --Adamant1 (talk) 20:13, 9 December 2023 (UTC)Reply[reply]
@Adamant1 No, I'm not confusing anything, because to create an antique painting it necessarily has to draw on antique paintings; otherwise what would come out is a drawing, and the difference between a painting, a drawing, a photo and a comic is very easy to see. Besides, if we are not able to trace its sources, do you expect the authors of the supposed works that served as inspiration to be able to? Worrying that they could ever make claims about it is simply ridiculous. No court would listen to them, because they could not prove anything, and they would not even notice it themselves, since the AI draws from thousands and thousands of examples and combines them; it does not copy just one or two. Beaest (talk) 20:58, 9 December 2023 (UTC)Reply[reply]
Cool, where's a 15th-century oil painting of Ferrandino d'Aragona riding a horse in battle then? Regardless, as I've said at least a couple of times now, Dall-E was partially trained on licensed works. Ones that are obviously copyrighted and of which there would be a record. So 100% someone could prove an AI-generated image is a derivative of their artwork if they wanted to. Otherwise there'd be no need for discussions like this one in the first place, and not just on Commons either, BTW. But I'm sure you'd say that's because everyone who questions the legality just hates artificial intelligence. --Adamant1 (talk) 21:11, 9 December 2023 (UTC)Reply[reply]
Either you haven't understood or you don't want to understand. If I ask it for a painting, it necessarily has to compose it by taking pieces of other paintings. It cannot compose a painting by taking pieces from photos or from comics. So if the images of Ferrandino in battle look like a nineteenth-century painting, it must have drawn on paintings of that era. I wonder how anyone could ever be in a position to say "hey, the artificial intelligence copied my work!" when at most it may have copied the nail of the little finger or the tip of the nose from that work. It's easy to talk about copyright when it comes to Batman and the Joker; it's impossible when it comes to a knight on horseback that could have been generated from thousands of other similar paintings. Beaest (talk) 21:32, 9 December 2023 (UTC)Reply[reply]
@Beaest: certainly the images you uploaded, which started this conversation, do not resemble 15th-century paintings. They resemble contemporary children's book illustration and science-fiction-and-fantasy art, the bulk of which is presumably copyrighted. - Jmabel ! talk 02:49, 10 December 2023 (UTC)Reply[reply]
@Jmabel I would like to know which of the many images you are referring to. Because those of Ferrandino in battle are clearly similar only to nineteenth-century paintings. If you mean those of Ferrandino and Giovanna in front of the castle of Naples, perhaps you haven't realized that they resemble engravings or illustrations from nineteenth- and twentieth-century books; they have nothing to do with children's books. Saying something like that is not only insulting to me, it only demonstrates your desire to discredit artificial intelligence. These are children's drawings, not mine. Images like this and like this are clearly engravings, as can be this, this, this and, to return to Ferrandino, this. Beaest (talk) 11:00, 10 December 2023 (UTC)Reply[reply]
And now I would like to know if you have the courage to call these children's drawings too, or if they don't remind you of the Divine Comedy.
Furthermore, I repeat that it is useless to cling to the fact that they would violate an alleged copyright when it comes to ORIGINAL creations that at most recover pieces from other people's works, without it being in any way possible to establish from which ones. Beaest (talk) 11:00, 10 December 2023 (UTC)Reply[reply]
@Beaest I am referring to the images that I originally mentioned when this conversation started, the ones that were in Category:Giovanna IV di Napoli and now seem to have been sectioned out to a subcategory. No, they could not possibly be mistaken for 19th-century paintings by anyone with more than a passing knowledge of art history. There's a certain pre-Raphaelite influence there (I assume that is what you are referring to, you aren't more specific than a century), but the way the shading and highlighting is done in the color images makes these instantly identifiable as art that was created as digital. And the overall "bloodless" look is immediately one that says "illustrator" rather than "fine artist". - Jmabel ! talk 20:35, 10 December 2023 (UTC)Reply[reply]
@Jmabel: Some of them, like File:Re Ferrandino d'Aragona e la moglie Giovannella di Napoli 07.jpg, do resemble cheap "kitsch" art prints that were common at the 19th/20th turn of the century. Gestumblindi (talk) 11:22, 11 December 2023 (UTC)Reply[reply]
@Gestumblindi Discounting the fact that those certainly do not have 6 fingers. Darwin Ahoy! 13:14, 11 December 2023 (UTC)Reply[reply]
@Gestumblindi besides DarwIn's well-taken point, the skin on the faces in the foreground is a dead giveaway, or at least that's how it looks to me. Can you find anything from the period where the representation of a face looks a lot like that? For something more typical of the period, compare https://www.bridgemanimages.com/en/scarpelli/portrait-of-the-king-of-italy-victor-emmanuel-victor-emmanuel-iii-1869-1947-during-the-reading-of/drawing/asset/4864336 or https://www.mediastorehouse.com/north-wind-picture-archives/home-life/young-women-talking-early-1900s-5881679.html. - Jmabel ! talk 20:30, 11 December 2023 (UTC)Reply[reply]

Anyway, the conversation is looping and I'd like to see it go somewhere. So can someone set up a poll or what? IMO the best way to do it is by having multiple options based on people's feedback about where they think the line should be, but I'll leave that up to whoever decides to do one. Same with where it's done. Probably Commons:Village_pump/Proposals would be the best place, instead of creating another subsection of this discussion. I don't really care either way, though. I'd do one myself, but I'm too involved in this already and have other things going on. So I'd appreciate it if someone else created one. But this conversation really needs to have an actual conclusion, regardless of who creates the poll or what questions it involves. Thanks. --Adamant1 (talk) 20:39, 9 December 2023 (UTC)Reply[reply]

I don't really have a concrete suggestion to offer. My position is that they are infinite with no particular value to any individual image, and so are out of scope almost all of the time. It's just convenient fan fiction. When they're not, they should be uploaded as fair use images on local projects in cases where the topic under discussion is AI generated images. I look toward something like the Getty court case dropping and toodeloo, we have to delete everything and sort out a complete mess of lord knows how many uncategorized images.
We drop something like the UK standard for AI images casually, but we don't seem to address the issue that it's not a single work by a definite citizen in a definite jurisdiction. What if it uses a UK image? What if it uses a UK AI-generated image? What if one of the source images is from the UK? What if the transaction happened to interact with servers in the UK? It sounds a bit paranoid, but at least in the US we've used this kind of thing to issue warrants, like me sending an email to another US citizen in the US, but it dings off some random server in Germany, and so it's technically international communication. If one of these cases goes sour against AI, there is going to be a metric boatload of fallout that will last for years.
The honest answer is that we simply don't know yet. I look forward to sitting on my deck at 6am and reading the rulings, but we don't know. That's not normally a space where Commons is aggressive in assuming something that is undecided. GMGtalk 05:39, 10 December 2023 (UTC)Reply[reply]
The policy and guideline pages that should proactively acknowledge potential problems with AI stuff currently ignore it: Commons:Licensing, Commons:Ownership of pages and files, Commons:Derivative works, Commons:Fan art, among others.
One could mark all AI stuff for deletion based on policy violations, I assume, based on project scope: Commons:Project scope/Evidence Alexpl (talk) 10:15, 10 December 2023 (UTC)Reply[reply]
COM:INUSE is something we fundamentally have to respect. As long as there are no tangible copyright concerns (like: derivative work of a specific work or copyrighted character you can point to), this policy applies: "A media file that is in use on one of the other projects of the Wikimedia Foundation is considered automatically to be useful for an educational purpose". AI art is (except in the United Kingdom, China, and possibly some other countries) not protected by copyright, so to fulfill the requirements of Commons:Project scope/Evidence, naming the AI generator as the source should be sufficient, unless it is - demonstrably - a derivative work. That being said, I think I would be fine with only accepting AI images that are in use in Wikimedia projects, for - as GMG correctly points out - they can be infinitely generated and there is a risk of flooding Commons with content of little or no value. So, that would be my proposal for a compromise between the "AI fans" and those who would like to ban AI-generated images altogether. Gestumblindi (talk) 11:27, 10 December 2023 (UTC)Reply[reply]
@Gestumblindi: All AI-generated images are derivative works. The vast majority of Internet-available works they draw from are not free as per COM:NETC. It is likely that any AI-generated image has drawn from non-free works. Therefore, we should not have any AI-generated images per COM:PCP.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 11:41, 10 December 2023 (UTC)Reply[reply]
@Jeff G.: I beg to differ. I do not think that "all AI-generated images are derivative works", not in the sense of copyright. AI image generators have been trained on a vast amount of existing and often copyrighted works, that is true. But for copyright protection as a derivative work, an AI artwork must be close enough to a specific pre-existing work. Creating a new work "inspired" by thousands or millions of existing works is not in itself, automatically, a derivative work - whether the creator is a human (who has also been "trained" by their knowledge of existing works) or an AI. COM:PCP requires "significant doubt about the freedom of a particular file", I don't think this can blanket cover all AI creations. You would have to show that a particular file is particularly close to a specific copyrighted work. - So, my main concern with AI art is not copyright, but the scope issue. I agree that most AI-generated images will be out of scope, and therefore a strict limitation (as suggested, maybe only accept if in use) could be appropriate. Gestumblindi (talk) 12:28, 10 December 2023 (UTC)Reply[reply]
@Gestumblindi: Yes, there is that scope issue. AIs are not generally notable artists.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 12:32, 10 December 2023 (UTC)Reply[reply]
On this, I fully agree with you. Gestumblindi (talk) 12:33, 10 December 2023 (UTC)Reply[reply]
Regarding the UK/China issue, I would delete AI images where we know that they were generated in a country where AI art is copyrightable. We could even require a statement from AI art uploaders along the lines of "I assert that this image was generated in a country which doesn't have copyright protection for AI art" (such as the USA). Gestumblindi (talk) 11:40, 10 December 2023 (UTC)Reply[reply]
But you just run back into the cross-jurisdictional issue of things that happen purely online, and of having no real way of verifying that information. If I fire up Photoshop or GIMP and make a cute cat picture, that generation is physically happening on a machine that is located in the US, by a person who is in the US, and therefore subject to US laws. It's something that can be done on airplane mode.
AI-generated images aren't happening on your machine. All you have is access to the interface. Just like my Gmail doesn't exist on my hard drive. I just have access to the interface and only when I'm online. AFAIK the standard is pretty solid that if I use my Gmail in the commission of a crime, and that email bounces off some server in Ireland, I could potentially be charged with a crime in that other jurisdiction. I don't like it. I think it's kind of silly and the laws/precedent haven't caught up with modern technology, but that would constitute conduct that crosses international borders. There's a reason that the WMF is explicit that their servers are all in the US. In the case of AI, we don't know. It's not clear if anyone can ever know. It's a w:black box where even the creators can't fully explain what's happening in detail. GMGtalk 14:10, 10 December 2023 (UTC)Reply[reply]
@GreenMeansGo: "AI generated images aren't happening on your machine" - in many cases not, but there are AI image generators, namely Stable Diffusion, that can be installed locally (if you have a powerful enough graphics card) and generate images on your own machine without the need of an online connection. I've tried this out myself with the Easy Diffusion distribution and it was quite fun (though my somewhat older graphics card just about manages to run it). - By the way, I don't think the physical location of servers is considered crucial for legal matters nowadays, the WMF servers also aren't all in the USA, see here: there are WMF servers in the Netherlands, in Singapore, and in France as well. AFAIK for legal matters, the WMF relies on its legal seat in the USA, not on where the servers are located. Gestumblindi (talk) 14:58, 10 December 2023 (UTC)Reply[reply]
Okay okay. Fair point about the WMF servers. Maybe I'm showing my age. Otherwise yes, having done something locally on your machine may address some preliminary issues. But isn't the whole paradigm still dependent on the model you run locally having been trained on images sourced from the web at large? GMGtalk 17:07, 10 December 2023 (UTC)Reply[reply]

My main concern with AI is that, in the medium term, if people are able to create images out of the blue, and courts systematically deem those images not eligible for copyright, the utility of this project as a "free media repository" will be seriously diminished. I mean, why would I need to search Commons for a free picture of a black cat if I can ask Bing to create one for free? Then we should probably specialize and redirect our niche to "free human-made non-algorithmic content" (jack of all trades, master of none, etc.). That content could be used by people requesting human-flavoured content, or even to train AIs with human-made "real" content instead of with more AI fake images. That said, IMHO I find AI amazing (and pretty scary too), copyright concerns exist but they are probably being a little bit overstated (well, courts will say), and right now I can think of many Wikipedia articles on "fictional topics" that may take advantage of AI imagery ("scope" issues are relative). Anyway, whether AI images are banned or not... people will continue to upload them (as happens with copyright violations...), only they would do so in a more ninja and undetectable manner (en:Prohibition in the United States?), which is probably not good. Strakhov (talk) 13:25, 10 December 2023 (UTC)Reply[reply]

It's reasonable to assume that that already happens: AI works uploaded among the usual bulk stuff, without an AI category: "Look, it's an attic find of a previously unseen photo, I found it on Flickr under a CC-BY license." Everything post-2022 without provenance should be given a warning and be deleted after a few months. Why bother. Alexpl (talk) 18:52, 10 December 2023 (UTC)Reply[reply]
I very much agree with this. Recently uploaded stuff without a reliable source should be sent for scrapping to mitigate the risk of AI fakes posing as historical material. I also think that inaccurate stuff like File:Sirisha Bandla drawing.png (just look at how the helmet "ties" on the neck...) should be deleted (IMO independently of whether it is in use or not) for being inaccurate fantasy / fan fiction, not only without any educational value but on the same level as the fake news and articles which are usually deleted on sight at Wikipedia. Darwin Ahoy! 13:03, 11 December 2023 (UTC)Reply[reply]
That stance goes against the entire purpose of Commons. The point of Commons is to consolidate all files that may be legally hosted under a permissive license in one place so that each project does not need to maintain its own files in order to use them. If you want to keep bad AI images from being used, you need to push for better usage policies and enforcement on local projects, not violate a fundamental tenet of Commons. Remember, Commons does not enforce global norms - if Volapük Wikipedia is OK with AI-generated illustrations and would otherwise have uploaded them locally if Commons didn't exist, then they are fine to host on Commons. -- King of ♥ 02:16, 12 December 2023 (UTC)Reply[reply]
Commons has not been made for hosting people's "stuff". We already have the rule that "(...) 'unauthorized' derivative works (...) must be deleted" (Commons:Licensing a.o.). It cannot be expected of a volunteer force to dig through tons of source material to get one file deleted, while in the same amount of time the uploader (or "artist") has put dozens of new files on Commons via bot upload from Flickr - because "f-it". Some of that stuff is going to stick and end up in articles, books, academic works and the like. Let's end it now. Alexpl (talk) 10:34, 12 December 2023 (UTC)Reply[reply]
I suggest we ban all image uploads because any of them could be a copyright violation. It's too much effort for us volunteers, who have so constructively engaged with the existing AI-generated images already. BAN BAN BAN it. There is nothing we could do to stop this great menace. Only when removing things we don't like is it not censorship. See this image, which according to its file title and description was clearly generated by AI; it has a person with an extra finger in the background. I'd say we ban it all; if we ban all .webp, .jpg and .png files this place will be much more useful. Prototyperspective (talk) 17:24, 13 December 2023 (UTC)Reply[reply]

  Comment I firmly believe that AI-generated images depicting historical events, persons, or objects should be banned from the Commons until further notice. There are several reasons for this stance, but the most compelling one, in my opinion, is the significant risk of creating echo chambers of historical inaccuracies with these images. It is well-known that image generators currently lack the capability to accurately capture essential aspects of historical contexts, such as cultural, political, and social nuances. Once these images are uploaded to the Commons, they will inevitably become part of the training datasets for future models, perpetuating these inaccuracies in a cycle that is worryingly opaque. Furthermore, mixing dubious yet photorealistic simulations with factual historical documents does not serve the purpose of spreading knowledge, but rather muddles it. So... what's the point? Rkieferbaum (talk) 02:05, 12 December 2023 (UTC)Reply[reply]

I agree with your assessment regarding "AI-generated images depicting historical events, persons, or objects". I don't think they are useful; quite the contrary, they're potentially dangerous. However, Commons doesn't make decisions for other projects. We do have the policy of COM:INUSE. And not all Wikimedia projects are encyclopedic - if, for example, Wikiquote wanted to use AI-generated images to illustrate quotes (I have previously been irritated by the often very associative use of images on Wikiquote, but that's their decision), they are automatically in scope for Commons. Only if there are tangible copyright / derivative work issues with a specific file can we delete it even if it's in use in Wikimedia projects. That being said, most AI images currently in Category:AI-generated images aren't in use in any Wikimedia project, e.g. Category:Images generated by Stable Diffusion is full of unused images - so I would like to reiterate my proposal that we limit the hosting of AI images to those that are in use in Wikimedia projects, and delete the rest - maybe with the exception of a select few that can be used as examples. This would already be an unusual restriction (normally, COM:EDUSE isn't limited to Wikimedia projects), but given the potentially infinite number of quickly-generated AI images of little value, it would be a prudent approach IMHO. Gestumblindi (talk) 11:41, 12 December 2023 (UTC)Reply[reply]
I don't think it goes far enough, but if it's the workable improvement, then it's still an improvement. GMGtalk 12:38, 12 December 2023 (UTC)Reply[reply]
I'd support that in the absence of anything better. I agree with GreenMeansGo that it ultimately doesn't go far enough, but something is better than nothing. Hopefully other projects will clarify their stances on it in the meantime too, and then the rules can be changed in light of whatever they decide. I doubt many projects will allow AI images except in extremely rare cases though, if at all. But there's no reason for Commons to host images that are dubious at best if they aren't even being used anywhere to begin with. --Adamant1 (talk) 17:30, 12 December 2023 (UTC)Reply[reply]
I would support this proposal. We already routinely delete other types of generated media if they're unused (e.g. charts, images of text, screenshots, parliament diagrams, etc), and much of the same logic applies to unused AI-generated images. Omphalographer (talk) 18:11, 12 December 2023 (UTC)Reply[reply]
What proposal? Gestumblindi didn't make one. Just a list of stuff that can allegedly be done with existing rules and policies. Which are clearly not up to the task. Alexpl (talk) 21:26, 12 December 2023 (UTC)Reply[reply]
"We already routinely delete other types of generated media if they're unused" - False. Prototyperspective (talk) 17:26, 13 December 2023 (UTC)Reply[reply]
These illustrate countless generative AI applications, are in some cases the only images for large art genres/movements and styles, and are valuable for many other reasons. Why would we suddenly delete all AI art images when this has never once been done for other kinds of media which clearly have much less or no educational value? These images are in some cases the only ones, or the only high-quality ones, in various categories, yet you find them not useful? What grants you the authority to disrespect and evade all WMC policies and practices just because you don't like something? Prototyperspective (talk) 17:30, 13 December 2023 (UTC)Reply[reply]
Because it's AI generated, and essentially a complicated facsimile of infinite proportions. I kind of figured this bit would be fairly self-evident. GMGtalk 17:56, 13 December 2023 (UTC)Reply[reply]
  1. It's made by humans tasking AI to generate images that are high-quality and/or closely match what they intend to portray. Lots of it is low-quality and is not uploaded here, and/or should be deleted. Apparently you think "Because it's AI generated" is enough of a reason. That is a statement, not a reason or explanation. I could also say "Because it's made using Photoshop".
  2. Via machine learning, those generators learned from billions of images to enable humans to generate images from text that are entirely novel or in some cases similar to some existing image(s). "Facsimile" comes closer to being an actual argument/point, but it misses e.g. an explanation of why that would not be fine, and we already delete the subset of those images that are derivatives of artworks, so a ban is not needed. Is an artwork of a specific art movement also a "facsimile", and 'thus requiring deletion here', since it's based on and inspired by other artworks of that same genre?
Prototyperspective (talk) 18:56, 13 December 2023 (UTC)Reply[reply]
@Prototyperspective: You beg the question a little bit in the first few words there. It isn't actually made by humans. If I ask you to paint a picture of a horse, I don't then have some ownership over the painting because I asked. Neither do the people who created the AI, any more than if I made the brush you used to make the painting. I have no idea what you'll do with that brush, and neither do the people who made the AI. UK excepted, the programmer doesn't own the copyright. The user doesn't own the copyright. The program can't own the copyright because it's non-human.
Maybe the more core issue is that it's all potentially derivative, depending on the jurisdiction and probably on lots of oncoming court decisions. You aren't really making an argument that it isn't in fact derivative, only that it somehow doesn't count because we can't identify the source. The only route I see for that argument is COM:DM, that a sufficient number of sources makes the sources trivial. It's not clear that's the case in the real world, and it's not clear we could tell, because apparently nobody knows what the sources are. GMGtalk 16:54, 18 December 2023 (UTC)Reply[reply]
Maybe all you have done so far is enter the words "horse, painting" once and, surprised by the result, now base all your claims on that brief personal experience, but that is not what most AI art engineering looks like, despite the bias and myths you hold against it. One needs to continuously adjust, use prompt expertise, etc. to achieve intended results. See e.g. this workflow; see 'Modifiers', which need to be developed anew for each artwork. I'm not saying you have no experience with these tools in artist toolboxes and didn't or couldn't make good-quality images, just that you have only made unsubstantiated allegations. But even better: if it doesn't have copyright, then images made by other people who did not explicitly set a PD/CC BY license could also be here, if considered useful, under PD-algorithm. For txt2img the training sources are largely identified and consist of billions of images, but training on them does not mean that all AI software outputs are therefore not in the public domain. Next you'd say Google search results are copyrighted since the search engine learned from copyrighted texts, and other absurdities. Prototyperspective (talk) 17:20, 18 December 2023 (UTC)Reply[reply]
And on we go Commons:Village pump/Proposals#Low quality AI media deletion system Alexpl (talk) 19:08, 14 December 2023 (UTC)Reply[reply]

Summary and conclusions

The question for this discussion was: AI images on Commons: what is acceptable and what is not?
Aspects:

  1. Copyrights:
    1. In general: Computer-generated art is in the public domain, but there may be exceptions. At least in the United Kingdom and China, AI art is protected by copyright, so should not be on Commons anyway unless it has a license compatible with Commons.
    2. An AI work is a derivative one, whether it was derived from one or a million examples, whether the original works are known or not. So on Commons AI works should be judged and treated as derivative works. But files can only be deleted for copyright infringement when there are tangible copyright concerns (like: a derivative work of a specific work you can point to). Creating a new work inspired by thousands or millions of existing works is not in itself, automatically, a derivative work. So as long as you cannot point to a specific work, AI-generated images are allowed on Commons for copyright reasons. [Rewritten]
      1. Usually there is no clue about the sources an AI tool uses to make an image, and therefore there is no clue about the copyrights of that AI-generated image either. It is common for AI works to contain fragments of source images, including many non-free images; these are almost impossible to trace.
      2. Most of the AI datasets include stolen images (including from Commons) that were used in violation of their copyright or licensing terms. Commons should not be encouraging the production of unethically produced AI images by hosting them.
      3. The AI datasets may contain images of copyrighted subjects, such as buildings in non-FOP countries or advertisements.
  2. Accuracy: There is zero guarantee that the output will be (historically) accurate.
  3. Scope: We only allow artworks when they have a specific historical or educational value. We do not allow personal artworks by non-notable creators that are out of scope; they are regularly deleted as F10 or at DR. Because AI works are not useful as educational or historical illustrations due to accuracy issues, they are no different than any other personal artworks and should be deleted.
  4. Negative effects: AI-generated images on Commons can have the deleterious effect of discouraging uploaders from contributing freely licensed real images of subjects when an AI-generated version exists, or of leading editors to choose a synthetic image of a subject over a real one. Therefore we should recommend that editors find, upload and use good images.

Because of these aspects and issues, the majority of the participants in this discussion (all but two of the twenty: Prototyperspective and Beaest) are in favour of some kind of ban on AI images on Commons but with exceptions.

Exceptions: Exceptions to the rule "No AI images on Commons". AI images are allowed on Commons if they:

  1. meet the conditions of Commons (like mention of sources, use of only free copyrights, scope, notability)
    1. the underlying images are mentioned in the file of the "composite" image, whether they are on Commons or not, whether they were made/generated by humans, by AI or any other computer program; if that is not possible, then the image should not be on Commons
    2. the licence of the "composite" image is the same as that of the underlying image with the most strict licence (so if one has a CC BY-SA 4.0 and another is PD, then the licence should be CC BY-SA 4.0)
  2. are clearly recognizable as such:
    1. there should be a clearly visible, prominent note that it is an AI image, mentioning that it is fake; perhaps add Template:Factual accuracy and/or another message to every file with an AI illustration, preferably by a template, perhaps to every file that is uploaded via the Upload Wizard where the box has been ticked to indicate that an AI image has been uploaded
    2. differentiation between real and generated images should also be done at category level, categories containing images about real places and persons should not be flooded with fake images; AI-generated images should be in a (sub) category of Category:AI-generated images;
  3. are not mentioning that is "Own work"
  4. mention the prompt that was given to the AI generator (  Action: still to be discussed)
  5. contain no obviously wrong things, like extra fingers or an object that shouldn't be there; this should be fixed before uploading
  6. were generated and uploaded on an "as-needed" basis to satisfy specific requirements from (other Wikimedia) projects; they should be in use on a sister project [within a week after uploading (  Action: the period still needs to be discussed)].

Then there might be AI images on Commons, for instance:

  1. Images to illustrate facets of AI art production, use of ethically-sourced AI to produce works like heraldic images that inherently involve artistic interpretation of specifications.
  2. Icons, placeholders, diagrams, illustrations of theoretical models, explanations of how things work or how to make something (for manuals, guides and handbooks), abstracted drawings of for instance tools and architectural elements, and other cases where we do not need historical accuracy.
  3. Images that are used as joke images on user and talk pages (should be limited).
  4. For enhancing/retouching images, improving resolution and source image quality, as long as the original image stays on Commons; the enhanced one gets a different filename and there should be a link to the original image in the image description.
  5. For illustrating how cultures and people could have looked in the past. (Disputed)

Questions
For @Jmabel, GPSLeo, Adamant1, Prototyperspective, Beaest, Fresh Blood, GreenMeansGo, Trade, Alexpl, Harlock81, Prosfilaes, Jeff G., Pi.1415926535, Zache, Omphalographer, FunkMonk, Gestumblindi, DarwIn, Strakhov, King of Hearts, Rkieferbaum, and Omphalographer:

  1. Is this an accurate and correct summary of this discussion? Did I leave out something important?
  2. To get an answer to the question "Do you agree with the conclusions?", we need a vote on the subject. The more formal, the better. Who is going to organize that? For suggestions: see the contribution of Adamant1 on 20:39, 9 December 2023.
  3. What steps are necessary to make these conclusions about AI a rule/policy on Commons? Is adding {{Essay}} to the page enough? Can we then implement our conclusions and delete perhaps thousands of files?

Actions to be taken after there is consensus:

  1.   Action: Copy-paste at least the main points to the talk page of Commons:AI-generated media and a link to this discussion; the purpose is to add our conclusions to the main page as additions and/or corrections.
  2.   Action: Take the necessary steps to make Commons:AI-generated media a Commons policy or Commons Guideline.
  3.   Action: Current AI images on Commons that do not meet the conditions, should be deleted from Commons. Commons:Project scope/Evidence may be helpful.
  4.   Action: Make a template for the visible prominent note (or use one if there is already one)
    1. add it to all current files with an AI image
    2. implement a routine to add it to all future files with an AI image.
  5.   Action: Add a note or line to Commons:Bad sources.
  6.   Action: Perhaps some policies and guideline pages need adjustments, like Commons:Licensing, Commons:Ownership of pages and files, Commons:Derivative works, Commons:Fan art a.o.
  7.   Action: There should be an obligation to mention in a file with AI-generated art in which country the image was generated. If that country is China or the United Kingdom, then the file should be deleted because of the copyright laws of those countries.

Please put your name at the action(s) you want to implement.
--JopkeB (talk) 16:35, 18 December 2023 (UTC)Reply[reply]

Lots of it seems reasonable but some issues:
  • 1.1: Not true for the UK, and probably not for China either; also, that point seems to be about whether a person using AI software can claim copyright on a work, which is not needed if it's uploaded as PD; and it's also not about AI art made by the uploader. 1.2: In that sense, human/manually-made artworks are also derivative ones, deriving from the human's surrounding culture and experience; it's the wrong term to use if not referring to img2img, but this may be too pedantic. 1.2.1: Stable Diffusion, for example, trained on at least LAION-2B-EN, a dataset of 2.3 billion English-captioned images. 1.2.2: Strongly oppose this part; these were not "stolen" - please look up the definitions of stealing and digital use - and I also strongly oppose the claim that this has been unethical. 1.2.3: But these are not shown in the result images uploaded here.
  • Nobody is claiming so. The same applies to all paintings and drawings. In some cases the artists of paintings claim or imply they aimed for accuracy. The file description and the proposed AI template should address this issue.
  • Brief note: The main reason why there is not more human artwork is that artists very rarely license it under CC BY. Of course the artwork needs to be realistically (educationally) valuable.
  • "discouraging uploaders from contributing freely licensed real images of subjects" - Nobody in this debate made this claim and it's fully false. We should encourage people to close major gaps, not speculate about what may discourage them. It does motivate people to contribute freely licensed images of subjects you describe as "real", because they'd see that the best alternatives so far are only AI images. No idea where you got that from; it's unsubstantiated speculation that is false.
  • No, the majority of the debate's participants - a very small percentage of WMC and Wikimedia project users - did not support some kind of ban. They just supported some measures, but not a "ban". Moreover, further participants were probably repelled by this large wall of text with its repetitious arguments and ignoring of points. I know of many other people who'd oppose what has been proposed, but I didn't ping them or see them participate here, and that's understandable given the style and length of the discussion here.
  • I oppose adding the prompts as a requirement; it should be encouraged but not required. For example, one may have lost the prompt or used 10 different prompts to generate one image, which you'd know if you had some practical experience with these art tools.
  • "as-needed" basis also includes yet non-existing categories on notable subjects as well as existing categories with few or no high-quality images etc. I also listed many more examples of potential use cases here and elsewhere in many cases you very much misrepresent what has been said. I don't know why people make the assumption they can now already readily immediately see and anticipate all potential constructive applications. It's like trying to define to define allowed applications of the GIMP paintbrush tool.
Prototyperspective (talk) 17:04, 18 December 2023 (UTC)Reply[reply]
(Edit conflict) Great work, thanks a lot!
Specific points:
1. AI-generated images can be allowed when they are notable as such: en:Théâtre D'opéra Spatial, File:Pope Francis in puffy winter jacket.jpg. Yann (talk) 17:06, 18 December 2023 (UTC)Reply[reply]
2. Prompt should be mentioned as source. This should be mandatory. Yann (talk) 17:06, 18 December 2023 (UTC)Reply[reply]
Most of the AI datasets include stolen images (including from Commons) that were used in violation of their copyright or licensing terms. Commons should not be encouraging the production of unethically produced AI images by hosting them. -- I do not agree with this. I think that if a person has legal access to a work, then they should be able to use it to train LLMs. There may be national opt-out procedures, but without any additional restrictions CC licences allow using images for data mining. (This is at least the position, AFAIK, of the EU's copyright directive and US court cases.) --Zache (talk) 17:22, 18 December 2023 (UTC)Reply[reply]
Current EU copyright law even explicitly allows the use of copyrighted works for "text and data mining"; in Germany, for example, this is codified in §44 b (English translation) of the copyright law. This is defined as "the automated analysis of individual or several digital or digitised works for the purpose of gathering information, in particular regarding patterns, trends and correlations". Per (for example) this article, this exception may be applicable for the use of images to train an AI. Per § 44b subsection (3), an "opt-out" is possible: Uses in accordance with subsection (2) sentence 1 are permitted only if they have not been reserved by the rightholder. A reservation of use in the case of works which are available online is effective only if it is made in a machine-readable format. So, if rightholders of works (in this case, images) that are available online explicitly state in a machine-readable format that it's not allowed to use these works for "data mining" (machine learning, in this case), it would be a copyright violation to use them. However, as these regulations are so new, it's to be assumed that most of the images used for AI training had no such machine-readable opt-out statement attached to them at the time of harvesting. This makes it difficult to answer (and, I think, legally untested as yet) what it would mean if rightholders try to opt out retroactively. Gestumblindi (talk) 19:24, 18 December 2023 (UTC)Reply[reply]
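Purely as an illustration of what a "machine-readable" signal might look like in practice (and not a claim that this satisfies § 44b, which, as noted above, is legally untested), a crawler building a training dataset could check a site's robots.txt exclusions before harvesting images. The site URL and crawler user agent below are hypothetical.

```python
# Illustrative sketch: checking a robots.txt exclusion before harvesting an image.
# Whether such an exclusion counts as a "machine-readable reservation" under § 44b
# is exactly the open legal question discussed above.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.org/robots.txt")  # hypothetical site
rp.read()

# "ExampleImageBot" is a hypothetical crawler user agent.
if rp.can_fetch("ExampleImageBot", "https://example.org/images/photo123.jpg"):
    print("No reservation found for this crawler; fetching would be allowed.")
else:
    print("The site operator has reserved this content against this crawler.")
```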
The main issue with trying to follow country-specific laws is that there is no consistent way to determine country of origin for modern online-only works. Therefore I have argued that we should only follow US law. -- King of ♥ 17:29, 18 December 2023 (UTC)Reply[reply]
  •   Comment From taking a quick glance at the discussion, it seems most of the proposed changes are based on premises that have no basis in reality, whether that be Commons policy, court cases or statements from the legal department. --Trade (talk) 20:35, 18 December 2023 (UTC)Reply[reply]
  •   Comment @JopkeB: several problems with the wording of the above, implicit premises, etc. that I'd like to see cleared up before I comment on the substance; I've taken the liberty of correcting some straight-out typos and solecisms, but these were of a different order:
    • "protected by copyright, so should not be on Commons anyway" roughly half of the content on Commons (maybe more) is copyrighted and free-licensed. Is this saying that we would not accept free-licensed AI content from countries that allow copyright? If so, why would this be different from other copyrighted content?
    • "use of only free copyrights" I have no idea what this means. As far as I'm aware, there is no such thing as a "free copyright".
    • "are not mentioning that is 'Own work'" I can't work out what this means to say. <Answer: the uploader may not say that the image is his/her "own work", but should mention the AI generator as the author. JopkeB (talk) 14:30, 21 December 2023 (UTC)>Reply[reply]
    • Am I correct to understand that the section beginning "Then there might be AI images on Commons, for instance" is a continuation of the exceptions? <Answer: No, these are examples, suggested by the participants in the discussion. JopkeB (talk) 14:30, 21 December 2023 (UTC)>Reply[reply]
      • @JopkeB: so these are not intended as permitted exceptions? Then I'm completely confused. Which of these are not permitted?- Jmabel ! talk 19:19, 21 December 2023 (UTC)Reply[reply]
        Not permitted are images that are out of scope, like AI artworks which do not have a specific historical or educational value (extensively discussed above). JopkeB (talk) 05:49, 22 December 2023 (UTC)Reply[reply]
- Jmabel ! talk 21:24, 18 December 2023 (UTC)Reply[reply]


"Usually there is no clue about the sources an AI tool uses to make an image, and therefor there is no clue about the copyrights of that AI-generated image either. It is common for AI works to contain fragments of source images, including many nonfree images, these are almost impossible to trace." This is not true. Any human artist has seen a vast number of paintings, of which we have no clue what they are in specific. No one has shown any evidence that AI works contain "fragments of source images". It is true that some AI works, generic requests that happen to hit certain very common images ("generic Afghanistan woman"), can turn out to be more or less copies of a work; most likely many humans would hit that as well, and also be more or less impossible to trace.--Prosfilaes (talk) 21:26, 18 December 2023 (UTC)Reply[reply]
No one has shown any evidence that AI works contain "fragments of source images". Excuse me? There is copious evidence that AI image generation tools will sometimes produce images with recognizable watermarks from stock photo providers, e.g. [3], [4]. Omphalographer (talk) 21:30, 18 December 2023 (UTC)Reply[reply]
That's not "fragments of source images". That's the AI being loaded with many images with a common feature reproducing that feature. It's not even a copyrightable feature; it's a PD text logo.--Prosfilaes (talk) 21:40, 18 December 2023 (UTC)Reply[reply]
@Prosfilaes: Just to play devil's advocate, why wouldn't it contain fragments of source images? Look at it this way: the whole thing is based on combining multiple images to create an "original." So say someone wants to generate an image of something that there aren't many images of to begin with because it's fairly niche. There are only two options there, one of which necessarily has to contain "fragments" of source images. Either the AI seriously fabricates things to fill in the blanks or it just creates a composite of the few images it has of the subject, which would inherently involve "fragments" of said images. The fact that images will sometimes contain watermarks is clear evidence that the latter is happening, not the former. Otherwise it would just leave them out completely. Since it's extremely unlikely that watermarks would be included in images just for the aesthetics or whatever. I don't buy the idea that they are there simply because watermarks are a common feature of the source images either. As most images aren't watermarked and that's not really how it works anyway. --Adamant1 (talk) 21:56, 18 December 2023 (UTC)Reply[reply]
Brief note regarding watermarks: as I've stated before, the watermarks included in AI images are hallucinations and not actual watermarks used in training images; it's rather further evidence that it doesn't copy in parts of other images but has falsely associated the presence of some text in a corner with one of the prompted concepts, such as the term "high quality". Stable Diffusion is open source, so people could check, but that's not how ML training works. Prototyperspective (talk) 22:08, 18 December 2023 (UTC)Reply[reply]
Exactly - very often, the AI includes "watermarks" because it was trained on a lot of images that contain watermarks, so it thinks that something "should" contain a watermark, because it's something it has seen so often in its training material - that doesn't mean that the image generated is necessarily a derivative work of any specific source image(s). Gestumblindi (talk) 23:19, 18 December 2023 (UTC)Reply[reply]
@Gestumblindi: We have no way of knowing exactly how many images with watermarks the model was trained on, or if that's why they are being added to images. Although I'd be interested to know why it specifically adds watermarks instead of just integrating text into the images in general, if not because they are fragments of source files. Otherwise you'd think the text would just be added to a sign in the background or something. Since that's how it seems to do things in other instances. --Adamant1 (talk) 23:27, 18 December 2023 (UTC)Reply[reply]
We should ban photographs of people, because we don't know if it steals the soul of the photographed. We could work from best understanding, but you don't seem to be big on that.--Prosfilaes (talk) 00:07, 19 December 2023 (UTC)Reply[reply]
Such a good counter argument. It's funny how triggered people like you are getting over this whole thing. Maybe I could understand if we were actually "banning" AI artwork, but that's not even what's happening and no one is advocating for it either. So you're acting distraught over literally nothing. But hey, my bad for having an opinion. --Adamant1 (talk) 00:29, 19 December 2023 (UTC)Reply[reply]
You have an opinion that you express loudly and repeatedly, and it seems unaffected by actual facts. But describe anyone who disagrees with you as being "triggered", that's productive.--Prosfilaes (talk) 15:54, 19 December 2023 (UTC)Reply[reply]
But describe anyone who disagrees with you as being "triggered" @Prosfilaes: No, not "anyone", just you and Prototyperspective. Plenty of people have had opinions about AI art that I disagree with and I still wouldn't describe them as acting "triggered" over this. Prototyperspective is clearly upset over the whole thing though. --Adamant1 (talk) 19:10, 21 December 2023 (UTC)Reply[reply]
You said above that 1 person disagreed with your idea for a ban. So you have described everyone who disagrees with you in this conversation as "triggered". And you ignore my complaint about loud opinions that you even admit are uninformed.--Prosfilaes (talk) 01:47, 22 December 2023 (UTC)Reply[reply]
@Prosfilaes: You seem to have the false belief that the only disagreement on my side is whether it should be banned or not. I also disagree with it being used for abstracted drawings of tools and architectural elements, or icons. The same with allowing it in cases where the images are being used on other projects. Although again, I wouldn't say anyone who has those opinions is acting "triggered" even if I disagree with them. For instance, Zache was pretty reasonable when they said they thought icons, placeholders and decoration images are good examples of useful use cases. Same goes for Gestumblindi's idea to keep AI-generated artwork that's in use, which I'm more than willing to support as a good-faith middle ground even if I don't think it goes far enough. Enjoy the cope though. --Adamant1 (talk) 05:30, 24 December 2023 (UTC)Reply[reply]
Maybe I should keep my mouth shut because I'm not a Commons superstar or anything, but "it's funny how triggered people like you are getting" and "enjoy the cope" seem like completely unnecessary comments that would be better off not being made at all. JPxG (talk) 14:19, 24 December 2023 (UTC)Reply[reply]
I could say the same for Prosfilaes saying that taking a conservative approach to hosting AI artwork is tantamount to banning photographs of people, because we don't know if it steals the soul of the photographed. I don't see you or anyone else caring though. So you'll have to forgive me if I find your concern less than genuine. Anyway, who cares? I can say someone who accuses me of acting like photographs steal people's souls is being overly emotional if I want to. The conversation is concluded though. So let's leave it that way, huh? --Adamant1 (talk) 18:21, 24 December 2023 (UTC)Reply[reply]
You could say the same, but that would be a "tu quoque" response and logically invalid. If you want the conversation to be over, the rule in life is that you walk away. Unless you're the boss, you don't get to have the last word and tell everyone else to shut up. My comment may not have been the best phrased, but you replied to someone giving correct information with "we don't know", as if there were dark magic going on.--Prosfilaes (talk) 21:03, 25 December 2023 (UTC)Reply[reply]
"You don't get to....tell everyone else to shut up." Be my guest and comment on AI artwork being hosted on Commons. I could really care less. The conversation isn't about whether I think photographs steal people's souls, and I have zero problem saying that's not my position if you're going to treat me like that's what I think. No one is stopping you or anyone else from talking about AI artwork though. I'm certainly not. But I don't see you doing that for some reason. --Adamant1 (talk) 22:23, 25 December 2023 (UTC)Reply[reply]
Devil's advocates aren't helpful. If you read the articles, everyone is saying the same thing; that the watermarks are signs that entire databases of watermarked images were used as training material for the AI. If you ever tried using an AI image generator, you'd know it doesn't work like that. Using NightCafe it has no idea what a ysoki is, and it doesn't pull up the picture of a ratfolk grenadier you would expect. It does not remember single images like that, unless they have been repeated over and over.--Prosfilaes (talk) 00:00, 19 December 2023 (UTC)Reply[reply]
I'm not really involved in the specific debate here, but if you want a reason why they don't contain "fragments of source images", I can give one: it is physically impossible. I can't speak to what goes on with closed-source models, but checkpoints for publicly available models are a few billion bytes of neuron weights (e.g. Stable Diffusion XL 1.0 is 6.94 GB). The datasets these models are trained on constitute a few billion entire images (LAION-5B is 5.85 billion images). I would like to see someone explain how images -- fragment or otherwise -- are being compressed to a size of one byte.
One byte is eight bits: the binary representation of the number 255 by itself takes one full byte (11111111).
A single colored pixel (i.e. yellow-green, #9ACD32) is a triplet of three bytes (10011010, 11001101, 00110010).
The smallest file on Wikimedia Commons, a GIF consisting of a single transparent pixel, is 26 bytes. This 186 x 200 photograph of an avocado (as a JPEG -- a highly optimized, lossily compressed file format) is eleven thousand bytes. Even if we disregard the well-documented literature concerning how neural networks (and the subset of generative models that create images like these) actually work, it is not mathematically possible to achieve the compression ratios necessary to simply store training images inside the model. JPxG (talk) 02:58, 24 December 2023 (UTC)Reply[reply]
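For readers who want to check the arithmetic, this is the back-of-the-envelope calculation implied above, using the figures cited in this thread (a ~6.94 GB SDXL checkpoint and ~5.85 billion LAION-5B images); the exact training subset differs in practice, so treat it as an order-of-magnitude sketch rather than a precise claim about any particular model.

```python
# Order-of-magnitude sketch of the argument above, using the figures cited in this thread.
checkpoint_bytes = 6.94e9   # approximate size of the SDXL 1.0 model weights
training_images = 5.85e9    # approximate number of images in LAION-5B

bytes_per_image = checkpoint_bytes / training_images
print(f"{bytes_per_image:.2f} bytes of model weights per training image")
# -> about 1.19 bytes per image, far less than even the 26-byte single-pixel GIF
#    mentioned above, so the weights cannot literally store the training images.
```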
"AI-generated images on Commons can have the deleterious effect of discouraging uploaders from contributing freely licensed real images of subjects when an AI-generated version exists, or of leading editors to choose a synthetic image of a subject over a real one." You could also complain about illustrations or historical photos from stopping modern photos (or modern photos stopping people from getting older photos; we have three pretty bad pictures of Category:Anne McCaffrey, which have discouraged people from trying to find or license historical photos of her in her prime.) I'd prefer to phrase this positively, about recommending editors find and use good images. I think the concern is largely overblown; if we appropriately label AI photos, WP users will mostly use AI photos only when necessary or useful.--Prosfilaes (talk) 21:33, 18 December 2023 (UTC)Reply[reply]

That is much more comment than I had expected. First of all: this summary was extracted from the previous discussion; I did not make up new views myself. IMO the intention was not to continue the discussion about the content (that should have been done before), but to draw conclusions and identify follow-up actions. My reply:

  • In response to your comments I made some adjustments (additions in teal, proposed removals with strike through).
  • "discouraging uploaders from contributing freely licensed real images of subjects" was brought up by Omphalographer (Overleg) 19:47, 8 December 2023 (UTC), and I could not find any objection, so I copied it to the summary.
  • If you still want to adjust the summary, please do so yourself. This is part of a wiki, so anybody can make changes.
  • @Prototyperspective When you have suggestions to make discussions like this more attractive to participate: please let us know!

--JopkeB (talk) 14:30, 21 December 2023 (UTC)Reply[reply]

Your updated summary still contains blatantly false claims regarding copyright, e.g.: An AI work is a derivative one, whether it was derived from one or a million examples, whether the original works are known or not. I would highly recommend reading some research by academic legal experts (or actual court decisions) on the matter, instead of trying to base new Commons policies on personal speculation or talking points of the copyright industry.
Furthermore, there is no community consensus at all about some kind of ban on AI images on Commons but with exceptions, as evidenced by the outcome of various deletion discussions since (at least) April 2022, and the numerous conversations that informed the current revision of Commons:AI-generated media.
Personally, I think some more targeted restrictions may be worth considering, e.g. regarding the labeling of AI-generated images depicting real people, an outright ban of which was discussed and rejected recently. (As I noted there, Commons doesn't currently seem to have a policy against image descriptions or file names that misrepresent such images as actual photos - whether they are AI-generated "deepfakes" or century-old manipulations such as Category:Altered Soviet photographs.)
Regards, HaeB (talk) 06:25, 23 December 2023 (UTC)Reply[reply]
"I would highly recommend reading some research by academic legal experts" - Just an FYI, but it's a summary of the discussion, not what "academic legal experts" have to say about artificial intelligence. So JopkeB's personal knowledge of the subject, or what "research by academic legal experts" she has read, doesn't really matter. What does matter is whether the summary is an accurate reiteration of the points that were made in the discussion, which it clearly is. The main thing now is to do any follow-up actions that have been identified, not rehash things. --Adamant1 (talk) 07:18, 23 December 2023 (UTC)Reply[reply]

Summary and conclusions 2

I know JopkeB said we could edit their draft of a summary and conclusions, but what I want to do as much as anything is organize it a little differently, so I'm writing it separately. I don't think JopkeB and I have any large disagreements. I'm trying to incorporate as much of the language above as I can, so that it will be clear where we are saying exactly the same thing. I'm not reiterating the questions and actions, which I think are entirely fine. Instead, I'm trying to write more of a possible draft guideline. As with JopkeB, friendly edits are welcome, including if someone wants to do some highlighting.

1) Licensing: Commons hosts only images that are either public-domain or free-licensed in both the U.S. and their country of origin. We also prefer, when possible, for works that are in the public domain in those jurisdictions to also offer licenses that will allow reuse in countries that might allow these works to be copyrighted. As of the end of 2023, generative AI is still in its infancy, and there are quite likely to be legislation and court decisions over the next few years affecting the copyright status of its outputs.
As far as we can tell, the U.S. considers any work contribution of a generative AI, whether that is an enhancement of an otherwise copyrightable work or is an "original" work, to be in the public domain. That means that if a work by a generative AI is considered "original" then it is in the public domain in the U.S., and if it is considered "derivative" then the resulting work has the same copyright status as the underlying work.
However, some countries (notably the UK and China) are granting copyrights on AI-generated works. So far as we can tell, the copyright consistently belongs to the person who gave the prompt to the AI program, not to the people who developed the software.
The question of "country of origin" for AI-generated content can be a bit tricky. Unlike photographs, they are not "taken" somewhere in particular. Unlike content from a magazine or book, they have no clear first place of publication. We seem to be leaning toward saying that the country of origin is the country of residence of the person who prompted the AI, but that may be tricky: accounts are not required to state a country of residence; residence does not always coincide with citizenship; people travel; etc.
Consequently, for AI-generated works:
a) Each file should carry a tag indicating that it is public domain in those countries that do not grant copyrights for AI-generated art.
b) If its country of origin is one that grants copyrights for AI-generated art, then in addition to that tag, license requirements are the same as for any other copyrighted materials.
c) If its country of origin is one that does not grant copyrights for AI-generated art, then we [require? request? I lean toward require] an additional license to cover use in countries that grant copyrights for AI-generated art.
For AI-enhanced works, the requirements are analogous. We should have a tag to indicate that the contribution of the AI is public domain in those countries that do not grant copyrights for AI-generated art, and that in those countries the copyright status is exactly the same as that of the underlying work. We would require/request the same additional licenses for any copyrightable contribution as we do for AI-generated work. In most cases, {{Retouched}} or other similar template should also be present.
2) Are even AI-generated "original" works derivative? There is much controversy over whether AI works are inherently all derivative, whether derived from one or a million examples, and whether the original works are known or not. Files only can be deleted for copyright infringement when there are tangible copyright concerns such as being a derivative work of a specific work you can point to.
Most currently available AI datasets include stolen images, used in violation of their copyright or licensing terms. Commons should not encourage the production of unethically produced AI images by hosting them.
AI datasets may contain images of copyrighted subjects, such as buildings in non-FOP countries or advertisements. Can we say that if, for example, a building in France is protected by copyright, an AI-generated image of that building would be exactly as much of a copyright violation as a photo of that building? Seems to me to be the case.
3) Accuracy: There is zero guarantee that any AI-generated work is an accurate representation of anything in the real world. It cannot be relied upon for the accurate appearance of a particular person, a species, a place at a particular time, etc. This can be an issue even with works that are merely AI-enhanced: when AI removes a watermark or otherwise retouches a photo, that retouching always involves conjecture.
4) Scope: We only allow artworks when they have a specific historical or educational value. We do not allow personal artworks by non-notable creators that are out of scope; they are regularly deleted as F10 or at DR. In general, AI-generated works are subject to the same standard.
5) Negative effects: AI-generated images on Commons can have the deleterious effect of discouraging uploaders from contributing freely licensed real images of subjects when an AI-generated version exists, or of leading editors to choose a synthetic image of a subject over a real one. As always, we recommend that editors find, upload and use good images, and it is our general consensus that an AI-generated or AI-enhanced image is rarely better than any available image produced by more traditional means.

That said, there are good reasons to host certain classes of AI images on Commons. In decreasing order of strength of consensus:

  1. Images to illustrate facets of AI art production.
    clearly there would need to be a decision on how many images are allowed under this rubric, and what sort of images.
  2. Use of ethically-sourced AI to produce heraldic images that inherently involve artistic interpretation of specifications.
  3. Icons, placeholders, diagrams, illustrations of theoretical models, explanations of how things work or how to make something (for manuals, guides and handbooks), abstracted drawings of for instance tools and architectural elements, and other cases where we do not need historical accuracy.
  4. For enhancing/retouching images, improving resolution and source image quality, as long as the original image stays on Commons; the enhanced one gets a different filename and there should be a link to the original image in the image description. AI-based retouching should presumably be held to the same standards as other retouching.
  5. Because Commons generally defers to our sister projects for "in use" files, allow files to be uploaded on an "as-needed" basis to satisfy specific requirements from any and all other Wikimedia projects. Such files are in scope on this basis only as long as they are used on a sister project. We will allow some time (tentatively one week) after upload for the file to be used.
    The need to allow slack for files to be used on this basis will raise some difficulties. We need to allow for a certain amount of good-faith effort to upload such images, not all of which turn out to be used, but at some point if a user floods Commons with such images and few or none are used this way, that needs to be subject to sanctions.
  6. Our usual allowance for a small number of personal images for use on user and talk pages should work more or less the same for AI-generated images as for any other images without copyright issues, as long as their nature is clearly labeled. E.g. an AI-generated image of yourself or an "avatar" for your user page; a small number of examples of AI-generated works where you were involved in the prompting. (In short, again "same standard as if the work were drawn by an ordinary user.")
  7. (Probably no consensus for this one, but mentioning it since JopkeB did; seems to me this would be covered by the "Scope" section above, "same standard as if the work were drawn by an ordinary user.") For illustrating how cultures and people could have looked in the past.

While there is some disagreement as to "where the bar is set" for how many AI-generated images to allow on Commons, we are at least leaning toward all of the following as requirements for all AI-generated images that we host:

  1. All files must meet the normal conditions of Commons. Files must fall within Commons' scope, including notability, and any derivative works must be based only on public-domain and free-licensed materials. File pages must credit all sources.
  2. AI-generated or AI-enhanced images must be clearly recognizable as such:
    1. There should be a clearly visible, prominent note that it is an AI image, mentioning that it is fake; perhaps add Template:Factual accuracy and/or another message to every file with an AI illustration, preferably by a template, perhaps to every file that is uploaded via the Upload Wizard where the box has been ticked to indicate that an AI image has been uploaded
    2. Differentiation between real and generated images should also be done at category level, categories containing images about real places and persons should not be flooded with fake images; AI-generated images should be in a (sub) category of Category:AI-generated images;
  3. Whether in countries that allow copyright on AI-generated images or not, these images should not be identified simply as "Own work". The AI contribution must always be explicitly acknowledged.
  4. There is at least a very strong preference (can we make it a rule?) that file pages for AI-generated or AI-enhanced images should indicate what software was used, and what prompt was given to that software. Some of us think that should be a requirement.
  5. With very rare exceptions—the only apparent one is to illustrate AI "hallucinations" as such—AI-generated or AI-enhanced images should contain no obviously wrong things, like extra fingers or an object that shouldn't be there; these should be fixed. Probably the best way to do this is to first upload the problematic AI-generated file, then overwrite that with the human-generated correction.

Jmabel ! talk 20:55, 24 December 2023 (UTC)Reply[reply]

I agree that this output needs to be disclosed and labeled accurately (I wrote a draft policy at en.wp which is currently under rfc saying the same thing for LLM output). Regardless of anything else I think disclosure is an absolute minimum requirement. JPxG (talk) 21:39, 26 December 2023 (UTC)Reply[reply]

Probably also need to say something concrete about "deepfakes". - Jmabel ! talk 21:15, 24 December 2023 (UTC)Reply[reply]

I don't know if this has a real definition. People use the term to refer to a very large variety of things; technologies described as 'deepfake' range from generative image models to normal photoshops with no neural nets involved at all. And the synthetic aspects range from very simple corrections (fixing red-eye, artificially enhanced or reduced depth of field) to alterations (making skin look smoother etc) to major alterations (changing someone's facial expression, putting one person's face on another person's body) to 100% synthesis (generating a completely synthetic photo of a person that isn't working off any actual base image). I get the impression it is something of a buzzword. JPxG (talk) 09:15, 26 December 2023 (UTC)Reply[reply]
@JPxG: I think there is a range of what it can mean technically, though I've never heard it applied to something as mild as red-eye correction: was someone seriously calling that a "deepfake" or setting up a strawman to argue against the usefulness of the term? But I think it pretty consistently refers to content that deliberately misleads: an image (or video, or audio) that could pass, falsely, for documentary evidence: that a person was where they were not, was with someone they were not with, said something they never said, etc. - Jmabel ! talk 21:24, 26 December 2023 (UTC)Reply[reply]
Well, in the case that something is fake, cannot it already be called "fake"? It seems similar to how e.g. there are policies against harassment, incivility etc but there is no Commons:Asshole policy or Commons:Being a jackoff. JPxG (talk) 22:09, 26 December 2023 (UTC)Reply[reply]
Not so much a matter of what we call them, but whether we ever host them. If we go with the assumption that (at least in the U.S.) such works cannot be copyrighted then (except insofar as they may violate the copyrights of other works) it's strictly a policy decision, not a legal one. I suspect that some such works will be notable enough that we will want to host them (e.g. if a "deepfake" figures into the upcoming U.S. presidential campaign, and given the trial balloons a certain candidate has already been floating that seems likely), but in general I'd want to see us host such things only if they are notable in their own right. - Jmabel ! talk 01:41, 27 December 2023 (UTC)Reply[reply]

December 13

Image of the marble bust of Hannibal

Hello, is this image of the marble bust of Hannibal public domain? There are images of the bust on Commons, but this image is a bit different. -Artanisen (talk) 01:56, 13 December 2023 (UTC)Reply[reply]

I can't prove it isn't PD but under our precautionary principle it would be up to the uploader to prove beyond a reasonable doubt that it is PD. I don't see any particular evidence either way, and certainly not on that page. Am I missing something? - Jmabel ! talk 07:53, 13 December 2023 (UTC)Reply[reply]
This dates from the Renaissance, and was discovered in the 17th century. More recent copies shouldn't be an issue. See Capuan bust of Hannibal. Yann (talk) 11:20, 13 December 2023 (UTC)Reply[reply]
Yeah, if it's from the 17th century, then the bust should be public domain, and there are already photos of it on Commons. -Artanisen (talk) 19:34, 13 December 2023 (UTC)Reply[reply]
The bust is in the public domain, but that doesn't mean that this photograph is. Photographs of 3D works, like sculptures, are considered derivative works of the sculpture. Omphalographer (talk) 19:48, 13 December 2023 (UTC)Reply[reply]
The photograph (if cropped) looks identical to File:Mommsen p265.jpg (so same original photograph?), but unless there is more information about the picture on Reddit, we can only make assumptions about whether it is public domain. --HyperGaruda (talk) 20:09, 13 December 2023 (UTC)Reply[reply]
Yes, when you zoom in, File:Mommsen p265.jpg has an identical angle and lighting, so it is most likely a cropped version of this photo of the marble bust of Hannibal. The source of the cropped photo is given as Courtesy of © Phaidon Verlag (Wien-Leipzig) - "Römische Geschichte", gekürzte Ausgabe (1932). Below this image it says Hannibal. (Neapel, National-Museum.) -Artanisen (talk) 23:39, 13 December 2023 (UTC)Reply[reply]
Some questions remain, so Commons:Deletion requests/Files in Category:Capuan bust of Hannibal. Yann (talk) 12:42, 20 December 2023 (UTC)Reply[reply]

December 17

The possibilities of AI enhancement

File:Hitler portrait AI.jpg and File:Hitler portrait crop.jpg Hello. I am very interested in the possibilities that flow from the latest AI image enhancement software. AI image enhancement has, e.g., recently been used to enhance this file used in the infobox on Yevgeny Prigozhin's English Wikipedia article. AI enhancement may be something Wikimedia Commons would want to create guidelines about. I've uploaded this image of Adolf Hitler using some free low-quality AI software. I hope others will upload better AI versions of it so that just maybe this file can come to serve as a sort of forum for experimentation with AI on historical photographs.--Zeitgeistu (talk) 00:54, 17 December 2023 (UTC)Reply[reply]

This is a topic under active debate; see Commons talk:AI-generated media for details.
At the present time, I'd recommend that you not upload AI-enhanced photos unless you have a specific use case in mind for the altered photos, and there is consensus on the project where you plan on using them that this use is acceptable. Omphalographer (talk) 02:06, 17 December 2023 (UTC)Reply[reply]
Tangentially related to that, AI image-enhancing software sometimes likes to add non-existent details to photographs it improves as part of the "enhancement" process. It's not an issue in most cases, but it can be especially problematic when what's being "enhanced" is an image of someone like Adolf Hitler, to the point that I'd say it's probably not even worth bothering with until the technology is more advanced and the software allows such features to be turned off. Currently most of them don't. The same goes for AI-colorized images. Really, we shouldn't be hosting images that were altered through either process. Regardless, a slightly fuzzy image of Hitler is better than one with fake objects inserted into it. --Adamant1 (talk) 02:38, 17 December 2023 (UTC)Reply[reply]
I would argue that the image of Prigozhin should not have been overwritten with a questionable "enhancement". It produces a more dramatic picture, but also almost certainly a less accurate one, even more so than the earlier sharpening of the photo. - Jmabel ! talk 06:52, 17 December 2023 (UTC)Reply[reply]
I share many other editors' misgivings about the wisdom of using AI software on Commons images. To me, Hitler acquires an unnatural stiffness, an almost mannequin-like appearance in the AI version of the image discussed here. And this is a dubious quality I have seen in other AI images. People tend to look too crisp, like overcooked pieces of bacon. AI image software just isn't where it needs to be yet. -- WikiPedant (talk) 08:00, 17 December 2023 (UTC)Reply[reply]
In addition, just look at the artifacts. Hitler's left ear looks unnatural and the eye pupil shape as well, especially because the AI fused the left eye with a shadow. The problem with upscaling is that it is usually impossible to "intelligently" recreate missing details. If you try, such as the AI upscaler did in this case, then you create artifacts and a mix of unusually sharp lines and blurry areas. --Robert Flogaus-Faust (talk) 17:44, 24 December 2023 (UTC)Reply[reply]
It's not for us to dictate whether Wiki projects want to use enhanced images. Trade (talk) 13:35, 19 December 2023 (UTC)Reply[reply]
It is for us to dictate whether we want to facilitate it. --Njardarlogar (talk) 16:55, 22 December 2023 (UTC)Reply[reply]
  • Always name the software used so we can be aware of its limitations. Some AIs just upscale by smoothing the grain and strengthening borders; other AIs replace the eyes from a roster of similar eyes. Knowing what software was used, and adding that category, will be helpful. The images can always be used to show how the AI functions and its limitations; ones without acknowledgement of the software used usually get deleted. See: Category:Upscaling. Currently AI-enhanced images are not copyrightable in the US, but some jurisdictions may require acknowledging the software used as the laws change. --RAN (talk) 23:21, 17 December 2023 (UTC)Reply[reply]
    And, of course: if you're going to upload an AI-"enhanced" image, please upload the original as well! That way, when the tools (hopefully) improve in the future, we can rerun them instead of being stuck with a poorly enhanced image forever. Omphalographer (talk) 02:09, 27 December 2023 (UTC)Reply[reply]

December 18

Sanborn Fire Insurance Map upload project

I have created a Perl script to organize Sanborn maps currently on Commons and those currently available for download but not yet added to Commons.

User:Nowakki/sanborn_test

I would soon be ready to generate a very big list of download jobs, but I don't have the bandwidth to fulfill them.

Considering that the majority of plates in these map collections are of residential areas and thus of little interest to an encyclopedia, I wonder if it is technically possible to create an empty File: page on Commons that can be automatically converted, by download, into a genuine file at the request of a user. A user would be anyone who wants to link to a specific plate that shows a structure of interest. Or is Commons interested in becoming a mirror of the Library of Congress Sanborn map collection anyway? Nowakki (talk) 23:00, 18 December 2023 (UTC)Reply[reply]

@Nowakki: Not sure why you think residential areas are of little interest. - Jmabel ! talk 04:28, 19 December 2023 (UTC)Reply[reply]
I am not sure either. Can you hook me up with some server access for the upload? I think wmcloud would expect an endorsement from a project leader before they let people touch the goods. You look like you are in charge of something. Nowakki (talk) 04:41, 19 December 2023 (UTC)Reply[reply]
@Nowakki: Not in charge of anything, any more (or less) than any other admin here. I'm just more active than most in fielding questions.
I suspect you want Commons:Batch uploading. - Jmabel ! talk 06:57, 19 December 2023 (UTC)Reply[reply]
Some questions also have to be answered:
Do we really need TIFF files that are 2000% larger than JP2 files, with no discernible benefit?
Shall we rename the existing Sanborn files so that they include the plate number instead of the LoC sequential number? Without an index, the LoC sequence numbers are not usable: one has to click through multiple files first to find out where a target plate number is. Renaming the files would allow comfortable navigation without needing the index.
How many / which map collections should be transferred to Commons? Nowakki (talk) 07:24, 19 December 2023 (UTC)Reply[reply]
Should we download JPG files instead of JP2?
https://tile.loc.gov/image-services/iiif/service:gmd:gmd434m:g4344m:g4344pm:g088851908:08885_1908-0015/full/pct:100/0/default.jpg (2.7MB)
https://tile.loc.gov/storage-services/service/gmd/gmd434m/g4344m/g4344pm/g088851908/08885_1908-0015.jp2 (7.5MB)
There is not much of a difference. Nowakki (talk) 07:29, 19 December 2023 (UTC)Reply[reply]
@Nowakki: Hi, there is no limit as to what can be copied here. Everything is potentially interesting. From my experience copying files from elsewhere, JPEG versions are usually of lesser quality than JP2 or TIFF (it should be checked). So I suggest uploading JPEGs at the highest quality, created from the TIFF or JP2 (I use 98% quality). IMO it is not necessary to upload the TIFF or JP2 files to Commons, as the LoC files won't disappear any time soon. For the name, it is up to you; it should be consistent across the set and easily recognizable. So place, ID number, etc. Thanks for taking care of that. I can help you with setting up Pywikibot. Yann (talk) 07:40, 19 December 2023 (UTC)Reply[reply]
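(As a rough illustration of the conversion suggested above — a TIFF/JP2 source re-saved as a high-quality JPEG — something like the following sketch would do; it assumes Pillow is installed with OpenJPEG support for .jp2 input, and the filenames are just examples, not part of any agreed scheme:)
    from PIL import Image

    # Open the LoC JP2 (or TIFF) plate and re-save it as a quality-98 JPEG.
    src = Image.open("08885_1908-0015.jp2")
    src.convert("RGB").save("08885_1908-0015.jpg", "JPEG", quality=98)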
In that case, I will upload the JPG from the server (I see no difference), and I will rename the files currently on Commons.
Can I upload to Commons by specifying a URL (tile.loc.gov/...)? Nowakki (talk) 08:01, 19 December 2023 (UTC)Reply[reply]
@Nowakki: From the links in your test page, on [5], the JPEG (1627 × 1926 px) is much smaller than JP2 and TIFF (6,510 × 7,707 px, i.e. file uploaded by Fae: File:Sanborn Fire Insurance Map from Provo, Utah County, Utah. LOC sanborn08885 001-1.jpg). Yann (talk) 08:16, 19 December 2023 (UTC)Reply[reply]
These two URLs are selectable from the drop-down menu:
https://tile.loc.gov/image-services/iiif/service:gmd:gmd434m:g4344m:g4344pm:g088851888:08885_1888-0001/full/pct:25/0/default.jpg (300k)
https://tile.loc.gov/image-services/iiif/service:gmd:gmd434m:g4344m:g4344pm:g088851888:08885_1888-0001/full/pct:12.5/0/default.jpg (96k)
However, the server will also spit out a version with /pct:100/:
https://tile.loc.gov/image-services/iiif/service:gmd:gmd434m:g4344m:g4344pm:g088851888:08885_1888-0001/full/pct:100/0/default.jpg (3000k) Nowakki (talk) 08:30, 19 December 2023 (UTC)Reply[reply]
@Nowakki: I don't know how you got this URL, but yes, it is the same resolution as the TIFF and JP2. So fine. Yann (talk) 08:43, 19 December 2023 (UTC)Reply[reply]
When you select a JPG version in the dropdown, the server returns one of the first two URLs. The third one I got by hacking the number after the ":". Nowakki (talk) 08:55, 19 December 2023 (UTC)Reply[reply]
@Yann: Is there a way to change the rate limits for this job? Can you clear the way here, politically speaking? Nowakki (talk) 10:45, 22 December 2023 (UTC)Reply[reply]
@Nowakki: You may request Autopatrol at COM:RFR; having that should effectively remove the upload rate limit for you (currently 380 uploads per 72 minutes per this post).   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 12:15, 22 December 2023 (UTC)Reply[reply]
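(Pulling the pieces of this thread together, one plate's round trip might look roughly like the sketch below. It assumes Pywikibot is configured for Commons and that uploads by URL from tile.loc.gov are permitted for the account; if they are not, the file would have to be downloaded locally and passed via source_filename instead. The item path, target filename and wikitext are placeholders, not a worked-out naming scheme:)
    import pywikibot

    # Build the full-resolution IIIF URL (pct:100) for one plate, as discussed above.
    item = "service:gmd:gmd434m:g4344m:g4344pm:g088851888:08885_1888-0001"  # example plate
    url = f"https://tile.loc.gov/image-services/iiif/{item}/full/pct:100/0/default.jpg"

    site = pywikibot.Site("commons", "commons")
    target = pywikibot.FilePage(site, "File:Sanborn Fire Insurance Map example plate.jpg")
    # Upload directly from the LoC tile server; comment and text are placeholder wikitext.
    site.upload(target, source_url=url,
                comment="Sanborn map plate from the Library of Congress",
                text="== {{int:filedesc}} ==\n{{Information|description=...}}")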

December 19

Flags or insignia of non-state actors in conflicts

moved from Commons talk:Village pump - Jmabel ! talk 18:48, 19 December 2023 (UTC) Reply[reply]

What are the rules on Wikimedia Commons for flags or insignia of non-state actors in armed conflicts? Pretty much everywhere with banned symbols laws makes exceptions for news / history / educational use, so that isn't what I'm worried about. But what are the rules for copyright? This feels like a very strange problem. I presume news organizations show them on the basis of newsworthiness / fair use / "these guys are definitely not gonna sue us", but Wikipedia seems very strict on it.

  • They don't "tick any of the boxes" for public domain? Or do they? If so, which box(es) do I tick when I upload images?
  • Are there any good libraries or databases that are useful for finding useable images?
  • Do we need to redraw them? Hand-making a near-identical image seems like a strange exercise, but I'll try to do a few of the missing ones if that's what is needed. Working from news photographs of physical flags / banners / patches seems like the best way for it to count as my "own work"? Does that work?
  • Are there any weird rules about fonts? Can all fonts be used in images on Commons? The fonts I have access to most easily are Microsoft and Google Fonts. Is there a better option? (For anyone making suggestions, the things I want to add use mostly non-Latin scripts, I'd obviously only use languages I know well enough to type accurately, but I definitely lack the time and skill to do vector-graphic calligraphy from scratch.)

Irtapil (talk) 09:08, 19 December 2023 (UTC)Reply[reply]

@Irtapil: Hi, and welcome. Please see our overview at COM:FLAG. Also, it seems you are trying to make fair use of such flags or insignia. We don't allow Fair Use here.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 13:22, 19 December 2023 (UTC)Reply[reply]

END moved from Commons talk:Village pump - Jmabel ! talk 18:48, 19 December 2023 (UTC) Reply[reply]

@Irtapil: There are a few reasons why some flags (regardless of their origin) could be public domain:
Redrawing does not magically circumvent copyrights; see COM:DW. --HyperGaruda (talk) 20:18, 20 December 2023 (UTC)Reply[reply]

December 20

Request for opinion on copyright status

Hello.

Could you provide opinions on en:File:Margaret Rope's "Lumen Christi" (1923) - Shrewsbury Museum & Art Gallery 2016.jpg? The artist, en:Margaret Agnes Rope, has been dead for 70 years as of 8 December 2023. Therefore, I believe this file on WP can be restored to its full resolution and moved to Commons.

Are you of the same opinion as I am? Veverve (talk) 20:50, 20 December 2023 (UTC)Reply[reply]

It is always the end of the year that is used to determine whether a work is in the public domain. Therefore we need to wait twelve days, until it is 2024, to move the file to Commons. GPSLeo (talk) 21:23, 20 December 2023 (UTC)Reply[reply]
Right. @Veverve: Please see COM:UK.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 22:22, 20 December 2023 (UTC)Reply[reply]

December 21

Do stuffed animals…

… have copyright? 2804:14D:5C32:4673:2242:8AB:9108:F67B 02:47, 21 December 2023 (UTC)Reply[reply]

Yes, see COM:TOYS.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 03:09, 21 December 2023 (UTC)Reply[reply]

Renewal of lost bot flag

Apparently my bot (user:LA2-bot) has lost its bot flag, due to long inactivity. How do I apply for a renewal? I only see a form (Commons:Bots/Requests) for new bots requesting bot status. --LA2 (talk) 11:31, 21 December 2023 (UTC)Reply[reply]

Hmmm, or maybe it never had a bot flag? See Commons:Bots/Requests/LA2-bot. Okay, well then. --LA2 (talk) 11:56, 21 December 2023 (UTC)Reply[reply]
Yes, Special:UserRights/LA2-bot says it's never been made a member of the bots group. --bjh21 (talk) 13:06, 21 December 2023 (UTC)Reply[reply]

Help needed with Template:Philippines photographs taken on navbox

This template is not setting categories correctly. It should be adding a category with a name like "Photographs of the Philippines by date", but instead is adding "Photographs of the by date". Category:Philippines photographs taken on 2018-02-11 shows an example of this. I tried to track down the problem but couldn't find it. Could someone else try? Thanks. --Auntof6 (talk) 11:45, 21 December 2023 (UTC)Reply[reply]

@Auntof6: It looks like this has been discussed at Template talk:Country label/N. Joshbaumgartner said they had fixed this, presumably in Special:Diff/828310214. Maybe the fix didn't work. --bjh21 (talk) 13:21, 21 December 2023 (UTC)Reply[reply]
  Done @Auntof6 and Bjh21: This was a lot deeper than the issue discussed at Country label/N. The issue was actually with a previous edit to {{Country label with article}} which looked up the cat parameter, but had no fallback for when that parameter is not set. I added the fallback, so it should work now. Josh (talk) 14:44, 21 December 2023 (UTC)Reply[reply]
@Bjh21 @Joshbaumgartner: Thanks, it looks good now. -- Auntof6 (talk) 20:30, 21 December 2023 (UTC)Reply[reply]

Prompt template now available to record AI prompts

You are invited to join the discussion at Category talk:AI-generated images#Template:Prompt now available. {{u|Sdkb}}talk 16:15, 21 December 2023 (UTC)Reply[reply]

December 22

Incorrect PNG previews of SVG files

Last week I tried updating File:10000 edit ribbon.svg, File:25000 edit ribbon.svg, File:50000 edit ribbon.svg and File:100000 edit ribbon.svg to include a "lighting effect" on the ribbon (similar to that of File:Master Administrator 1C.svg), but the black and white checks on the ribbon now no longer appear on the PNG previews, even though they are visible when the browser renders the SVG code. The PNG previews all worked properly before I made the updates. I have tried this with MS Edge and Google Chrome, including purging the cache on wiki and clearing the cache in the browser settings. That has worked before when the PNG previews did not match recently uploaded versions, but it is not working now. I am unsure whether this is an issue local to my computer or if others are having the same problem. If this is an error with Wikimedia software, I request assistance in filing a Phabricator ticket as I am completely unfamiliar with how that works. — Jkudlick ⚓ (talk) 02:17, 22 December 2023 (UTC)Reply[reply]

@Jkudlick: Hi, and welcome. Please see Help:SVG.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 09:22, 22 December 2023 (UTC)Reply[reply]
@Jeff G.: I did not even notice there were errors in some of the path instructions. I have now bookmarked a couple of the validation pages. Thank you for pointing me in the right direction. — Jkudlick ⚓ (talk) 21:18, 22 December 2023 (UTC)Reply[reply]
@Jkudlick: You're welcome.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 10:38, 23 December 2023 (UTC)Reply[reply]
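(For anyone hitting the same problem, a quick-and-dirty local sanity check for suspicious path data is sketched below. It is no substitute for the validators linked from Help:SVG, only a rough first pass, and the filename is just an example of a local copy:)
    import xml.etree.ElementTree as ET

    # Characters that can legitimately appear in SVG path "d" attributes.
    ALLOWED = set("MmLlHhVvCcSsQqTtAaZz0123456789.,+-eE \t\r\n")

    tree = ET.parse("10000 edit ribbon.svg")  # local copy of the file
    for path in tree.iter("{http://www.w3.org/2000/svg}path"):
        unexpected = set(path.get("d", "")) - ALLOWED
        if unexpected:
            print("Suspicious characters in path data:", sorted(unexpected))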

Request translation for File:Baltic states territorial changes 1939-45 es.svg

File:Baltic states territorial changes 1939-45 es.svg has been completely reworked by me, with all text now stored as <text> elements, but I found that one piece of text is still untranslated. Who can translate it into Spanish? -- Great Brightstar (talk) 15:12, 22 December 2023 (UTC)Reply[reply]

@Great Brightstar: I'm not going to try editing the SVG, but the one portion I see that is not translated should be:
Territorio Klaipeda (Memel)
parte de Lituania 1923-1939
cedido a Alemania en marzo 1939
vuelto a la RSS de Lituania en 1945
If there is anything else, you'll have to point me at it specifically. - Jmabel ! talk 20:32, 22 December 2023 (UTC)Reply[reply]
  Done -- Great Brightstar (talk) 23:02, 22 December 2023 (UTC)Reply[reply]
Great Brightstar: I think devuelto is preferable to vuelto. Strakhov (talk) 18:20, 24 December 2023 (UTC)Reply[reply]
OK, done. If you feel something is wrong, you can revert it. -- Great Brightstar (talk) 13:32, 25 December 2023 (UTC)Reply[reply]

December 24

Deletion of Android 14's screenshot.

Hello, I do not really know where to put this, but I am going to put it here (pardon me, please). I noticed that the Android 14 screenshot was deleted, and the reason given was copyright. But this does not really make sense for the reason stated. The Google Launcher/Pixel Launcher for Android 14 on Google's phones **may** be protected by copyright, but the use case here is clearly fair use: it is showing an example of what an Android 14 home screen looks like. If it was really copyright-infringing, then why do we still have screenshots of the Windows 10 and 11 home screens, iOS, or macOS? All of them are operating systems that come with their copyrights/trademarks (I know the difference between the two: a trademark just prevents organizations/people from making similar-looking things, while copyright protects your rights in a specific piece of media). Plus, it is generally impossible to get a "stock" Android experience. All phones shipped by OEMs are loaded with their own flavor of Android (Google's Pixel Launcher, Samsung One UI, MIUI for Xiaomi), which are generally not open source, thus no screenshot is perfectly non-copyrighted. As for the previous versions of Android, they have always been showcased with a screenshot of the Pixel Launcher.

TL;DR

So what I want is for the image to be undeleted, because of fair use, and because all previous pages on Android have used Google's screenshots for the most part (since Android 7/N, which was 7 years ago):

  • Android 13: https://en.wikipedia.org/wiki/Android_13
  • Android 12: https://en.wikipedia.org/wiki/Android_12
  • Android 11: https://en.wikipedia.org/wiki/Android_11
  • etc.

Thanks-- (I am going to sleep now, I will probably check back in 8-ish hours depending on how lazy I am. BTW Happy Holidays!) Randomdudewithinternet (talk) 10:47, 24 December 2023 (UTC)Reply[reply]

Hi, please see our licensing policy at COM:L. Because our project emphasises the collection of images that are freely reusable, we can't accept images with a fair use rationale at Wikimedia Commons. Some Wikimedia projects (like English Wikipedia) accept fair use images but they must meet the terms of that project and be stored on that project. From Hill To Shore (talk) 11:48, 24 December 2023 (UTC)Reply[reply]
Hmm, I see. Thanks, but I have a question: what do I do now? Can I, for example, download the image and then upload it to the English Wikipedia so that the articles on Android have a screenshot? I am still fairly new to this. I have also noticed a deletion request for the general screenshots of Android here.
Thanks Randomdudewithinternet (talk) 21:16, 24 December 2023 (UTC)Reply[reply]
The key problem seems to be the complexity of the background. See Commons:Deletion requests/Android screenshots. If you add screenshots to Commons with as simple a background as possible, they can probably remain. From Hill To Shore (talk) 23:25, 24 December 2023 (UTC)Reply[reply]
Thanks. I have an S22 Ultra with Android 14, but the biggest issue is that Samsung's Android bears almost no resemblance, in terms of UI, to the Pixel Launcher, so I am probably not going to do anything there; there is probably nothing further I can do about it. Anyway, merry Christmas/happy holidays! Randomdudewithinternet (talk) 00:39, 25 December 2023 (UTC)Reply[reply]

staff situation.

I asked twice (21 December and 23 December) for permission to rename files.

How long do those people need to push a button? Nowakki (talk) 11:07, 24 December 2023 (UTC)Reply[reply]

We are volunteers, not staff, and you are showing a poor attitude by demanding that your requests take priority over others'. It is also a holiday period in many countries, with many people focused on their families rather than online activities - you can't expect normal response times during the festive period. Finally, if you provide a link here to the pages you want to rename, an administrator or page mover may choose to take a look at them for you. From Hill To Shore (talk) 11:40, 24 December 2023 (UTC)Reply[reply]
Nah, I think that would be unfair to other people in the queue. Nowakki (talk) 12:13, 24 December 2023 (UTC)Reply[reply]
@Nowakki: While you wait for responses on COM:RFR, please use {{Rename}} or our RenameLink gadget so that reviewing Admins may see which files you want renamed, why you want that, and what file renamers think of your rename requests.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 18:21, 24 December 2023 (UTC)Reply[reply]

Searching for unreviewed licenses

Hi, the website camptocamp.org has many free images, and many were uploaded to Commons. However, only a few licenses were reviewed. How can I search for images from that source whose licenses are unreviewed? BTW, if anyone wants to mass-upload more, they are welcome. Most of the images there are very interesting for Commons. Thanks, Yann (talk) 12:32, 24 December 2023 (UTC)Reply[reply]

Try searching for camptocamp -"was reviewed"? --HyperGaruda (talk) 07:14, 27 December 2023 (UTC)Reply[reply]
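(If the on-wiki search above doesn't narrow things down enough, the same idea can be run through the API, as in the untested sketch below; the template name "LicenseReview" is an assumption that should be verified against the actual license-review template before relying on the results:)
    import requests

    API = "https://commons.wikimedia.org/w/api.php"
    params = {
        "action": "query",
        "list": "search",
        "srnamespace": 6,  # File: namespace
        "srsearch": 'insource:"camptocamp.org" -hastemplate:"LicenseReview"',
        "srlimit": 50,
        "format": "json",
    }
    # Print titles of files sourced from camptocamp.org without a license-review template.
    for hit in requests.get(API, params=params).json()["query"]["search"]:
        print(hit["title"])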

File:Israel's Genocidal Assault on the Gaza Ghetto (53289186330).jpg

Hi, I see two issues with this file:

  • Overprocessing with AI? No information about that, but it should be mentioned if AI was used to create this.
  • Overlong description with external links. This should be trimmed to a reasonable size.

Of course, a free picture of a demonstration is in scope for Commons. Yann (talk) 16:49, 24 December 2023 (UTC)Reply[reply]

Discussion should probably continue at Commons:Deletion requests/File:Israel's Genocidal Assault on the Gaza Ghetto (53289186330).jpg, which I started before reading this (though several hours after Yann posted here). - Jmabel ! talk 00:46, 25 December 2023 (UTC)Reply[reply]
As I said above, I didn't mean that the file should be deleted. Yann (talk) 08:57, 25 December 2023 (UTC)Reply[reply]

December 25

Google & Commons

Does anyone understand why Google search results would be referring to this site as "Wikipedia Commons" and whether there is anything we can do about that? I got that in the second search result of [6]. Your mileage may vary. - Jmabel ! talk 01:44, 25 December 2023 (UTC)Reply[reply]

Hi, The Wikimedia Foundation should be informed. They probably have a direct contact with Google, which would help fix this issue. Yann (talk) 08:59, 25 December 2023 (UTC)Reply[reply]
Wikisource and Wikidata also show up as Wikipedia, so it seems to be a more general problem. --Rosenzweig τ 16:43, 25 December 2023 (UTC)Reply[reply]
I think the relevant task for this bug is phab:T348203. Sam Wilson 09:40, 27 December 2023 (UTC)Reply[reply]

Category renaming (move)

Nearly a year ago, Delta Air Lines re-purchased the naming rights for the main arena in Salt Lake City, Utah (most recently known as the Vivint Smart Home Arena), effective July 1, 2023. Accordingly, since this appears to be a "non-controversial name change", an attempt was made by this editor to move the former category to the category reflecting the current name (Delta Center). However, the target category already existed, as this was the original name of the arena (1991), so the category move is not allowed. Therefore, in September of this year this editor added the Move template to the Vivint Smart Home Arena category, requesting administrative approval of said move. Understanding that there is a backlog of move requests, rapid approval of said request was not expected. Notwithstanding, with the requested move still not having been approved, this editor is wondering if the correct process has been followed to get said category renamed. An Errant Knight (talk) 13:49, 25 December 2023 (UTC)Reply[reply]

@An Errant Knight: I tagged it {{SD|G6}} to put it back in this edit.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 14:03, 25 December 2023 (UTC)Reply[reply]
Suppose that is one way to make it work. Thanks for the assistance. An Errant Knight (talk) 19:38, 25 December 2023 (UTC)Reply[reply]
@An Errant Knight: You're welcome. Johnj1995 modified it in this later edit.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 19:47, 25 December 2023 (UTC)Reply[reply]
Deletion done. Someone else can take it from there. - Jmabel ! talk 21:22, 25 December 2023 (UTC)Reply[reply]
@An Errant Knight and Jmabel: Cat moved and cleaned; members moved; Wikidata connection is not working yet. :(   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 21:36, 25 December 2023 (UTC)Reply[reply]
@Mike Peel: Please help.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 21:45, 25 December 2023 (UTC)Reply[reply]
@Jeff G. and An Errant Knight: I think Pi bot automatically fixed this - the trick is to use the sitelinks (under 'Multilingual sites'), not Commons category (P373). Thanks. Mike Peel (talk) 06:47, 26 December 2023 (UTC)Reply[reply]
@Mike Peel: Thanks!   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 07:06, 26 December 2023 (UTC)Reply[reply]
Thanks all for the assistance! An Errant Knight (talk) 16:00, 26 December 2023 (UTC)Reply[reply]
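(For anyone who runs into the same dangling connection after a category move, the "use the sitelink" advice above translates roughly into the Pywikibot sketch below; the Q-id is a placeholder rather than the arena's actual item, and the sitelink dict form should be checked against the current Pywikibot documentation:)
    import pywikibot

    repo = pywikibot.Site("wikidata", "wikidata").data_repository()
    item = pywikibot.ItemPage(repo, "Q123456")  # placeholder item for the arena
    # Point the Commons sitelink (under "Multilingual sites") at the renamed category,
    # rather than relying only on Commons category (P373).
    item.setSitelink({"site": "commonswiki", "title": "Category:Delta Center"},
                     summary="Update Commons sitelink after category rename")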

What's the name of this gesture?

 

Hi,

I've searched on Google for about an hour and asked ChatGPT about the gesture at the right, but I found nothing. Does someone know what this gesture is called? - Simon Villeneuve 21:57, 25 December 2023 (UTC)Reply[reply]

@Simon Villeneuve: Wide goalposts for paper football? Regular goalposts involve outstretched index fingers, rather than pinkies.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 22:03, 25 December 2023 (UTC)Reply[reply]
I think you are speaking about the gesture done vertically. Usually, in my culture, we use this gesture horizontally, between men, to talk about a pretty woman's hips or bottom that could "enter" the zone delimited by the little fingers.
I know it's kind of horny, but I think it should be documented. - Simon Villeneuve 22:07, 25 December 2023 (UTC)Reply[reply]
I asked two old friends the question yesterday. They laughed nervously. They didn't know the name of this either.
So, here, either nobody knows the name of this (we don't know if the gesture is known outside our province), or nobody wants to discuss it seriously. - Simon Villeneuve 08:15, 27 December 2023 (UTC)Reply[reply]

December 26

Senkaku Copernicus Photo Sentinel-2A Photo

Hi, this satellite photo of the Senkaku Islands was taken by Copernicus Sentinel-2A on October 10, 2019. There are many Sentinel-2 photos on Commons. Can it be used on Commons with the licensing "Attribution: Contains modified Copernicus Sentinel data 2019"? -Artanisen (talk) 01:06, 26 December 2023 (UTC)Reply[reply]

Artanisen, you can use the template {{Attribution-Copernicus |year=2019}}. Huntster (t @ c) 01:34, 26 December 2023 (UTC)Reply[reply]
Huntster, what about this satellite photo on the same page? Details show it was probably taken in 2012 (edited in 2013), which is before Sentinel-1A (launched in 2014), so the photo could have been taken by Jason-1, but there are no such satellite photos on Commons AFAIK. -Artanisen (talk) 00:52, 27 December 2023 (UTC)Reply[reply]

Close request for category discussion

Hi, could someone close the discussion 'Why are Chronic fatigue syndrome and myalgic encephalomyelitis separate categories?' on Category_talk:Chronic_fatigue_syndrome and remove the tags from Category:Chronic_fatigue_syndrome and Category:Myalgic encephalomyelitis? Already well over a year ago both participants concluded that it had run its course. Cheers, Guido den Broeder (talk) 23:17, 26 December 2023 (UTC)Reply[reply]

Convenience link to CfD: Commons:Categories for discussion/2022/09/Category:Chronic fatigue syndrome. --HyperGaruda (talk) 07:07, 27 December 2023 (UTC)Reply[reply]

December 27

Problems with Kit body universitario23e.png

Hello, I had uploaded an image that is part of a football kit. The problem is that when I go to the description page, it says that nothing has been uploaded, but when I try to upload it again, it does not let me, indicating that it has already been uploaded. If this could be looked at as soon as possible, I would appreciate it. IBryanDP (talk) 00:10, 27 December 2023 (UTC)Reply[reply]

Images in Category:Wyman-Gordon, Houstoun

I noticed that both images in Category:Wyman-Gordon, Houstoun have "Houston", not "Houstoun" in their filenames and descriptions, and that the company in question is now headquartered in Houston, Texas. Nonetheless, since the images came in through geograph.org.uk, they probably are really from Houstoun Industrial Estate in West Lothian, not from Houston, Texas.

I'm inclined to change the filenames and descriptions, but the coincidence of names is so close that I figured I'd check here first in case I'm just plain wrong. User:Rodhullandemu who did the categorizing is banned, so I can't ask him. - Jmabel ! talk 03:26, 27 December 2023 (UTC)Reply[reply]

Unlike many millions of files on the site for which proper categorization is difficult if not impossible, these are well-described and geotagged. I don't see the problem. RadioKAOS (talk) 06:36, 27 December 2023 (UTC)Reply[reply]
The file names and descriptions say the place is called "Houston". The category says "Houstoun". One of them has to be wrong. I believe it is the former, but I have no one to check with (a bot vs. a banned user). - Jmabel ! talk 07:49, 27 December 2023 (UTC)Reply[reply]
Wyman-Gordon Livingston. The building in this image can be seen here on Google Street View. Anyway, it's located on "Houstoun Road", so that's probably the correct spelling. Ironically, though, there's also a Wyman-Gordon Houston located in Houston, Texas, which would explain the confusion, but the pictures seem to be of the facility in Scotland. --Adamant1 (talk) 10:17, 27 December 2023 (UTC)Reply[reply]