
Forum: Search for duplicate files
Note: This topic has been unedited for 4980 days. It is considered archived; the discussion is over and the information in this thread may be out of date. Do not add to it unless it really needs a response.

Hello, I'm just wondering if there's a way to list all of the duplicate files on a wiki. I know there's Special:FileDuplicateSearch, but that only looks at one file at a time. Is there a special page (or an extension, or anything) that will churn out a list of all of the duplicates? Thanks, Cook Me Plox

The API: /api.php?action=query&generator=allimages&prop=duplicatefiles --Pcj (TC) 02:50, July 1, 2010 (UTC)
Sorry, but I'm not really sure what to do with that. I got this, but I don't know how to use it. Cook Me Plox 06:56, July 1, 2010 (UTC)

You take the URL [1] and see the gaifrom="19. Poneytail spikey.png" at the top? You keep adding that to the URL to get to the next page. Remember that you have to URL-encode certain things (Google it for a table to do it by hand), and spaces turn into underscores. So that first one would become [2], and so on and so on. If there are any dupes, you will see them. --Uberfuzzy 07:27, July 1, 2010 (UTC)
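For illustration, a minimal sketch of building that continuation URL by hand; the base path matches the API call above, and the file name is only the example quoted in this reply, not a value to hard-code:

// Take the gaifrom value reported by the previous page of results,
// turn spaces into underscores, and URL-encode it before appending it.
var base = "/api.php?action=query&generator=allimages&prop=duplicatefiles";
var gaifrom = "19. Poneytail spikey.png";  // from the previous response
var nextUrl = base + "&gaifrom=" + encodeURIComponent(gaifrom.replace(/ /g, "_"));
// nextUrl ends in "&gaifrom=19._Poneytail_spikey.png"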

Thanks for that. But I'm still not seeing where the dupes are. Is it in the pageid? Sorry I'm not seeing this. Cook Me Plox 20:13, July 1, 2010 (UTC)
Probably a better URL with an example is http://runescape.wikia.com/api.php?action=query&generator=allimages&prop=duplicatefiles&gailimit=500 where at least for me it shows up in the first result (notably there are some oddities in the file names of the first few that might merit further attention). --Pcj (TC) 20:26, July 1, 2010 (UTC)
Okay, I see the first duplicate file among all the other things. But how do I get to the next page? When I add, for instance, gaifrom="(Swamp) Snake hide.png", it doesn't start with that one. I'm rather confused. Sorry I'm not grasping this :/ Cook Me Plox 20:49, July 1, 2010 (UTC)
The continuation URL for the previous one is http://runescape.wikia.com/api.php?action=query&generator=allimages&prop=duplicatefiles&gailimit=500&gaifrom=Acorn(oak)%20Tree%20Seed.PNG. You really should probably contact Wikia about the first few entries on there, as there is some oddity going on with images that have double colons between them and their namespace, as well as other similar weirdness. Also see http://runescape.wikia.com/api.php?action=query&generator=allimages&prop=duplicatefiles&gailimit=500&dflimit=500 to show more duplicate images for each image. --Pcj (TC) 21:06, July 1, 2010 (UTC)
EDIT: The first few duplicate files (especially those without extensions) appear to be "uploaded videos", which are just links to other sites. I would say this is still a bug and should still be reported, but it doesn't seem to be exclusive to your site; it occurs on WoWWiki too. --Pcj (TC) 21:11, July 1, 2010 (UTC)

Is there any way to automate this process to only output duplicated files? Duskey(talk) 19:03, August 25, 2010 (UTC)

Not really; you could use a regular expression to eliminate the non-duplicated entries. --Pcj (TC) 19:06, August 25, 2010 (UTC)
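As a hedged sketch of that filtering idea (done here on the parsed JSON rather than with a literal regular expression, which amounts to the same thing), you could keep only the pages that actually carry a duplicatefiles array:

// Fetch one batch and print only the titles that have at least one duplicate.
$.getJSON("/api.php?action=query&generator=allimages&prop=duplicatefiles&gailimit=500&format=json", function (data) {
    var pages = (data.query || {}).pages || {};
    var dupes = [];
    for (var id in pages) {
        if (pages[id].duplicatefiles) dupes.push(pages[id].title);
    }
    console.log(dupes);  // illustrative output only, e.g. a list of "File:..." titles
});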
It might be simplest to use a Google search for the duplicate file notice in your files. --◄mendel► 20:37, August 25, 2010 (UTC)

I have written some JavaScript to list these for you (by AJAX). First, put this code in your Special:Mypage/monaco.js (or Special:Mypage/global.js):

// Titles already listed as someone else's duplicate, so each file appears only once.
var dil = [];
function findDupImages(gf) {
    var output = "";
    var url = "/api.php?action=query&generator=allimages&prop=duplicatefiles&gailimit=500&format=json";
    if (gf) url += "&gaifrom=" + encodeURIComponent(gf);
    $.getJSON(url, function (data) {
        if (data.query) {
            var pages = data.query.pages;
            for (var pageID in pages) {
                var dils = "," + dil.join();
                // Skip files already listed as a duplicate of an earlier file, the odd
                // "File::" double-colon entries, and files with no duplicates at all.
                if (dils.indexOf("," + pages[pageID].title) == -1 && pages[pageID].title.indexOf("File::") == -1 && pages[pageID].duplicatefiles) {
                    output += "<h3><a href='/" + pages[pageID].title + "'>" + pages[pageID].title + "</a></h3>\n<ul>\n";
                    for (var x = 0; x < pages[pageID].duplicatefiles.length; x++) {
                        output += "<li><a href='/File:" + pages[pageID].duplicatefiles[x].name + "'>File:" + pages[pageID].duplicatefiles[x].name + "</a></li>\n";
                        dil.push("File:" + pages[pageID].duplicatefiles[x].name.replace(/_/g, " "));
                    }
                    output += "</ul>\n\n";
                }
            }
            $("#mw-dupimages").append(output);
            // If the API reports more results, fetch the next batch after 5 seconds.
            if (data["query-continue"]) setTimeout("findDupImages('" + data["query-continue"].allimages.gaifrom + "');", 5000);
        }
    });
}
// Start automatically on any page that contains the #mw-dupimages placeholder.
$(function () { if ($("#mw-dupimages").length) findDupImages(); });

Then create a page with this content:

<div id="mw-dupimages"></div>

Then you can browse to that page and it will create a list of duplicate images for you (every 5 seconds it will add more until it exhausts the list). Please let me know if you have any questions. --Pcj (TC) 21:45, August 26, 2010 (UTC)

Just tested it out and it seems to work, thanks pcj. Duskey(talk) 13:42, August 27, 2010 (UTC)
What would the requirements be to have this function on a non-Wikia wiki? I'm an admin on the Official Team Fortress Wiki and we definitely need something like this so we can get all the dupes in one place. I followed the instructions but it did not work (which I'm assuming is due to our current setup). Any help/suggestions? surlyanduncouth (talk) 14:13, August 29, 2010 (UTC)
You'll need to install jQuery on your wiki and change some of the URLs. --Pcj (TC) 17:33, August 29, 2010 (UTC)
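For anyone porting this, a hedged sketch of the kind of URL changes meant here; the /w/ script path and /wiki/ article path are common stock-MediaWiki defaults, not something confirmed for the Team Fortress Wiki in this thread:

// On many non-Wikia MediaWiki installs the API lives under the script path:
var url = "/w/api.php?action=query&generator=allimages&prop=duplicatefiles&gailimit=500&format=json";
// ...and article links usually need the /wiki/ prefix when building the output, e.g.:
// output += "<li><a href='/wiki/File:" + name + "'>File:" + name + "</a></li>\n";
// jQuery must also be available; recent MediaWiki versions ship it, older ones may not.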
Ah, that's probably not possible. Thanks anyway! surlyanduncouth (talk) 14:57, August 30, 2010 (UTC)
It is possible if you can edit the wiki's JS. --Pcj (TC) 15:12, August 30, 2010 (UTC)