Board Thread:General Discussion/@comment-37380939-20200810025747/@comment-9605025-20200810092017

reply to #13 Okay, fine. I went back and found one of the previous discussions about wikitext. It is this thread. More specifically:

Andrewds1021 wrote: It is probably a security risk, inefficient, or something, but api.php does allow for parsing of any text. I can't give an actual link because the wikitext will get parsed, but you can copy-paste the following URL and see for yourself.

https://community.fandom.com/api.php?action=parse&prop=text&text=
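For example, with the parameters filled in (format=json is my addition, and the sample text is just an illustration), a complete request would look something like this:

```
https://community.fandom.com/api.php?action=parse&prop=text&format=json&text=%27%27%27bold%27%27%27%20text
```

The rendered HTML comes back in the JSON response under parse.text["*"].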

Andrewds1021 wrote: [...]

All that said, it still seems to me that getting wikitext into Discussions would not be all that difficult. I have no knowledge of CORS and all that other security stuff, so maybe there is an issue there. However, when it comes to being able to parse wikitext, I don't see why it couldn't be something as simple as a JS script that runs on page load. It isn't difficult to send api.php wikitext and get back the HTML. Perhaps it would be a server load issue?
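For concreteness, here is a minimal sketch of what the core of such a script could look like. The parseWikitext name is mine, format=json is assumed, and fetch is used for brevity; this is an illustration, not anything Fandom actually ships:

```javascript
// Sketch: send wikitext to api.php and get back the rendered HTML.
// Assumes format=json; the HTML is returned under parse.text["*"].
function parseWikitext(wikitext) {
    var url = '/api.php?action=parse&prop=text&format=json&text=' +
        encodeURIComponent(wikitext);
    return fetch(url)
        .then(function (response) { return response.json(); })
        .then(function (data) { return data.parse.text['*']; });
}

// Usage: parseWikitext("'''bold''' text").then(function (html) { /* insert html */ });
```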

Noreplyz wrote: [...]

Yea, from my perspective, it doesn't scale well when you have to parse content through an API, especially as Discussions isn't built in PHP. The other point is that Discussions was intentionally built to not require MediaWiki - not having to rely on a parser in MW is beneficial for something like an upgrade from 1.19 to the current one.

[...]

Andrewds1021 wrote: I don't know if this will change your opinion, but I figured I should still clarify. I am not thinking of having Discussions pass every post through the API. Given how long discussions can get, I certainly understand how that would be a nightmare. I am thinking of having a special block that users can insert if they actually need wikitext, sort of like how you can insert preformatted text. Then you would just need to send the contents of those special blocks to the API. As far as I am aware, HTML is HTML regardless of how it is generated, so I don't quite understand your point about Discussions not using PHP.
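To sketch that idea (the wds-wikitext-block class is invented purely for illustration - real markup would be whatever Discussions chose to emit for such a block - and parseWikitext is the helper sketched earlier):

```javascript
// Sketch: after a post renders, find its special wikitext blocks and swap
// each block's raw wikitext for the parsed HTML returned by api.php.
function renderWikitextBlocks(postElement) {
    // Hypothetical class name marking the special blocks.
    var blocks = postElement.querySelectorAll('.wds-wikitext-block');
    Array.prototype.forEach.call(blocks, function (block) {
        parseWikitext(block.textContent).then(function (html) {
            block.innerHTML = html;
        });
    });
}
```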

[...]

Noreplyz wrote: [...]

Having blocks of wikitext would be an interesting idea... though something tells me some people will just have their whole post be wikitext :P I mentioned PHP because I suspected that the current versions of MW have optimisations that make parsing wikitext super quick. Perhaps they store the content in both wikitext and HTML? This might not transfer well into the structure that's being used in Discussions, and because Discussions is in JS, those optimisations that have been developed by the MW dev teams for years in PHP probably aren't going to translate quickly into a new platform.

[...]

Andrewds1021 wrote: All fair points. However, just because it isn't optimal doesn't mean it isn't possible/reasonable. The impression that has been conveyed in past exchanges is that it is not possible/reasonable. If one wants to make the argument that, for development purposes, anything less than optimal should be treated as impossible, then fine. I disagree with that decision, but at least it clarifies the point of disagreement.

[...]

Prior to that exchange, I had written this script as a test to see if wikitext could indeed be parsed via the API and inserted into HTML. No surprise, the answer is yes. Regarding the potential issues not addressed in the original discussion: The test script I wrote didn't have a rate limit, as I wasn't aware of that concept at the time. However, it could easily be rewritten to have a rate limit/serialize the requests (a rough sketch follows the list below). Since then, I have also learned more about JS's built-in request objects/functions. As such, I am confident it could be rewritten in a form that works with vanilla ES5. In other words, it would require just the core JS supported by browsers as old as IE11; no additional libraries. There would, however, also need to be a slight modification to accommodate non-English wikis, but that isn't too hard either.
 * Security - I assume parsing via API follows the same rules and restrictions as parsing a page. So there shouldn't be any difference between the security of parsing a snippet from Feeds and parsing a wiki page.
 * CORS - The API is on the same domain, so cross-origin requests (and therefore CORS) shouldn't come into play.
 * Server Load - You can institute a rate limit, which doesn't remove the problem but should provide some level of relief. Additionally, the rate limit would mean that heavy use of the special block would slow the loading of content - a natural feedback loop discouraging unnecessary use of wikitext.
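Here is a rough sketch of the serialized, rate-limited rewrite mentioned above. It sticks to ES5 (XMLHttpRequest instead of fetch) so it should run on IE11; the queue, the one-request-per-second interval, and the language-path heuristic for non-English wikis are all my assumptions:

```javascript
// Sketch: an ES5-only (IE11-compatible) queue that serializes parse requests
// and spaces them out to avoid hammering the server.
var parseQueue = [];
var queueBusy = false;
var MIN_INTERVAL_MS = 1000; // assumed rate limit: at most one request per second

// Non-English Fandom wikis serve content under a language prefix (e.g. /es/),
// so keep the first path segment if it looks like a language code.
// (A heuristic, for illustration only.)
function apiPath() {
    var seg = window.location.pathname.split('/')[1];
    return /^[a-z]{2}(-[a-z]+)?$/.test(seg) ? '/' + seg + '/api.php' : '/api.php';
}

function enqueueParse(wikitext, callback) {
    parseQueue.push({ text: wikitext, cb: callback });
    if (!queueBusy) { drainQueue(); }
}

function drainQueue() {
    if (parseQueue.length === 0) { queueBusy = false; return; }
    queueBusy = true;
    var job = parseQueue.shift();
    var xhr = new XMLHttpRequest();
    xhr.open('GET', apiPath() + '?action=parse&prop=text&format=json&text=' +
        encodeURIComponent(job.text));
    xhr.onload = function () {
        var html = '';
        try {
            html = JSON.parse(xhr.responseText).parse.text['*'];
        } catch (e) { /* leave html empty on a malformed response */ }
        job.cb(html);
        setTimeout(drainQueue, MIN_INTERVAL_MS); // wait out the interval first
    };
    xhr.onerror = function () { setTimeout(drainQueue, MIN_INTERVAL_MS); };
    xhr.send();
}
```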

So there is an actual script they can look at, test, and critique if they want to. I am not sure what is more actionable than that.

reply to #16 The interesting thing about the editor is that we actually just narrowly missed being able to keep it. The classic editor has been maintained for quite a while. It was updated to work with MediaWiki 1.32. UCP uses MediaWiki 1.33.