Just made my first edits on National Library of Norway's local history wiki.
Let's see if they last longer than my first edits on Wikipedia.
#Wiki #MediaWiki #LocalKnowledge #LocalHistory #History #Norsk #Norwegian #Oslo
#ClimateKG project is developing an Entity Relation Model to represent the syntactic structure of a scientific document in a knowledge graph.
We are reaching out to ask for recommendations of projects, literature, methods, etc.
The ClimateKG will hold #IPCC #AR6 corpus of 7 reports for browsing, republishing, enrichment.
Knowledge graph uses #Wikibase / #MediaWiki
Write up: https://github.com/TIBHannover/climate-knowledge-graph/issues/51
IPCC AR6 corpus: https://doi.org/10.5281/zenodo.17516065
ClimateKG project: https://tibhannover.github.io/climate-knowledge-graph
#ClimateKG - the #IPCC #AR6 reports are a pretty big beast. Authored over 7 years, they are the ultimate #statusupdate of planet Earth. Quant data: https://doi.org/10.5281/zenodo.17521936 - Reports 7, Pages 10,047, Words 8,047,000, Citations 48,400, Data 66,834, Figures 1,672, Authors 1,106, Glossary 925, Acronyms 3,041, Lang 5+. The entity relationship model is needed to map the parts of the reports and their relations, and then to allow markup of entities, concepts, etc.
#ClimateKG will be used for the #IPCC #AR6 corpus of 7 main reports totalling about 10,000 pages, for corpus browsing, republishing, and community enrichment. The constructed knowledge graph in Wikibase/MediaWiki allows browsing via the familiar #MediaWiki interface with #Wikidata enhancements like infoboxes and #scholia-like interfaces https://scholia.toolforge.org/ | Republishing is intended for sharing or reviewing search results on climate topics | Enrichment lets data scientists make use of the reports >>>
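The document-structure model described above can be sketched as plain subject-predicate-object triples; the entity IDs and property names below are invented for illustration and are not ClimateKG's actual schema:

```python
# Minimal sketch: report structure as triples (IDs and predicates are
# illustrative, not the project's real entity-relation model).
triples = [
    ("AR6_WG1", "instance_of", "report"),
    ("AR6_WG1_ch02", "part_of", "AR6_WG1"),
    ("AR6_WG1_ch02", "instance_of", "chapter"),
    ("AR6_WG1_ch02_fig01", "part_of", "AR6_WG1_ch02"),
    ("AR6_WG1_ch02_fig01", "instance_of", "figure"),
]

def parts_of(container, triples):
    """Direct structural children of a container entity."""
    return [s for s, p, o in triples if p == "part_of" and o == container]

print(parts_of("AR6_WG1", triples))  # → ['AR6_WG1_ch02']
```

The same triples load straight into Wikibase as items and statements, which is what makes the familiar MediaWiki browsing interface possible on top of them.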
#MUDCon2025 day 3 continuing with different possibilities to customize MediaWiki instances for various use cases. @tibhannover colleague Alexander Gesinn presents on the visual potential of different graph visualization approaches (e.g. the KnowledgeGraph extension, and SRF Graph based on Graphviz) for #semanticmediawiki.
📽️ Follow the livestream: https://www.youtube.com/watch?v=jCjaMnxgq2Q
Wrapping up Day 2 of #MUDCon2025, Tom Arrow from Wikimedia Deutschland e. V. delivers the second keynote of the conference, giving updates on the latest state of Wikibase, Wikidata and federation developments at WMDE (coinciding with the 13th birthday of #Wikidata, too!). Delving into the nitty-gritty of federation complexity, Tom provides some examples of what he describes as ontology vs. query federation. Pointing to some issues that were also mentioned in the earlier presentation by OSL's Lucia Sohmen (e.g. the need for data type support of concept URIs from external semantic resources), Tom showcases the feature development planning for "federated values" in the future.
🎂 Follow live: https://www.youtube.com/watch?v=WjWzELDLmkc
More from #MUDCon2025 on the topics of AI applications & #MediaWiki development: Jeffrey Wang presented an excellent experimental analysis of the current state of 'vibe coding' when developing a sample MediaWiki extension. His proposal: switch to a spec-driven approach and stay open to further developments in the future.
📽️ Catch up on the full talk, including demo, via the livestream: https://www.youtube.com/watch?v=WjWzELDLmkc
And we're officially announcing the #ECHOLOT project publicly for the first time at the #MUDCon2025 with @krabina from KMA, one of our partners in this new HORIZON Europe collaborative project. Bernhard provided some history and context to the project idea, and we outlined how we are reimagining the #MediaWiki software for the future of connected cultural heritage data together with a dozen other EU organisations. More details on the project will be published soon on the @tibhannover website.
In the meantime, stay tuned and follow along the MUDCon: https://www.youtube.com/watch?v=WjWzELDLmkc
📣 New security and maintenance releases of supported
#MediaWiki branches are out!
MediaWiki 1.39.15, 1.43.5 and 1.44.2 have now been released.
#MUDCon2025 is continuing today with various use cases for #MediaWiki software incorporating semantics – from open government data projects (@krabina) to scientific data from electronic lab notebooks (Thomas Gruber – Helmholtz-Zentrum Dresden-Rossendorf (HZDR)) and various interconnected humanities data (Florian Thiery – LEIZA), including a mention of @nfdi4culture & #SemanticKompakkt.
📽️ Follow the livestream for more throughout today and tomorrow: https://www.youtube.com/watch?v=WjWzELDLmkc
@nullvoxpopuli Yes!
Wikipedia has output style tags in the body of most articles for a decade without issue.
* https://en.wikipedia.org/wiki/Universe — 18 style tags.
* https://en.wikipedia.org/wiki/Banana — 16 style tags.
To put that in context:
* We spent time *this* year investigating an issue with Firefox 52 on WinXP. If there's an edge case, we hear about it, and a volunteer or staff member may prioritise it.
* Our pages have some of the best performance among top sites. https://www.mediawiki.org/wiki/Wikimedia_Performance_Team#Milestones
The MediaWiki software gives editors the ability to use "templates".
The infobox you see aside many articles is such a template.
These are reusable macros (a bit like web components) that transclude another page, called with parameters. There you can use variables and conditionals, and you can embed a stylesheet.
We then output that stylesheet in the HTML before the first call to the template on a given page (deduplicated).
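As a minimal sketch of how that fits together (the template name, CSS page, and parameters are invented for illustration; the conditional needs the ParserFunctions extension and the stylesheet the TemplateStyles extension):

```wikitext
<templatestyles src="Template:Infobox book/styles.css" />
<div class="infobox-book">
  <div class="infobox-book-title">{{{title|Untitled}}}</div>
  {{#if: {{{author|}}} | <div class="infobox-book-author">{{{author}}}</div> }}
</div>
```

An article would then call it as `{{Infobox book|title=The Hobbit|author=J. R. R. Tolkien}}`, and the stylesheet is emitted as a `<style>` tag before the template's first use on that page.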
I have a problem, which is: My websites (a #Wordpress site and a #MediaWiki installation) are slow as hell.
So I need to identify the cause. The problem is that I don't know nearly as much about website administration as I ought to.
I contacted the support people at my website provider, who looked at my (Apache) logs and suggested that my Wordpress site might suffer from a "pingback xmlrpc attack". I did the proposed remedy, which made things a little better. But I don't know enough about reading website logs to identify such problems myself, which I ought to.
So what I am trying to say is: is there some kind of beginner's guide to reading website logs, identifying malicious traffic, and knowing what to do about it?
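Not a full guide, but the xmlrpc pattern the support people mentioned is easy to spot yourself. A minimal sketch, assuming an Apache combined-format log (the sample log lines here are made up; on a real server you would point the pipeline at your provider's access log):

```shell
# Minimal sketch: spot a WordPress xmlrpc.php pingback flood by counting
# requests per client IP. Sample data stands in for a real access log.
LOG=sample_access.log
printf '%s\n' \
  '203.0.113.9 - - [01/Jan/2025:00:00:01 +0000] "POST /xmlrpc.php HTTP/1.1" 200 370' \
  '203.0.113.9 - - [01/Jan/2025:00:00:02 +0000] "POST /xmlrpc.php HTTP/1.1" 200 370' \
  '198.51.100.4 - - [01/Jan/2025:00:00:03 +0000] "GET /index.php HTTP/1.1" 200 5120' \
  > "$LOG"
# In the combined format, field 7 is the request path; tally hits per
# source IP, busiest first. Prints one line: count then IP.
awk '$7 ~ /xmlrpc\.php/ {print $1}' "$LOG" | sort | uniq -c | sort -rn
```

A sudden spike of POSTs to xmlrpc.php from a handful of IPs is the attack signature; the usual remedies are blocking those IPs or disabling xmlrpc.php entirely if nothing you use needs it.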
A space for Bonfire maintainers and contributors to communicate