
The OpenAlex rewrite is live in beta 🎉 Meet Walden: faster, bigger, cleaner. Over 150M new works and sharper metadata. Try the beta API now, or compare Classic and Walden in OREO. Seminar Oct 7.
#OpenAlex #ResearchInfrastructure #Metadata
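For anyone who wants to poke at it programmatically, here is a minimal sketch against the public OpenAlex works endpoint using Python's requests library. The Walden beta may expose a different base URL or extra parameters, so treat the URL and fields below as the long-standing Classic defaults rather than the beta contract.

```python
# Minimal sketch: search the public OpenAlex works endpoint.
# The Walden beta may differ; this uses the Classic API shape.
import requests

resp = requests.get(
    "https://api.openalex.org/works",
    params={"search": "open metadata", "per-page": 5},
    timeout=30,
)
resp.raise_for_status()
for work in resp.json()["results"]:
    print(work["id"], "-", work.get("display_name"))
```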
Our team member @epoz presented "Now that we have Large Language Models, are metadata standards still necessary?" at the Autumn School 2025 ‘Modern Stained Glass – Metadata – AI’ at the University of Münster, Faculty of Catholic Theology.
https://zenodo.org/records/17151141
#llms #generativeAI #metadata #iconclass #AI #arthistory #dh #digitalhumanities #culturalheritage #elephant #chatgpt @fiz_karlsruhe @nfdi4culture @NFDI4Memory
Any #music fans got tips/shortcuts for rapidly fixing #metadata #tags across a bunch of albums to enforce consistency for artists like Belle & Sebastian vs Belle and Sebastian (or any other artist that could be written with an ampersand or an 'and')?
Ideally #linux solutions please, even better if there's a simple way to do it in #jellyfin.
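Not a Jellyfin-specific answer, but on Linux beets is the usual heavyweight option; if you just want to force a handful of spellings in place, here is a hedged sketch with mutagen (the alias table and MUSIC_ROOT path below are made-up examples, not your library):

```python
# Hedged sketch: bulk-normalise artist tags with mutagen (pip install mutagen).
from pathlib import Path

import mutagen

ALIASES = {
    "belle and sebastian": "Belle & Sebastian",  # canonical spelling you prefer
}
MUSIC_ROOT = Path("/srv/media/music")            # adjust to your Jellyfin library
EXTS = {".mp3", ".flac", ".ogg", ".m4a"}

for path in MUSIC_ROOT.rglob("*"):
    if path.suffix.lower() not in EXTS:
        continue
    audio = mutagen.File(path, easy=True)        # easy tags: 'artist', 'albumartist'
    if audio is None:
        continue
    changed = False
    for key in ("artist", "albumartist"):
        values = audio.get(key, [])
        fixed = [ALIASES.get(v.strip().lower(), v) for v in values]
        if fixed != values:
            audio[key] = fixed
            changed = True
    if changed:
        audio.save()
        print("fixed", path)
```

Jellyfin reads tags from the files during a library scan, so after rewriting you'd trigger a rescan (or refresh metadata for the affected artists) to see the merged entries.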
A Strategic Community #Roadmap for an #Australian #FAIR #Vocabulary Ecosystem
https://doi.org/10.25911/N6K8-F540
Three years ago, I participated in a very engaged workshop at #ANU on #vocabularies for FAIR #data management. It sharpened how I think about vocabularies. I now see them primarily as a #KnowledgeTransfer tool for representing domain expertise in an actionable form. And I think we do a terrible job both at highlighting how critical they are (particularly in an age where trusted expertise is harder to find) and also at making them easier for others to find and reuse.
I picture this scenario. A student is about to start collecting data for their thesis. They need to make choices about what variables to observe or what questions to ask participants, and they need to think about how they want to represent the results to support their analysis. In the ideal case, the actual data collecting effort is about populating an imagined but initially empty data matrix. If they could be assisted to find the best structured and most widely used (in their domain) vocabularies for any categorical values in their data, it would be possible to generate that template matrix with in-built validation tools, etc. The data they finally collect would have most of its metadata already defined and would be properly interoperable with data collected by others in their domain. Meta-analysis would be much simpler.
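To make the scenario concrete, here is a toy sketch of what such an assistant could emit, assuming invented vocabularies and column names purely for illustration (in practice the terms would come from published, domain-endorsed vocabularies with persistent identifiers):

```python
# Hedged sketch of the "pre-populated template matrix" idea: take a controlled
# vocabulary for each categorical variable and emit an empty CSV template plus
# a validator. The vocabulary terms below are invented placeholders.
import csv

VOCABULARIES = {
    "habitat_type": ["forest", "grassland", "wetland"],    # placeholder terms
    "collection_method": ["transect", "quadrat", "trap"],  # placeholder terms
}
FREE_COLUMNS = ["site_id", "date", "observer"]

def write_template(path: str) -> None:
    """Write an empty data matrix whose columns are fixed before collection starts."""
    with open(path, "w", newline="") as fh:
        csv.writer(fh).writerow(FREE_COLUMNS + list(VOCABULARIES))

def validate_row(row: dict) -> list[str]:
    """Return a list of problems: categorical cells must use vocabulary terms."""
    return [
        f"{col}={row[col]!r} not in vocabulary"
        for col, terms in VOCABULARIES.items()
        if row.get(col) and row[col] not in terms
    ]

write_template("thesis_data_template.csv")
print(validate_row({"site_id": "S1", "habitat_type": "swamp"}))
```

The point is not the code itself: it is that the column set and allowed values are fixed before data collection starts, so most of the metadata travels with the matrix from day one.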
I am interested in why tools like this don't really exist, or at least why they are not mainstream. I think it's because vocabularies are seen as such an ultra-nerdy subset of the nerdy topic of #metadata rather than presented as an opportunity to stand on the shoulders of others. What can be done to make them more friendly and intuitive for such purposes?
Finally, after way too many struggles, we have a report and recommendations from that meeting in 2022. I tried to add some of these ideas to the final product as best I could.
#WikiCite2025 has been running since yesterday in hybrid format. If you are interested in #Wikidata, #KnowledgeGraphs, #LinkedOpenData, #OpenKnowledge, #Commons, #Wikibase, #Metadata, #BiographicData, #Citation, #ScholarlyPublishing, #OpenScience, #SemanticWeb and much more, you can join online. The talks are streamed via #BigBlueButton:
TIL (from my accountant) that the #ATO uses cell tower data to validate vehicle logbook claims. Just in case those in #Australia needed any reminders about the promiscuous governmental #surveillance apparatus and inter-departmental free-for-all on citizen tracking #metadata.
Digital Rights Management (DRM) doesn’t work. Also: draft California law mulls mandatory DRM to preserve image provenance metadata, breaks Signal Messenger
https://alecmuffett.com/article/114666
#AiTransparency #ab853 #ai #california #cata #metadata #privacy #signal #tracking
Uhh, the #android #foss #app #opencamera lets one directly clear the #metadata / #exifdata of #pictures under the ① info #icon. Sweet ✅ 💡 Quick additional hint: alternatively, one could upload a picture into Signal to automatically remove the #metadata 💡 ✅
https://f-droid.org/en/packages/net.sourceforge.opencamera/
https://play.google.com/store/apps/details?id=net.sourceforge.opencamera
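For anyone doing the same thing on a desktop instead, `exiftool -all= photo.jpg` is the usual CLI, and here is a hedged Pillow sketch (filenames are placeholders) that drops embedded metadata by re-saving only the pixels:

```python
# Hedged sketch: strip EXIF/embedded metadata from a JPEG by copying only
# the pixel data into a fresh image with Pillow (pip install pillow).
from PIL import Image

with Image.open("photo.jpg") as img:
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save("photo_clean.jpg", quality=95)
```

Note that the re-encode is lossy for JPEG, which is why tools that rewrite only the metadata segments (like exiftool) are gentler on the image.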
We need to talk about #metadata and the "free market" ideology.
People say "metadata isn't real data", but it's more than that: it maps movements, moods, and relationships. That's why #governments and #dotcons want it.
If you think this doesn't matter, that we’re just individuals in a marketplace, not a society, congrats, you’ve joined the #deathcult.
We can change course. But only if we name the poison. Loudly #OMN
Why It Matters
If metadata is the new currency, then open metadata is the new commons. Capitalism runs on closed systems — the #OMN runs on shared knowledge and decentralized trust.
This isn’t about perfect tech. It’s about:
Human-scale trust
Community autonomy
Fast, messy, democratic distribution
Real-world resilience
It’s not utopia. It’s compost — and we’ve got the shovels.
#OMN #4opens #metadata #trustnetworks #openweb #decentralization #anarchism #DIYtech #postcapitalism #foss
The current media system is stacked against us. It’s rigged, exclusive, and hostile to alternatives.
That’s why we need the Open Media Network.
Because without an open, trust-based infrastructure for communication and coordination, we’re just shouting into silos.
Let’s build something different.
Let’s make history — together.
#OMN #4opens #metadata #trustnetworks #openweb #anarchism #postcapitalism #decentralization #DIYtech #commons #grassroots #foss #geekproblem #dotcons #KISS