Word2vec-style vector arithmetic on docs embeddings
https://technicalwriting.dev/embeddings/arithmetic/index.html
#HackerNews #Word2vec #VectorArithmetic #DocsEmbeddings #NLP #MachineLearning
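The linked post is about word2vec-style arithmetic (the classic king − man + woman ≈ queen pattern) applied to document embeddings. A minimal self-contained sketch of the idea, using toy compositional vectors as stand-ins for real embeddings (in practice you would obtain them from an embedding model such as sentence-transformers; the model name in the comment is illustrative, not taken from the post):

```python
import numpy as np

# Toy stand-ins for document embeddings. In practice these would come from an
# embedding model, e.g. sentence-transformers (https://sbert.net/):
#   emb = SentenceTransformer("all-MiniLM-L6-v2").encode(docs)
# (model name is illustrative, not taken from the post).
rng = np.random.default_rng(0)
concepts = {c: rng.normal(size=64)
            for c in ("install", "linux", "windows", "troubleshoot")}

def embed(*words):
    """Toy compositional embedding: sum the concept vectors, unit-normalize."""
    v = np.sum([concepts[w] for w in words], axis=0)
    return v / np.linalg.norm(v)

docs = {
    "linux-install-guide":   embed("install", "linux"),
    "windows-install-guide": embed("install", "windows"),
    "linux-troubleshooting": embed("troubleshoot", "linux"),
}

# Word2vec-style arithmetic on doc embeddings:
# linux-install-guide - "linux" + "windows" should land near the Windows guide.
query = docs["linux-install-guide"] - embed("linux") + embed("windows")
query /= np.linalg.norm(query)

# Nearest document by cosine similarity (unit vectors, so a dot product).
best = max(docs, key=lambda d: float(docs[d] @ query))
print(best)  # → windows-install-guide
```

Because the toy vectors are built compositionally, the arithmetic works exactly as the analogy suggests; with real embeddings the effect is noisier but often still usable for navigation across a docs corpus.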
🔗 Learn more:
• Official website → https://sbert.net/
• Original paper → https://aclanthology.org/D19-1410.pdf
• GitHub repository → https://github.com/UKPLab/sentence-transformers
📰 Read the full announcements:
TU Darmstadt Press Release
→ https://www.tu-darmstadt.de/universitaet/aktuelles_meldungen/einzelansicht_528832.de.jsp
Hugging Face Blog Post
→ https://huggingface.co/blog/sentence-transformers-joins-hf
(2/2)
#UKPLab #HuggingFace #SentenceTransformers #NLP #AIresearch #OpenSource 🚀
🚨 #NLP SHARED TASK 🚨
Use Mozilla Common Voice Spontaneous #Speech datasets to train #ASR #SpeechRecognition models that work for conversational speech on 21 under-represented languages.
📆 Dataset release 1 Dec
📆 Submissions 8 Dec
💰 US$11k prize pool!
Boosts appreciated ❤️
🤔 What is #NLP research 𝘳𝘦𝘢𝘭𝘭𝘺 about?
We analyzed 29k+ papers to find out! 📚🔍
📌 Our NLPContributions dataset, from the ACL Anthology, reveals what authors actually contribute—artifacts, insights, and more.
📈 Trends show a swing back towards language & society. Curious where you fit in?
🎁 Tools, data, and analysis await you:
📄 Paper: https://arxiv.org/abs/2409.19505
🌐Project: https://ukplab.github.io/acl25-nlp-contributions/
💻 Code: https://github.com/UKPLab/acl25-nlp-contributions
💾 Data: https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/4678
(1/🧵)
"Asking scientists to identify a paradigm shift, especially in real time, can be tricky. After all, truly ground-shifting updates in knowledge may take decades to unfold. But you don’t necessarily have to invoke the P-word to acknowledge that one field in particular — natural language processing, or NLP — has changed. A lot.
The goal of natural language processing is right there on the tin: making the unruliness of human language (the “natural” part) tractable by computers (the “processing” part). A blend of engineering and science that dates back to the 1940s, NLP gave Stephen Hawking a voice, Siri a brain and social media companies another way to target us with ads. It was also ground zero for the emergence of large language models — a technology that NLP helped to invent but whose explosive growth and transformative power still managed to take many people in the field entirely by surprise.
To put it another way: In 2019, Quanta reported on a then-groundbreaking NLP system called BERT without once using the phrase “large language model.” A mere five and a half years later, LLMs are everywhere, igniting discovery, disruption and debate in whatever scientific community they touch. But the one they touched first — for better, worse and everything in between — was natural language processing. What did that impact feel like to the people experiencing it firsthand?
Quanta interviewed 19 current and former NLP researchers to tell that story. From experts to students, tenured academics to startup founders, they describe a series of moments — dawning realizations, elated encounters and at least one “existential crisis” — that changed their world. And ours."
https://www.quantamagazine.org/when-chatgpt-broke-an-entire-field-an-oral-history-20250430/
I might as well do another #introduction specifically for the #academic side of this here fediverse:
Coming from #theoreticalCS (with applications in #NLP) to doing #digitalhumanities (computational #musicology), I've now landed in #ResponsibleAI. Specifically, I'm interested in exploring #AntiCapitalistAI: both sharpening existing critiques of current AI practice, by confronting capital and exploring the inherent politics of technologies, and finding better practices for a socialist world.