Are you at @everythingopen today?
Come join my tutorial after morning tea where I'll cover fine-tuning #Whisper #ASR models using @mozilladatacollective datasets.
TL;DR: I'm using WhisperIMEplus on my phone, and I think I'll finally join the 21st century with it.
https://github.com/woheller69/whisperIMEplus
I've refrained from using speech recognition on my Android from the beginning, because I didn't like the idea of my voice being used for anything other than my actual need, which is speech-to-text.
And on-device speech recognition was pretty niche for a while (I was interested in Mycroft and Snips at the time). Then Mozilla came along with Common Voice and DeepSpeech; unfortunately, DeepSpeech has been shut down (it seems), and its results are far behind OpenAI's Whisper model.
I'm clearly not an OpenAI fan (if you haven't figured that out yet, you will soon if you follow me), but Whisper seems to be the best thing to come out of them, mostly because it's far more open than anything else from OpenAI, which isn't open at all.
Anyway, I found a project called WhisperIMEplus that works as a keyboard on my Android and processes my voice locally, on-device. And the app has NO internet permission, so even if OpenAI had added some backdoor to their Whisper model to send data online, Android's app permissions wouldn't allow it.
I'm fine with all of this, so now, I can finally take notes by talking to my phone, in English and French, without having second thoughts about it.
It's good when technology helps you, instead of trying to screw you in different and sneaky ways.
#SpeechRecognition #android #privacy #SpeechToText #OpenAI #whisper #ai
A Beginner's Guide to Extracting Text from a Specific Portion of a YouTube Video on Linux
Step-by-Step tutorial: https://ostechnix.com/extract-text-from-specific-youtube-video-part-linux/
#Transcribe #Youtube #Ytdlp #Ffmpeg #Whisper #Commandline #Linux #Script
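The linked tutorial's pipeline (yt-dlp to download, ffmpeg to cut, Whisper to transcribe) can be sketched roughly like this. The exact flags in the article may differ; the timestamps, filenames, and model size here are placeholders of my own:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: extract text from a clip of a YouTube video.
# Assumes yt-dlp, ffmpeg, and OpenAI's whisper CLI are installed.
set -euo pipefail

extract_clip_text() {
  local url="$1" start="$2" end="$3"
  # 1. Download the best audio-only stream
  yt-dlp -f bestaudio -o audio.m4a "$url"
  # 2. Cut out the section of interest and resample to 16 kHz mono WAV,
  #    the input format Whisper works with
  ffmpeg -i audio.m4a -ss "$start" -to "$end" -ar 16000 -ac 1 clip.wav
  # 3. Transcribe the clip; writes clip.txt next to it
  whisper clip.wav --model base --output_format txt
}

if [ "$#" -eq 3 ]; then
  extract_clip_text "$@"
else
  echo "usage: $0 <url> <start hh:mm:ss> <end hh:mm:ss>"
fi
```

Run it as e.g. `./clip2text.sh 'https://youtube.com/watch?v=…' 00:01:30 00:02:10`.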
🗣️🎤📝
Speech to Text and Text to Speech on GNU/Linux
📝🔊💻
Why This Matters to Me (and Maybe You Too)
If you’re anything like me—a Linux user who depends on voice typing and TTS because of a visual impairment—you know that accessibility is not a luxury, it’s a necessity. Speaking from experience, the quest for a seamless, local, FLOSS speech-to-text (STT) setup on Linux can be frustrating.
Here’s how you can succeed with modern tools on Linux. FLOSS means freedom and privacy; working locally means real control.
Let’s dive in! I’ll tell you what I’ve learned and what I use—and hope you’ll share your favorite tools or tips!
System-Wide Voice Keyboard: Speak Directly in Any App
Want to speak and have your words typed wherever your cursor is—be it a terminal, browser, chat, or IDE? Here’s what actually works and how it feels day-to-day:
- Speak to AI (Offline, Whisper-based, global hotkeys)
This tool is my current go-to. It uses Whisper locally, lets you use global hotkeys (configurable) to type into any focused window, and doesn’t need internet. Runs smoothly on X11 and Wayland; just takes a bit of setup (AppImage available!).
GitHub Repo https://github.com/AshBuk/speak-to-ai | Dev.to Post https://dev.to/ashbuk/i-built-an-offline-voice-typing-app-for-linux-speak-to-ai-3ab5
- DIY: RealtimeSTT + PyAutoGUI
For the true tinkerers, RealtimeSTT plus a Python script lets you simulate keystrokes. You control every step, can lower latency with your tweaks, but you’ll need to be comfortable with scripting.
RealtimeSTT Guide https://github.com/KoljaB/RealtimeSTT#readme
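As a minimal sketch of what that DIY route can look like (assuming `pip install RealtimeSTT pyautogui`; the model choice, loop structure, and function names are my own, not from the guide):

```python
# Hypothetical sketch: dictate into whatever window currently has focus.
# Assumes `pip install RealtimeSTT pyautogui`; defaults are my own choices.

def dictate_loop(model: str = "base") -> None:
    """Listen continuously and type each recognized utterance
    into the currently focused window."""
    # Imported lazily so the file can be read/loaded without the packages.
    from RealtimeSTT import AudioToTextRecorder
    import pyautogui

    def type_text(text: str) -> None:
        # Simulate keystrokes into the focused app, plus a trailing space
        pyautogui.write(text + " ", interval=0.01)

    recorder = AudioToTextRecorder(model=model, language="en")
    print("Speak now (Ctrl+C to stop)...")
    try:
        while True:
            # Blocks until an utterance ends, then hands the text to the callback
            recorder.text(type_text)
    except KeyboardInterrupt:
        recorder.shutdown()

if __name__ == "__main__":
    dictate_loop()
```

Binding this to a global hotkey (rather than running it in a terminal) is the part you'd still have to wire up yourself, and it differs between X11 and Wayland.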
- Handy (Free/Libre, offline, Whisper-based, acts as a keyboard)
I’ve read lots of positive feedback on Handy—even though I haven’t tried it myself. The workflow is simple: press a hotkey, speak, and Handy pastes your text in the active app. It’s fully offline, works on X11 and Wayland, and gets strong accuracy thanks to Whisper.
Heads up: Handy lets you pick your own shortcut key, but it grabs that shortcut globally for start/stop recording. That means it can clash with other tools that depend on major shortcut combos—including Orca’s custom keybindings if you use a screen reader. If your workflow relies on certain shortcuts, this might need adjustment or careful planning before you commit.
GitHub Repo https://github.com/cjpais/Handy | Demo https://handy.computer
Real-Time Transcription in a Window (Copy/Paste Workflow)
If you’re okay with speaking into a dedicated app, then copying, these options offer great GUIs and power features:
- Speech Note by @mkiol https://mastodon.social/@mkiol
FLOSS, offline, multi-language GUI app—perfect for quick notes and batch transcription. Not a system-wide keyboard, but super easy to use and works on both desktops and Linux phones.
Flathub https://flathub.org/apps/net.mkiol.SpeechNote | LinuxPhoneApps https://linuxphoneapps.org/apps/net.mkiol.speechnote/
- WhisperLive (by Collabora)
Real-time transcription in a terminal or window—great for meetings, lectures, and captions. Manual copy/paste required to get the text to other apps.
GitHub Repo https://github.com/collabora/WhisperLive
More Tools for Tinkerers
If you like building your own or want extra control, check out:
- Vosk: Lightweight, lots of language support. Website https://alphacephei.com/vosk/
- Kaldi: Powerful, best for custom setups. Website https://kaldi-asr.org/
- Simon: Voice control automation. Website https://simon-listens.org/
- voice2json: Phrase-level and command recognition. GitHub https://github.com/synesthesiam/voice2json
Pro Tips
- Desktop Environment: X11 vs. Wayland affects how keyboard hooks and app focus actually operate.
- Ready-Made vs. DIY: If you want plug-and-play, try Speech Note or Handy first. Into automation or customization? RealtimeSTT is perfect.
- Follow the Community: @thorstenvoice offers tons of open-source voice tech insights.
Screen Reader Integration
Looking for robust screen reader support? Linux has you covered:
- Orca (GNOME/MATE): The most customizable GUI screen reader out there. The default voice (eSpeak) is robotic, but you can swap it for something better and fine-tune verbosity so it reads only what matters.
- Speakup: Console-based, ideal for terminal.
- Emacspeak: The solution for Emacs fans.
💡 Orca is part of my daily toolkit. It took time to get the settings just right (especially verbosity!) but it’s absolutely worth it. If you use a screen reader—what setup makes it bearable or even enjoyable for you?
Final Thoughts
If you’re starting from scratch, try Handy for direct typing (just watch those shortcuts if you use a screen reader!) or Speech Note for GUI-based transcription. Both are privacy-friendly, local, and accessible—ideal for everyday Linux use.
Is there a FLOSS gem missing here?
Sharing what works (and what doesn’t!) helps the entire community.
Resources:
Speech Note on Flathub https://flathub.org/apps/net.mkiol.SpeechNote
Handy GitHub https://github.com/cjpais/Handy
Speak to AI Guide https://dev.to/ashbuk/i-built-an-offline-voice-typing-app-for-linux-speak-to-ai-3ab5
RealtimeSTT https://github.com/KoljaB/RealtimeSTT
#Linux #SpeechToText #FLOSS #Accessibility #VoiceKeyboard #ScreenReader #Whisper #Handy #SpeechNote #OpenSource #Community #VoiceTyping #LocalSTT #TTStools #SpeechRecognition #A11y #LinuxTools #Review #ScreenReaders #Orca #FOSS
Ha, someone has beaten me to it
handy - the free and open source app for speech to text
Looks really awesome!
Why do voice transcription apps charge monthly when Whisper runs locally?
#HackerNews #voiceTranscription #voiceTech #Whisper #appPricing #localProcessing
A story about never ever giving up...❤️🔥
After several weeks of questioning my life choices, I've finally figured out why my #Whisper #SpeechToText system has been so slow on #Windows:
Apparently, the #Rust-FFI-wrapped #CPlusPlus code (whisper.cpp) wasn't compiled with AVX and AVX2 enabled (#SIMD!). I've tried it on two Windows machines (both AVX-capable). On one of those machines, running #Linux, it successfully detected AVX/AVX2 and ran fast.
1/?
Hmm... 🤔
My suspicion for why it's "not working":
Even though I run `cargo run --release`, I noticed during my investigation of the compile-fail nightmare above that it puts artifacts into a `Debug` folder.
So it might be that the program (whisper.cpp, to be precise) runs as a debug build and is just _terribly_ slow. 🐌
Oh boy, the struggle continues... 🤸
This might be related:
https://codeberg.org/tazz4843/whisper-rs/issues/226
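Two quick sanity checks for this kind of slowdown, as a sketch (assumes Linux with /proc; whisper.cpp's exact log wording may vary):

```shell
# 1. Does the CPU advertise AVX/AVX2 at all?
grep -m1 -o -w -E 'avx2|avx' /proc/cpuinfo || echo "no AVX flag found"

# 2. whisper.cpp logs the SIMD flags it was *compiled* with when it loads a
#    model, in a line like: "system_info: ... AVX = 1 | AVX2 = 1 | ..."
#    If the CPU reports AVX above but that line says AVX = 0, the build
#    (not the hardware) is the problem - e.g. a Debug-profile or
#    flag-less compile of the C++ side.
```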
Progress on my little speech2text/transcription project:
1. You press some hotkeys.
2. You speak into your microphone.
3. You wait for approx. 10 secs. (depending on your hardware)
4. Text starts to magically appear on your screen!
It feels like True Magic™! 🪄 ✨
This is why I love software development! ❤️
#Speech2Text #AI #Whisper #Rust #RustLang #Audio #AudioTranscription
Ok, I have to correct myself:
Compiling any C/C++ project on Windows is an absolute clusterfuck!
I've now spent almost more time trying to compile my program for Windows than writing the actual code for it - let that sink in!
Whoop! It compiles now on Windows!
You'll never guess what the #error was...
...on my Windows machine I had a file-sync program running in the background, which apparently tripped up the compilation process (the program being compiled was in a folder under sync)!
Once I moved the program out of that folder, everything compiled fine!
Holy cow! 🤯
Unfortunately, my program doesn't seem to work on #Windows yet. It just gets stuck after passing audio to #Whisper. 😢
This post was written by voice input through the #Futo keyboard, which uses #whisper. Free, open source, and local - no internet connection! It seems to work very well for English and quite well for German too. I only had to add the hashtags and correct two words manually.
Important addition: I was motivated to look for a solution for a friend who has multiple sclerosis, which makes typing really hard for him. His messages are already longer and more detailed :)