Computer: There's a nice dog on the cat tree.
Me: Wait. What!? You sure?
Computer: Yes. I'm 93% sure.
Free Workshop this week! Tracking fast-moving objects in real time presents a unique CV challenge, especially on low-compute devices. This session uses OpenCV’s lightweight, low-latency capabilities in Unity with Meta Quest: you’ll learn how CV can run efficiently on wearables with local inference, enabling real-time tracking from a first-person POV and seamless integration into immersive environments. (A rough tracking sketch in Python follows below.)
Sign up here: https://www.eventbrite.com/e/real-time-object-tracking-with-opencv-and-camera-access-tickets-1706443551599?aff=opencvsocials #OpenCV #ComputerVision #MetaQuest #Wearables #VR #OpenSource
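For a rough feel of the kind of lightweight tracking the workshop covers, here is a minimal Python sketch using OpenCV's KCF tracker on a webcam feed. The workshop itself targets Unity on Meta Quest, so this desktop snippet is only an analogy; the tracker choice and the opencv-contrib-python dependency are assumptions on my part, not the workshop's actual pipeline.

```python
# Minimal single-object tracking sketch with OpenCV (Python).
# Assumes opencv-contrib-python is installed for TrackerKCF;
# the workshop's Unity / Meta Quest setup will differ.
import cv2

cap = cv2.VideoCapture(0)          # any camera index or video file works
ok, frame = cap.read()
if not ok:
    raise RuntimeError("could not read from camera")

# Let the user draw a box around the object to track.
bbox = cv2.selectROI("select object", frame)
cv2.destroyWindow("select object")

tracker = cv2.TrackerKCF_create()  # lightweight, CPU-friendly tracker
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, bbox = tracker.update(frame)   # fast per-frame update on CPU
    if ok:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    else:
        cv2.putText(frame, "tracking lost", (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:    # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

KCF is one of the cheaper classical trackers in OpenCV, which is why it tends to show up in low-compute, low-latency demos like this.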
A couple of weeks ago @aeonofdiscord showed me "The Hitch-hiker's Guide to Artificial Intelligence" from 1986, a book by Richard Forsyth and Chris Naylor that teaches readers the state of the art in AI using... BASIC type-in programs. There are some extremely minimal examples of expert systems, A* maze path search, alpha-beta game trees, etc., but the one that made me curious was a "Perceptron image classifier" for machine vision. Image classification in 16kb? I had to see for myself whether it would work, so I made a #Processing version. And, amazingly, it actually does: given about 7 examples of a smile vs. frown face, it can then distinguish between the two. Not bad for an idea from 1957!
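To show how little machinery the idea needs, here is a rough Python sketch of the same classic perceptron rule on tiny binary images. It is not the book's BASIC listing or the Processing port; the 5x5 smile/frown patterns and the noise scheme are invented for illustration.

```python
# A bare-bones perceptron classifier in the spirit of the 1986 type-in.
# The two 5x5 "faces" below are invented for illustration; the original
# book and the Processing port use their own training images.
import random

SMILE = ["00000", "01010", "00000", "10001", "01110"]
FROWN = ["00000", "01010", "00000", "01110", "10001"]

def pixels(face):
    """Flatten a 5x5 pattern of '0'/'1' characters into a list of ints."""
    return [int(c) for row in face for c in row]

def noisy(face, flips=1):
    """Return a copy of the face with a few pixels flipped at random."""
    p = pixels(face)
    for _ in range(flips):
        i = random.randrange(len(p))
        p[i] ^= 1
    return p

# Training set: a handful of noisy copies of each face, roughly like the
# ~7 examples mentioned in the post.
examples = [(noisy(SMILE), +1) for _ in range(4)] + \
           [(noisy(FROWN), -1) for _ in range(4)]

# Perceptron learning rule: nudge weights toward misclassified examples.
weights = [0.0] * 25
bias = 0.0
for _ in range(50):                      # a few passes over the data
    for x, target in examples:
        out = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else -1
        if out != target:                # only update on mistakes
            weights = [w + target * xi for w, xi in zip(weights, x)]
            bias += target

def classify(face):
    s = sum(w * xi for w, xi in zip(weights, pixels(face))) + bias
    return "smile" if s > 0 else "frown"

print(classify(SMILE))   # expected: smile
print(classify(FROWN))   # expected: frown
```

The rule only updates the weights on mistakes, which is why a handful of examples is enough for a linearly separable toy problem like this.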
🚀 Excited to share our new paper:
"DynTex: A real-time generative model of dynamic naturalistic luminance textures"
...now published in Journal of Vision!
🔹 Why it matters: Dynamic textures (e.g., fire, water, foliage) are everywhere, but modeling them in real time has been a challenge. DynTex bridges this gap with a biologically inspired, efficient approach.
🔹 Key innovation: A generative model that captures the spatiotemporal statistics of natural scenes while running in real time (a toy sketch of the general flavor follows below).
🔹 Applications: Computer vision, neuroscience, VR/AR, and more.
📖 Read it here: https://doi.org/10.1167/jov.25.11.2
with Andrew Meso, Nikos Gekas, Jonathan Vacher, Pascal Mamassian and Guillaume Masson
More on: https://laurentperrinet.github.io/publication/meso-25/
#DynamicTextures #ComputationalNeuroscience #ComputerVision #GenerativeModels #OpenScience
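The paper's DynTex model is described in the links above; as a loose illustration of what a generative dynamic luminance texture can look like in code, here is a NumPy sketch of the classic random-phase approach (white noise shaped by a Gaussian spatiotemporal envelope in Fourier space). It is not the paper's model, and all parameter names and values are illustrative assumptions.

```python
# Illustrative random-phase dynamic texture (NOT the paper's DynTex model):
# shape white noise in Fourier space with a Gaussian spatiotemporal envelope
# so successive frames share coherent spatial structure and drift.
import numpy as np

N, T = 128, 64            # spatial size and number of frames (illustrative)
f0, sigma_f = 0.1, 0.05   # preferred spatial frequency and bandwidth
v_x = 0.5                 # horizontal drift, in pixels per frame (illustrative)

# Frequency grids (cycles per pixel / per frame), broadcast to (N, N, T).
fx = np.fft.fftfreq(N)[:, None, None]
fy = np.fft.fftfreq(N)[None, :, None]
ft = np.fft.fftfreq(T)[None, None, :]
fr = np.sqrt(fx**2 + fy**2)

# Spatial envelope: a Gaussian ring around radius f0.
spatial = np.exp(-((fr - f0) ** 2) / (2 * sigma_f**2))

# Temporal envelope: energy concentrated near the plane ft = -v_x * fx,
# which corresponds to rigid horizontal drift at speed v_x.
temporal = np.exp(-((ft + v_x * fx) ** 2) / (2 * 0.02**2))

envelope = spatial * temporal

# Random phases -> a stationary Gaussian texture movie with that spectrum.
rng = np.random.default_rng(0)
noise = rng.standard_normal((N, N, T))
movie = np.real(np.fft.ifftn(np.fft.fftn(noise) * envelope))
movie = (movie - movie.mean()) / movie.std()   # normalize luminance

print(movie.shape)   # (128, 128, 64): a 64-frame drifting texture
```

Because the phases come from fresh white noise while the envelope stays fixed, every run produces a new movie with the same spatiotemporal statistics.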