I updated the slides for my talk "Run LLMs Locally":
Now covering requirements, costs, setup, llama.cpp, stable-diffusion.cpp, embeddings, function calling, opencode, image recognition, speech recognition, image generation, prompt injection, and popular models such as GPT-OSS, Qwen3, Qwen3-VL, Z-Image, and Whisper.
https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2025_ThomasBley.pdf
#llm #llamacpp #stablediffusion #gptoss #qwen3 #opencode #php