Hey everyone 👋
I'm diving deeper into running AI models locally, because, let's be real, the cloud is just someone else's computer, and I'd rather have full control over my setup. Renting server space is cheap and easy, but it doesn't give me the hands-on freedom I'm craving.
So, I'm thinking about building my own AI server/workstation! I've been eyeing some used ThinkStations (like the P620) or even a server rack, depending on cost and value. But I'd love your advice!
My Goal:
Run larger LLMs locally on a budget-friendly but powerful setup. Since I don't need gaming features (ray tracing, DLSS, etc.), I'm leaning toward used server GPUs that offer great performance for AI workloads.
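For anyone weighing card options: a rough rule of thumb I use (my own back-of-envelope estimate, not gospel) is that model weights need about params × bits ÷ 8 bytes of VRAM, plus some overhead for KV cache and activations. A quick sketch, assuming ~20% overhead:

```python
# Back-of-envelope VRAM estimate for local LLM inference.
# Assumptions: weights dominate memory; ~20% overhead for KV cache
# and activations. Real usage varies with context length and runtime.
def estimate_vram_gb(params_billion: float, bits_per_weight: int = 4,
                     overhead: float = 1.2) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # 1B params @ 8-bit ≈ 1 GB
    return weights_gb * overhead

# A 70B model at 4-bit quantization:
print(estimate_vram_gb(70))  # roughly 42 GB, i.e. ~2x 24 GB cards
```

Handy for sanity-checking whether a given used GPU (or pair of them) can even hold the model you want to run.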
Questions for the Community:
1. Does anyone have experience with used server GPUs for this? Which would you recommend for running larger LLMs locally?
2. Are there other budget-friendly server GPUs I might have missed that are great for AI workloads?
3. Any tips for building a cost-effective AI workstation? (Cooling, power supply, compatibility, etc.)
4. What's your go-to setup for local AI inference? I'd love to hear about your experiences!
I'm all about balancing cost and performance, so any insights or recommendations are hugely appreciated.
Thanks in advance! 🙌
@selfhosted@a.gup.pe  #AIServer  #LocalAI  #BudgetBuild  #LLM  #GPUAdvice  #Homelab  #AIHardware  #DIYAI  #ServerGPU  #ThinkStation  #UsedTech  #AICommunity  #OpenSourceAI  #SelfHostedAI  #TechAdvice  #AIWorkstation  #MachineLearning  #AIResearch  #FediverseAI  #LinuxAI  #AIBuild  #DeepLearning  #ServerBuild  #BudgetAI  #AIEdgeComputing  #Questions  #CommunityQuestions  #HomeServer  #Ailab  #llmlab