Discussion
Debby 鈥亗馃搸馃惂:disability_flag:
@debby@hear-me.social  路  activity timestamp 2 months ago

Hey everyone 👋

I'm diving deeper into running AI models locally, because, let's be real, the cloud is just someone else's computer, and I'd rather have full control over my setup. Renting server space is cheap and easy, but it doesn't give me the hands-on freedom I'm craving.

So, I'm thinking about building my own AI server/workstation! I've been eyeing some used ThinkStations (like the P620) or even a server rack, depending on cost and value. But I'd love your advice!

My Goal:
Run larger LLMs locally on a budget-friendly but powerful setup. Since I don't need gaming features (ray tracing, DLSS, etc.), I'm leaning toward used server GPUs that offer great performance for AI workloads.

Questions for the Community:
1. Does anyone have experience with these GPUs? Which one would you recommend for running larger LLMs locally?
2. Are there other budget-friendly server GPUs I might have missed that are great for AI workloads?
3. Any tips for building a cost-effective AI workstation? (Cooling, power supply, compatibility, etc.)
4. What's your go-to setup for local AI inference? I'd love to hear about your experiences! (A rough sketch of the kind of setup I mean is just below this list.)
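For context on question 4, this is a minimal sketch of what I mean by a local inference setup: a quantized GGUF model loaded with llama-cpp-python and offloaded to the GPU. The model path and parameters are placeholders and assumptions, not a config I'm actually running yet.

```python
# Minimal local-inference sketch (question 4). Assumes llama-cpp-python is
# installed and a GGUF model file has already been downloaded; the path below
# is a placeholder, not a specific model recommendation.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/example-13b-q4_k_m.gguf",  # hypothetical local file
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload every layer to the GPU if VRAM allows
)

result = llm("Why run LLMs locally instead of in the cloud?", max_tokens=128)
print(result["choices"][0]["text"])
```

I'd probably put something like Ollama or a plain llama.cpp server in front of this later, but for comparing GPUs a single-file test like this is enough.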

I'm all about balancing cost and performance, so any insights or recommendations are hugely appreciated.
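On the power-supply side of question 3, this is the rough headroom math I've been sketching. The TDP figures are nominal spec values (M40/V100 PCIe around 250 W, A40 around 300 W, a P620-class Threadripper PRO around 280 W) and the 30% headroom is just a rule of thumb, so treat the numbers as assumptions.

```python
# Rough PSU sizing: sum the nominal board powers and leave headroom for
# transients and PSU efficiency. All figures are approximate spec values.
gpu_tdp_w = 250          # e.g. Tesla M40 or V100 PCIe; an A40 is closer to 300 W
cpu_tdp_w = 280          # e.g. Threadripper PRO in a P620-class workstation
rest_of_system_w = 100   # drives, fans, RAM, conversion losses

peak_w = gpu_tdp_w + cpu_tdp_w + rest_of_system_w
recommended_psu_w = peak_w * 1.3  # ~30% headroom

print(f"Estimated peak draw: ~{peak_w} W")
print(f"PSU to look for: ~{recommended_psu_w:.0f} W or more")
# -> roughly 630 W peak, so an ~800 W+ PSU; passively cooled Tesla cards also
#    need ducted case airflow because they have no fans of their own.
```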

Thanks in advance! 🙌

@selfhosted@a.gup.pe #AIServer #LocalAI #BudgetBuild #LLM #GPUAdvice #Homelab #AIHardware #DIYAI #ServerGPU #ThinkStation #UsedTech #AICommunity #OpenSourceAI #SelfHostedAI #TechAdvice #AIWorkstation #MachineLearning #AIResearch #FediverseAI #LinuxAI #AIBuild #DeepLearning #ServerBuild #BudgetAI #AIEdgeComputing #Questions #CommunityQuestions #HomeLab #HomeServer #Ailab #llmlab


What is the best used GPU pick for AI researchers?
GPUs I'm considering:
| GPU Model | VRAM | Pros | Cons/Notes |
|---|---|---|---|
| Nvidia Tesla M40 | 24 GB GDDR5 | Reliable, less costly than V100 | Older architecture, but solid for budget builds |
| Nvidia Tesla M10 | 32 GB (4x 8 GB) | High total VRAM, budget-friendly on used market | Split VRAM might limit some workloads |
| AMD Radeon Instinct MI50 | 32 GB HBM2 | High bandwidth, strong FP16/FP32, ROCm support | ROCm ecosystem is improving but not as mature as CUDA |
| Nvidia Tesla V100 | 32 GB HBM2 | Mature AI hardware, strong Linux/CUDA support | Pricier than M40/M10 but excellent performance |
| Nvidia A40 | 48 GB GDDR6 | Huge VRAM, server-grade GPU | Expensive, but future-proof for larger models |
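To make the VRAM column concrete, here's the back-of-the-envelope check I use to guess whether a quantized model fits on one of these cards. The 1.2 overhead factor for the KV cache and activations is my own assumption, not a measured number.

```python
# Estimate VRAM needed for a quantized model: weights plus a rough overhead
# factor for KV cache and activations (the 1.2 is an assumption).
def est_vram_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # GB for the weights alone
    return weights_gb * overhead

for label, params, bits in [("13B @ 4-bit", 13, 4), ("33B @ 4-bit", 33, 4), ("70B @ 4-bit", 70, 4)]:
    print(f"{label}: ~{est_vram_gb(params, bits):.0f} GB")
# -> ~8 GB, ~20 GB, ~42 GB: a 24 GB M40 covers 13B-33B at 4-bit, the 32 GB
#    cards add breathing room, and 70B-class models want an A40 or multi-GPU.
```

This is also why the M10's split VRAM worries me: the system sees four separate 8 GB devices rather than one 32 GB pool.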
Debby 📸🐧 :disability_flag:
@debby@hear-me.social replied · 2 months ago

Hi everyone! 👋
Questions for the community:

Does anyone have experience with these GPUs? Which one would you recommend for running larger LLMs locally?
Are there other budget-friendly server GPUs I might have missed that are great for AI workloads?
Any tips for building a cost-effective AI workstation? (Cooling, power supply, compatibility, etc.)
What's your go-to setup for local AI inference? I'd love to hear about your experiences!

Thanks in advance! 🙌
#AIServer #LokaleAI #BudgetBuild #LLM #GPUAdvies #ThuisLab #AIHardware #DIYAI #ServerGPU #TweedehandsTech #AIGemeenschap #OpenSourceAI #ZelfGehosteAI #TechAdvies #AIWorkstation #MachineLeren #AIOnderzoek #FediverseAI #LinuxAI #AIBouw #DeepLearning #ServerBouw #BudgetAI #AIEdgeComputing #Vragen #CommunityVragen
