Discussion
Djoerd Hiemstra boosted
Der Teilweise
@teilweise@layer8.space · 3 days ago

Based on my experience with external #software #development companies, if I ever do a #job #interview again where the company asks whether I have any #questions, I will ask them to do a #coding #test:

Please bring in a developer from your team, I will assign a simple task, e.g. for embedded dev: Write down the C or assembly code of a sorting algorithm for a linked list. No standard library, no AI assist. Just write it down.

I guess the majority of development companies would #fail.
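
For scale, the task fits in a couple dozen lines of C. A minimal sketch of one possible answer, assuming a singly linked list of ints and no standard library (insertion sort; the names node, sorted_insert, and sort_list are illustrative, not from the post):

    struct node {
        int value;
        struct node *next;
    };

    /* Link node n into the sorted list at *head, keeping ascending order. */
    static void sorted_insert(struct node **head, struct node *n)
    {
        struct node **p = head;
        while (*p && (*p)->value < n->value)
            p = &(*p)->next;
        n->next = *p;
        *p = n;
    }

    /* Insertion sort: unlink each node from the input, insert it sorted. */
    struct node *sort_list(struct node *head)
    {
        struct node *sorted = 0;
        while (head) {
            struct node *next = head->next;
            sorted_insert(&sorted, head);
            head = next;
        }
        return sorted;
    }

A merge sort would be the stronger answer (O(n log n) and a natural fit for linked lists), but for an interview whiteboard the point is simply being able to write something like this down.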

Debby 📸🐧 :disability_flag:
@debby@hear-me.social · 2 months ago

Hey everyone 👋

I'm diving deeper into running AI models locally. Let's be real: the cloud is just someone else's computer, and I'd rather have full control over my setup. Renting server space is cheap and easy, but it doesn't give me the hands-on freedom I'm craving.

So, I'm thinking about building my own AI server/workstation! I've been eyeing some used ThinkStations (like the P620) or even a server rack, depending on cost and value. But I'd love your advice!

My Goal:
Run larger LLMs locally on a budget-friendly but powerful setup. Since I don't need gaming features (ray tracing, DLSS, etc.), I'm leaning toward used server GPUs that offer great performance for AI workloads.

Questions for the Community:
1. Does anyone have experience with these GPUs? Which one would you recommend for running larger LLMs locally?
2. Are there other budget-friendly server GPUs I might have missed that are great for AI workloads?
3. Any tips for building a cost-effective AI workstation? (Cooling, power supply, compatibility, etc.)
4. What's your go-to setup for local AI inference? I'd love to hear about your experiences!

I'm all about balancing cost and performance, so any insights or recommendations are hugely appreciated.

Thanks in advance! 🙌

@selfhosted@a.gup.pe #AIServer #LocalAI #BudgetBuild #LLM #GPUAdvice #Homelab #AIHardware #DIYAI #ServerGPU #ThinkStation #UsedTech #AICommunity #OpenSourceAI #SelfHostedAI #TechAdvice #AIWorkstation #LocalAI #LLM #MachineLearning #AIResearch #FediverseAI #LinuxAI #AIBuild #DeepLearning #OpenSourceAI #ServerBuild #ThinkStation #BudgetAI #AIEdgeComputing #Questions #CommunityQuestions #HomeLab #HomeServer #Ailab #llmlab


What is the best used GPU pick for AI researchers?
GPUs I'm considering:

| GPU Model                | VRAM            | Pros                                             | Cons/Notes                                             |
|--------------------------|-----------------|--------------------------------------------------|--------------------------------------------------------|
| Nvidia Tesla M40         | 24 GB GDDR5     | Reliable, less costly than V100                  | Older architecture, but solid for budget builds        |
| Nvidia Tesla M10         | 32 GB (4x 8 GB) | High total VRAM, budget-friendly on used market  | Split VRAM might limit some workloads                  |
| AMD Radeon Instinct MI50 | 32 GB HBM2      | High bandwidth, strong FP16/FP32, ROCm support   | ROCm ecosystem is improving but not as mature as CUDA  |
| Nvidia Tesla V100        | 32 GB HBM2      | Mature AI hardware, strong Linux/CUDA support    | Pricier than M40/M10 but excellent performance         |
| Nvidia A40               | 48 GB GDDR6     | Huge VRAM, server-grade GPU                      | Expensive, but future-proof for larger models          |
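
A quick way to sanity-check those VRAM figures against a target model: the weights alone take roughly parameter count times bytes per parameter, plus working memory for the KV cache and activations. A back-of-the-envelope sketch in C, where the flat 20% overhead factor is an assumed rule of thumb, not a measured number:

    #include <stdio.h>

    /* Rough VRAM estimate for LLM inference: weights plus ~20% overhead
       for KV cache and activations (the 20% is an assumed rule of thumb). */
    static double vram_gb(double params_billion, double bytes_per_param)
    {
        double weights_gb = params_billion * bytes_per_param;
        return weights_gb * 1.2;
    }

    int main(void)
    {
        /* 4-bit quantization ~0.5 bytes/param; fp16 = 2 bytes/param */
        printf("13B @ 4-bit : ~%.1f GB\n", vram_gb(13, 0.5)); /* ~7.8 GB */
        printf("70B @ 4-bit : ~%.1f GB\n", vram_gb(70, 0.5)); /* ~42 GB  */
        printf("13B @ fp16  : ~%.1f GB\n", vram_gb(13, 2.0)); /* ~31 GB  */
        return 0;
    }

By that rough measure, a 24 GB M40 fits 4-bit models up to roughly 30B parameters, while 4-bit 70B-class models push past a single 32 GB card toward the 48 GB A40 or a multi-GPU split.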
Debby 📸🐧 :disability_flag:
@debby@hear-me.social replied · 2 months ago

Hi everyone! 👋
Questions for the community:

Does anyone have experience with these GPUs? Which one would you recommend for running larger LLMs locally?
Are there other budget-friendly server GPUs I might have missed that are great for AI workloads?
Any tips for building a cost-effective AI workstation? (Cooling, power supply, compatibility, etc.)
What's your go-to setup for local AI inference? I'd love to hear about your experiences!

Thanks in advance! 🙌

#ServeurIA #IALocale #MontageBudget #LLM #ConseilsGPU #LaboMaison #MatérielIA #IAFaitesVousMême #GPUServeur #TechOccasion #CommunautéIA #IAOpenSource #IAAutoHébergée #ConseilsTech #StationIA #ApprentissageAutomatique #RechercheIA #FediverseIA #IALinux #MontageIA #ApprentissageProfond #MontageServeur #IABudget #CalculEnPériphérieIA #Questions #QuestionsCommunauté

Kate Bowles boosted
Coach Pāṇini ®
@paninid@mastodon.world · 2 months ago

Asking #questions is a form of #labor: https://www.plough.com/en/topics/life/technology/what-problem-does-chatgpt-solve

Brian Swetland
@swetland@chaos.social · 2 months ago

Okay, what's the latest on Bambu Lab 3D printers and the ability to use them without Internet connectivity or their cloud service? Local LAN access, I mean, not swapping SD cards by hand like a caveman.

#BambuLab #3dPrinting #Questions

Joscelyn Transpiring
@JoscelynTransient@chaosfem.tw · 3 months ago

By popular request, it's here!

FORBIDDEN QUEERIES, my new Question & Response blog is live!

https://hachyderm.io/@mallory_sinn/115029605920637964

I'll be publishing it under my pseudonym, Mallie Sinn, to keep it separate from my career and make it clear what I offer there is personal opinion and not therapy or counseling. You can read it by following @mallory_sinn or on the blog site itself: https://forbidden-queeries.ghost.io/

I am currently taking open questions from everyone at forbiddenqueeries@gmail.com or in private mentions to @mallory_sinn

If you want to support this effort or get top priority for your own question, please subscribe on the blog itself at https://forbidden-queeries.ghost.io/ or on my patreon for projects under my pseudonym https://patreon.com/MallorySinn

#Trans #Transgender #Queer #Advice #Questions #Writing

Anke
@Anke@social.scribblers.club · 3 months ago

OK, I have a computer question... I have a mini PC with an M.2 SSD as its main drive running Windows 11.

I would like to take out that drive, put a new one in, and install Linux Mint.

Should that work? Is there anything I need to check first? Is there a realistic chance that swapping the SSDs back afterwards would no longer let me use the PC as it works now?

#questions #Linux #computer
