Real artists, help an amateur out.
I often sketch things on canvas in chalk. But I've been reading books about the Old Masters, and a thing they all seem to have in common is #imprimatura #underpainting.
So my question is: when working with oil paint, what is the order of operations? Do you sketch on the primed gesso with something a little more permanent, like graphite, and then apply the imprimatura? Or do you lay down the imprimatura first, and then do a less permanent sketch, like chalk, once it dries?
Based on my experience with external #software #development companies, if I ever do a #job #interview again where the company asks whether I have any #questions, I will ask them to do a #coding #test:
Please bring in a developer from your team, I will assign a simple task, e.g. for embedded dev: Write down the C or assembly code of a sorting algorithm for a linked list. No standard library, no AI assist. Just write it down.
I guess the majority of development companies would #fail.
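For anyone curious what a passing answer to that task might look like, here is one possible sketch: a recursive merge sort over a singly linked list in plain C. No standard library is needed for the sort itself, and the node type, field names, and the use of a dummy head are my own choices, not anything the post specifies.

```c
struct node {
    int value;
    struct node *next;
};

/* Merge two already-sorted lists into one sorted list. */
static struct node *merge(struct node *a, struct node *b)
{
    struct node head = {0, 0};   /* dummy head simplifies edge cases */
    struct node *tail = &head;

    while (a && b) {
        if (a->value <= b->value) { tail->next = a; a = a->next; }
        else                      { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = a ? a : b;      /* append whichever list is left over */
    return head.next;
}

/* Split at the midpoint (slow/fast pointers), sort both halves, merge. */
struct node *merge_sort(struct node *list)
{
    if (!list || !list->next)
        return list;             /* 0 or 1 nodes: already sorted */

    struct node *slow = list, *fast = list->next;
    while (fast && fast->next) {
        slow = slow->next;
        fast = fast->next->next;
    }
    struct node *right = slow->next;
    slow->next = 0;              /* cut the list in two */

    return merge(merge_sort(list), merge_sort(right));
}
```

Merge sort is the usual pick here because linked lists make the merge step O(1) extra space, whereas quicksort's random access advantage disappears without an array.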
Hey everyone 👋
I’m diving deeper into running AI models locally—because, let’s be real, the cloud is just someone else’s computer, and I’d rather have full control over my setup. Renting server space is cheap and easy, but it doesn’t give me the hands-on freedom I’m craving.
So, I’m thinking about building my own AI server/workstation! I’ve been eyeing some used ThinkStations (like the P620) or even a server rack, depending on cost and value. But I’d love your advice!
My Goal:
Run larger LLMs locally on a budget-friendly but powerful setup. Since I don’t need gaming features (ray tracing, DLSS, etc.), I’m leaning toward used server GPUs that offer great performance for AI workloads.
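To put "larger LLMs" in VRAM terms, here is a rough sizing sketch. The 20% overhead factor and the bytes-per-parameter figures (roughly 0.5 for 4-bit quantization, 2.0 for fp16) are my own ballpark assumptions, not vendor numbers.

```python
def vram_needed_gb(params_billion, bytes_per_param, overhead=1.2):
    """Back-of-the-envelope VRAM estimate for inference:
    weight size (params x quantization width), plus ~20%
    assumed for KV cache and runtime overhead."""
    weights_gb = params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-GB
    return weights_gb * overhead

# A 70B model at 4-bit quantization vs. fp16:
print(round(vram_needed_gb(70, 0.5), 1))   # ~42 GB -> e.g. two 24 GB cards
print(round(vram_needed_gb(70, 2.0), 1))   # ~168 GB -> out of reach for most home builds
```

The takeaway for a budget build: total VRAM across cards usually matters more than raw compute for fitting a model at all, which is why older high-VRAM server GPUs stay attractive.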
Questions for the Community:
1. Does anyone have experience with these GPUs? Which one would you recommend for running larger LLMs locally?
2. Are there other budget-friendly server GPUs I might have missed that are great for AI workloads?
3. Any tips for building a cost-effective AI workstation? (Cooling, power supply, compatibility, etc.)
4. What’s your go-to setup for local AI inference? I’d love to hear about your experiences!
I’m all about balancing cost and performance, so any insights or recommendations are hugely appreciated.
Thanks in advance! 🙌
@selfhosted@a.gup.pe #AIServer #LocalAI #BudgetBuild #LLM #GPUAdvice #Homelab #AIHardware #DIYAI #ServerGPU #ThinkStation #UsedTech #AICommunity #OpenSourceAI #SelfHostedAI #TechAdvice #AIWorkstation #MachineLearning #AIResearch #FediverseAI #LinuxAI #AIBuild #DeepLearning #ServerBuild #BudgetAI #AIEdgeComputing #Questions #CommunityQuestions #HomeServer #AILab #LLMLab
Okay, what's the latest on Bambu Lab 3D printers and the ability to use them without internet connectivity or their cloud service? I mean local LAN access, not swapping SD cards by hand like a caveman.
By popular request, it's here!
FORBIDDEN QUEERIES, my new Question & Response blog is live!
https://hachyderm.io/@mallory_sinn/115029605920637964
I'll be publishing it under my pseudonym, Mallie Sinn, to keep it separate from my career and make it clear what I offer there is personal opinion and not therapy or counseling. You can read it by following @mallory_sinn or on the blog site itself: https://forbidden-queeries.ghost.io/
I am currently taking open questions from everyone at forbiddenqueeries@gmail.com or in private mentions to @mallory_sinn
If you want to support this effort or get top priority for your own question, please subscribe on the blog itself at https://forbidden-queeries.ghost.io/ or on my patreon for projects under my pseudonym https://patreon.com/MallorySinn
OK, I have a computer question... I have a mini PC with an M.2 SSD as its main drive running Windows 11.
I would like to take out that drive, put a new one in, and install Linux Mint.
Should that work? Is there anything I need to check first? And is there a realistic chance that swapping the original SSD back in wouldn't restore the PC to its current working state?