Best GPU for Running LLMs Locally on Linux: 2026 Buyer's Guide (AI, 23 min)
Practical breakdown of the best GPUs for running LLMs locally in 2026. Covers VRAM tiers, Linux driver support, Ollama...

Docker Model Runner on Linux: Deploy and Serve AI Models with GPU Acceleration (AI, 12 min)
Use Docker Model Runner on Linux to deploy, serve, and manage AI models with GPU acceleration, including setup, model...

5 Best AI Coding Assistants for the Linux Terminal: Hands-On Comparison (AI, 10 min)
Hands-on comparison of 5 AI coding assistants for the Linux terminal: Claude Code, Aider, Continue, Open Interpreter,...

Build a Self-Hosted RAG Pipeline on Linux: Chat with Your Documentation (AI, 5 min)
Build a private RAG pipeline on Linux using Ollama, ChromaDB, and Python to chat with your own documentation, wikis,...

LoRA Fine-Tuning on Linux: Customize LLMs with Your Own Data (AI, 12 min)
Fine-tune LLMs with LoRA and QLoRA on Linux. Covers data preparation, training configuration, GPU requirements,...

Multi-GPU LLM Inference on Linux: Setup, Load Balancing, and Scaling (AI, 13 min)
Configure multi-GPU LLM inference on Linux with Ollama and vLLM, including tensor parallelism, load balancing across...

vLLM on Linux: Production Deployment Guide for High-Throughput Inference (AI, 9 min)
Deploy vLLM on Linux for production LLM inference. Covers installation, API server config, tensor parallelism,...

Power Consumption: Running LLMs 24/7 on Linux — Real Electricity Costs (AI, 5 min)
Real-world power measurements and electricity costs for running LLMs on Linux 24/7. Covers GPU monitoring, cost...

Ollama on Proxmox: GPU Passthrough for LXC and VM AI Workloads (AI, 13 min)
Configure NVIDIA GPU passthrough on Proxmox for Ollama AI workloads in both LXC containers and VMs, with IOMMU setup,...

Ollama on RHEL 9 and Rocky Linux: Enterprise Setup and SELinux Guide (AI, 10 min)
Deploy Ollama on RHEL 9 and Rocky Linux with proper SELinux policies, firewalld rules, NVIDIA drivers, Podman support,...