AI
AI and machine learning on Linux — LLM deployment, GPU setup, self-hosted AI tools, and intelligent automation for sysadmins and DevOps engineers.
Open WebUI vs Text Generation WebUI: Side-by-Side Linux Server Comparison
A practical comparison of Open WebUI and Text Generation WebUI on Linux servers, covering installation, hardware needs,...
NVIDIA Tesla P40 and Ollama: Budget LLM Server Build Guide for Linux
Build a 24 GB VRAM LLM inference server for under $350 using a used NVIDIA Tesla P40. Complete parts list, driver...
AMD ROCm and Ollama on Linux: Complete GPU Setup Guide
Full guide to running Ollama on AMD GPUs with ROCm 6.x on Ubuntu and RHEL. Covers supported GPUs,...
Deploy a Private ChatGPT on Your Linux Server with Ollama and Open WebUI
Step-by-step guide to deploying a private, self-hosted ChatGPT alternative on Ubuntu 24.04 with Ollama, Open WebUI,...
Best Ollama Models for Linux Servers: 2026 Benchmarks and Recommendations
Comprehensive benchmarks of the best Ollama models for Linux servers in 2026. Covers chat, coding, small, vision, and...
Ollama vs OpenAI API: True Cost Comparison and When Self-Hosting Wins
Real cost breakdown of self-hosted Ollama versus OpenAI API at 100, 1K, and 10K daily requests. Hardware costs,...
Ollama vs vLLM: Which LLM Server Should Linux Admins Choose?
A head-to-head comparison of Ollama and vLLM for Linux administrators: architecture, installation, benchmarks, GPU...
Ollama GPU Memory Not Enough: Complete Troubleshooting Guide for Linux
Fix Ollama GPU memory errors on Linux with VRAM diagnostics, quantization tuning, GPU/CPU offloading, multi-GPU...
Ollama vs llama.cpp: Performance, Setup, and When to Use Each on Linux
Comparing Ollama and llama.cpp on Linux: architecture relationship, build-from-source guide with CUDA, performance...
Self-Hosted ChatGPT Alternatives on Linux: Complete Deployment Guide (2026)
Deploy private ChatGPT alternatives on Linux with Ollama. Step-by-step install for Open WebUI, LibreChat, AnythingLLM,...