Face Recognition on a Photo Server — CPU vs GPU

Why GPU acceleration is optional for self-hosted face recognition

Many self-hosted photo servers (Immich, LibrePhotos, PhotoPrism, etc.) use machine learning under the hood. One of the most popular features is face recognition: scanning your archive to detect unique faces, letting you name them, and then grouping every photo where they appear — very similar to what you get with Google Photos or Apple Photos.
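
Under the hood, this grouping usually works by turning each detected face into a numeric "embedding" (a vector) and treating faces whose embeddings are close together as the same person. Here is a minimal sketch of that idea, with made-up embedding values and a hypothetical similarity threshold; real servers compute the embeddings with a trained neural network and use many more dimensions:

```python
import numpy as np

# Hypothetical 4-dimensional face embeddings; real models output 128-512 dimensions.
embeddings = {
    "photo1_face0": np.array([0.9, 0.1, 0.0, 0.2]),
    "photo2_face0": np.array([0.88, 0.12, 0.05, 0.18]),  # likely the same person
    "photo3_face0": np.array([0.1, 0.9, 0.7, 0.0]),      # clearly a different person
}

def same_person(a, b, threshold=0.95):
    # Cosine similarity: values close to 1.0 mean the two faces look alike.
    sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return sim >= threshold

print(same_person(embeddings["photo1_face0"], embeddings["photo2_face0"]))  # True
print(same_person(embeddings["photo1_face0"], embeddings["photo3_face0"]))  # False
```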

When setting this up, you’ll often see a configuration option to “enable hardware acceleration.” That refers to using a GPU: Nvidia with CUDA or AMD with ROCm. If you don’t have one, the option may appear disabled.

Here’s the important part: you don’t need a GPU to make this work. Face recognition runs just fine on a CPU. The difference is speed. A GPU can accelerate the task dramatically, while the CPU simply takes longer.

For a modest photo archive, you won’t notice much difference. Even with a large archive, only the initial scan takes time, since it processes everything at once. After that, new photos are analyzed incrementally and the performance gap is less important.
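
As an illustration, incremental processing can be as simple as remembering which files have already been analyzed. A rough sketch of that idea, with a hypothetical index file and photo directory (face_index.json and /photos are made up for this example), assuming the actual face-recognition step is supplied as process_photo:

```python
import json
from pathlib import Path

INDEX_FILE = Path("face_index.json")   # hypothetical record of already-processed photos
PHOTO_DIR = Path("/photos")            # hypothetical library location

def scan_incrementally(process_photo):
    done = set(json.loads(INDEX_FILE.read_text())) if INDEX_FILE.exists() else set()
    for photo in PHOTO_DIR.rglob("*.jpg"):
        if str(photo) in done:
            continue             # already analyzed during an earlier scan
        process_photo(photo)     # run face detection/recognition on the new photo only
        done.add(str(photo))
    INDEX_FILE.write_text(json.dumps(sorted(done)))
```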

So don’t let the “hardware acceleration” option discourage you — it’s a nice-to-have, not a requirement.

TensorFlow (and PyTorch)
Machine learning frameworks that sit above CUDA and ROCm. Developers rarely write raw GPU code; instead they use frameworks like TensorFlow or PyTorch, which can run on CPUs or GPUs. Both support CUDA and ROCm, though feature coverage can differ between Nvidia and AMD cards.
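
For example, the usual pattern in PyTorch is to pick the GPU when one is present and fall back to the CPU otherwise; the same code runs either way, just at different speeds. The tiny model below is only a stand-in for a real face-embedding network:

```python
import torch

# Use the GPU when one is available (CUDA on Nvidia; ROCm builds of PyTorch
# also expose their devices through the torch.cuda API), otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(512, 128).to(device)   # stand-in for a face-embedding model
batch = torch.randn(32, 512, device=device)    # stand-in for a batch of face crops

with torch.no_grad():
    embeddings = model(batch)
print(f"Computed {embeddings.shape[0]} embeddings on {device}")
```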

CUDA
Nvidia’s programming model for running workloads on the GPU, available on GeForce, Quadro, and the older Tesla line.

ROCm
AMD’s GPU programming model, broadly similar to CUDA, released in 2016 and officially supported on a subset of Radeon and Instinct cards rather than the whole lineup.
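
One practical detail: ROCm builds of PyTorch reuse the torch.cuda interface, so the same availability check works on AMD hardware. A small example of telling the two backends apart, assuming a PyTorch installation built for your GPU:

```python
import torch

if torch.cuda.is_available():
    # In ROCm builds of PyTorch, torch.version.hip is set; in CUDA builds it is None.
    # Either way, the device is used through the regular "cuda" device name.
    backend = "ROCm/HIP" if torch.version.hip else "CUDA"
    print(f"GPU acceleration available via {backend}: {torch.cuda.get_device_name(0)}")
else:
    print("No supported GPU found; face recognition will run on the CPU.")
```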