
Go 1.25 - Support for Container-Aware GOMAXPROCS

August 27, 2025

Go 1.25 was recently released on August 12, 2025, and with it came many improvements to the toolchain, runtime, and libraries!

What caught my eye was a runtime change that affects Go apps running in containerized environments.

Here are the specific changes from the release notes:

Container-aware GOMAXPROCS

The default behavior of the GOMAXPROCS has changed. In prior versions of Go, GOMAXPROCS defaults to the number of logical CPUs available at startup (runtime.NumCPU). Go 1.25 introduces two changes:

  1. On Linux, the runtime considers the CPU bandwidth limit of the cgroup containing the process, if any. If the CPU bandwidth limit is lower than the number of logical CPUs available, GOMAXPROCS will default to the lower limit. In container runtime systems like Kubernetes, cgroup CPU bandwidth limits generally correspond to the “CPU limit” option. The Go runtime does not consider the “CPU requests” option.
  2. On all OSes, the runtime periodically updates GOMAXPROCS if the number of logical CPUs available or the cgroup CPU bandwidth limit change.

Both of these behaviors are automatically disabled if GOMAXPROCS is set manually via the GOMAXPROCS environment variable or a call to runtime.GOMAXPROCS. They can also be disabled explicitly with the GODEBUG settings containermaxprocs=0 and updatemaxprocs=0, respectively.

In order to support reading updated cgroup limits, the runtime will keep cached file descriptors for the cgroup files for the duration of the process lifetime.
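
To make this concrete, here is a minimal sketch (my own illustration, not from the release notes) that simply polls the current value, so you could watch the runtime adjust GOMAXPROCS while you change the container's CPU limit:

package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	// runtime.GOMAXPROCS(0) reports the current setting without changing it.
	// On Go 1.25 inside a Linux cgroup, the default follows the CPU bandwidth
	// limit and is refreshed periodically if that limit changes.
	for i := 0; i < 10; i++ {
		fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
		time.Sleep(time.Second)
	}

	// Setting GOMAXPROCS explicitly (via the GOMAXPROCS environment variable or
	// a runtime.GOMAXPROCS call with a value >= 1) disables both behaviors, as
	// do the GODEBUG settings containermaxprocs=0 and updatemaxprocs=0.
}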

This was a common frustration that often led to CPU throttling in containerized environments. Before this change, a popular approach was to use automaxprocs from the open source folks at Uber to automatically set GOMAXPROCS to match the Linux container CPU quota. Now, with this change, the default behavior is much more intuitive in containerized environments!
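
For reference, the typical pre-1.25 workaround was just a blank import; a sketch, assuming go.uber.org/automaxprocs has been added to your go.mod:

package main

import (
	"fmt"
	"runtime"

	// The package's init() sets GOMAXPROCS to match the container's CPU quota.
	_ "go.uber.org/automaxprocs"
)

func main() {
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}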

Here's a simple demonstration of the change as I observed it across different versions of Go. I used Docker, and here are my environment details:

Device: MacBook Pro - Apple M4 Pro
Total Number of Cores: 14 (10 performance and 4 efficiency)

Source Code

A simple program to log GOMAXPROCS and simulate 1ms of "work" in each of 4 goroutines. The aim is to see the value set by the Go runtime and how it impacts the simulated work done by the 4 spawned goroutines.

package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

func main() {
	n := 0 // If n < 1, it does not change the current setting
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(n))
	fmt.Println("NumCPU:", runtime.NumCPU())

	start := time.Now()

	var wg sync.WaitGroup
	wg.Add(4)

	for i := range 4 {
		go func(id int) {
			defer wg.Done()
			for range 4 {
				// busy-wait to simulate 1ms of "work"
				now := time.Now()
				for time.Since(now) < time.Millisecond {
				}
				// yield the processor, allowing other goroutines to run
				runtime.Gosched()
			}
		}(i)
	}

	wg.Wait()
	duration := time.Since(start)
	fmt.Printf("All goroutines complete. Duration: %.2f ms\n", float64(duration.Milliseconds()))
}

Go 1.24

kavin@mac src % docker run --cpus=2 -it -v "$PWD":/app -w /app golang:1.24 go run main.go
GOMAXPROCS: 14
NumCPU: 14
All goroutines complete. Duration: 4.00 ms

kavin@mac src % docker run --cpus=4 -it -v "$PWD":/app -w /app golang:1.24 go run main.go
GOMAXPROCS: 14
NumCPU: 14
All goroutines complete. Duration: 4.00 ms

Go 1.25

kavin@mac src % docker run --cpus=2 -it -v "$PWD":/app -w /app golang:1.25 go run main.go
GOMAXPROCS: 2
NumCPU: 14
All goroutines complete. Duration: 8.00 ms

kavin@mac src % docker run --cpus=4 -it -v "$PWD":/app -w /app golang:1.25 go run main.go
GOMAXPROCS: 4
NumCPU: 14
All goroutines complete. Duration: 4.00 ms

Outcome

As you can see from the logs above, Go 1.24 executed 4 goroutines, each doing 4 iterations of 1ms of simulated work, in ~4ms regardless of the CPU limit set on the container. This is because GOMAXPROCS was set using NumCPU at startup, which returns the number of logical CPUs usable by the current process.

In Go 1.25, the behavior is different and, in my opinion, more intuitive! With the container's CPU limit set to 2, you can see that the program took ~8ms, which makes sense: the runtime executed 4 goroutines with the work split across 2 threads. When the CPU limit was set to 4, all 4 goroutines ran in parallel and finished in ~4ms, roughly the time a single goroutine needs for its own work.
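
The back-of-the-envelope math lines up, assuming the busy-wait loops are the only work and scheduling overhead is negligible:

Total CPU work:   4 goroutines x 4 iterations x 1ms = 16ms
GOMAXPROCS = 2:   16ms split across 2 threads -> ~8ms wall clock
GOMAXPROCS >= 4:  all 4 goroutines in parallel -> ~4ms wall clock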

This demonstrates the new, more intuitive container-aware behavior in Go 1.25!