Set NVIDIA GPU Power Limits at Boot
systemd is still better than a fire extinguisher
To the surprise of no one, I do have an “AI” workstation at home. It’s based on Gigabyte’s AI TOP platform, a weird jailbreak of the AMD TRX50 chipset that turns it into half of a WRX90 by enabling 8-channel memory. The machine is a beast: 512GB of DDR5 ECC system RAM, 4x 8TB of NVMe storage, and a bunch of NVIDIA’s new-ish Blackwell GPUs.
The latest additions are all MaxQ RTX Pro 6000 cards, hardware-limited to 300W, with effective air evacuation through the rear of the card, keeping the insane heat they generate out of the main area of the system.
But when I started, Blackwell had just gone to market, and the only option was an OEM PNY version of the Workstation edition. Using the oversized flow-through cooler design from the 5090, it also inherits that card’s biggest liability: 600W of power draw, through a connector that was never designed for this and has since turned into a fire risk.
This card is unwieldy, dangerous, and has the worst airflow path for my application. I’m looking to swap it at some point for a MaxQ, but in the meantime I need the horsepower.
Fortunately, nvidia-smi does provide a power limit switch:
$ sudo nvidia-smi -i 1 -pl 300
Power limit for GPU 00000000:81:00.0 was set to 300.00 W from 600.00 W.
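Before picking a number, it’s worth checking what range the card actually accepts; nvidia-smi exposes the bounds as the power.min_limit and power.max_limit query fields. A minimal sketch of that check — the helper name and the sample CSV line are made up for illustration, standing in for live query output:

```shell
# Hypothetical helper: does a target wattage fall inside the reported range?
check_in_range() {
  local target=$1 min=$2 max=$3
  awk -v t="$target" -v lo="$min" -v hi="$max" \
    'BEGIN { exit !(t + 0 >= lo + 0 && t + 0 <= hi + 0) }'
}

# Live data would come from something like:
#   nvidia-smi -i 1 --query-gpu=power.min_limit,power.max_limit --format=csv,noheader,nounits
# Here a sample line in the same format stands in:
sample="150.00, 600.00"
min=${sample%%,*}
max=${sample#*, }

if check_in_range 300 "$min" "$max"; then
  echo "300W is within the supported range (${min}W to ${max}W)"
fi
```

Anything outside that range gets rejected by the driver, so a check like this is a cheap guard before hardcoding a target.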
All done.

But I want to minimize the chance of this ever spiking above 300W. So let’s first create a tool to set a power limit across all GPUs:
#!/usr/bin/env bash
set -euo pipefail

TARGET_WATTS=300
NVIDIA_SMI=/usr/bin/nvidia-smi

# Wait for GPUs to appear (up to ~30 seconds)
for i in {1..30}; do
  if "$NVIDIA_SMI" -L >/dev/null 2>&1; then
    break
  fi
  sleep 1
done

# If still no GPUs, just exit quietly
if ! "$NVIDIA_SMI" -L >/dev/null 2>&1; then
  echo "set-nvidia-power-limit: no GPUs found, exiting"
  exit 0
fi

# Make sure persistence mode is enabled (helps keep settings)
# to enable, run once at any time: sudo nvidia-smi -pm ENABLED
"$NVIDIA_SMI" -pm 1 || true

# Loop over all GPU indices and set the power limit
for idx in $("$NVIDIA_SMI" --query-gpu=index --format=csv,noheader); do
  echo "Setting power limit for GPU $idx to ${TARGET_WATTS}W"
  "$NVIDIA_SMI" -i "$idx" -pl "$TARGET_WATTS"
done

Saved as /usr/local/sbin/set-nvidia-power-limit.sh, then try it once after enabling settings persistence (don’t forget to set the execute flag):
$ sudo nvidia-smi -pm ENABLED
$ sudo /usr/local/sbin/set-nvidia-power-limit.sh
Persistence mode is already Enabled for GPU 00000000:41:00.0.
Persistence mode is already Enabled for GPU 00000000:81:00.0.
All done.
Setting power limit for GPU 0 to 300W
Power limit for GPU 00000000:41:00.0 was set to 300.00 W from 300.00 W.
All done.
Setting power limit for GPU 1 to 300W
Power limit for GPU 00000000:81:00.0 was set to 300.00 W from 300.00 W.
All done.

Nice. Now create a service to run this at boot in /etc/systemd/system/nvidia-power-limit.service:
[Unit]
Description=Set NVIDIA GPU power limits at boot
Wants=nvidia-persistenced.service
After=multi-user.target nvidia-persistenced.service
[Service]
Type=oneshot
ExecStart=/usr/local/sbin/set-nvidia-power-limit.sh
[Install]
WantedBy=multi-user.target

And enable it:
$ sudo systemctl daemon-reload
$ sudo systemctl enable nvidia-power-limit.service
$ sudo systemctl start nvidia-power-limit.service
$ systemctl status nvidia-power-limit.service
○ nvidia-power-limit.service - Set NVIDIA GPU power limits at boot
Loaded: loaded (/etc/systemd/system/nvidia-power-limit.service; enabled; preset: enabled)
Active: inactive (dead) since Sun 2025-11-16 15:45:03 CET; 14min ago
Process: 524590 ExecStart=/usr/local/sbin/set-nvidia-power-limit.sh (code=exited, status=0/SUCCESS)
Main PID: 524590 (code=exited, status=0/SUCCESS)
CPU: 204ms
Nov 16 15:45:03 aitop set-nvidia-power-limit.sh[524595]: Persistence mode is already Enabled for GPU 00000000:81:00.0.
Nov 16 15:45:03 aitop set-nvidia-power-limit.sh[524595]: All done.
Nov 16 15:45:03 aitop set-nvidia-power-limit.sh[524590]: Setting power limit for GPU 0 to 300W
Nov 16 15:45:03 aitop set-nvidia-power-limit.sh[524599]: Power limit for GPU 00000000:41:00.0 was set to 300.00 W from>
Nov 16 15:45:03 aitop set-nvidia-power-limit.sh[524599]: All done.
Nov 16 15:45:03 aitop set-nvidia-power-limit.sh[524590]: Setting power limit for GPU 1 to 300W
Nov 16 15:45:03 aitop set-nvidia-power-limit.sh[524601]: Power limit for GPU 00000000:81:00.0 was set to 300.00 W from>
Nov 16 15:45:03 aitop set-nvidia-power-limit.sh[524601]: All done.
Nov 16 15:45:03 aitop systemd[1]: nvidia-power-limit.service: Deactivated successfully.
Nov 16 15:45:03 aitop systemd[1]: Finished nvidia-power-limit.service - Set NVIDIA GPU power limits at boot.

Done! A little more peace of mind. A Grafana dashboard and an alert monitor this as well, via pmlogger, the NVIDIA PMDA, and PCP/Redis, just to be sure.
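For an ad-hoc version of that check, without the full PCP stack, the same CSV query interface can flag any GPU whose reported draw exceeds its limit. A minimal sketch — the helper name and the sample numbers are invented for illustration, standing in for live output:

```shell
# Print the index of any GPU whose power draw exceeds its power limit.
# Expects CSV lines of the form "index, power.draw, power.limit" (nounits).
over_limit() {
  awk -F', ' '$2 + 0 > $3 + 0 { print $1 }'
}

# Live usage would pipe in:
#   nvidia-smi --query-gpu=index,power.draw,power.limit --format=csv,noheader,nounits
# Sample data: GPU 1 is momentarily drawing above its 300W limit.
printf '0, 287.41, 300.00\n1, 312.09, 300.00\n' | over_limit
# prints: 1
```

Wired to a systemd timer or cron job, a non-empty result from this is a cheap trigger for an alert.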

The 12VHPWR connector issue is genuinely terrifying for anyone running these high-end cards. Using systemd to enforce power limits at boot is such an elegant solution that it should honestly be default behavior in enterprise environments. The script approach means you’re not relying on manual intervention after every reboot, which is critical for production systems.