Environment: Ubuntu 24.04 LTS + K3s 1.33 + Cilium 1.17 on a QEMU VM with 7.5 GiB RAM on an x86_64 host.

Recommended sysctl config; lines whose values match or fall below the kernel defaults are left commented out:

# general limits
#vm.max_map_count = 1048576
#vm.overcommit_memory = 1
#fs.nr_open = 1048576
fs.aio-max-nr = 1048576
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 1048576
#fs.file-max = 524288
#kernel.pid_max = 4194304
 
# networking
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
#net.core.netdev_max_backlog = 1000
#net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096 # default with Cilium's Bandwidth Manager
#net.netfilter.nf_conntrack_max = 131072
#net.netfilter.nf_conntrack_buckets = 262144
net.ipv4.tcp_slow_start_after_idle = 0
#net.ipv4.ip_local_port_range = 32768 60999
#net.ipv4.tcp_max_tw_buckets = 32768
 
# network throughput. Calculate bandwidth delay product before tuning
# global in 4k-pages
#net.ipv4.tcp_mem = 90633 120846 181266
#net.ipv4.udp_mem = 181269 241692 362538
# per-socket buffer in bytes
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
#net.ipv4.tcp_adv_win_scale = 1 # Obsolete since linux-6.6, replaced with per socket scaling factor
net.ipv4.tcp_notsent_lowat = 131072
 
# Probably not needed with Cilium, which assigns /32 addresses inside containers.
# DO NOT use on AWS
net.ipv4.neigh.default.gc_thresh1 = 80000
net.ipv4.neigh.default.gc_thresh2 = 90000
net.ipv4.neigh.default.gc_thresh3 = 100000
 
# security
#kernel.unprivileged_bpf_disabled = 1
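As a sanity check for the tcp_rmem/tcp_wmem maximums above, the bandwidth-delay product can be computed directly. The link speed and RTT below are illustrative assumptions, not measurements from this setup:

```shell
# BDP = bandwidth * RTT. Assumed link: 10 Gbit/s with a 10 ms RTT.
bw_bits_per_s=10000000000   # 10 Gbit/s (assumption)
rtt_ms=10                   # 10 ms round-trip time (assumption)
bdp_bytes=$(( bw_bits_per_s / 8 * rtt_ms / 1000 ))
echo "BDP: ${bdp_bytes} bytes"   # 12500000 bytes, i.e. about 12 MiB
```

The 16 MiB (16777216) tcp_rmem/tcp_wmem ceiling above covers this BDP with headroom; for faster links or higher RTTs, scale the maximums accordingly.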
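One way to ship these settings is as a sysctl.d fragment; the file name below is an example, and only two of the keys are shown for brevity:

```shell
# install the tuning as a sysctl.d fragment (file name is an example)
sudo tee /etc/sysctl.d/90-k8s-tuning.conf >/dev/null <<'EOF'
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 1048576
EOF
sudo sysctl --system   # re-applies every sysctl.d fragment

# verify: read the live value straight from /proc/sys (no root needed)
cat /proc/sys/fs/inotify/max_user_watches
```

Fragments in /etc/sysctl.d are applied in lexical order, so a high numeric prefix keeps this file from being overridden by distro defaults.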

You don’t need to tune /etc/security/limits.conf for K8s workloads. K3s's systemd unit already starts with generous limits by default:

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
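To confirm those limits are actually in effect, read the limits of a running process from /proc. The shell's own PID is used here for illustration; on a real node, substitute the K3s server PID (e.g. via pgrep k3s):

```shell
# inspect effective resource limits ($$ = this shell; use the k3s PID on a node)
grep -E 'open files|processes' /proc/$$/limits

# the configured limit of the systemd-managed service can also be queried:
# systemctl show k3s --property=LimitNOFILE
```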

References