charts/pihole/README.md

pihole

Installs Pi-hole in Kubernetes

Version: 2.9.3 AppVersion: 2022.09.1

Source Code

Installation

Jeff Geerling made a video on YouTube about the installation of this chart.

Add Helm repository

helm repo add mojo2600 https://mojo2600.github.io/pihole-kubernetes/
helm repo update
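With the repository added, the chart can be installed. The release name `pihole` and the dedicated namespace below are illustrative choices, not chart requirements:

```shell
# Example install; "pihole" is an assumed release name and namespace.
helm install pihole mojo2600/pihole \
  --namespace pihole \
  --create-namespace
```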

Configure the chart

The following items can be set via the --set flag during installation or configured by editing values.yaml directly.

Configure how to expose the pihole service:

  • Ingress: An ingress controller must be installed in the Kubernetes cluster.
  • ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster.
  • LoadBalancer: Exposes the service externally using a cloud provider’s load balancer.
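For example, to expose the web interface through a LoadBalancer instead of the default ClusterIP, the corresponding values.yaml fragment would look like this (the IP is a placeholder for an address your load balancer can assign):

```yaml
serviceWeb:
  type: LoadBalancer
  # Optional: pin the external IP if your load-balancer implementation supports it.
  loadBalancerIP: 192.168.0.100
```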

My settings in values.yaml

dnsmasq:
  customDnsEntries:
    - address=/nas/192.168.178.10
 
  customCnameEntries:
    - cname=foo.nas,nas

persistentVolumeClaim:
  enabled: true

serviceWeb:
  loadBalancerIP: 192.168.178.252
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc
  type: LoadBalancer

serviceDns:
  loadBalancerIP: 192.168.178.252
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc
  type: LoadBalancer
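A values file like the one above can then be applied at install or upgrade time; `helm upgrade --install` works for both cases (release and namespace names are examples):

```shell
# Apply a customized values.yaml; installs the release if it does not exist yet.
helm upgrade --install pihole mojo2600/pihole \
  --namespace pihole \
  --create-namespace \
  -f values.yaml
```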

Configuring Upstream DNS Resolvers

By default, pihole-kubernetes will configure pod DNS automatically to use Google's 8.8.8.8 nameserver for upstream DNS resolution. You can change this, or opt out of pod DNS configuration completely.

Changing The Upstream DNS Resolver

For example, to use Cloudflare's resolver:

podDnsConfig:
  enabled: true
  policy: "None"
  nameservers:
  - 127.0.0.1
  - 1.1.1.1

Disabling Pod DNS Configuration

If you have other DNS policy at play (for example, when running a service mesh), you may not want to have pihole-kubernetes control this behavior. In that case, you can disable DNS configuration on pihole pods:

podDnsConfig:
  enabled: false

Upgrading

To 2.0.0

This version splits the DHCP service into its own resource and puts the configuration to serviceDhcp.

If you have not changed any configuration for serviceDns, you don’t need to do anything.

If you have changed your serviceDns configuration, copy your serviceDns section into a new serviceDhcp section.
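For example, if your pre-2.0.0 values contained a customized serviceDns section, the migrated values would duplicate that configuration under the new serviceDhcp key (the IP and annotation shown are taken from the example above):

```yaml
serviceDns:
  loadBalancerIP: 192.168.178.252
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc

# New in 2.0.0: the DHCP service reads its own section.
serviceDhcp:
  loadBalancerIP: 192.168.178.252
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc
```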

To 1.8.22

To enhance compatibility with Traefik, we split the TCP and UDP service into Web and DNS services. This means that if you have a dedicated configuration for the service, you have to update your values.yaml and add a configuration for the new service.

Before (In my case, with metallb):

serviceTCP:
  loadBalancerIP: 192.168.178.252
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc

serviceUDP:
  loadBalancerIP: 192.168.178.252
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc

After:

serviceWeb:
  loadBalancerIP: 192.168.178.252
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc

serviceDns:
  loadBalancerIP: 192.168.178.252
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc

Version 1.8.22 switched from the deprecated ingress API extensions/v1beta1 to its successor networking.k8s.io/v1. This means your cluster must be running Kubernetes 1.19 or newer, as this API is not available on older versions. If you need to run on an older Kubernetes version, you can modify ingress.yaml and change the API definition back. The backend definition would also change from:

            backend:
              service:
                name: {{ $serviceName }}
                port:
                  name: http

to:

            backend:
              serviceName: {{ $serviceName }}
              servicePort: http

Uninstallation

To uninstall/delete the my-release deployment:

helm delete my-release

Note: purging is the default behaviour in Helm 3+; the Helm 2 flag --purge no longer exists there and will cause an error.

Configuration

The following table lists the configurable parameters of the pihole chart and their default values.

Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| DNS1 | string | `"8.8.8.8"` | default upstream DNS 1 server to use |
| DNS2 | string | `"8.8.4.4"` | default upstream DNS 2 server to use |
| adlists | object | `{}` | list of adlists to import during initial start of the container |
| admin | object | `{"existingSecret":"","passwordKey":"password"}` | Use an existing secret for the admin password. |
| admin.enabled | bool | `true` | If set to false, the admin password will be disabled; adminPassword and the pre-existing secret (if specified) will be ignored. |
| admin.existingSecret | string | `""` | Specify an existing secret to use as admin password |
| admin.passwordKey | string | `"password"` | Specify the key inside the secret to use |
| adminPassword | string | `"admin"` | Administrator password when not using an existing secret (see above) |
| affinity | object | `{}` |  |
| antiaff.avoidRelease | string | `"pihole1"` | Here you can set the pihole release (set in `helm install <releasename> ...`) you want to avoid |
| antiaff.enabled | bool | `false` | set to true to enable anti-affinity (example: 2 pihole DNS in the same cluster) |
| antiaff.strict | bool | `true` | Here you can choose between preferred or required |
| antiaff.namespaces |  | `'[]'` | list of namespaces to include in anti-affinity settings |
| blacklist | object | `{}` | list of blacklisted domains to import during initial start of the container |
| customVolumes.config | object | `{}` | any volume type can be used here |
| customVolumes.enabled | bool | `false` | set this to true to enable custom volumes |
| dnsHostPort.enabled | bool | `false` | set this to true to enable dnsHostPort |
| dnsHostPort.port | int | `53` | default port for this pod |
| dnsmasq.additionalHostsEntries | list | `[]` | Dnsmasq reads the /etc/hosts file to resolve IPs. You can add additional entries if you like |
| dnsmasq.customCnameEntries | list | `[]` | Here we specify custom cname entries that should point to A records or elements in the customDnsEntries array. The format should be: `cname=cname.foo.bar,foo.bar`, `cname=cname.bar.foo,bar.foo`, `cname=cname record,dns record` |
| dnsmasq.customDnsEntries | list | `[]` | Add custom DNS entries to override the DNS resolution. All lines will be added to the pihole dnsmasq configuration. |
| dnsmasq.customSettings | string | `nil` | Other options |
| dnsmasq.staticDhcpEntries | list | `[]` | Static DHCP config |
| dnsmasq.upstreamServers | list | `[]` | Add upstream DNS servers. All lines will be added to the pihole dnsmasq configuration |
| doh.enabled | bool | `false` | set to true to enable DNS over HTTPS via cloudflared |
| doh.envVars | object | `{}` | Here you can pass environment variables to the DoH container |
| doh.name | string | `"cloudflared"` |  |
| doh.probes | object | `{"liveness":{"enabled":true,"failureThreshold":10,"initialDelaySeconds":60,"probe":{"exec":{"command":["nslookup","-po=5053","cloudflare.com","127.0.0.1"]}},"timeoutSeconds":5}}` | Probes configuration |
| doh.probes.liveness | object | `{"enabled":true,"failureThreshold":10,"initialDelaySeconds":60,"probe":{"exec":{"command":["nslookup","-po=5053","cloudflare.com","127.0.0.1"]}},"timeoutSeconds":5}` | Configure the healthcheck for the doh container |
| doh.probes.liveness.enabled | bool | `true` | set to true to enable the liveness probe |
| doh.probes.liveness.failureThreshold | int | `10` | defines the failure threshold for the liveness probe |
| doh.probes.liveness.initialDelaySeconds | int | `60` | defines the initial delay for the liveness probe |
| doh.probes.liveness.probe | object | `{"exec":{"command":["nslookup","-po=5053","cloudflare.com","127.0.0.1"]}}` | customize the liveness probe |
| doh.probes.liveness.timeoutSeconds | int | `5` | defines the timeout in seconds for the liveness probe |
| doh.pullPolicy | string | `"IfNotPresent"` |  |
| doh.repository | string | `"crazymax/cloudflared"` |  |
| doh.tag | string | `"latest"` |  |
| dualStack.enabled | bool | `false` | set this to true to enable creation of DualStack services, or creation of separate IPv6 services if serviceDns.type is set to "LoadBalancer" |
| extraEnvVars | object | `{}` | a list of extra environment variables to set for pihole to use |
| extraEnvVarsSecret | object | `{}` | a list of secrets to load in as environment variables |
| extraInitContainers | list | `[]` | any initContainers you might want to run before starting pihole |
| extraObjects | list | `[]` | any extra Kubernetes manifests you might want |
| extraVolumeMounts | object | `{}` | any extra volume mounts you might want |
| extraVolumes | object | `{}` | any extra volumes you might want |
| ftl | object | `{}` | values that should be added to pihole-FTL.conf |
| hostNetwork | string | `"false"` | should the container use host network |
| hostname | string | `""` | hostname of pod |
| image.pullPolicy | string | `"IfNotPresent"` | the pull policy |
| image.repository | string | `"pihole/pihole"` | the repository to pull the image from |
| image.tag | string | `""` | the docker tag; if left empty it will use the chart's appVersion |
| ingress | object | `{"annotations":{},"enabled":false,"hosts":["chart-example.local"],"path":"/","tls":[]}` | Configuration for the Ingress |
| ingress.annotations | object | `{}` | Annotations for the ingress |
| ingress.enabled | bool | `false` | Generate an Ingress resource |
| maxSurge | int | `1` | The maximum number of Pods that can be created over the desired number of Pods during an update |
| maxUnavailable | int | `1` | The maximum number of Pods that can be unavailable during an update |
| monitoring.podMonitor | object | `{"enabled":false}` | Preferably add prometheus scrape annotations rather than enabling podMonitor. |
| monitoring.podMonitor.enabled | bool | `false` | set this to true to enable podMonitor |
| monitoring.sidecar | object | `{"enabled":false,"image":{"pullPolicy":"IfNotPresent","repository":"ekofr/pihole-exporter","tag":"v0.3.0"},"port":9617,"resources":{"limits":{"memory":"128Mi"}}}` | Sidecar configuration |
| monitoring.sidecar.enabled | bool | `false` | set this to true to enable the exporter sidecar |
| nodeSelector | object | `{}` |  |
| persistentVolumeClaim | object | `{"accessModes":["ReadWriteOnce"],"annotations":{},"enabled":false,"size":"500Mi"}` | spec.PersistentVolumeClaim configuration |
| persistentVolumeClaim.annotations | object | `{}` | Annotations for the PersistentVolumeClaim |
| persistentVolumeClaim.enabled | bool | `false` | set to true to use a PVC |
| podAnnotations | object | `{}` | Additional annotations for pods |
| podDnsConfig.enabled | bool | `true` |  |
| podDnsConfig.nameservers[0] | string | `"127.0.0.1"` |  |
| podDnsConfig.nameservers[1] | string | `"8.8.8.8"` |  |
| podDnsConfig.policy | string | `"None"` |  |
| privileged | string | `"false"` | should the container run in privileged mode |
| capabilities | object | `{}` | Linux capabilities the container should run with |
| probes | object | `{"liveness":{"type":"httpGet","enabled":true,"failureThreshold":10,"initialDelaySeconds":60,"port":"http","scheme":"HTTP","timeoutSeconds":5},"readiness":{"enabled":true,"failureThreshold":3,"initialDelaySeconds":60,"port":"http","scheme":"HTTP","timeoutSeconds":5}}` | Probes configuration |
| probes.liveness.enabled | bool | `true` | Generate a liveness probe |
| probes.liveness.type | string | `httpGet` | Defines the type of liveness probe: httpGet or command |
| probes.liveness.command | list | `[]` | A list of commands to execute as a liveness probe (requires type to be set to command) |
| probes.readiness.enabled | bool | `true` | Generate a readiness probe |
| regex | object | `{}` | list of blacklisted regex expressions to import during initial start of the container |
| replicaCount | int | `1` | The number of replicas |
| resources | object | `{}` | If you want to specify resources, uncomment the relevant lines in values.yaml, adjust them as necessary, and remove the curly braces after 'resources:'. |
| serviceDhcp | object | `{"annotations":{},"enabled":true,"externalTrafficPolicy":"Local","loadBalancerIP":"","loadBalancerIPv6":"","nodePort":"","port":67,"type":"NodePort"}` | Configuration for the DHCP service on port 67 |
| serviceDhcp.annotations | object | `{}` | Annotations for the DHCP service |
| serviceDhcp.enabled | bool | `true` | Generate a Service resource for DHCP traffic |
| serviceDhcp.externalTrafficPolicy | string | `"Local"` | spec.externalTrafficPolicy for the DHCP Service |
| serviceDhcp.loadBalancerIP | string | `""` | A fixed spec.loadBalancerIP for the DHCP Service |
| serviceDhcp.loadBalancerIPv6 | string | `""` | A fixed spec.loadBalancerIP for the IPv6 DHCP Service |
| serviceDhcp.nodePort | string | `""` | Optional node port for the DHCP service |
| serviceDhcp.port | int | `67` | The port of the DHCP service |
| serviceDhcp.type | string | `"NodePort"` | spec.type for the DHCP Service |
| serviceDns | object | `{"annotations":{},"externalTrafficPolicy":"Local","loadBalancerIP":"","loadBalancerIPv6":"","mixedService":false,"nodePort":"","port":53,"type":"NodePort"}` | Configuration for the DNS service on port 53 |
| serviceDns.annotations | object | `{}` | Annotations for the DNS service |
| serviceDns.externalTrafficPolicy | string | `"Local"` | spec.externalTrafficPolicy for the DNS Service |
| serviceDns.loadBalancerIP | string | `""` | A fixed spec.loadBalancerIP for the DNS Service |
| serviceDns.loadBalancerIPv6 | string | `""` | A fixed spec.loadBalancerIP for the IPv6 DNS Service |
| serviceDns.mixedService | bool | `false` | deploys a mixed (TCP + UDP) Service instead of separate ones |
| serviceDns.nodePort | string | `""` | Optional node port for the DNS service |
| serviceDns.port | int | `53` | The port of the DNS service |
| serviceDns.type | string | `"NodePort"` | spec.type for the DNS Service |
| serviceWeb | object | `{"annotations":{},"externalTrafficPolicy":"Local","http":{"enabled":true,"nodePort":"","port":80},"https":{"enabled":true,"nodePort":"","port":443},"loadBalancerIP":"","loadBalancerIPv6":"","type":"ClusterIP"}` | Configuration for the web interface service |
| serviceWeb.annotations | object | `{}` | Annotations for the web interface service |
| serviceWeb.externalTrafficPolicy | string | `"Local"` | spec.externalTrafficPolicy for the web interface Service |
| serviceWeb.http | object | `{"enabled":true,"nodePort":"","port":80}` | Configuration for the HTTP web interface listener |
| serviceWeb.http.enabled | bool | `true` | Generate a service for HTTP traffic |
| serviceWeb.http.nodePort | string | `""` | Optional node port for the web HTTP service |
| serviceWeb.http.port | int | `80` | The port of the web HTTP service |
| serviceWeb.https | object | `{"enabled":true,"nodePort":"","port":443}` | Configuration for the HTTPS web interface listener |
| serviceWeb.https.enabled | bool | `true` | Generate a service for HTTPS traffic |
| serviceWeb.https.nodePort | string | `""` | Optional node port for the web HTTPS service |
| serviceWeb.https.port | int | `443` | The port of the web HTTPS service |
| serviceWeb.loadBalancerIP | string | `""` | A fixed spec.loadBalancerIP for the web interface Service |
| serviceWeb.loadBalancerIPv6 | string | `""` | A fixed spec.loadBalancerIP for the IPv6 web interface Service |
| serviceWeb.type | string | `"ClusterIP"` | spec.type for the web interface Service |
| strategyType | string | `"RollingUpdate"` | The spec.strategyType for updates |
| tolerations | list | `[]` |  |
| topologySpreadConstraints | list | `[]` | Reference: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ |
| virtualHost | string | `"pi.hole"` |  |
| webHttp | string | `"80"` | port the container should use to expose HTTP traffic |
| webHttps | string | `"443"` | port the container should use to expose HTTPS traffic |
| whitelist | object | `{}` | list of whitelisted domains to import during initial start of the container |

Maintainers

| Name | Email | Url |
|------|-------|-----|
| MoJo2600 | christian.erhardt@mojo2k.de |  |

Remarks

MetalLB 0.8.1+

pihole seems to work without issues with MetalLB 0.8.1+.

MetalLB 0.7.3

MetalLB 0.7.3 has a bug where the service is no longer announced when the pod changes (e.g. on an update of a deployment). My workaround is to restart the metallb-speaker-* pods.

Credits

Pi-hole®

Contributing

Feel free to contribute by making a pull request.

Please read Contribution Guide for more information on how you can contribute to this Chart.

Contributors ✨

Thanks goes to these wonderful people:

This project follows the all-contributors specification. Contributions of any kind welcome!


Autogenerated from chart metadata using helm-docs v1.10.0