postgresql-ha Helm Chart

High-availability, cloud-native PostgreSQL based on Stolon

  • Chart Version: 0.4.3

Prerequisites

  • Kubernetes 1.21+
  • Helm 3+
  • PV provisioner support in the underlying infrastructure (when using volumes)

How to use this chart?

Installing the Chart

First, add the Helm repository:

helm repo add kubit-packs https://repo.sabz.dev/artifactory/kubit-packs

To install the chart with the release name my-postgresql-ha, create a my-postgresql-ha.values.yaml file with the following contents:

# my-postgresql-ha.values.yaml

clusterName: ...
debug: ...
superuserUsername: ...
superuserPassword: ...
superuserPasswordFile: ...
#...
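
For instance, a minimal standalone setup might look like the following (a sketch; the usernames match the chart defaults and the passwords are placeholders to replace):

# my-postgresql-ha.values.yaml
clusterName: my-postgresql-ha
debug: false
superuserUsername: postgres
superuserPassword: change-me
replicationUsername: replica
replicationPassword: change-me-too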

and then run the following command:

kubectl create ns test-postgresql-ha || true
helm upgrade --install -n test-postgresql-ha my-postgresql-ha kubit-packs/postgresql-ha -f my-postgresql-ha.values.yaml

The command deploys postgresql-ha on the Kubernetes cluster with the given parameters. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list
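
For example, to list the release installed above:

helm list -n test-postgresql-ha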

Uninstalling the Chart

To uninstall/delete the my-postgresql-ha release:

helm delete -n test-postgresql-ha my-postgresql-ha

The command removes all the Kubernetes components associated with the chart and deletes the release.
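
If the namespace created during installation is no longer needed, remove it too (this also deletes any PersistentVolumeClaims remaining in it):

kubectl delete ns test-postgresql-ha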

Install via Pack

Create a my-postgresql-ha.pack.yaml file with the following content:

# my-postgresql-ha.pack.yaml

apiVersion: k8s.kubit.ir/v1alpha1
kind: Pack
metadata:
  name: my-postgresql-ha
  namespace: test-postgresql-ha

spec:
  chart:
    repository:
      kind: ClusterPackRepository
      name: kubit-packs
    name: postgresql-ha
    version: ~=0.4.3

  values:
    clusterName: ...
    debug: ...
    superuserUsername: ...
    superuserPassword: ...
    superuserPasswordFile: ...
    #...

and then run the following command:

kubectl create ns test-postgresql-ha || true
kubectl create -f my-postgresql-ha.pack.yaml
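
Since Pack is a namespaced custom resource (see the delete command below), its status can be checked with:

kubectl -n test-postgresql-ha get pack my-postgresql-ha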

Uninstalling via Pack

To uninstall/delete the my-postgresql-ha pack:

kubectl -n test-postgresql-ha delete pack my-postgresql-ha

The command removes all the Kubernetes components associated with the chart and deletes the pack.

Parameters

The following table lists the configurable parameters of the postgresql-ha chart.

| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `clusterName` | string | Stolon cluster name override | `""` |
| `debug` | bool | | `false` |
| `global.commonImageRegistry` | string | | `""` |
| `image.registry` | string | Postgres/Stolon image registry | `""` |
| `image.repository` | string | Postgres/Stolon image repository | `"sabzco/postgres"` |
| `image.tag` | string | Postgres/Stolon image tag | `"15"` |
| `image.keeperTag` | string | Use `image.keeperTag` to prevent keeper pod restarts when `image.tag` changes | `""` |
| `image.pullPolicy` | string | Postgres/Stolon image pull policy | `"IfNotPresent"` |
| `dockerize.image.repository` | string | | `"jwilder/dockerize"` |
| `dockerize.image.pullPolicy` | string | | `"IfNotPresent"` |
| `shmVolume.enabled` | bool | Enable emptyDir volume for `/dev/shm` on keeper pods | `false` |
| `persistence.enabled` | bool | Enable data persistence for keepers | `true` |
| `persistence.size` | string | Keepers volume size | `""` |
| `persistence.storageClass` | string | Storage class name of the backing keeper PVCs | `""` |
| `persistence.accessModes` | list | Keeper persistent volume access modes | `["ReadWriteOnce"]` |
| `rbac.create` | bool | Specifies whether RBAC resources should be created | `true` |
| `serviceAccount.create` | bool | Specifies whether a ServiceAccount should be created | `true` |
| `serviceAccount.name` | string | The name of the ServiceAccount to use. If not set and `create` is true, a name is generated using the fullname template | `""` |
| `serviceAccount.imagePullSecrets` | list | List of pull secrets added to the ServiceAccount | `[]` |
| `serviceAccount.imagePullSecrets[].name` | string | Name of the Secret used to pull images | `""` |
| `superuserUsername` | string | Postgresql superuser username | `"postgres"` |
| `superuserPassword` | string | Password for the superuser (REQUIRED if `superuserSecret` and `superuserPasswordFile` are not set) | `""` |
| `superuserPasswordFile` | string | File from which to read the Postgresql superuser password | `""` |
| `superuserSecret.name` | string | Postgresql superuser credential secret name | `""` |
| `superuserSecret.usernameKey` | string | Username key of the Postgresql superuser in the secret | `"pg_su_username"` |
| `superuserSecret.passwordKey` | string | Password key of the Postgresql superuser in the secret | `"pg_su_password"` |
| `replicationUsername` | string | Replication username | `"replica"` |
| `replicationPassword` | string | Password for the replication user (REQUIRED if `replicationSecret` and `replicationPasswordFile` are not set) | `""` |
| `replicationPasswordFile` | string | File from which to read the replication password | `""` |
| `replicationSecret.name` | string | Postgresql replication credential secret name | `""` |
| `replicationSecret.usernameKey` | string | Username key of the Postgresql replication user in the secret | `"pg_repl_username"` |
| `replicationSecret.passwordKey` | string | Password key of the Postgresql replication user in the secret | `"pg_repl_password"` |
| `store.backend` | string | Stolon store backend. One of `consul`, `etcdv2`, `etcdv3`, or `kubernetes`. If set to `kubernetes` or `consul`, set `etcd.enabled` to `false` | `"etcdv2"` |
| `store.endpoints` | string | Store backend endpoints (e.g. `http://stolon-etcd:2379`) | `nil` |
| `store.kubeResourceKind` | string | Kubernetes resource kind, one of `configmap` or `secret` (only for the `kubernetes` backend) | `nil` |
| `pgParameters` | object | `postgresql.conf` options used during cluster creation | see below |
| `pgParameters.shared_buffers` | string | Sets the number of shared memory buffers used by the server | `0.25 * resources.requests.memory` |
| `pgParameters.log_checkpoints` | on/off | Logs each checkpoint | `"on"` |
| `pgParameters.log_lock_waits` | on/off | Logs long lock waits | `"on"` |
| `pgParameters.checkpoint_completion_target` | string | Time spent flushing dirty buffers during checkpoint, as a fraction of the checkpoint interval | `"0.9"` |
| `pgParameters.wal_keep_size` | string | Sets the size of WAL files held for standby servers | `"1GB"` |
| `serviceMonitor.enabled` | bool | When set to `true`, use a ServiceMonitor to collect metrics | `true` |
| `serviceMonitor.labels` | object | Custom labels to use in the ServiceMonitor, to be matched with a specific Prometheus | `{}` |
| `serviceMonitor.namespace` | string | Namespace the ServiceMonitor should be deployed to | `"default"` |
| `serviceMonitor.interval` | string | How frequently Prometheus should scrape | `"30s"` |
| `serviceMonitor.scrapeTimeout` | string | How long Prometheus waits for a scrape before timing out (`scrapeTimeout` must be lower than `interval`) | `"10s"` |
| `forceInitCluster` | bool | | `false` |
| `databases` | array | Array of databases to create | `[]` |
| `databases[].database` | string | Name of the database | `""` |
| `databases[].databaseCreationExtraArguments` | string | Extra arguments appended to the create-database SQL command | `""` |
| `databases[].username` | string | User that is created and granted access to the database | `""` |
| `databases[].password` | string | Password of the user | `""` |
| `databases[].extensions` | list of string | List of extensions to create for this database | `[]` |
| `mode` | string | Stolon mode; the default creates a standalone cluster, set to `standby` to follow another postgresql instance | `"standalone"` |
| `standbyConfig` | object | Specification of the master postgresql when `mode` is `standby` | `{"certs":{"enabled":false,"files":{"ca.crt":"","tls.crt":"","tls.key":""},"path":"certs"},"host":"","port":"","sslmode":"disable"}` |
| `standbyConfig.host` | string | Host of the master postgresql | `""` |
| `standbyConfig.port` | string | Port of the master postgresql | `""` |
| `standbyConfig.sslmode` | string | `sslmode` used when connecting to the master postgresql | `"disable"` |
| `standbyConfig.certs` | object | Certificate properties used when connecting to the master postgresql | `{"enabled":false,"files":{"ca.crt":"","tls.crt":"","tls.key":""},"path":"certs"}` |
| `standbyConfig.certs.enabled` | bool | If enabled, the given certificates are mounted in keeper pods | `false` |
| `standbyConfig.certs.path` | string | Path to mount certificates | `"certs"` |
| `standbyConfig.certs.files` | object | Certificate files | `{}` |
| `standbyConfig.certs.files."ca.crt"` | string | Content of the `ca.crt` file | `""` |
| `standbyConfig.certs.files."tls.crt"` | string | Content of the `tls.crt` file | `""` |
| `standbyConfig.certs.files."tls.key"` | string | Content of the `tls.key` file | `""` |
| `clusterSpec` | object | Stolon cluster spec (see the Stolon cluster spec reference) | `{}` |
| `tls` | object | Enable SSL support in postgres; you must specify the certs (see reference) | `{}` |
| `tls.enabled` | bool | Enable TLS support for postgresql | `false` |
| `tls."ca.crt"` | string | Content of the `ca.crt` file | `""` |
| `tls."tls.crt"` | string | Content of the `tls.crt` file | `""` |
| `tls."tls.key"` | string | Content of the `tls.key` file | `""` |
| `tls.existingSecret` | string | Existing secret with certificate content for stolon credentials | `nil` |
| `keeper.uid_prefix` | string | Keeper prefix name | `"keeper"` |
| `keeper.replicaCount` | int | Number of keeper nodes | `2` |
| `keeper.annotations` | object | | `{}` |
| `keeper.resources` | object | Keeper resource requests/limits | `{}` |
| `keeper.priorityClassName` | string | Keeper priorityClassName | `""` |
| `keeper.podSecurityContext.fsGroup` | int | Keeper securityContext fsGroup; do not set for pg9 or pg10 | `1000` |
| `keeper.podSecurityContext.fsGroupChangePolicy` | string | | `"OnRootMismatch"` |
| `keeper.updateStrategy` | object | | `{}` |
| `keeper.service.type` | string | | `"ClusterIP"` |
| `keeper.service.annotations` | object | | `{}` |
| `keeper.affinity` | object | Affinity settings for keeper pod assignment | `{}` |
| `keeper.antiAffinityMode` | string | | `"required"` |
| `keeper.nodeSelector` | object | Node labels for keeper pod assignment | `{}` |
| `keeper.tolerations` | list | Toleration labels for keeper pod assignment | `[]` |
| `keeper.volumes` | list | Additional keeper volumes | `[]` |
| `keeper.volumeMounts` | list | Mount paths for `keeper.volumes` | `[]` |
| `keeper.hooks.failKeeper.enabled` | bool | Enable the failkeeper pre-stop hook | `false` |
| `keeper.podDisruptionBudget.enabled` | bool | If true, create a pod disruption budget for keeper pods | `true` |
| `keeper.podDisruptionBudget.minAvailable` | int | Minimum number / percentage of pods that should remain scheduled | `1` if `keeper.replicaCount >= 2` and no PDB set, otherwise `nil` |
| `keeper.podDisruptionBudget.maxUnavailable` | int | Maximum number / percentage of pods that may be made unavailable | `1` if `keeper.replicaCount == 1` and no PDB set, otherwise `nil` |
| `keeper.terminationGracePeriodSeconds` | int | Optional duration in seconds the keeper pod needs to terminate gracefully | `10` |
| `keeper.extraEnv` | list | Extra environment variables for keeper | `[]` |
| `keeper.networkPolicy.enabled` | bool | | `true` |
| `keeper.networkPolicy.metricsExtraFrom` | list | | `[]` |
| `keeper.readinessProbe.enabled` | bool | | `true` |
| `keeper.readinessProbe.port` | int | | `10101` |
| `keeper.readinessProbe.path` | string | | `"/readiness"` |
| `keeper.readinessProbe.initialDelaySeconds` | int | | `2` |
| `keeper.readinessProbe.periodSeconds` | int | | `10` |
| `keeper.readinessProbe.timeoutSeconds` | int | | `1` |
| `keeper.readinessProbe.successThreshold` | int | | `1` |
| `keeper.readinessProbe.failureThreshold` | int | | `3` |
| `proxy.replicaCount` | int | Number of proxy pods | `2` |
| `proxy.annotations` | object | | `{}` |
| `proxy.resources` | object | Proxy resource requests/limits | `{"requests":{"cpu":"20m","memory":"200Mi"}}` |
| `proxy.priorityClassName` | string | Proxy priorityClassName | `""` |
| `proxy.service.type` | string | | `"ClusterIP"` |
| `proxy.service.annotations` | object | | `{}` |
| `proxy.service.ports.proxy.port` | int | | `5432` |
| `proxy.service.ports.proxy.targetPort` | int | | `5432` |
| `proxy.service.ports.proxy.protocol` | string | | `"TCP"` |
| `proxy.affinity` | object | Affinity settings for proxy pod assignment | `{}` |
| `proxy.antiAffinityMode` | string | | `"required"` |
| `proxy.nodeSelector` | object | Node labels for proxy pod assignment | `{}` |
| `proxy.tolerations` | list | Toleration labels for proxy pod assignment | `[]` |
| `proxy.podDisruptionBudget.enabled` | bool | If true, create a pod disruption budget for proxy pods | `false` |
| `proxy.podDisruptionBudget.minAvailable` | int | Minimum number / percentage of pods that should remain scheduled | `1` if `proxy.replicaCount >= 2` and no PDB set, otherwise `nil` |
| `proxy.podDisruptionBudget.maxUnavailable` | int | Maximum number / percentage of pods that may be made unavailable | `1` if `proxy.replicaCount == 1` and no PDB set, otherwise `nil` |
| `proxy.extraEnv` | list | Extra environment variables for proxy | `[]` |
| `proxy.networkPolicy.enabled` | bool | | `false` |
| `proxy.networkPolicy.sameNamespace` | bool | | `true` |
| `proxy.networkPolicy.extraFrom` | list | | `[]` |
| `proxy.readinessProbe.port` | int | | `5432` |
| `proxy.readinessProbe.initialDelaySeconds` | int | | `10` |
| `proxy.readinessProbe.timeoutSeconds` | int | | `5` |
| `slaveProxy.enabled` | bool | Enable creation of a slave-proxy deployment to connect to slave keepers | `false` |
| `slaveProxy.replicaCount` | int | | `2` |
| `slaveProxy.annotations` | object | | `{}` |
| `slaveProxy.resources.requests.memory` | string | | `"200Mi"` |
| `slaveProxy.resources.requests.cpu` | string | | `"20m"` |
| `slaveProxy.priorityClassName` | string | | `""` |
| `slaveProxy.service.type` | string | | `"ClusterIP"` |
| `slaveProxy.service.annotations` | object | | `{}` |
| `slaveProxy.service.ports.proxy.port` | int | | `5432` |
| `slaveProxy.service.ports.proxy.targetPort` | int | | `5432` |
| `slaveProxy.service.ports.proxy.protocol` | string | | `"TCP"` |
| `slaveProxy.affinity` | object | | `{}` |
| `slaveProxy.antiAffinityMode` | string | | `"required"` |
| `slaveProxy.nodeSelector` | object | | `{}` |
| `slaveProxy.tolerations` | list | | `[]` |
| `slaveProxy.podDisruptionBudget.enabled` | bool | If true, create a pod disruption budget for slaveProxy pods | `false` |
| `slaveProxy.podDisruptionBudget.minAvailable` | int | Minimum number / percentage of pods that should remain scheduled | `1` if `slaveProxy.replicaCount >= 2` and no PDB set, otherwise `nil` |
| `slaveProxy.podDisruptionBudget.maxUnavailable` | int | Maximum number / percentage of pods that may be made unavailable | `1` if `slaveProxy.replicaCount == 1` and no PDB set, otherwise `nil` |
| `slaveProxy.extraEnv` | list | | `[]` |
| `slaveProxy.networkPolicy.enabled` | bool | | `false` |
| `slaveProxy.networkPolicy.sameNamespace` | bool | | `true` |
| `slaveProxy.networkPolicy.extraFrom` | list | | `[]` |
| `slaveProxy.readinessProbe.port` | int | | `5432` |
| `slaveProxy.readinessProbe.initialDelaySeconds` | int | | `10` |
| `slaveProxy.readinessProbe.timeoutSeconds` | int | | `5` |
| `sentinel.replicaCount` | int | Number of sentinel pods | `3` |
| `sentinel.annotations` | object | | `{}` |
| `sentinel.resources` | object | Sentinel resource requests/limits | `{"requests":{"cpu":"10m","memory":"50Mi"}}` |
| `sentinel.priorityClassName` | string | Sentinel priorityClassName | `""` |
| `sentinel.affinity` | object | Affinity settings for sentinel pod assignment | `{}` |
| `sentinel.antiAffinityMode` | string | | `"required"` |
| `sentinel.nodeSelector` | object | Node labels for sentinel pod assignment | `{}` |
| `sentinel.tolerations` | list | Toleration labels for sentinel pod assignment | `[]` |
| `sentinel.podDisruptionBudget.enabled` | bool | If true, create a pod disruption budget for sentinel pods | `false` |
| `sentinel.podDisruptionBudget.minAvailable` | int | Minimum number / percentage of pods that should remain scheduled | `1` if `sentinel.replicaCount >= 2` and no PDB set, otherwise `nil` |
| `sentinel.podDisruptionBudget.maxUnavailable` | int | Maximum number / percentage of pods that may be made unavailable | `1` if `sentinel.replicaCount == 1` and no PDB set, otherwise `nil` |
| `sentinel.extraEnv` | list | Extra environment variables for sentinel | `[]` |
| `sentinel.livenessProbe.enabled` | bool | | `false` |
| `sentinel.livenessProbe.command[0]` | string | | `"/tmp/job-scripts/sentinel-cluster-has-leader.sh"` |
| `sentinel.livenessProbe.initialDelaySeconds` | int | | `5` |
| `sentinel.livenessProbe.periodSeconds` | int | | `10` |
| `sentinel.livenessProbe.timeoutSeconds` | int | | `1` |
| `sentinel.livenessProbe.successThreshold` | int | | `1` |
| `sentinel.livenessProbe.failureThreshold` | int | | `5` |
| `postgresqlUpgrade.enabled` | bool | Enable the postgresql upgrade mechanism (note: postgresql will be down during the upgrade) | `false` |
| `postgresqlUpgrade.oldVersion` | string | Postgresql version to upgrade from | `"11"` |
| `postgresqlUpgrade.newVersion` | string | Postgresql version to upgrade to | `"13"` |
| `postgresqlUpgrade.image.registry` | string | Upgrade-specific image registry | `""` |
| `postgresqlUpgrade.image.repository` | string | Upgrade-specific image repository | `"tianon/postgres-upgrade"` |
| `postgresqlUpgrade.image.tag` | string | Upgrade-specific image tag | `[oldVersion]-to-[newVersion]` |
| `metrics.image.registry` | string | | `""` |
| `metrics.image.repository` | string | | `"prometheuscommunity/postgres-exporter"` |
| `metrics.image.tag` | string | | `"v0.12.0"` |
| `metrics.image.pullPolicy` | string | | `"IfNotPresent"` |
| `metrics.database` | string | | `"postgres"` |
| `metrics.port` | int | | `9187` |
| `metrics.defaultCustomMetrics.pg_replication` | bool | | `false` |
| `metrics.defaultCustomMetrics.pg_postmaster` | bool | | `false` |
| `metrics.defaultCustomMetrics.pg_stat_user_tables` | bool | | `false` |
| `metrics.defaultCustomMetrics.pg_statio_user_tables` | bool | | `false` |
| `metrics.defaultCustomMetrics.pg_stat_statements` | bool | | `false` |
| `metrics.defaultCustomMetrics.pg_process_idle` | bool | | `false` |
| `metrics.postgres_exporter_yml.auth_modules` | object | | `{}` |
| `metrics.customMetrics` | object | | `{}` |
| `adminer.enabled` | bool | Enable adminer deployment, a full-featured database management tool | `false` |
| `adminer.replicaCount` | int | Number of adminer pods | `1` |
| `adminer.image.registry` | string | Adminer image registry | `""` |
| `adminer.image.repository` | string | Adminer image repository | `"adminer"` |
| `adminer.image.tag` | string | Adminer image tag | `"4.8.1"` |
| `adminer.image.pullPolicy` | string | Adminer image pull policy | `"IfNotPresent"` |
| `adminer.ingress.annotations` | object | | `{}` |
| `adminer.ingress.secretName` | string | Adminer ingress TLS secret | `""` |
| `adminer.ingress.host` | string | Adminer ingress host | `""` |
| `adminer.resources` | object | Adminer resource requests/limits | `{}` |
| `adminer.theme` | string | Adminer theme name (see more themes) | `"pappu687"` |
| `backup.enabled` | bool | Enable the backup mechanism by creating a CronJob | `false` |
| `backup.schedule` | string | CronJob schedule | `"0 0 * * *"` |
| `backup.activeDeadlineSeconds` | int | Maximum time until the job is treated as dead | `14400` |
| `backup.strategy` | string | Determines which keeper to back up from. Valid selections are `only-standby`, `prefer-standby`, and `exclusive-standby`; selecting `exclusive-standby` creates a dedicated keeper to back up from | `"only-standby"` |
| `backup.maxBackups` | int | Maximum number of successful backups to retain; older ones are removed | `0` |
| `backup.provider` | string | Backup storage provider, one of `s3` or `local` | `"s3"` |
| `backup.s3` | object | Backup s3 parameters | `{"accessKey":"","bucket":"","endpoint":"","existingSecret":"","pathPrefix":"","region":"","secretKey":""}` |
| `backup.s3.existingSecret` | string | Existing Secret name containing the s3 parameters | `""` |
| `backup.s3.endpoint` | string | S3 endpoint | `""` |
| `backup.s3.region` | string | S3 region | `""` |
| `backup.s3.accessKey` | string | S3 AccessKey | `""` |
| `backup.s3.secretKey` | string | S3 SecretKey | `""` |
| `backup.s3.bucket` | string | S3 bucket name | `""` |
| `backup.s3.pathPrefix` | string | S3 path prefix | `""` |
| `backup.extraArgs` | list | Extra args for the `stolonctl backup` command | `[]` |
| `backup.image.registry` | string | | `""` |
| `backup.image.repository` | string | | `""` |
| `backup.image.tag` | string | | `""` |
| `backup.image.pullPolicy` | string | | `"Always"` |
| `backup.persistence.enabled` | bool | | `false` |
| `backup.persistence.size` | string | | `"50Gi"` |
| `backup.persistence.accessMode` | string | | `"ReadWriteMany"` |
| `backup.persistence.existingPVC` | string | | `""` |
| `backup.persistence.mountPath` | string | | `"/backups"` |
| `backup.resources` | object | | `{}` |
| `backup.affinity` | object | | `{}` |
| `backup.antiAffinityMode` | string | | `"required"` |

Examples

Hello world

image:
  tag: v0.17.0-pg13

mode: standalone

superuserUsername: postgres
superuserPassword: my-postgres-password
replicationUsername: replica
replicationPassword: my-replica-password

persistence:
  storageClass: my-storage-class
  size: 5Gi

etcd:
  enabled: true
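
Once the pods are ready, clients reach PostgreSQL through the proxy Service on port 5432 (proxy.service.ports.proxy.port above). Assuming the proxy Service ends up named my-postgresql-ha-proxy (the exact name depends on the chart's name templates), a quick in-cluster connectivity check could look like:

kubectl -n test-postgresql-ha run psql-client --rm -it --image=postgres:15 -- \
  psql -h my-postgresql-ha-proxy -U postgres -W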

External etcd

etcd:
  enabled: false # default

store:
  backend: etcdv2 # default
  endpoints: http://my-etcd-endpoint:port
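
Alternatively, per the store.backend entry in the table above, cluster state can be kept directly in Kubernetes resources instead of etcd; a sketch (etcd.enabled must then be false):

etcd:
  enabled: false

store:
  backend: kubernetes
  kubeResourceKind: configmap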

Custom registry

global:
  commonImageRegistry: my-docker.io

Database creation

databases:
  - database: my-database
    username: my-username
    password: my-password
    extensions:
      - my-extension-1
      - my-extension-2
    databaseCreationExtraArguments: my-arguments

Standby mode

mode: standby
replicationUsername: my-master-pg-replication-username
replicationPassword: my-master-pg-replication-password
standbyConfig:
  host: my-master-pg-host
  port: 5432
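
If the master only accepts TLS connections, the standbyConfig.certs parameters from the table above can be added to the values shown here. A minimal sketch (the certificate contents are placeholders, and verify-ca is one of the standard libpq sslmode values):

standbyConfig:
  host: my-master-pg-host
  port: 5432
  sslmode: verify-ca
  certs:
    enabled: true
    files:
      ca.crt: '...'
      tls.crt: '...'
      tls.key: '...'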

Promotion to standalone mode

Just apply with this value:

mode: standalone # default

Resources

keeper:
  resources:
    requests:
      memory: 1Gi
      cpu: 100m
proxy:
  resources:
    requests:
      memory: 50Mi
      cpu: 50m
sentinel:
  resources:
    requests:
      memory: 10Mi
      cpu: 10m

Postgresql upgrading

First, apply with these values:

postgresqlUpgrade:
  enabled: true
  oldVersion: 11
  newVersion: 13

and once the upgrade has completed, apply with these values:

postgresqlUpgrade:
  enabled: false # default

Adminer

Adminer is a web-based, full-featured database management tool:

adminer:
  enabled: true
  ingress:
    host: my-adminer.com
    secretName: my-secret-tls
  theme: pappu687 # default
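
Credentials from an existing Secret

Instead of writing passwords into values, superuserSecret (and likewise replicationSecret) can reference a pre-created Secret. A sketch using the chart's default key names from the table above; the Secret name my-pg-credentials is only an example:

kubectl -n test-postgresql-ha create secret generic my-pg-credentials \
  --from-literal=pg_su_username=postgres \
  --from-literal=pg_su_password=change-me

and then in the values file:

superuserSecret:
  name: my-pg-credentials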

Recommended pg parameters

pgParameters:
  shared_buffers: '0.5GB' # half of keeper.resources.requests.memory
  log_checkpoints: 'on'
  log_lock_waits: 'on'
  checkpoint_completion_target: '0.9'
  min_wal_size: '2GB'
  shared_preload_libraries: 'pg_stat_statements'
  pg_stat_statements.track: 'all'

For more configuration options, see the pgParameters entries in the Parameters table above.

Maintainers