# Changes

A big shoutout to @luhahn for all his work in #205, which served as the base for this PR.

## Documentation

- [x] After thinking about it for some time, I still prefer the distinct option (as started in #350), i.e. having a standalone "HA" doc under `docs/ha-setup.md` to avoid a very long README (which is already quite long). Most of the information below should go into it, with more details and explanations of all the individual components.

## Chart deps

~~- Adds `meilisearch` as a chart dependency for a HA-ready issue indexer. Only works with >= Gitea 1.20.~~
~~- Adds `redis` as a chart dependency for a HA-ready session and queue store.~~

- Adds `redis-cluster` as a chart dependency for a HA-ready session and queue store (alternative to `redis`). Only works with >= Gitea 1.19.2.
- Removes `memcached` in favor of `redis-cluster`
- Adds `postgresql-ha` as the default DB dependency in favor of `postgresql`

## Adds smart HA chart logic

The goal is to set smart config values that result in a HA-ready Gitea deployment if `replicaCount` > 1.

- If `replicaCount` > 1,
  - `gitea.config.session.PROVIDER` is automatically set to `redis-cluster`
  - `gitea.config.indexer.REPO_INDEXER_ENABLED` is automatically set to `false` unless the value is `elasticsearch` or `meilisearch`
  - `redis-cluster` is used for `[queue]`, `[cache]`, and `[session]`

Configuration of external instances of `meilisearch` and `minio` is documented in a new markdown doc.

## Deployment vs Statefulset

Given all the discussions about this lately (#428), I think we could use both. In the end, we do not have the requirement for a sequential pod scale up/scale down as it would happen in statefulsets. On the other hand, we do not have actual stateless pods, as we are attaching a RWX volume to the deployment.
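The smart HA defaults described above can be sketched as an equivalent values fragment (illustrative only; the chart applies these automatically once `replicaCount` > 1, so you would not normally set them by hand):

```yaml
# Overrides equivalent to what the chart logic applies when replicaCount > 1
replicaCount: 2
gitea:
  config:
    session:
      PROVIDER: redis-cluster
    indexer:
      REPO_INDEXER_ENABLED: false
```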
Yet I think that because we do not have a leader-election requirement, spawning the pods as a deployment makes "Rolling Updates" easier and also signals users that there is no "leader election" logic and each pod can just be "destroyed" at any time without causing interruption. Hence I think we should be able to switch from a statefulset to a deployment, even in the single-replica case.

This change also brought up a templating/linting issue: the definition of `.Values.gitea.config.server.SSH_LISTEN_PORT` in `ssh-svc.yaml` just "luckily" worked so far due to naming-related lint processing. Due to the change from "statefulset" to "deployment", the processing order changed and caused a failure complaining about `config.server.SSH_LISTEN_PORT` not being defined yet. The only way I could see to fix this was to "properly" define the value in `values.yaml` instead of conditionally defining it in `helpers.tpl`. Maybe there's a better way?

## Chart PVC Creation

I've adapted the automated PVC creation from another chart to be able to provide the `storageClassName`, as I couldn't get dynamic provisioning for EFS going with the current implementation. In addition, the naming and approach within the Gitea chart for PV creation is a bit unusual, and aligning it might be beneficial. This is a semi-unrelated change which will result in a breaking change for existing users, but this PR includes a lot of breaking changes already, so including another one might not make it much worse...

- New `persistence.mount`: whether to mount an existing PVC (via `persistence.claimName`)
- New `persistence.create`: whether to create a new PVC

## Testing

As this PR does a lot of things, we need proper testing. The helm chart can be installed from the Git branch via `helm-git` as follows:

```
helm repo add gitea-charts git+https://gitea.com/gitea/helm-chart@/?ref=deployment
helm install gitea --version 0.0.0
```

It is **highly recommended** to test the chart in a dedicated namespace.
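As a sketch of the two new persistence options, this values fragment mounts a pre-provisioned claim instead of creating one (the claim name is hypothetical):

```yaml
persistence:
  enabled: true
  create: false  # do not create a new PVC
  mount: true    # mount the claim referenced below
  claimName: my-preprovisioned-rwx-claim  # hypothetical existing PVC
```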
I've tested this myself with both `redis` and `redis-cluster` and it seemed to work fine. I just did some basic operations though, and we should do more niche testing before merging.

Exemplary `values.yml` for testing (only needs a valid RWX storage class):

<details>
<summary>values.yaml</summary>

```yml
image:
  tag: "dev"
  pullPolicy: "Always"
  rootless: true

replicaCount: 2

persistence:
  enabled: true
  accessModes:
    - ReadWriteMany
  storageClass: FIXME

redis-cluster:
  enabled: false
  global:
    redis:
      password: gitea

gitea:
  config:
    indexer:
      ISSUE_INDEXER_ENABLED: true
      REPO_INDEXER_ENABLED: false
```

</details>

## Preferred setup

The preferred HA setup with respect to performance and stability might currently be as follows:

- Repos: RWX (e.g. EFS or Azurefiles NFS)
- Issue indexer: Meilisearch (HA)
- Session and cache: Redis Cluster (HA)
- Attachments/Avatars: Minio (HA)

This will result in a ~10-pod HA setup overall. All pods have very low resource requests.

fix #98

Co-authored-by: pat-s <pat-s@noreply.gitea.io>
Reviewed-on: https://gitea.com/gitea/helm-chart/pulls/437
Co-authored-by: pat-s <patrick.schratz@gmail.com>
Co-committed-by: pat-s <patrick.schratz@gmail.com>
# Default values for gitea.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

## @section Global
#
## @param global.imageRegistry global image registry override
## @param global.imagePullSecrets global image pull secrets override; can be extended by `imagePullSecrets`
## @param global.storageClass global storage class override
## @param global.hostAliases global hostAliases which will be added to the pod's hosts files
global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: []
  storageClass: ""
  hostAliases: []
  # - ip: 192.168.137.2
  #   hostnames:
  #     - example.com

## @param replicaCount number of replicas for the deployment
replicaCount: 1

## @section strategy
## @param strategy.type strategy type
## @param strategy.rollingUpdate.maxSurge maxSurge
## @param strategy.rollingUpdate.maxUnavailable maxUnavailable
strategy:
  type: "RollingUpdate"
  rollingUpdate:
    maxSurge: "100%"
    maxUnavailable: 0

## @param clusterDomain cluster domain
clusterDomain: cluster.local

## @section Image
## @param image.registry image registry, e.g. gcr.io,docker.io
## @param image.repository Image to start for this pod
## @param image.tag Visit: [Image tag](https://hub.docker.com/r/gitea/gitea/tags?page=1&ordering=last_updated). Defaults to `appVersion` within Chart.yaml.
## @param image.pullPolicy Image pull policy
## @param image.rootless Whether or not to pull the rootless version of Gitea, only works on Gitea 1.14.x or higher
image:
  registry: ""
  repository: gitea/gitea
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""
  pullPolicy: Always
  rootless: true

## @param imagePullSecrets Secret to use for pulling the image
imagePullSecrets: []

## @section Security
# Security context is only usable with the rootless image due to image design
## @param podSecurityContext.fsGroup Set the shared file system group for all containers in the pod.
podSecurityContext:
  fsGroup: 1000

## @param containerSecurityContext Security context
containerSecurityContext: {}
#   allowPrivilegeEscalation: false
#   capabilities:
#     drop:
#       - ALL
#     # Add the SYS_CHROOT capability for root and rootless images if you intend to
#     # run pods on nodes that use the container runtime cri-o. Otherwise, you will
#     # get an error message from the SSH server that it is not possible to read from
#     # the repository.
#     # https://gitea.com/gitea/helm-chart/issues/161
#     add:
#       - SYS_CHROOT
#   privileged: false
#   readOnlyRootFilesystem: true
#   runAsGroup: 1000
#   runAsNonRoot: true
#   runAsUser: 1000

## @deprecated The securityContext variable has been split into two:
##   - containerSecurityContext
##   - podSecurityContext
## @param securityContext Run init and Gitea containers as a specific securityContext
securityContext: {}

## @param podDisruptionBudget Pod disruption budget
podDisruptionBudget: {}
#   maxUnavailable: 1
#   minAvailable: 1

## @section Service
service:
  ## @param service.http.type Kubernetes service type for web traffic
  ## @param service.http.port Port number for web traffic
  ## @param service.http.clusterIP ClusterIP setting for the http service (autosetup for the deployment is None)
  ## @param service.http.loadBalancerIP LoadBalancer IP setting
  ## @param service.http.nodePort NodePort for http service
  ## @param service.http.externalTrafficPolicy If `service.http.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable source IP preservation
  ## @param service.http.externalIPs External IPs for service
  ## @param service.http.ipFamilyPolicy HTTP service dual-stack policy
  ## @param service.http.ipFamilies HTTP service dual-stack family selection. For dual-stack parameters see the official kubernetes [dual-stack concept documentation](https://kubernetes.io/docs/concepts/services-networking/dual-stack/).
  ## @param service.http.loadBalancerSourceRanges Source range filter for http loadbalancer
  ## @param service.http.annotations HTTP service annotations
  http:
    type: ClusterIP
    port: 3000
    clusterIP: None
    loadBalancerIP:
    nodePort:
    externalTrafficPolicy:
    externalIPs:
    ipFamilyPolicy:
    ipFamilies:
    loadBalancerSourceRanges: []
    annotations: {}
  ## @param service.ssh.type Kubernetes service type for ssh traffic
  ## @param service.ssh.port Port number for ssh traffic
  ## @param service.ssh.clusterIP ClusterIP setting for the ssh service (autosetup for the deployment is None)
  ## @param service.ssh.loadBalancerIP LoadBalancer IP setting
  ## @param service.ssh.nodePort NodePort for ssh service
  ## @param service.ssh.externalTrafficPolicy If `service.ssh.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable source IP preservation
  ## @param service.ssh.externalIPs External IPs for service
  ## @param service.ssh.ipFamilyPolicy SSH service dual-stack policy
  ## @param service.ssh.ipFamilies SSH service dual-stack family selection. For dual-stack parameters see the official kubernetes [dual-stack concept documentation](https://kubernetes.io/docs/concepts/services-networking/dual-stack/).
  ## @param service.ssh.hostPort HostPort for ssh service
  ## @param service.ssh.loadBalancerSourceRanges Source range filter for ssh loadbalancer
  ## @param service.ssh.annotations SSH service annotations
  ssh:
    type: ClusterIP
    port: 22
    clusterIP: None
    loadBalancerIP:
    nodePort:
    externalTrafficPolicy:
    externalIPs:
    ipFamilyPolicy:
    ipFamilies:
    hostPort:
    loadBalancerSourceRanges: []
    annotations: {}
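  # Sketch (illustrative, not a default): SSH could be exposed via a
  # LoadBalancer with source IP preservation, using only the parameters above:
  # ssh:
  #   type: LoadBalancer
  #   port: 22
  #   externalTrafficPolicy: Local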

## @section Ingress
## @param ingress.enabled Enable ingress
## @param ingress.className Ingress class name
## @param ingress.annotations Ingress annotations
## @param ingress.hosts[0].host Default Ingress host
## @param ingress.hosts[0].paths[0].path Default Ingress path
## @param ingress.hosts[0].paths[0].pathType Ingress path type
## @param ingress.tls Ingress tls settings
## @extra ingress.apiVersion Specify APIVersion of ingress object. Mostly would only be used for argocd.
ingress:
  enabled: false
  # className: nginx
  className:
  annotations:
    {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: git.example.com
      paths:
        - path: /
          pathType: Prefix
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - git.example.com

  # Mostly for argocd or any other CI that uses `helm template | kubectl apply` or similar
  # If helm doesn't correctly detect your ingress API version you can set it here.
  # apiVersion: networking.k8s.io/v1

## @section deployment
#
## @param resources Kubernetes resources
resources:
  {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
## @param schedulerName Use an alternate scheduler, e.g. "stork"
schedulerName: ""

## @param nodeSelector NodeSelector for the deployment
nodeSelector: {}

## @param tolerations Tolerations for the deployment
tolerations: []

## @param affinity Affinity for the deployment
affinity: {}

## @param topologySpreadConstraints TopologySpreadConstraints for the deployment
topologySpreadConstraints: []

## @param dnsConfig dnsConfig for the deployment
dnsConfig: {}

## @param priorityClassName priorityClassName for the deployment
priorityClassName: ""

## @param deployment.env Additional environment variables to pass to containers
## @param deployment.terminationGracePeriodSeconds How long to wait before forcefully killing the pod
## @param deployment.labels Labels for the deployment
## @param deployment.annotations Annotations for the Gitea deployment to be created
deployment:
  env:
    []
    # - name: VARIABLE
    #   value: my-value
  terminationGracePeriodSeconds: 60
  labels: {}
  annotations: {}

## @section ServiceAccount
## @param serviceAccount.create Enable the creation of a ServiceAccount
## @param serviceAccount.name Name of the created ServiceAccount, defaults to release name. Can also link to an externally provided ServiceAccount that should be used.
## @param serviceAccount.automountServiceAccountToken Enable/disable auto mounting of the service account token
## @param serviceAccount.imagePullSecrets Image pull secrets, available to the ServiceAccount
## @param serviceAccount.annotations Custom annotations for the ServiceAccount
## @param serviceAccount.labels Custom labels for the ServiceAccount
serviceAccount:
  create: false
  name: ""
  automountServiceAccountToken: false
  imagePullSecrets: []
  # - name: private-registry-access
  annotations: {}
  labels: {}

## @section Persistence
#
## @param persistence.enabled Enable persistent storage
## @param persistence.create Whether to create the persistentVolumeClaim for shared storage
## @param persistence.mount Whether the persistentVolumeClaim should be mounted (even if not created)
## @param persistence.claimName Use an existing claim to store repository information
## @param persistence.size Size for persistence to store repo information
## @param persistence.accessModes AccessMode for persistence
## @param persistence.labels Labels for the persistence volume claim to be created
## @param persistence.annotations Annotations for the persistence volume claim to be created
## @param persistence.storageClass Name of the storage class to use
## @param persistence.subPath Subdirectory of the volume to mount at
persistence:
  enabled: true
  create: true
  mount: true
  claimName: gitea-shared-storage
  size: 10Gi
  accessModes:
    - ReadWriteOnce
  labels: {}
  annotations: {}
  storageClass:
  subPath:

## @param extraVolumes Additional volumes to mount to the Gitea deployment
extraVolumes: []
# - name: postgres-ssl-vol
#   secret:
#     secretName: gitea-postgres-ssl

## @param extraContainerVolumeMounts Mounts that are only mapped into the Gitea runtime/main container, to e.g. override custom templates.
extraContainerVolumeMounts: []

## @param extraInitVolumeMounts Mounts that are only mapped into the init-containers. Can be used for additional preconfiguration.
extraInitVolumeMounts: []

## @deprecated The extraVolumeMounts variable has been split into two:
##   - extraContainerVolumeMounts
##   - extraInitVolumeMounts
## As an example, can be used to mount a client cert when connecting to an external Postgres server.
## @param extraVolumeMounts **DEPRECATED** Additional volume mounts for init containers and the Gitea main container
extraVolumeMounts: []
# - name: postgres-ssl-vol
#   readOnly: true
#   mountPath: "/pg-ssl"

## @section Init
## @param initPreScript Bash shell script copied verbatim to the start of the init-container.
initPreScript: ""
#
# initPreScript: |
#   mkdir -p /data/git/.postgresql
#   cp /pg-ssl/* /data/git/.postgresql/
#   chown -R git:git /data/git/.postgresql/
#   chmod 400 /data/git/.postgresql/postgresql.key

## @param initContainers.resources.limits Kubernetes resource limits for init containers
## @param initContainers.resources.requests.cpu Kubernetes cpu resource requests for init containers
## @param initContainers.resources.requests.memory Kubernetes memory resource requests for init containers
initContainers:
  resources:
    limits: {}
    requests:
      cpu: 100m
      memory: 128Mi

# Configure commit/action signing prerequisites
## @section Signing
#
## @param signing.enabled Enable commit/action signing
## @param signing.gpgHome GPG home directory
## @param signing.privateKey Inline private gpg key for signed Gitea actions
## @param signing.existingSecret Use an existing secret to store the value of `signing.privateKey`
signing:
  enabled: false
  gpgHome: /data/git/.gnupg
  privateKey: ""
  # privateKey: |-
  #   -----BEGIN PGP PRIVATE KEY BLOCK-----
  #   ...
  #   -----END PGP PRIVATE KEY BLOCK-----
  existingSecret: ""

## @section Gitea
#
gitea:
  ## @param gitea.admin.username Username for the Gitea admin user
  ## @param gitea.admin.existingSecret Use an existing secret to store admin user credentials
  ## @param gitea.admin.password Password for the Gitea admin user
  ## @param gitea.admin.email Email for the Gitea admin user
  admin:
    # existingSecret: gitea-admin-secret
    existingSecret:
    username: gitea_admin
    password: r8sA8CPHD9!bt6d
    email: "gitea@local.domain"

  ## @param gitea.metrics.enabled Enable Gitea metrics
  ## @param gitea.metrics.serviceMonitor.enabled Enable Gitea metrics service monitor
  metrics:
    enabled: false
    serviceMonitor:
      enabled: false
    #   additionalLabels:
    #     prometheus-release: prom1

  ## @param gitea.ldap LDAP configuration
  ldap:
    []
    # - name: "LDAP 1"
    #   existingSecret:
    #   securityProtocol:
    #   host:
    #   port:
    #   userSearchBase:
    #   userFilter:
    #   adminFilter:
    #   emailAttribute:
    #   bindDn:
    #   bindPassword:
    #   usernameAttribute:
    #   publicSSHKeyAttribute:

  # Either specify inline `key` and `secret` or refer to them via `existingSecret`
  ## @param gitea.oauth OAuth configuration
  oauth:
    []
    # - name: 'OAuth 1'
    #   provider:
    #   key:
    #   secret:
    #   existingSecret:
    #   autoDiscoverUrl:
    #   useCustomUrls:
    #   customAuthUrl:
    #   customTokenUrl:
    #   customProfileUrl:
    #   customEmailUrl:

  ## @param gitea.config.server.SSH_PORT SSH port for the rootful Gitea image
  ## @param gitea.config.server.SSH_LISTEN_PORT SSH port for the rootless Gitea image
  config:
    # APP_NAME: "Gitea: Git with a cup of tea"
    # RUN_MODE: dev
    server:
      SSH_PORT: 22 # rootful image
      SSH_LISTEN_PORT: 2222 # rootless image
    #
    # security:
    #   PASSWORD_COMPLEXITY: spec
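    #
    # Any app.ini section can be set this way; a sketch for illustration
    # (LANDING_PAGE is a standard Gitea [server] setting, not chart-specific):
    # server:
    #   LANDING_PAGE: explore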

  ## @param gitea.additionalConfigSources Additional configuration from secret or configmap
  additionalConfigSources: []
  #   - secret:
  #       secretName: gitea-app-ini-oauth
  #   - configMap:
  #       name: gitea-app-ini-plaintext

  ## @param gitea.additionalConfigFromEnvs Additional configuration sources from environment variables
  additionalConfigFromEnvs: []
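  # Sketch (assumed convention): env vars named GITEA__SECTION__KEY follow
  # Gitea's environment-to-ini naming; the secret name below is hypothetical.
  # - name: GITEA__DATABASE__PASSWD
  #   valueFrom:
  #     secretKeyRef:
  #       name: my-db-secret
  #       key: password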

  ## @param gitea.podAnnotations Annotations for the Gitea pod
  podAnnotations: {}

  ## @param gitea.ssh.logLevel Configure OpenSSH's log level. Only available for root-based Gitea image.
  ssh:
    logLevel: "INFO"

  ## @section LivenessProbe
  #
  ## @param gitea.livenessProbe.enabled Enable liveness probe
  ## @param gitea.livenessProbe.tcpSocket.port Port to probe for liveness
  ## @param gitea.livenessProbe.initialDelaySeconds Initial delay before liveness probe is initiated
  ## @param gitea.livenessProbe.timeoutSeconds Timeout for liveness probe
  ## @param gitea.livenessProbe.periodSeconds Period for liveness probe
  ## @param gitea.livenessProbe.successThreshold Success threshold for liveness probe
  ## @param gitea.livenessProbe.failureThreshold Failure threshold for liveness probe
  # Modify the liveness probe for your needs or completely disable it by commenting out.
  livenessProbe:
    enabled: true
    tcpSocket:
      port: http
    initialDelaySeconds: 200
    timeoutSeconds: 1
    periodSeconds: 10
    successThreshold: 1
    failureThreshold: 10

  ## @section ReadinessProbe
  #
  ## @param gitea.readinessProbe.enabled Enable readiness probe
  ## @param gitea.readinessProbe.tcpSocket.port Port to probe for readiness
  ## @param gitea.readinessProbe.initialDelaySeconds Initial delay before readiness probe is initiated
  ## @param gitea.readinessProbe.timeoutSeconds Timeout for readiness probe
  ## @param gitea.readinessProbe.periodSeconds Period for readiness probe
  ## @param gitea.readinessProbe.successThreshold Success threshold for readiness probe
  ## @param gitea.readinessProbe.failureThreshold Failure threshold for readiness probe
  # Modify the readiness probe for your needs or completely disable it by commenting out.
  readinessProbe:
    enabled: true
    tcpSocket:
      port: http
    initialDelaySeconds: 5
    timeoutSeconds: 1
    periodSeconds: 10
    successThreshold: 1
    failureThreshold: 3

  # Uncomment the startup probe to enable and modify it for your needs.
  ## @section StartupProbe
  #
  ## @param gitea.startupProbe.enabled Enable startup probe
  ## @param gitea.startupProbe.tcpSocket.port Port to probe for startup
  ## @param gitea.startupProbe.initialDelaySeconds Initial delay before startup probe is initiated
  ## @param gitea.startupProbe.timeoutSeconds Timeout for startup probe
  ## @param gitea.startupProbe.periodSeconds Period for startup probe
  ## @param gitea.startupProbe.successThreshold Success threshold for startup probe
  ## @param gitea.startupProbe.failureThreshold Failure threshold for startup probe
  startupProbe:
    enabled: false
    tcpSocket:
      port: http
    initialDelaySeconds: 60
    timeoutSeconds: 1
    periodSeconds: 10
    successThreshold: 1
    failureThreshold: 10

## @section redis-cluster
## @param redis-cluster.enabled Enable redis-cluster
## @param redis-cluster.global.redis.password Password for the "gitea" user (overrides `password`)
redis-cluster:
  enabled: true
  global:
    redis:
      password: gitea

## @section postgresql-ha
#
## @param postgresql-ha.enabled Enable postgresql-ha
## @param postgresql-ha.global.postgresql-ha.auth.password Password for the `gitea` user (overrides `auth.password`)
## @param postgresql-ha.global.postgresql-ha.auth.database Name for a custom database to create (overrides `auth.database`)
## @param postgresql-ha.global.postgresql-ha.auth.username Name for a custom user to create (overrides `auth.username`)
## @param postgresql-ha.global.postgresql-ha.service.ports.postgresql-ha postgresql-ha service port (overrides `service.ports.postgresql-ha`)
## @param postgresql-ha.primary.persistence.size PVC Storage Request for the postgresql-ha volume
postgresql-ha:
  enabled: true
  global:
    postgresql-ha:
      auth:
        password: gitea
        database: gitea
        username: gitea
      service:
        ports:
          postgresql-ha: 5432
  primary:
    persistence:
      size: 10Gi

## @section PostgreSQL
#
## @param postgresql.enabled Enable PostgreSQL
## @param postgresql.global.postgresql.auth.password Password for the `gitea` user (overrides `auth.password`)
## @param postgresql.global.postgresql.auth.database Name for a custom database to create (overrides `auth.database`)
## @param postgresql.global.postgresql.auth.username Name for a custom user to create (overrides `auth.username`)
## @param postgresql.global.postgresql.service.ports.postgresql PostgreSQL service port (overrides `service.ports.postgresql`)
## @param postgresql.primary.persistence.size PVC Storage Request for the PostgreSQL volume
postgresql:
  enabled: false
  global:
    postgresql:
      auth:
        password: gitea
        database: gitea
        username: gitea
      service:
        ports:
          postgresql: 5432
  primary:
    persistence:
      size: 10Gi

# By default, removed or moved settings that still remain in a user-defined values.yaml will cause Helm to fail running the install/upgrade.
# Set checkDeprecation to false to skip this basic validation check.
## @section Advanced
## @param checkDeprecation Set it to false to skip this basic validation check.
## @param test.enabled Set it to false to disable the test-connection Pod.
## @param test.image.name Image name for the wget container used in the test-connection Pod.
## @param test.image.tag Image tag for the wget container used in the test-connection Pod.
checkDeprecation: true
test:
  enabled: true
  image:
    name: busybox
    tag: latest

## @param extraDeploy Array of extra objects to deploy with the release
##
extraDeploy: []
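# For instance (illustrative names only), an extra manifest could be shipped
# with the release:
# - apiVersion: v1
#   kind: ConfigMap
#   metadata:
#     name: gitea-extra-config
#   data:
#     motd: "Welcome to Gitea"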