[bitnami/redis] Sentinel cluster doesn't elect new master after master pod deletion

Which chart:
Chart: bitnami/redis
Version: 13.0.1

Describe the bug
When a master pod is manually deleted, the remaining sentinels occasionally keep re-electing the now nonexistent master. When the replacement pod comes up, it cannot connect to the master address reported by the remaining sentinels, because that address is the IP of the deleted master pod.

To Reproduce
I'm not able to reproduce the behavior deterministically; I'd say the errant behavior occurs ~20% of the time.

Steps to reproduce the behavior:

  1. Create a sentinel cluster with the values below and wait for it to come online
  2. Determine which pod is master and delete it
  3. (with some probability) The replacement pod fails to start Redis: it cannot connect to the master IP reported by the remaining sentinels, which still point at the deleted pod's IP
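For reference, the manual steps above can be sketched as a shell session. The pod names, container names (redis, sentinel), namespace, and password are taken from the values file and log output in this report; adjust them for your release:

```shell
# 1. Ask any sentinel (port 26379) which node it considers master.
kubectl exec -n redis-test redis-node-0 -c sentinel -- \
  redis-cli -p 26379 -a password sentinel get-master-addr-by-name mymaster

# 2. Match the reported IP to a pod and delete it.
kubectl get pods -n redis-test -o wide | grep <master-ip>
kubectl delete pod -n redis-test <master-pod>

# 3. Re-run the query to see whether the sentinels converge on a new
#    master; in the failure case they keep reporting the deleted pod's IP.
kubectl exec -n redis-test redis-node-0 -c sentinel -- \
  redis-cli -p 26379 -a password sentinel get-master-addr-by-name mymaster
```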

Expected behavior
When a pod is deleted, the remaining cluster members should elect a new master among themselves, and the replacement pod should be able to connect to that elected master when it comes online.

Version of Helm and Kubernetes:

  • Output of helm version:
version.BuildInfo{Version:"v3.4.0", GitCommit:"7090a89efc8a18f3d8178bf47d2462450349a004", GitTreeState:"clean", GoVersion:"go1.14.10"}
  • Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.11", GitCommit:"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede", GitTreeState:"clean", BuildDate:"2020-03-12T21:08:59Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", GitCommit:"73dd5c840662bb066a146d0871216333181f4b64", GitTreeState:"clean", BuildDate:"2021-01-13T13:14:05Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Additional context

values
## Bitnami Redis(TM) image version
## ref: https://hub.docker.com/r/bitnami/redis/tags/
##
image:
  registry: docker.io
  repository: bitnami/redis
  ## Bitnami Redis(TM) image tag
  ## ref: https://github.com/bitnami/bitnami-docker-redis#supported-tags-and-respective-dockerfile-links
  ##
  tag: "6.2.1-debian-10-r36"
  ## Specify an imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent

## Cluster settings
##
cluster:
  enabled: true
  slaveCount: 3

## Use redis sentinel in the redis pod. This will disable the master and slave services and
## create one redis service with ports to the sentinel and the redis instances
##
sentinel:
  enabled: true
  ## Require password authentication on the sentinel itself
  ## ref: https://redis.io/topics/sentinel
  ##
  usePassword: true
  ## Bitnami Redis(TM) Sentinel image version
  ## ref: https://hub.docker.com/r/bitnami/redis-sentinel/tags/
  ##
  image:
    registry: docker.io
    repository: bitnami/redis-sentinel
    ## Bitnami Redis(TM) image tag
    ## ref: https://github.com/bitnami/bitnami-docker-redis-sentinel#supported-tags-and-respective-dockerfile-links
    ##
    tag: "6.2.1-debian-10-r35"
    ## Specify an imagePullPolicy
    ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
    ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
    ##
    pullPolicy: IfNotPresent

## Use password authentication
##
usePassword: true
## Redis(TM) password (both master and slave)
## Defaults to a random 10-character alphanumeric string if not set and usePassword is true
## ref: https://github.com/bitnami/bitnami-docker-redis#setting-the-server-password-on-first-run
##
password: "password"

##
## Redis(TM) Master parameters
##
master:
  ## Comma-separated list of Redis(TM) commands to disable
  ##
  ## Can be used to disable Redis(TM) commands for security reasons.
  ## Commands will be completely disabled by renaming each to an empty string.
  ## ref: https://redis.io/topics/security#disabling-of-specific-commands
  ##
  disableCommands:
  # - FLUSHDB
  # - FLUSHALL

  ## Redis(TM) Master additional pod labels and annotations
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
  ##
  podLabels: {}
  podAnnotations:
    # Datadog redis metrics autodiscovery
    # See: https://docs.datadoghq.com/agent/kubernetes/integrations/?tab=kubernetes#datadog-redis-integration
    ad.datadoghq.com/redis.check_names: '["redisdb"]'
    ad.datadoghq.com/redis.init_configs: '[{}]'
    ad.datadoghq.com/redis.instances: |
      [
        {
          "host": "%%host%%",
          "port":"6379",
          "password":"{{ .Values.secrets.rms.cache.backend_config.password }}"
        }
      ]

  ## Redis(TM) Master resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  resources:
    requests:
      memory: 512Mi
      cpu: 300m
    limits:
      memory: 1024Mi
      cpu: 600m

  ## Enable persistence using Persistent Volume Claims
  ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    enabled: false

##
## Redis(TM) Slave properties
## Note: service.type is a mandatory parameter
## The rest of the parameters are either optional or, if undefined, will inherit those declared in Redis(TM) Master
##
slave:
  ## List of Redis(TM) commands to disable
  ##
  disableCommands:
  # - FLUSHDB
  # - FLUSHALL

  ## Redis(TM) slave Resource
  resources:
    requests:
      memory: 512Mi
      cpu: 300m
    limits:
      memory: 1024Mi
      cpu: 600m

  podAnnotations:
    # Datadog redis metrics autodiscovery
    # See: https://docs.datadoghq.com/agent/kubernetes/integrations/?tab=kubernetes#datadog-redis-integration
    ad.datadoghq.com/redis.check_names: '["redisdb"]'
    ad.datadoghq.com/redis.init_configs: '[{}]'
    ad.datadoghq.com/redis.instances: |
      [
        {
          "host": "%%host%%",
          "port":"6379",
          "password":"{{ .Values.secrets.rms.cache.backend_config.password }}"
        }
      ]

  ## Enable persistence using Persistent Volume Claims
  ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    enabled: false

## Sysctl InitContainer
## used to perform sysctl operation to modify Kernel settings (needed sometimes to avoid warnings)
##
sysctlImage:
  enabled: true
  command:
    - /bin/sh
    - -c
    - |-
      sysctl -w net.core.somaxconn=10000
      echo never > /host-sys/kernel/mm/transparent_hugepage/enabled
  registry: docker.io
  repository: bitnami/bitnami-shell
  tag: "10"
  pullPolicy: Always
  mountHostSys: true

installation command
helm install redis . -f custom-values.yaml  --atomic --namespace redis-test
cluster log output

The output below occurs on an otherwise healthy sentinel cluster after I run kubectl delete pod redis-node-2 (please note: the logs are collected via stern, which I believe explains the unexpected error: stream error: stream ID 19; INTERNAL_ERROR occurrences).

redis-node-2 redis 1:signal-handler (1618955501) Received SIGTERM scheduling shutdown...
redis-node-2 redis 1:M 20 Apr 2021 21:51:41.945 # User requested shutdown...
redis-node-2 redis 1:M 20 Apr 2021 21:51:41.945 * Calling fsync() on the AOF file.
redis-node-2 redis 1:M 20 Apr 2021 21:51:41.945 # Redis is now ready to exit, bye bye...
redis-node-2 sentinel 1:X 20 Apr 2021 21:51:41.949 # Executing user requested FAILOVER of 'mymaster'
redis-node-2 sentinel 1:X 20 Apr 2021 21:51:41.949 # +new-epoch 6
redis-node-2 sentinel 1:X 20 Apr 2021 21:51:41.949 # +try-failover master mymaster 10.42.12.213 6379
redis-node-1 redis 1:S 20 Apr 2021 21:51:41.954 # Connection with master lost.
redis-node-1 redis 1:S 20 Apr 2021 21:51:41.954 * Caching the disconnected master state.
redis-node-1 redis 1:S 20 Apr 2021 21:51:41.954 * Reconnecting to MASTER 10.42.12.213:6379
redis-node-1 redis 1:S 20 Apr 2021 21:51:41.954 * MASTER <-> REPLICA sync started
redis-node-1 redis 1:S 20 Apr 2021 21:51:41.955 # Error condition on socket for SYNC: Connection refused
redis-node-0 redis 1:S 20 Apr 2021 21:51:41.952 # Connection with master lost.
redis-node-0 redis 1:S 20 Apr 2021 21:51:41.952 * Caching the disconnected master state.
redis-node-0 redis 1:S 20 Apr 2021 21:51:41.952 * Reconnecting to MASTER 10.42.12.213:6379
redis-node-0 redis 1:S 20 Apr 2021 21:51:41.952 * MASTER <-> REPLICA sync started
redis-node-0 redis 1:S 20 Apr 2021 21:51:41.953 # Error condition on socket for SYNC: Connection refused
redis-node-2 sentinel 1:X 20 Apr 2021 21:51:41.993 # +vote-for-leader bc33c65f6d573da2c50da570ccf4dc629a32426d 6
redis-node-2 sentinel 1:X 20 Apr 2021 21:51:41.993 # +elected-leader master mymaster 10.42.12.213 6379
redis-node-2 sentinel 1:X 20 Apr 2021 21:51:41.993 # +failover-state-select-slave master mymaster 10.42.12.213 6379
redis-node-2 sentinel 1:X 20 Apr 2021 21:51:42.054 # +selected-slave slave 10.42.9.18:6379 10.42.9.18 6379 @ mymaster 10.42.12.213 6379
redis-node-2 sentinel 1:X 20 Apr 2021 21:51:42.054 * +failover-state-send-slaveof-noone slave 10.42.9.18:6379 10.42.9.18 6379 @ mymaster 10.42.12.213 6379
redis-node-2 sentinel 1:signal-handler (1618955502) Received SIGTERM scheduling shutdown...
redis-node-2 sentinel 1:X 20 Apr 2021 21:51:42.121 # User requested shutdown...
redis-node-2 sentinel 1:X 20 Apr 2021 21:51:42.121 # Sentinel is now ready to exit, bye bye...
redis-node-0 redis 1:S 20 Apr 2021 21:51:42.175 * Connecting to MASTER 10.42.12.213:6379
redis-node-0 redis 1:S 20 Apr 2021 21:51:42.175 * MASTER <-> REPLICA sync started
redis-node-0 redis 1:S 20 Apr 2021 21:51:42.177 # Error condition on socket for SYNC: Connection refused
redis-node-1 redis 1:S 20 Apr 2021 21:51:42.310 * Connecting to MASTER 10.42.12.213:6379
redis-node-1 redis 1:S 20 Apr 2021 21:51:42.310 * MASTER <-> REPLICA sync started
redis-node-1 redis 1:S 20 Apr 2021 21:51:42.312 # Error condition on socket for SYNC: Connection refused
- redis-node-2  redis
- redis-node-2  sentinel
redis-node-0 redis 1:S 20 Apr 2021 21:51:43.185 * Connecting to MASTER 10.42.12.213:6379
redis-node-0 redis 1:S 20 Apr 2021 21:51:43.185 * MASTER <-> REPLICA sync started
redis-node-1 redis 1:S 20 Apr 2021 21:51:43.328 * Connecting to MASTER 10.42.12.213:6379
redis-node-1 redis 1:S 20 Apr 2021 21:51:43.328 * MASTER <-> REPLICA sync started
redis-node-1 sentinel 1:X 20 Apr 2021 21:51:52.310 # +reset-master master mymaster 10.42.12.213 6379
+ redis-node-2  sentinel
+ redis-node-2  redis
redis-node-2 sentinel  21:51:52.24 INFO  ==> redis-headless.redis-test.svc.cluster.local has my IP: 10.42.12.214
redis-node-2 sentinel  21:51:52.29 INFO  ==> Cleaning sentinels in sentinel node: 10.42.9.18
redis-node-2 sentinel Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
redis-node-2 sentinel 1
redis-node-2 redis  21:51:51.92 INFO  ==> redis-headless.redis-test.svc.cluster.local has my IP: 10.42.12.214
redis-node-2 redis Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
redis-node-2 redis Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
redis-node-1 sentinel 1:X 20 Apr 2021 21:51:53.211 * +sentinel sentinel b942a249aa6aaca842ead4ff6ad2fd01cdd6797b 10.42.16.216 26379 @ mymaster 10.42.12.213 6379
redis-node-0 sentinel 1:X 20 Apr 2021 21:51:57.322 # +reset-master master mymaster 10.42.12.213 6379
redis-node-2 sentinel  21:51:57.31 INFO  ==> Cleaning sentinels in sentinel node: 10.42.16.216
redis-node-2 sentinel Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
redis-node-2 sentinel 1
redis-node-0 sentinel 1:X 20 Apr 2021 21:51:59.379 * +sentinel sentinel 11f8f53ef3e904a0cfe2822709d6d6ca611daaf6 10.42.9.18 26379 @ mymaster 10.42.12.213 6379
redis-node-2 sentinel  21:52:02.32 INFO  ==> Sentinels clean up done
redis-node-2 sentinel Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
redis-node-2 sentinel Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
redis-node-1 sentinel 1:X 20 Apr 2021 21:52:12.350 # +sdown master mymaster 10.42.12.213 6379
redis-node-0 sentinel 1:X 20 Apr 2021 21:52:17.333 # +sdown master mymaster 10.42.12.213 6379
redis-node-0 sentinel 1:X 20 Apr 2021 21:52:17.388 # +odown master mymaster 10.42.12.213 6379 #quorum 2/2
redis-node-0 sentinel 1:X 20 Apr 2021 21:52:17.388 # +new-epoch 6
redis-node-0 sentinel 1:X 20 Apr 2021 21:52:17.388 # +try-failover master mymaster 10.42.12.213 6379
redis-node-0 sentinel 1:X 20 Apr 2021 21:52:17.397 # +vote-for-leader b942a249aa6aaca842ead4ff6ad2fd01cdd6797b 6
redis-node-1 sentinel 1:X 20 Apr 2021 21:52:17.407 # +new-epoch 6
redis-node-0 sentinel 1:X 20 Apr 2021 21:52:17.420 # 11f8f53ef3e904a0cfe2822709d6d6ca611daaf6 voted for b942a249aa6aaca842ead4ff6ad2fd01cdd6797b 6
redis-node-1 sentinel 1:X 20 Apr 2021 21:52:17.422 # +vote-for-leader b942a249aa6aaca842ead4ff6ad2fd01cdd6797b 6
redis-node-0 sentinel 1:X 20 Apr 2021 21:52:17.480 # +elected-leader master mymaster 10.42.12.213 6379
redis-node-0 sentinel 1:X 20 Apr 2021 21:52:17.480 # +failover-state-select-slave master mymaster 10.42.12.213 6379
redis-node-0 sentinel 1:X 20 Apr 2021 21:52:17.556 # -failover-abort-no-good-slave master mymaster 10.42.12.213 6379
redis-node-0 sentinel 1:X 20 Apr 2021 21:52:17.623 # Next failover delay: I will not start a failover before Tue Apr 20 21:52:54 2021
redis-node-1 sentinel 1:X 20 Apr 2021 21:52:17.716 # +odown master mymaster 10.42.12.213 6379 #quorum 2/2
redis-node-1 sentinel 1:X 20 Apr 2021 21:52:17.716 # Next failover delay: I will not start a failover before Tue Apr 20 21:52:53 2021
unexpected error: stream error: stream ID 19; INTERNAL_ERROR
unexpected error: stream error: stream ID 29; INTERNAL_ERROR
redis-node-1 sentinel 1:X 20 Apr 2021 21:52:49.303 # +reset-master master mymaster 10.42.12.213 6379
redis-node-0 sentinel 1:X 20 Apr 2021 21:52:49.700 # -odown master mymaster 10.42.12.213 6379
redis-node-1 sentinel 1:X 20 Apr 2021 21:52:50.232 * +sentinel sentinel b942a249aa6aaca842ead4ff6ad2fd01cdd6797b 10.42.16.216 26379 @ mymaster 10.42.12.213 6379
redis-node-0 sentinel 1:X 20 Apr 2021 21:52:54.314 # +reset-master master mymaster 10.42.12.213 6379
unexpected error: stream error: stream ID 33; INTERNAL_ERROR
redis-node-0 sentinel 1:X 20 Apr 2021 21:52:54.329 * +sentinel sentinel 11f8f53ef3e904a0cfe2822709d6d6ca611daaf6 10.42.9.18 26379 @ mymaster 10.42.12.213 6379
redis-node-1 sentinel 1:X 20 Apr 2021 21:53:09.384 # +sdown master mymaster 10.42.12.213 6379
redis-node-0 sentinel 1:X 20 Apr 2021 21:53:14.328 # +sdown master mymaster 10.42.12.213 6379
redis-node-0 sentinel 1:X 20 Apr 2021 21:53:14.411 # +odown master mymaster 10.42.12.213 6379 #quorum 2/2
redis-node-0 sentinel 1:X 20 Apr 2021 21:53:14.411 # +new-epoch 7
redis-node-0 sentinel 1:X 20 Apr 2021 21:53:14.411 # +try-failover master mymaster 10.42.12.213 6379
redis-node-0 sentinel 1:X 20 Apr 2021 21:53:14.422 # +vote-for-leader b942a249aa6aaca842ead4ff6ad2fd01cdd6797b 7
redis-node-1 sentinel 1:X 20 Apr 2021 21:53:14.437 # +new-epoch 7
redis-node-1 sentinel 1:X 20 Apr 2021 21:53:14.450 # +vote-for-leader b942a249aa6aaca842ead4ff6ad2fd01cdd6797b 7
redis-node-0 sentinel 1:X 20 Apr 2021 21:53:14.448 # 11f8f53ef3e904a0cfe2822709d6d6ca611daaf6 voted for b942a249aa6aaca842ead4ff6ad2fd01cdd6797b 7
redis-node-0 sentinel 1:X 20 Apr 2021 21:53:14.488 # +elected-leader master mymaster 10.42.12.213 6379
redis-node-0 sentinel 1:X 20 Apr 2021 21:53:14.488 # +failover-state-select-slave master mymaster 10.42.12.213 6379
redis-node-0 sentinel 1:X 20 Apr 2021 21:53:14.550 # -failover-abort-no-good-slave master mymaster 10.42.12.213 6379
redis-node-0 sentinel 1:X 20 Apr 2021 21:53:14.640 # Next failover delay: I will not start a failover before Tue Apr 20 21:53:51 2021
redis-node-1 sentinel 1:X 20 Apr 2021 21:53:14.695 # +odown master mymaster 10.42.12.213 6379 #quorum 2/2
redis-node-1 sentinel 1:X 20 Apr 2021 21:53:14.695 # Next failover delay: I will not start a failover before Tue Apr 20 21:53:51 2021
- redis-node-2  redis
- redis-node-2  sentinel
+ redis-node-2  sentinel
+ redis-node-2  redis

✔️Accepted Answer

Mitigation: We've been able to mitigate this issue by reducing sentinel.downAfterMilliseconds and sentinel.failoverTimeout to beat the pod restart delay.

Looks like this works; I was able to work around the issue by setting these two config options:

-  downAfterMilliseconds: 60000
-  failoverTimeout: 18000
+  downAfterMilliseconds: 4000
+  failoverTimeout: 2000 
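In this chart version these are plain values under the sentinel block, so the workaround can be applied directly in the custom values file. A minimal sketch using the values from the diff above (tune the timings to beat your pod restart delay):

```yaml
sentinel:
  enabled: true
  usePassword: true
  ## Shorter detection and failover windows, so a failover can
  ## complete before the replacement pod comes back online
  downAfterMilliseconds: 4000
  failoverTimeout: 2000
```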

Other Answers:

Hi @wilsoniya,

sorry, this is still a work in progress; when we have more information we will update the issue.
