[Bug] Not enough permissions to watch for resources: changes (creation/deletion/updates) will not be noticed; the resources are only refreshed on operator restarts.
Search before asking
- I searched the issues and found no similar issues.
Ray Component
Ray Clusters
What happened + What you expected to happen
The Ray operator starts with this error:
Not enough permissions to watch for resources: changes (creation/deletion/updates) will not be noticed; the resources are only refreshed on operator restarts.
As far as I can see, all permissions are fine:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ray-operator-serviceaccount
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ray-operator-clusterrole
rules:
- apiGroups: ["", "cluster.ray.io"]
  resources: ["rayclusters", "rayclusters/finalizers", "rayclusters/status", "pods", "pods/exec", "services"]
  verbs: ["get", "watch", "list", "create", "delete", "patch", "update"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ray-operator-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: ray-operator-serviceaccount
  namespace: default
roleRef:
  kind: ClusterRole
  name: ray-operator-clusterrole
  apiGroup: rbac.authorization.k8s.io
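One way to double-check what the ServiceAccount can actually do is to impersonate it with `kubectl auth can-i`. This is a diagnostic sketch, assuming the ServiceAccount and namespace from the manifest above; the thread later points to namespaces and CRDs as the resources the operator framework also needs:

```shell
# Verify the operator's ServiceAccount can watch its own custom resources...
kubectl auth can-i watch rayclusters.cluster.ray.io \
  --as=system:serviceaccount:default:ray-operator-serviceaccount

# ...and the cluster-scoped resources kopf-based operators also try to watch.
# If either of these prints "no", the warning above is expected.
kubectl auth can-i watch namespaces \
  --as=system:serviceaccount:default:ray-operator-serviceaccount
kubectl auth can-i list customresourcedefinitions.apiextensions.k8s.io \
  --as=system:serviceaccount:default:ray-operator-serviceaccount
```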
Reproduction script
Deploy the operator with Helm 3 and try to spin up a Ray cluster. Sometimes the workers only start after restarting the Ray operator.
Anything else
Ray workers only start after restarting the Ray operator; it is not able to track changes otherwise. I'm running the Ray workers in the same namespace, so this should work without issue. Not sure what is happening.
Are you willing to submit a PR?
- Yes I am willing to submit a PR!
Issue Analytics
- Created: 2 years ago
- Reactions: 3
- Comments: 14 (9 by maintainers)
It looks like list/watch permissions for namespaces are additionally needed.
https://github.com/nolar/kopf/blob/129fe4ced27e097ff92cde6d1b2405e726e4c820/kopf/_cogs/configs/configuration.py#L240
https://github.com/nolar/kopf/issues/901
I edited the ray-operator-clusterrole and added list/watch permissions for namespaces, and now it correctly creates/deletes the Ray clusters without needing a restart of the ray-operator pod, although I still see the warning during pod startup.
Edit: it also needs watch/list permissions for customresourcedefinitions in the apiextensions.k8s.io API group.
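Putting the two findings above together, a sketch of the extra rules to append to the `ray-operator-clusterrole` manifest (resource/verb names as reported in this thread; adjust to your own policy):

```yaml
# Additional rules for ray-operator-clusterrole:
# kopf-based operators also list/watch namespaces and CRDs at startup.
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["list", "watch"]
```

After patching the ClusterRole, restart the operator pod once so it re-checks its permissions.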
The older operator is deprecated and will be removed in Ray 2.2.0. Closing this issue.