Hi OVH Community;
I have an issue with the OIDC setup in a managed Kubernetes cluster when I use Google as the OpenID provider.
The setup was built as described in this documentation.
The OVH Terraform provider and the HashiCorp Kubernetes provider were used to create the ClusterRoleBinding.
kubelogin is used as the kubectl credential plugin.
```hcl
resource "ovh_cloud_project_kube_oidc" "cluster-oidc" {
  service_name = var.ovh_public_cloud_project_id
  kube_id      = ovh_cloud_project_kube.cluster.id

  # required fields
  client_id  = var.oidc_client_id
  issuer_url = var.oidc_issuer_url

  oidc_username_claim  = "email"
  oidc_username_prefix = "oidc:"

  depends_on = [ovh_cloud_project_kube.cluster]
}
```
```hcl
resource "kubernetes_cluster_role_binding" "oidc-cluster-admin" {
  metadata {
    name = "oidc-cluster-admin"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }
  subject {
    kind      = "User"
    name      = "oidc:my.user@fqdn.something"
    api_group = "rbac.authorization.k8s.io"
  }
  depends_on = [ovh_cloud_project_kube_oidc.cluster-oidc]
}
```
```yaml
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://accounts.google.com
      - --oidc-client-id=somestring.apps.googleusercontent.com
      - --oidc-client-secret=superseret
      - --oidc-extra-scope=email
      - --oidc-extra-scope=profile
      - --oidc-extra-scope=openid
      - -v10
      command: kubectl
      env: null
      provideClusterInfo: false
```
```
$ kubectl get pods --user=oidc
.......
I0425 17:35:54.204892   62059 get_token.go:107] you already have a valid token until 2023-04-25 18:35:53 +0200 CEST
I0425 17:35:54.204895   62059 get_token.go:114] writing the token to client-go
error: You must be logged in to the server (Unauthorized)
```
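Since kubelogin considers the token valid but the apiserver rejects it, I suspect a mismatch between the token's claims and the cluster's OIDC config (`iss` vs. `issuer_url`, `aud` vs. `client_id`, or a missing `email` claim). To check, I decode the ID token's payload locally. A small Python sketch of that (unverified decode, for inspection only; the token built here is a placeholder with the claims I would expect from Google — in practice I paste the `id_token` that kubelogin cached):

```python
import base64
import json


def decode_jwt_payload(token: str) -> dict:
    """Decode a JWT's payload WITHOUT verifying the signature.

    Only for inspecting claims while debugging; never trust the result.
    """
    payload_b64 = token.split(".")[1]
    # base64url strips padding; restore it before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))


# Placeholder token carrying the claims the apiserver checks.
header = base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).rstrip(b"=").decode()
claims = {
    "iss": "https://accounts.google.com",                   # must equal issuer_url exactly
    "aud": "somestring.apps.googleusercontent.com",         # must equal client_id
    "email": "my.user@fqdn.something",                      # the configured username claim
}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"{header}.{body}."

decoded = decode_jwt_payload(token)
print(decoded["iss"], decoded["aud"], decoded["email"])
```

If any of those three values differs from what the cluster was configured with, the apiserver answers exactly this kind of generic `Unauthorized`.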
I also tried to get some insights via the apiserver audit logs in the OVH web interface, but that is nearly impossible without filters. It would be nice if the audit logs could be downloaded, so I could grep through them.
Or maybe there is a solution that I don't know about yet.
Perhaps someone has experience with OIDC and Google identities and can give me a hint.
A similar setup in a k3s cluster worked for me, but there I had direct access to the apiserver for debugging.
Sure, I could put Keycloak, Dex, or Authentik in between, but that is not what I initially wanted.
Best Regards
Containers and Orchestration - Managed Kubernetes OIDC with Google OpenID
In the name of Allah,
I had a similar problem with Keycloak. The mistake was that I had added a trailing "/" to the realm's URL (in OVH):
https://example.com/auth/realms/myRealm/ -> does not work
https://example.com/auth/realms/myRealm -> works
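As far as I can tell, this is because the apiserver compares the configured issuer URL byte-for-byte against the token's `iss` claim, with no normalization. A sketch of that check (my own illustration, using the example URLs above):

```python
def issuer_matches(configured_issuer: str, iss_claim: str) -> bool:
    # Exact string comparison: a trailing slash on either side breaks it.
    return configured_issuer == iss_claim

print(issuer_matches("https://example.com/auth/realms/myRealm/",
                     "https://example.com/auth/realms/myRealm"))   # False
print(issuer_matches("https://example.com/auth/realms/myRealm",
                     "https://example.com/auth/realms/myRealm"))   # True
```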
Hope it helps :)