cat letsencrypt-prod.yaml | envsubst | kubectl apply -f - Error from server (InternalError):

uek67

Hello everyone,
I'm trying to learn k3s
following this guide.
My cluster consists of one master and 4 nodes, which I installed via curl using a script (all VMs running on Proxmox, all Debian 12 machines).


When I run this command:

cat letsencrypt-prod.yaml | envsubst | kubectl apply -f -


I get this error message:
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": no endpoints available for service "cert-manager-webhook"

The informative part for me is the last bit, no endpoints available, but why are no endpoints available? Google couldn't help me so far. Does anyone here know what's going on?

THANKS in advance

Uli
 
Your Kubernetes Service uses the following logic to pick its internal endpoints (pods):
- Apply the Service's selector labels to pods:
--> kubectl -n cert-manager get services -o wide
--> kubectl -n cert-manager get pods --selector=<service selector>
- Only pods that are ready (state Running + readiness check passing) are considered.

Endpoints is the general umbrella term for anything that is reachable, be it a Service, a Pod, ... or even an external resource proxied by the cluster. "kubectl get endpoints --all-namespaces" can be handy, but the output can get very long.
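As a sketch of that logic: an endpoint only appears once a pod carries the labels the Service selects on and passes its readiness probe. All names in this manifest are illustrative, not taken from your cluster:

```yaml
# Hypothetical Service/Deployment pair: the Service's selector must match
# the pod template's labels, and the pod must be Ready, before an
# endpoint shows up under "kubectl get endpoints".
apiVersion: v1
kind: Service
metadata:
  name: demo-webhook
spec:
  selector:
    app: demo-webhook          # must match the pod labels below
  ports:
    - port: 443
      targetPort: 10250
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-webhook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-webhook
  template:
    metadata:
      labels:
        app: demo-webhook      # matches the Service selector
    spec:
      containers:
        - name: webhook
          image: example/webhook:1.0   # placeholder image
          readinessProbe:              # pod only becomes an endpoint once this passes
            httpGet:
              path: /healthz
              port: 6080
```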

So most likely either a pod is not deployed at all due to missing resources or taints/affinities (do a get on the statefulsets and deployments), or the container starts but never becomes ready (that should be visible in the pod's restart count).
When in doubt, "kubectl get events", but that can quickly get very cluttered.
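A few commands that usually narrow this down quickly. This is a sketch that assumes your kubeconfig points at the cluster and that cert-manager was installed with its usual labels (app.kubernetes.io/name=webhook):

```shell
# Is the webhook deployment scheduled at all, and is its pod Ready?
kubectl -n cert-manager get deploy,pods -o wide

# Why is the pod not Ready? (events, probe failures, taints)
kubectl -n cert-manager describe pods -l app.kubernetes.io/name=webhook

# What does the webhook itself log?
kubectl -n cert-manager logs deploy/cert-manager-webhook
```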
 
Hello, thanks again for your help. I ran the following commands:
  • kubectl -n cert-manager get services -o wide
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
    cert-manager ClusterIP 10.43.116.119 <none> 9402/TCP 4d22h app.kubernetes.io/component=controller,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=cert-manager
    cert-manager-webhook ClusterIP 10.43.97.193 <none> 443/TCP 4d22h app.kubernetes.io/component=webhook,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=webhook
    uli@master:~$ kubectl -n cert-managet get pods --selector=dns
    No resources found in cert-managet namespace.
    uli@master:~$ kubectl -n cert-managet get pods --selector=ingres
    No resources found in cert-managet namespace.
    uli@master:~$ kubectl -n cert-managet get pods --selector=traefik
    No resources found in cert-managet namespace.

  • kubectl get endpoints --all-namespaces
    NAMESPACE NAME ENDPOINTS AGE
    cert-manager cert-manager 10.42.5.16:9402 4d22h
    cert-manager cert-manager-webhook <none> 4d22h
    default kubernetes 192.168.10.59:6443 23d
    default nginx 10.42.6.16:80 22d
    default nginx-service 10.42.6.16:80 23d
    default whoami 10.42.5.15:80 4d22h
    kube-system kube-dns 10.42.6.18:53,10.42.6.18:53,10.42.6.18:9153 23d
    kube-system metrics-server <none> 23d
    kube-system traefik <none> 23d

  • kubectl get events
    LAST SEEN TYPE REASON OBJECT MESSAGE
    53m Normal TaintManagerEviction pod/apache2-7dbc66d5c8-8m8mm Cancelling deletion of Pod default/apache2-7dbc66d5c8-8m8mm
    16m Normal TaintManagerEviction pod/apache2-7dbc66d5c8-8m8mm Marking for deletion Pod default/apache2-7dbc66d5c8-8m8mm
    3d17h Normal SandboxChanged pod/apache2-7dbc66d5c8-9tgxj Pod sandbox changed, it will be killed and re-created.
    43m Normal SandboxChanged pod/apache2-7dbc66d5c8-9tgxj Pod sandbox changed, it will be killed and re-created.
    37m Normal SandboxChanged pod/apache2-7dbc66d5c8-9tgxj Pod sandbox changed, it will be killed and re-created.
    64s Normal SandboxChanged pod/apache2-7dbc66d5c8-9tgxj Pod sandbox changed, it will be killed and re-created.
    3d17h Normal BackOff pod/apache2-7dbc66d5c8-g4wz4 Back-off pulling image "apache2"
    35m Normal Pulling pod/apache2-7dbc66d5c8-g4wz4 Pulling image "apache2"
    35m Warning Failed pod/apache2-7dbc66d5c8-g4wz4 Failed to pull image "apache2": failed to pull and unpack image "docker.io/library/apache2:latest": failed to resolve reference "docker.io/library/apache2:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
    35m Warning Failed pod/apache2-7dbc66d5c8-g4wz4 Error: ErrImagePull
    106s Normal BackOff pod/apache2-7dbc66d5c8-g4wz4 Back-off pulling image "apache2"
    34m Warning Failed pod/apache2-7dbc66d5c8-g4wz4 Error: ImagePullBackOff
    53m Normal TaintManagerEviction pod/apache2-7dbc66d5c8-g6gzt Cancelling deletion of Pod default/apache2-7dbc66d5c8-g6gzt
    16m Normal TaintManagerEviction pod/apache2-7dbc66d5c8-g6gzt Marking for deletion Pod default/apache2-7dbc66d5c8-g6gzt
    3d17h Normal BackOff pod/apache2-7dbc66d5c8-vclzm Back-off pulling image "apache2"
    40m Normal SandboxChanged pod/apache2-7dbc66d5c8-vclzm Pod sandbox changed, it will be killed and re-created.
    38m Normal Pulling pod/apache2-7dbc66d5c8-vclzm Pulling image "apache2"
    38m Warning Failed pod/apache2-7dbc66d5c8-vclzm Failed to pull image "apache2": failed to pull and unpack image "docker.io/library/apache2:latest": failed to resolve reference "docker.io/library/apache2:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
    38m Warning Failed pod/apache2-7dbc66d5c8-vclzm Error: ErrImagePull
    37s Normal BackOff pod/apache2-7dbc66d5c8-vclzm Back-off pulling image "apache2"
    30m Warning Failed pod/apache2-7dbc66d5c8-vclzm Error: ImagePullBackOff
    53m Normal TaintManagerEviction pod/apache2-7dbc66d5c8-vdcq5 Cancelling deletion of Pod default/apache2-7dbc66d5c8-vdcq5
    16m Normal TaintManagerEviction pod/apache2-7dbc66d5c8-vdcq5 Marking for deletion Pod default/apache2-7dbc66d5c8-vdcq5
    53m Normal RegisteredNode node/k3s-master Node k3s-master event: Registered Node k3s-master in Controller
    21m Normal RegisteredNode node/k3s-master Node k3s-master event: Registered Node k3s-master in Controller
    53m Normal RegisteredNode node/k3s-node-1 Node k3s-node-1 event: Registered Node k3s-node-1 in Controller
    21m Normal RegisteredNode node/k3s-node-1 Node k3s-node-1 event: Registered Node k3s-node-1 in Controller
    53m Normal RegisteredNode node/k3s-node-2 Node k3s-node-2 event: Registered Node k3s-node-2 in Controller
    21m Normal RegisteredNode node/k3s-node-2 Node k3s-node-2 event: Registered Node k3s-node-2 in Controller
    53m Normal RegisteredNode node/k3s-node-3 Node k3s-node-3 event: Registered Node k3s-node-3 in Controller
    21m Normal RegisteredNode node/k3s-node-3 Node k3s-node-3 event: Registered Node k3s-node-3 in Controller
    53m Normal Starting node/master
    53m Normal Starting node/master Starting kubelet.
    53m Warning InvalidDiskCapacity node/master invalid capacity 0 on image filesystem
    53m Normal NodeHasSufficientMemory node/master Node master status is now: NodeHasSufficientMemory
    53m Normal NodeHasNoDiskPressure node/master Node master status is now: NodeHasNoDiskPressure
    53m Normal NodeHasSufficientPID node/master Node master status is now: NodeHasSufficientPID
    53m Warning Rebooted node/master Node master has been rebooted, boot id: 8b9be67b-0df9-490e-9302-b921d41b27f2
    53m Normal NodeNotReady node/master Node master status is now: NodeNotReady
    53m Normal NodeAllocatableEnforced node/master Updated Node Allocatable limit across pods
    53m Normal NodePasswordValidationComplete node/master Deferred node password secret validation complete
    53m Normal NodeReady node/master Node master status is now: NodeReady
    53m Normal RegisteredNode node/master Node master event: Registered Node master in Controller
    37m Normal Starting node/master Starting kubelet.
    37m Normal Starting node/master
    37m Warning InvalidDiskCapacity node/master invalid capacity 0 on image filesystem
    37m Normal NodeHasSufficientMemory node/master Node master status is now: NodeHasSufficientMemory
    37m Normal NodeHasNoDiskPressure node/master Node master status is now: NodeHasNoDiskPressure
    37m Normal NodeHasSufficientPID node/master Node master status is now: NodeHasSufficientPID
    37m Warning Rebooted node/master Node master has been rebooted, boot id: 076c6d46-c3f9-42c7-bcea-88ec0379e7c3
    37m Normal NodeNotReady node/master Node master status is now: NodeNotReady
    37m Normal NodePasswordValidationComplete node/master Deferred node password secret validation complete
    37m Normal NodeAllocatableEnforced node/master Updated Node Allocatable limit across pods
    31m Normal Starting node/master
    31m Normal NodePasswordValidationComplete node/master Deferred node password secret validation complete
    31m Normal Starting node/master Starting kubelet.
    31m Warning InvalidDiskCapacity node/master invalid capacity 0 on image filesystem
    31m Normal NodeHasSufficientMemory node/master Node master status is now: NodeHasSufficientMemory
    31m Normal NodeHasNoDiskPressure node/master Node master status is now: NodeHasNoDiskPressure
    31m Normal NodeHasSufficientPID node/master Node master status is now: NodeHasSufficientPID
    31m Normal NodeAllocatableEnforced node/master Updated Node Allocatable limit across pods
    31m Warning Rebooted node/master Node master has been rebooted, boot id: 1ada8b9d-93a0-4888-8ad0-96f9865f0a2a
    31m Normal NodeReady node/master Node master status is now: NodeReady
    21m Normal NodePasswordValidationComplete node/master Deferred node password secret validation complete
    21m Normal RegisteredNode node/master Node master event: Registered Node master in Controller
    21m Normal Starting node/master
    21m Normal Starting node/master Starting kubelet.
    21m Warning InvalidDiskCapacity node/master invalid capacity 0 on image filesystem
    21m Normal NodeHasSufficientMemory node/master Node master status is now: NodeHasSufficientMemory
    21m Normal NodeHasNoDiskPressure node/master Node master status is now: NodeHasNoDiskPressure
    21m Normal NodeHasSufficientPID node/master Node master status is now: NodeHasSufficientPID
    21m Warning Rebooted node/master Node master has been rebooted, boot id: 13ea490e-0846-44fa-8ff7-f75705b65d9e
    21m Normal NodeNotReady node/master Node master status is now: NodeNotReady
    21m Normal NodeAllocatableEnforced node/master Updated Node Allocatable limit across pods
    21m Normal NodeReady node/master Node master status is now: NodeReady
    52m Warning NodeNotReady pod/nginx-deployment-7c79c4bf97-2ft89 Node is not ready
    36m Normal Pulled pod/nginx-deployment-7c79c4bf97-2ft89 Successfully pulled image "nginx:latest" in 25.61s (25.61s including waiting)
    36m Normal Created pod/nginx-deployment-7c79c4bf97-2ft89 Created container nginx
    36m Normal Started pod/nginx-deployment-7c79c4bf97-2ft89 Started container nginx
    3d17h Normal SandboxChanged pod/nginx-deployment-7c79c4bf97-75l8b Pod sandbox changed, it will be killed and re-created.
    43m Normal SandboxChanged pod/nginx-deployment-7c79c4bf97-75l8b Pod sandbox changed, it will be killed and re-created.
    37m Normal SandboxChanged pod/nginx-deployment-7c79c4bf97-75l8b Pod sandbox changed, it will be killed and re-created.
    70s Normal SandboxChanged pod/nginx-deployment-7c79c4bf97-75l8b Pod sandbox changed, it will be killed and re-created.
    53m Normal TaintManagerEviction pod/nginx-deployment-7c79c4bf97-kvgh6 Cancelling deletion of Pod default/nginx-deployment-7c79c4bf97-kvgh6
    16m Normal TaintManagerEviction pod/nginx-deployment-7c79c4bf97-kvgh6 Marking for deletion Pod default/nginx-deployment-7c79c4bf97-kvgh6
    53m Normal TaintManagerEviction pod/nginx-deployment-7c79c4bf97-vhswd Cancelling deletion of Pod default/nginx-deployment-7c79c4bf97-vhswd
    16m Normal TaintManagerEviction pod/nginx-deployment-7c79c4bf97-vhswd Marking for deletion Pod default/nginx-deployment-7c79c4bf97-vhswd
    53m Normal RegisteredNode node/node2 Node node2 event: Registered Node node2 in Controller
    52m Normal NodeNotReady node/node2 Node node2 status is now: NodeNotReady
    31m Normal NodeHasSufficientMemory node/node2 Node node2 status is now: NodeHasSufficientMemory
    31m Normal NodeHasNoDiskPressure node/node2 Node node2 status is now: NodeHasNoDiskPressure
    31m Normal NodeHasSufficientPID node/node2 Node node2 status is now: NodeHasSufficientPID
    31m Warning Rebooted node/node2 Node node2 has been rebooted, boot id: 68c09a1d-ae07-4253-82a3-b00d1184a3c4
    31m Normal NodeReady node/node2 Node node2 status is now: NodeReady
    21m Normal RegisteredNode node/node2 Node node2 event: Registered Node node2 in Controller
    53m Normal RegisteredNode node/node3 Node node3 event: Registered Node node3 in Controller
    52m Normal NodeNotReady node/node3 Node node3 status is now: NodeNotReady
    40m Normal Starting node/node3 Starting kubelet.
    40m Normal NodeHasSufficientMemory node/node3 Node node3 status is now: NodeHasSufficientMemory
    40m Normal NodeHasNoDiskPressure node/node3 Node node3 status is now: NodeHasNoDiskPressure
    40m Normal NodeHasSufficientPID node/node3 Node node3 status is now: NodeHasSufficientPID
    40m Normal Starting node/node3
    40m Warning Rebooted node/node3 Node node3 has been rebooted, boot id: 28f6690a-1690-4be6-b569-8011e22d5d7a
    40m Normal NodeNotReady node/node3 Node node3 status is now: NodeNotReady
    40m Normal NodeAllocatableEnforced node/node3 Updated Node Allocatable limit across pods
    40m Normal NodeReady node/node3 Node node3 status is now: NodeReady
    21m Normal RegisteredNode node/node3 Node node3 event: Registered Node node3 in Controller
    3d17h Normal BackOff pod/snort-87898f686-2dg9x Back-off pulling image "snort"
    40m Normal SandboxChanged pod/snort-87898f686-2dg9x Pod sandbox changed, it will be killed and re-created.
    38m Normal Pulling pod/snort-87898f686-2dg9x Pulling image "snort"
    38m Warning Failed pod/snort-87898f686-2dg9x Failed to pull image "snort": failed to pull and unpack image "docker.io/library/snort:latest": failed to resolve reference "docker.io/library/snort:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
    38m Warning Failed pod/snort-87898f686-2dg9x Error: ErrImagePull
    27s Normal BackOff pod/snort-87898f686-2dg9x Back-off pulling image "snort"
    38m Warning Failed pod/snort-87898f686-2dg9x Error: ImagePullBackOff
    53m Normal TaintManagerEviction pod/snort-87898f686-zn8tr Cancelling deletion of Pod default/snort-87898f686-zn8tr
    16m Normal TaintManagerEviction pod/snort-87898f686-zn8tr Marking for deletion Pod default/snort-87898f686-zn8tr
    3d17h Normal SandboxChanged pod/suricata-65457d7765-2ftlt Pod sandbox changed, it will be killed and re-created.
    43m Normal SandboxChanged pod/suricata-65457d7765-2ftlt Pod sandbox changed, it will be killed and re-created.
    37m Normal SandboxChanged pod/suricata-65457d7765-2ftlt Pod sandbox changed, it will be killed and re-created.
    31m Normal SandboxChanged pod/suricata-65457d7765-2ftlt Pod sandbox changed, it will be killed and re-created.
    60s Normal SandboxChanged pod/suricata-65457d7765-2ftlt Pod sandbox changed, it will be killed and re-created.
    53m Normal TaintManagerEviction pod/suricata-65457d7765-dv5md Cancelling deletion of Pod default/suricata-65457d7765-dv5md
    16m Normal TaintManagerEviction pod/suricata-65457d7765-dv5md Marking for deletion Pod default/suricata-65457d7765-dv5md
    52m Warning NodeNotReady pod/whoami-6bc856bfcd-hgntl Node is not ready
    40m Normal SandboxChanged pod/whoami-6bc856bfcd-hgntl Pod sandbox changed, it will be killed and re-created.
    40m Normal Pulled pod/whoami-6bc856bfcd-hgntl Container image "traefik/whoami:v1.9.0" already present on machine
    39m Normal Created pod/whoami-6bc856bfcd-hgntl Created container whoami
    39m Normal Started pod/whoami-6bc856bfcd-hgntl Started container whoami

  • and
  • kubectl -n cert-manager get deploy cert-manager-webhook -oyaml | grep -A5 readiness
    {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"webhook","app.kubernetes.io/component":"webhook","app.kubernetes.io/instance":"cert-manager","app.kubernetes.io/name":"webhook","app.kubernetes.io/version":"v1.11.0"},"name":"cert-manager-webhook","namespace":"cert-manager"},"spec":{"replicas":1,"selector":{"matchLabels":{"app.kubernetes.io/component":"webhook","app.kubernetes.io/instance":"cert-manager","app.kubernetes.io/name":"webhook"}},"template":{"metadata":{"labels":{"app":"webhook","app.kubernetes.io/component":"webhook","app.kubernetes.io/instance":"cert-manager","app.kubernetes.io/name":"webhook","app.kubernetes.io/version":"v1.11.0"}},"spec":{"containers":[{"args":["--v=2","--secure-port=10250","--dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE)","--dynamic-serving-ca-secret-name=cert-manager-webhook-ca","--dynamic-serving-dns-names=cert-manager-webhook","--dynamic-serving-dns-names=cert-manager-webhook.$(POD_NAMESPACE)","--dynamic-serving-dns-names=cert-manager-webhook.$(POD_NAMESPACE).svc"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"image":"quay.io/jetstack/cert-manager-webhook:v1.11.0","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/livez","port":6080,"scheme":"HTTP"},"initialDelaySeconds":60,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"cert-manager-webhook","ports":[{"containerPort":10250,"name":"https","protocol":"TCP"},{"containerPort":6080,"name":"healthcheck","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6080,"scheme":"HTTP"},"initialDelaySeconds":5,"periodSeconds":5,"successThreshold":1,"timeoutSeconds":1},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}}}],"nodeSelector":{"kubernetes.io/os":"linux"},"securityContext":{"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}},"serviceAccountName":"cert-manager-webhook"}}}}
    creationTimestamp: "2024-07-31T13:02:58Z"
    generation: 1
    labels:
      app: webhook
      app.kubernetes.io/component: webhook
    --
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 6080
        scheme: HTTP
I still don't quite understand it.

Ah, and here's one for you:
tasse_kaffee.jpg
 
By the way, code tags are a pretty brilliant invention. This forum also provides that feature, to make a post more readable.
You can find it under the three vertical dots on the far right, next to the square that looks like a window.

Example:
Kubernetes get events:

Code:
LAST SEEN TYPE REASON OBJECT MESSAGE
53m Normal TaintManagerEviction pod/apache2-7dbc66d5c8-8m8mm Cancelling deletion of Pod default/apache2-7dbc66d5c8-8m8mm
16m Normal TaintManagerEviction pod/apache2-7dbc66d5c8-8m8mm Marking for deletion Pod default/apache2-7dbc66d5c8-8m8mm
 
Dear sbr2d2,
many thanks for the hint; that's why such an important function is hidden in a submenu, so that ideally nobody finds it who, like me, usually can't follow the twists and turns of developer brains (sorry, no personal attack intended) and is left wondering. I'm probably too old to grasp this stuff.

Best regards

Uli
 
Code:
"docker.io/library/apache2:latest": failed to resolve reference "docker.io/library/apache2:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
35m Warning Failed pod/apache2-7dbc66d5c8-g4wz4 Error: ErrImagePull

"repository does not exist or may require authorization"
In the case of docker.io there is no authorization for normal public images, and pull quotas produce a different error message. That leaves the possibility that the image does not exist, and indeed: according to the search there is no library/apache2:latest image; it is called library/httpd:2 instead.
However, I don't see where the apache2 deployment comes from; it is not from the linked k3srocks site, and at a quick glance it is not in the other YAML files either. What is unusual is that it lives in the default namespace. Have you already started things of your own?
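If an Apache web server was the intent, a minimal fix would be to point the deployment at the official httpd image. This is a sketch: the deployment name is taken from your events, everything else (labels, replica count) is assumed:

```yaml
# Hypothetical replacement for the failing apache2 deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache2
  template:
    metadata:
      labels:
        app: apache2
    spec:
      containers:
        - name: apache2
          image: httpd:2.4   # docker.io/library/httpd exists; "apache2" does not
          ports:
            - containerPort: 80
```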

uli@master:~$ kubectl -n cert-managet get pods --selector=dns
No resources found in cert-managet namespace.
uli@master:~$ kubectl -n cert-managet get pods --selector=ingres
No resources found in cert-managet namespace.
uli@master:~$ kubectl -n cert-managet get pods --selector=traefik
No resources found in cert-managet namespace.
The selector is the entire selector output from the Service, so here "app.kubernetes.io/component=controller,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=cert-manager" (or a subset; the commas are to be read as an AND between multiple selectors). System-side labels are generally in the format <fqdn>/<feature>=option, although that is not mandatory (e.g. the label app=blabla is valid).
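Applied to the webhook service from your own output, the full-selector query might look like this (note the namespace is "cert-manager", not "cert-managet"; this assumes a live cluster):

```shell
# Query pods using the complete selector string the Service reports,
# quoted so the shell does not mangle the commas
kubectl -n cert-manager get pods \
  --selector='app.kubernetes.io/component=webhook,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=webhook'
```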


I can't quite make sense of your problem from this yet. Did you stick strictly to the k3srocks documentation? I'll spin up a mini cluster to reproduce your issue if so.
 
Hallo d4f,

THANKS for your answer. So, I installed the cluster following this guide: https://docs.k3s.io/quick-start
I also consulted these pages:

The installation script for the master that I used:


Then I tried to follow this video tutorial:

Finally, I was advised here in the forum to stick to the k3srocks guide, which I am doing. I haven't started anything of my own so far; the .yaml files were all taken over as-is.

I hope I could answer your questions satisfactorily. At the moment I've completely lost track.

I'm not a CODER / developer, just a poor sysadmin who has to (and wants to) learn this, but has no one to ask besides you, because no one else I know is familiar with it.

In this spirit, THANKS for any support

Uli
 