I am having issues in packetmanager with CrashLoopBackOff status. I tried deleting the pod/module and reinstalling, but that does not fix it.
I also tried different Helm chart versions, 1.2.0.1-beta and 1.2.0.1-B2, but it makes no difference. Looking at it closely, both Helm charts use the same Docker image. Is that intentional?
image:
registry: docker.io
repository: mosipid/commons-packet-service
tag: 1.2.0.1-B1
Has anyone encountered this, and/or does anyone have a fix? Thanks!
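For reference, this is roughly how I compared the image tags of the two chart versions. This is only a sketch: in practice I piped `helm show values <repo>/packetmanager --version <ver>` into the filter below; here an inline sample of the values block stands in for the real chart output, and the file path is arbitrary.

```shell
# Sketch: extract image.tag from a chart's default values.
# Replace the heredoc with: helm show values <repo>/packetmanager --version <ver>
cat > /tmp/values-sample.yaml <<'EOF'
image:
  registry: docker.io
  repository: mosipid/commons-packet-service
  tag: 1.2.0.1-B1
EOF
# Print the value after "tag:" (first field must be exactly "tag:")
tag=$(awk '$1 == "tag:" {print $2}' /tmp/values-sample.yaml)
echo "image tag: $tag"
```

Running this against both chart versions showed the same tag, which is why I asked whether that is intentional.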
kubectl get pod -n packetmanager
NAME READY STATUS RESTARTS AGE
packetmanager-64f5d9f9bb-rhzgm 1/2 Running 7 (5m34s ago) 16m
kubectl get event -n packetmanager
LAST SEEN TYPE REASON OBJECT MESSAGE
24m Normal Scheduled pod/packetmanager-64f5d9f9bb-rhzgm Successfully assigned packetmanager/packetmanager-64f5d9f9bb-rhzgm to wnode5
24m Normal Pulled pod/packetmanager-64f5d9f9bb-rhzgm Container image "docker.io/istio/proxyv2:1.15.0" already present on machine
24m Normal Created pod/packetmanager-64f5d9f9bb-rhzgm Created container istio-init
24m Normal Started pod/packetmanager-64f5d9f9bb-rhzgm Started container istio-init
24m Normal Pulled pod/packetmanager-64f5d9f9bb-rhzgm Container image "docker.io/istio/proxyv2:1.15.0" already present on machine
24m Normal Created pod/packetmanager-64f5d9f9bb-rhzgm Created container istio-proxy
24m Normal Started pod/packetmanager-64f5d9f9bb-rhzgm Started container istio-proxy
23m Normal Pulling pod/packetmanager-64f5d9f9bb-rhzgm Pulling image "docker.io/mosipid/commons-packet-service:1.2.0.1-B1"
24m Normal Pulled pod/packetmanager-64f5d9f9bb-rhzgm Successfully pulled image "docker.io/mosipid/commons-packet-service:1.2.0.1-B1" in 2.311621157s
23m Normal Created pod/packetmanager-64f5d9f9bb-rhzgm Created container packetmanager
23m Normal Started pod/packetmanager-64f5d9f9bb-rhzgm Started container packetmanager
22m Warning Unhealthy pod/packetmanager-64f5d9f9bb-rhzgm Startup probe failed: HTTP probe failed with statuscode: 500
23m Normal Pulled pod/packetmanager-64f5d9f9bb-rhzgm Successfully pulled image "docker.io/mosipid/commons-packet-service:1.2.0.1-B1" in 2.428566128s
4m8s Warning BackOff pod/packetmanager-64f5d9f9bb-rhzgm Back-off restarting failed container
14m Normal Pulled pod/packetmanager-64f5d9f9bb-rhzgm Successfully pulled image "docker.io/mosipid/commons-packet-service:1.2.0.1-B1" in 2.372473924s
24m Normal SuccessfulCreate replicaset/packetmanager-64f5d9f9bb Created pod: packetmanager-64f5d9f9bb-rhzgm
24m Normal ScalingReplicaSet deployment/packetmanager Scaled up replica set packetmanager-64f5d9f9bb to 1
Extract of kubectl describe pod:
Events:
Type Reason Age From Message
Normal Scheduled 22m default-scheduler Successfully assigned packetmanager/packetmanager-64f5d9f9bb-rhzgm to wnode5
Normal Created 22m kubelet Created container istio-init
Normal Started 22m kubelet Started container istio-init
Normal Pulled 22m kubelet Container image "docker.io/istio/proxyv2:1.15.0" already present on machine
Normal Created 22m kubelet Created container istio-proxy
Normal Pulled 22m kubelet Container image "docker.io/istio/proxyv2:1.15.0" already present on machine
Normal Started 22m kubelet Started container istio-proxy
Normal Pulled 22m kubelet Successfully pulled image "docker.io/mosipid/commons-packet-service:1.2.0.1-B1" in 2.311621157s
Normal Pulling 21m (x2 over 22m) kubelet Pulling image "docker.io/mosipid/commons-packet-service:1.2.0.1-B1"
Normal Created 21m (x2 over 22m) kubelet Created container packetmanager
Normal Started 21m (x2 over 22m) kubelet Started container packetmanager
Normal Pulled 21m kubelet Successfully pulled image "docker.io/mosipid/commons-packet-service:1.2.0.1-B1" in 2.428566128s
Warning Unhealthy 21m (x8 over 22m) kubelet Startup probe failed: HTTP probe failed with statuscode: 500
Normal Pulled 12m kubelet Successfully pulled image "docker.io/mosipid/commons-packet-service:1.2.0.1-B1" in 2.372473924s
Warning BackOff 2m25s (x74 over 21m) kubelet Back-off restarting failed container
I extracted the error lines from the kubectl logs output to shorten the post:
15:19:31,459 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@2:86 - no applicable action for [springProperty], current ElementPath is [[configuration][springProperty]]
{"@timestamp":"2023-03-10T15:20:05.371Z","@version":"1","message":"Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'packetValidator': Unsatisfied dependency expressed through field 'packetKeeper'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'packetKeeper': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 's3.pretext.value' in value "${s3.pretext.value}packet-manager
{"@timestamp":"2023-03-10T15:20:05.534Z","@version":"1","message":"\n\nError starting ApplicationContext. To display the conditions report re-run your application with 'debug'
{"@timestamp":"2023-03-10T15:20:05.537Z","@version":"1","message":"Application run failed","logger_name":"org.springframework.boot.SpringApplication","thread_name":"main","level":"ERROR","level_value":40000,"stack_trace":"org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'packetValidator': Unsatisfied dependency expressed through field 'packetKeeper'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'packetKeeper': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 's3.pretext.value' in value "${s3.pretext.value}packet-manager"\n\tat
(PropertiesLauncher.java:593)\nCaused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'packetKeeper': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 's3.pretext.value' in value "${s3.pretext.value}packet-manager"\n\tat
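From the stack trace, the startup probe is failing because Spring cannot resolve the placeholder `s3.pretext.value`, so the application context never starts. My guess (not verified against every MOSIP release, and the properties file name is an assumption) is that this property has to be defined in the configuration the packet-manager service reads at startup, e.g. in the Spring Cloud Config repository:

```properties
# Hypothetical fix sketch: define the missing placeholder in the
# config-server properties that packet-manager loads (file name assumed,
# e.g. packet-manager-default.properties in the mosip-config repo).
# The log shows it is used as a bucket-name prefix
# ("${s3.pretext.value}packet-manager"), so an empty string may be
# acceptable if no prefix is wanted.
s3.pretext.value=
```

After changing the config, the pod would need a restart so the service re-fetches properties from the config server.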