Packetmanager - CrashLoopBackOff in MOSIP v1.2.0.1-B2

I am having issues with packetmanager going into CrashLoopBackOff status. I tried deleting the pod / module and reinstalling, but that does not fix it.

I also tried different Helm chart versions, 1.2.0.1-beta and 1.2.0.1-B2, but it makes no difference. Looking at it closely, both Helm charts use the same Docker image. Is that intentional?

image:
  registry: docker.io
  repository: mosipid/commons-packet-service
  tag: 1.2.0.1-B1
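
For reference, the image the deployment is actually running can be checked with something like this (the deployment name packetmanager is my assumption based on the pod name below):

kubectl -n packetmanager get deploy packetmanager -o jsonpath='{.spec.template.spec.containers[*].image}'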

Has anyone encountered this, and/or does anyone have a fix? Thanks!

kubectl get pod -n packetmanager
NAME READY STATUS RESTARTS AGE
packetmanager-64f5d9f9bb-rhzgm 1/2 Running 7 (5m34s ago) 16m

kubectl get event -n packetmanager
LAST SEEN TYPE REASON OBJECT MESSAGE
24m Normal Scheduled pod/packetmanager-64f5d9f9bb-rhzgm Successfully assigned packetmanager/packetmanager-64f5d9f9bb-rhzgm to wnode5
24m Normal Pulled pod/packetmanager-64f5d9f9bb-rhzgm Container image "docker.io/istio/proxyv2:1.15.0" already present on machine
24m Normal Created pod/packetmanager-64f5d9f9bb-rhzgm Created container istio-init
24m Normal Started pod/packetmanager-64f5d9f9bb-rhzgm Started container istio-init
24m Normal Pulled pod/packetmanager-64f5d9f9bb-rhzgm Container image "docker.io/istio/proxyv2:1.15.0" already present on machine
24m Normal Created pod/packetmanager-64f5d9f9bb-rhzgm Created container istio-proxy
24m Normal Started pod/packetmanager-64f5d9f9bb-rhzgm Started container istio-proxy
23m Normal Pulling pod/packetmanager-64f5d9f9bb-rhzgm Pulling image "docker.io/mosipid/commons-packet-service:1.2.0.1-B1"
24m Normal Pulled pod/packetmanager-64f5d9f9bb-rhzgm Successfully pulled image "docker.io/mosipid/commons-packet-service:1.2.0.1-B1" in 2.311621157s
23m Normal Created pod/packetmanager-64f5d9f9bb-rhzgm Created container packetmanager
23m Normal Started pod/packetmanager-64f5d9f9bb-rhzgm Started container packetmanager
22m Warning Unhealthy pod/packetmanager-64f5d9f9bb-rhzgm Startup probe failed: HTTP probe failed with statuscode: 500
23m Normal Pulled pod/packetmanager-64f5d9f9bb-rhzgm Successfully pulled image "docker.io/mosipid/commons-packet-service:1.2.0.1-B1" in 2.428566128s
4m8s Warning BackOff pod/packetmanager-64f5d9f9bb-rhzgm Back-off restarting failed container
14m Normal Pulled pod/packetmanager-64f5d9f9bb-rhzgm Successfully pulled image "docker.io/mosipid/commons-packet-service:1.2.0.1-B1" in 2.372473924s
24m Normal SuccessfulCreate replicaset/packetmanager-64f5d9f9bb Created pod: packetmanager-64f5d9f9bb-rhzgm
24m Normal ScalingReplicaSet deployment/packetmanager Scaled up replica set packetmanager-64f5d9f9bb to 1

Extract of kubectl describe pod:

Events:
Type Reason Age From Message


Normal Scheduled 22m default-scheduler Successfully assigned packetmanager/packetmanager-64f5d9f9bb-rhzgm to wnode5
Normal Created 22m kubelet Created container istio-init
Normal Started 22m kubelet Started container istio-init
Normal Pulled 22m kubelet Container image "docker.io/istio/proxyv2:1.15.0" already present on machine
Normal Created 22m kubelet Created container istio-proxy
Normal Pulled 22m kubelet Container image "docker.io/istio/proxyv2:1.15.0" already present on machine
Normal Started 22m kubelet Started container istio-proxy
Normal Pulled 22m kubelet Successfully pulled image "docker.io/mosipid/commons-packet-service:1.2.0.1-B1" in 2.311621157s
Normal Pulling 21m (x2 over 22m) kubelet Pulling image "docker.io/mosipid/commons-packet-service:1.2.0.1-B1"
Normal Created 21m (x2 over 22m) kubelet Created container packetmanager
Normal Started 21m (x2 over 22m) kubelet Started container packetmanager
Normal Pulled 21m kubelet Successfully pulled image "docker.io/mosipid/commons-packet-service:1.2.0.1-B1" in 2.428566128s
Warning Unhealthy 21m (x8 over 22m) kubelet Startup probe failed: HTTP probe failed with statuscode: 500
Normal Pulled 12m kubelet Successfully pulled image "docker.io/mosipid/commons-packet-service:1.2.0.1-B1" in 2.372473924s
Warning BackOff 2m25s (x74 over 21m) kubelet Back-off restarting failed container

I extracted only the lines with errors from the kubectl logs output to shorten the post.
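
For anyone reproducing this, the errors can be pulled from the crashed packetmanager container with something along these lines (pod and container names taken from the output above; the exact filter is my own choice):

kubectl -n packetmanager logs packetmanager-64f5d9f9bb-rhzgm -c packetmanager --previous | grep -iE "error|exception"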

15:19:31,459 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@2:86 - no applicable action for [springProperty], current ElementPath is [[configuration][springProperty]]

{"@timestamp":"2023-03-10T15:20:05.371Z","@version":"1","message":"Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'packetValidator': Unsatisfied dependency expressed through field 'packetKeeper'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'packetKeeper': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 's3.pretext.value' in value "${s3.pretext.value}packet-manager

{"@timestamp":"2023-03-10T15:20:05.534Z","@version":"1","message":"\n\nError starting ApplicationContext. To display the conditions report re-run your application with 'debug'

{"@timestamp":"2023-03-10T15:20:05.537Z","@version":"1","message":"Application run failed","logger_name":"org.springframework.boot.SpringApplication","thread_name":"main","level":"ERROR","level_value":40000,"stack_trace":"org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'packetValidator': Unsatisfied dependency expressed through field 'packetKeeper'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'packetKeeper': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 's3.pretext.value' in value "${s3.pretext.value}packet-manager"\n\tat

(PropertiesLauncher.java:593)\nCaused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'packetKeeper': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 's3.pretext.value' in value "${s3.pretext.value}packet-manager"\n\tat

I was able to get past this by adjusting the liveness/readiness probes, but then a new error was shown:

"Error: configmap "softhsm-kernel-share" not found"
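
A quick way to see where that configmap actually lives is to check both namespaces; my assumption is that it is created by the softhsm module in the softhsm namespace but never copied to packetmanager:

kubectl -n softhsm get configmap softhsm-kernel-share
kubectl -n packetmanager get configmap softhsm-kernel-share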

I edited the file copy_cm.sh and added this line to it:

$COPY_UTIL configmap softhsm-kernel-share softhsm $DST_NS
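
My understanding (an assumption based on how the other entries in copy_cm.sh behave) is that this line copies the softhsm-kernel-share configmap from the softhsm namespace into the target namespace. Roughly, it is a sketch of the usual cross-namespace copy:

kubectl -n softhsm get configmap softhsm-kernel-share -o yaml | sed 's/namespace: softhsm/namespace: packetmanager/' | kubectl -n packetmanager apply -f -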

Then, after deleting and re-installing the packetmanager module, the application ran without errors.

In case you are wondering why I am now working on version 1.2.0.1-B2: my installation of MOSIP 1.2.0.1 blew up after a power outage, so I deleted and reinstalled the MOSIP modules because some of them were no longer working.

I opted for a full re-installation when I couldn't get them to work again. That is why I am in this situation again, trying to get the latest MOSIP version, 1.2.0.1-B2, to work without any errors.


I forgot to add how I applied my fix.

First I downloaded the raw values.yaml of the Helm chart (version 1.2.0.1-B2) from the mosip-helm repository into the MOSIP packetmanager module directory:

wget https://raw.githubusercontent.com/mosip/mosip-helm/v1.2.0.1-B2/charts/packetmanager/values.yaml

In the values.yaml configuration I added the line "- softhsm-kernel-share" under extraEnvVarsCM:

extraEnvVarsCM:
  - global
  - config-server-share
  - artifactory-share
  - softhsm-kernel-share
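
After reinstalling, one way to verify that the values from these configmaps actually show up as environment variables inside the container (the container name packetmanager is my assumption from the pod spec) is:

kubectl -n packetmanager exec deploy/packetmanager -c packetmanager -- env | grep -i s3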

I also edited the Docker image to use the latest tag:

image:
  registry: docker.io
  repository: mosipid/id-repository-identity-service
  tag: latest

Then I increased the resources:

javaOpts: "-Xms4000M -Xmx4000M"

Then I edited the file copy_cm.sh to add this line:

$COPY_UTIL configmap softhsm-kernel-share softhsm $DST_NS

Afterwards I edited install.sh, particularly the line installing packetmanager.

helm -n $NS install identity mosip/identity -f values.yaml --version $CHART_VERSION

Then I deleted and reinstalled the packetmanager module with a Helm chart override using values.yaml. This is to make sure that the changes in copy_cm.sh are picked up.

./delete.sh

kubectl delete namespace packetmanager

./install.sh

After a minute or two, Success! No more errors!
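
To confirm, watching the pods until both containers show READY 2/2 is enough, e.g.:

kubectl -n packetmanager get pods -w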

Editing the environment variables for the config maps is crucial, along with using the latest Docker image, where the s3.pretext.value error appears to be fixed in the v1.2.0.1-B2 image.

I can't explain why; this is purely based on my experience testing, deleting, and reinstalling the module several times.

@sowmya695 @ckm007 @vishwa Can you check this thread? The s3.pretext.value should have a default value and should not have broken this installation.

@ckm007 The fix that @rcsampang has made looks strange; can you take a look at this?

@gsasikumar Thank you for taking a look at this.

After re-reading my post I saw some typos that may lead to confusion. It is another case of copy-paste error, because I also did this in idrepo-identity. Sorry about that.

Anyway, here are the lines with typos and the actual edits/commands I used:


I also edited the Docker image to use the latest tag:

image:
  registry: docker.io
  repository: mosipid/packetmanager


helm -n $NS install identity mosip/packetmanager -f values.yaml --version $CHART_VERSION


I am looking forward to learning the right fix for these errors so I can apply it.

I agree that my workaround is highly irregular, and I also believe that even though the modules installed without errors, they may still not function as intended.

Best regards!

@rcsampang Thanks for sharing the issue details. We have created the internal bug below to track this and will keep you posted on updates.

https://mosip.atlassian.net/browse/MOSIP-26544


This issue has been taken care of; basically we have changed the property value below
from
${s3.pretext.value}
to
${s3.pretext.value:}

With this, a blank string is used if the property is not supplied to the config server from an environment variable through Rancher.

Please refer to the JIRA for more information.

https://mosip.atlassian.net/browse/MOSIP-26544


@vishwa Thank you very much! These updates to the mosip-config repo files fixed the issues.

I simply restarted config-server and after a few minutes all the errors were gone for packetmanager, idrepo-identity, regproc-group6, prereg-application, and resident.
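
In case it helps others, a rollout restart like the one below should be enough; the namespace and deployment name here are my assumption of the standard MOSIP install:

kubectl -n config-server rollout restart deployment config-server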

Thank you so much everyone for your continuous support and understanding.

Please consider this issue closed.

Warm regards,
rcsampang