MOSIP File Manager CrashLoopBackOff

Hello! I am having an issue with mosip-file-server: the pod is stuck in CrashLoopBackOff. Deleting and restarting the pod still ends up in CrashLoopBackOff.

Does anyone have a fix for this?

Also, I have updated to the latest code committed a week ago. Could those changes have had an unintended side effect that causes this error?

I tried a restart, but it did not fix the issue:

mosip-infra/deployment/v3/mosip/mosip-file-server$ ./restart.sh
deployment.apps/mosip-file-server restarted
kubectl -n mosip-file-server rollout status deployment.apps/mosip-file-server
Waiting for deployment "mosip-file-server" rollout to finish: 1 old replicas are pending termination...
error: deployment "mosip-file-server" exceeded its progress deadline
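Since `kubectl describe` only shows probe events, the actual crash reason is usually in the logs of the previously terminated container. A minimal sketch of how I would pull those, assuming the `app.kubernetes.io/name` label shown in the pod description below:

```shell
# Sketch: fetch logs from the last crashed container instance.
# Assumes the app.kubernetes.io/name label from the pod description.
crash_logs() {
  ns=mosip-file-server
  pod=$(kubectl -n "$ns" get pods \
    -l app.kubernetes.io/name=mosip-file-server \
    -o jsonpath='{.items[0].metadata.name}')
  # --previous shows the output of the container that exited with code 1,
  # not the freshly restarted one.
  kubectl -n "$ns" logs "$pod" --previous
}
```

Running `crash_logs` after the pod has crashed at least once should show the application error that precedes the probe failures.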

Looking at the pod description:

kubectl describe pod mosip-file-server-b69c59849-hlgzc -n mosip-file-server
Name: mosip-file-server-b69c59849-hlgzc
Namespace: mosip-file-server
Priority: 0
Node: wnode5/10.206.100.169
Start Time: Thu, 09 Mar 2023 09:15:38 +0000
Labels: app.kubernetes.io/instance=mosip-file-server
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=mosip-file-server
helm.sh/chart=mosip-file-server-12.0.1-B2
pod-template-hash=b69c59849
Annotations: cni.projectcalico.org/containerID: aba45a708631afd58545977fb7e0cd8037bc196ab42a11f313bc5155d424ae70
cni.projectcalico.org/podIP: 10.42.6.182/32
cni.projectcalico.org/podIPs: 10.42.6.182/32
kubectl.kubernetes.io/restartedAt: 2023-03-09T09:15:38Z
Status: Running
IP: 10.42.6.182
IPs:
IP: 10.42.6.182
Controlled By: ReplicaSet/mosip-file-server-b69c59849
Containers:
mosipfileserver:
Container ID: docker://aed2f239ae5f6216c47387e632d24250963b96f718b353347f051fc54a443c31
Image: docker.io/mosipid/mosip-file-server:1.2.0.1-B2
Image ID: docker-pullable://mosipid/mosip-file-server@sha256:56112e1bbfa80e7bf1053afa76c8098629b1deeff5418d07265eed93312eb932
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 09 Mar 2023 10:22:02 +0000
Finished: Thu, 09 Mar 2023 10:22:34 +0000
Ready: False
Restart Count: 17
Requests:
cpu: 100m
memory: 1500Mi
Liveness: http-get http://:8080/.well-known/ delay=20s timeout=5s period=60s #success=1 #failure=2
Readiness: http-get http://:8080/.well-known/ delay=0s timeout=5s period=60s #success=1 #failure=2
Startup: http-get http://:8080/.well-known/ delay=0s timeout=5s period=30s #success=1 #failure=10
Environment Variables from:
config-server-share ConfigMap Optional: false
mosip-file-server ConfigMap Optional: false
keycloak-client-secret Secret Optional: false
Environment:
healthcheck_url_env: http://localhost:8080/.well-known/
host: api..(redacted)
Mounts:
/mnt/mosip-file-server/.well-known/mosipvc/ from wellknown (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sq8hn (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
wellknown:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: wellknown-cm
Optional: false
kube-api-access-sq8hn:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message


Warning Unhealthy 47m (x7 over 67m) kubelet Startup probe failed: Get "http://10.42.6.182:8080/.well-known/": dial tcp 10.42.6.182:8080: connect: connection refused
Warning BackOff 2m40s (x278 over 66m) kubelet Back-off restarting failed container
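The "connection refused" on the startup probe suggests the process never binds port 8080, so the root cause is before the probes even run. Since the container pulls environment from two ConfigMaps and a Secret and mounts the wellknown-cm volume, a missing one of those could block startup. A sketch to verify they all exist, using the names from the pod description above:

```shell
# Sketch: verify the ConfigMaps and Secret referenced by the pod exist.
# Names are taken from the "Environment Variables from" and "Volumes"
# sections of the describe output.
check_refs() {
  ns=mosip-file-server
  kubectl -n "$ns" get configmap config-server-share mosip-file-server wellknown-cm
  kubectl -n "$ns" get secret keycloak-client-secret
}
```

If any of these comes back NotFound, that would explain the container exiting immediately.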

I also looked at the Helm chart's values.yaml:


corsPolicy:
allowOrigins:
- prefix: https://api.sandbox.xyz.net
- prefix: https://api-internal.sandbox.xyz.net
- prefix: https://verifiablecredential.io

Would this affect my installation if I am using different URIs for my api-external and api-internal hostnames?
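If those origins are configurable, I assume they should be overridden with my own domains rather than the sandbox defaults. A hypothetical override file (domain names are placeholders, not my real hosts):

```yaml
# my-values.yaml (hypothetical): replace the default sandbox origins
# with the deployment's own external/internal API hostnames.
corsPolicy:
  allowOrigins:
    - prefix: https://api.mydomain.example
    - prefix: https://api-internal.mydomain.example
```

I would then pass this with `-f my-values.yaml` on `helm upgrade`, if that is the intended way to customize it.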

Thank you.