Issue in MOSIP Installation

We have followed the approaches below, but we are stuck in the installation. Please help us.
Document: MOSIP Installation

Approach 1 :

Since the installation is organized into different modules, we first tried a module-based installation.

Machine Configuration :

OS : Ubuntu 20.04
RAM : 16 GB
ROM : 500 GB SSD
JDK : 11
IDE : IntelliJ IDEA
Apache Maven : 3.6.3
pgAdmin
Postman
Git/GitHub Desktop : 2.25.1
lombok.jar (file)
settings.xml

Module-based installation :

Administration Module :

Step 1 : Software setup (done)
Step 2 : Code setup (done)
Step 3 : Environment setup (done)

Commons Module :

Step 1 : Software setup (done)
Step 2 : Code setup (throws error)

We get stuck at Step 2: a Maven-related error is thrown while executing the command below:

$ mvn clean install -Dgpg.skip=true -DskipTests=true

Approaches tried for the Maven error:

Step 1 : Changed dependencies in the pom.xml file (done)
Step 2 : Uninstalled and reinstalled Apache Maven (done)
Step 3 : Configured Apache Maven with JDK 11 (done)
(the command still fails)
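For anyone hitting a similar Maven failure, it may help to first confirm the Maven/JDK pairing before changing pom.xml. A minimal sketch of such sanity checks (the JDK path below is a typical Ubuntu 20.04 location and is an assumption, not a MOSIP-mandated value):

```shell
# Show which JDK Maven actually runs under (look for "Java version: 11"):
if command -v mvn >/dev/null 2>&1; then
  mvn -v | head -n 2
fi

# JAVA_HOME should point at the JDK 11 install:
export JAVA_HOME="${JAVA_HOME:-/usr/lib/jvm/java-11-openjdk-amd64}"
echo "JAVA_HOME=$JAVA_HOME"

# Retrying with -U forces Maven to refresh cached artifacts, which
# clears many stale-dependency failures:
# mvn clean install -U -Dgpg.skip=true -DskipTests=true
```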

We faced many challenges with the module-based installation, so we stopped that approach and researched another way to install MOSIP.

Approach 2 :

Our research led us to a sandbox installer, so we started R&D on the sandbox installer and tried to implement it.

Machine Configuration :

OS : Ubuntu 20.04 

RAM : 16 GB
ROM : 500 GB SSD

We tried to install MOSIP on the local machine.

Installation via sandbox installer :

Step 1 : Software prerequisites (throws error)

We get stuck installing yum on Ubuntu 20.04; an error is thrown while executing the command below:

$ sudo apt install yum -y

Approaches tried for installing yum:

Step 1 : sudo apt-get install rpm (done)
Step 2 : sudo apt-get install dnf (done)
Step 3 : sudo apt-get install yum (throws the same error)
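For context, yum is not packaged for Ubuntu; the sandbox scripts assume a yum-based distro such as CentOS. A quick, hedged check of which package manager a machine actually provides:

```shell
# Detect the package manager available on this host:
if command -v apt-get >/dev/null 2>&1; then
  PKG_MGR=apt          # Debian/Ubuntu family
elif command -v yum >/dev/null 2>&1; then
  PKG_MGR=yum          # RHEL/CentOS family
else
  PKG_MGR=unknown
fi
echo "package manager: $PKG_MGR"
```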

We formatted the machine and installed Ubuntu 18.04 on it.

We then successfully installed yum on Ubuntu 18.04 and continued with the next installation step.

Step 3 : Cloned the repo into the user home directory and switched to the appropriate branch (done)
Step 4 : Install Ansible and create shortcuts (throws error)

We get stuck at Step 4: an error about yum repos not being enabled is thrown while executing the shell script below:

$ ./preinstall.sh


Approaches tried for the shell script error:

Step 1 : Installed all repos (done)
Step 2 : Enabled all repos (done) (the shell script still fails)
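On a yum-based host, a quick way to double-check which repos are actually enabled before rerunning preinstall.sh (the "repos not enabled" error usually means this list is missing entries):

```shell
# List repos yum considers enabled; falls back gracefully on hosts
# without yum so the check itself never errors out:
list_enabled_repos() {
  if command -v yum >/dev/null 2>&1; then
    yum repolist enabled
  else
    echo "yum not available on this host"
  fi
}
out=$(list_enabled_repos)
echo "$out"

# To enable a disabled repo by id (the repo id is a placeholder):
#   sudo yum-config-manager --enable <repo-id>
```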

We concluded that we can't install MOSIP this way on Ubuntu, so we moved on to the next approach: installation via a VM running CentOS 7.

Approach 3 :

We installed and configured VirtualBox on the Ubuntu 18.04 machine.

We downloaded the CentOS 7 ISO file, installed it in the VM, and configured it.

VM Configuration :

OS : Centos 7
RAM : 8 GB
ROM : 20 GB

Machine configuration :

OS : Ubuntu 20.04 

RAM : 16 GB
ROM : 500 GB SSD

We tried to install MOSIP on the VM.

Installation via VM in CentOS 7 :

Step 1 : Software prerequisites (done)
Step 2 : Installing Mosip

Step 1 : Site settings (done)
Step 2 : Network interface (done)
Step 3 : Ansible vault (done)
Step 4 : Mosip configuration (done)
Step 5 : Install mosip (throws error)

We get stuck at Step 5: a "host not connected" error is thrown while executing the command below:

$ an site.yml
Approaches tried for the Ansible error:

Step 1 : Changed IP addresses in the hosts.ini file (done)
Step 2 : Added hosts in /etc/ansible/hosts (done)
Step 3 : Disabled the firewall:

Step 1 : sudo firewall-cmd --state (done)
Step 2 : sudo systemctl stop firewalld (done)
Step 3 : sudo systemctl disable firewalld (done)
Step 4 : sudo systemctl mask --now firewalld
Step 5 : sudo firewall-cmd --state (done) (the command still fails)
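Before rerunning `an site.yml`, it may help to confirm Ansible can reach every host in the inventory; any host failing the ping module will also fail site.yml. A sketch (hosts.ini is the sandbox inventory file; adjust the path if yours differs):

```shell
# The actual connectivity check to run on the console machine:
#   ansible -i hosts.ini all -m ping
#
# Purely illustrative helper that builds that command line for a given
# inventory path:
ansible_ping_cmd() { echo "ansible -i $1 all -m ping"; }
ansible_ping_cmd hosts.ini
```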

We tried several different approaches to resolve the errors while executing the Ansible commands.

Approach 4 :

Creating VMs in aws server :

Status : In progress

@pranav15 I am assuming you are using 1.2.0 as the version. Please follow the instructions here.

Approach 1: This is the most complex path and is designed for developers of the individual repos, so please do not use it.

Approach 2: The sandbox needs CentOS 7.8, as the instructions are based on it. Please do not attempt it on Ubuntu; Ubuntu does not use yum, it uses apt commands.

Approach 3: Yes, you seem to be on the right path. Please share the exact errors and we will help you. Once again, I am assuming you are using the link I shared.

Approach 4: Our scripts would work either way. If you have AWS, please feel free to use it.

We are using the link you shared and installing MOSIP.

We are working on approach 3 & we are getting error as below attached screenshots.

We got stuck while running the following command: an site.yml

It looks like you are not able to SSH (port 22) from the console to the other machines. Do you have someone who knows Linux? It might be a connection or firewall issue.

Generally this means the passwordless SSH connection is not established between the console machine's mosipuser and the root user on all the worker nodes.

Can we check on the same?
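The passwordless SSH setup described above can be sketched as follows (the 192.0.2.x addresses are documentation placeholders, not real worker IPs):

```shell
# On the console machine, as mosipuser:
#   ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa   # generate a key once
#   ssh-copy-id root@192.0.2.11                        # repeat per worker node
#   ssh -o BatchMode=yes root@192.0.2.11 hostname      # must succeed w/o prompt
#
# Illustrative helper that builds the key-copy command per host:
copy_key_cmd() { echo "ssh-copy-id root@$1"; }
copy_key_cmd 192.0.2.11
```

`BatchMode=yes` makes the final check fail fast instead of falling back to a password prompt, which is a convenient way to verify the keys really took effect.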

@pranav15 Were you able to pass through this? Do you need more help?

@gsasikumar We can't generate the certificate, so please help with that.

We are facing the below issue while generating the certificate.

Is this a self-signed certificate? The error says the certificate is not trusted. Maybe the certificate has the wrong name or IP address?

@gsasikumar Okay, we will check that, and if we get stuck anywhere I will let you know.

We are facing a timeout issue while executing the command below:

$ an site.yml

@gsasikumar We are facing a Python and pip version issue while executing the command below:

$ an site.yml

You will need python3 to do this. It looks like Python 2 is set as the default.
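A sketch of checking the interpreter and pointing Ansible at python3 without touching the system default (the interpreter path is the usual CentOS 7 location, assumed here):

```shell
# See what python3 resolves to on this host:
python3 --version
PY3_VER=$(python3 --version 2>&1)
echo "$PY3_VER"

# Per-run override so Ansible uses python3 on the targets:
#   ansible-playbook -e ansible_python_interpreter=/usr/bin/python3 site.yml
```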

@gsasikumar Now we are trying to install via the AWS cloud, and we are facing a DNS issue while executing TASK [ssl/letsencrypt : run certbot].

It looks like the domain does not belong to you. Do you have a DNS? mor-sandbox.mosip.net is not owned by you. Please use a domain name or subdomain that you own; the default script assumes that you own a domain.

If you want to use your own internal domain then look at the “Site Setting” section of our installation.

@gsasikumar We do not have a DNS, so what do we have to do in this case? Please help us with whatever changes we should make in the installation.

@gsasikumar Please help us. We don't have a DNS, so what do we have to do in this case? Please tell us whether a DNS is required or not.

@pranav15 Please try the below steps to continue with a self-signed certificate:

  1. mosip-infra/all.yml at 1.2.0.1 · mosip/mosip-infra · GitHub
     Update it with the console machine's internal IP instead of a valid domain name.
  2. mosip-infra/all.yml at 1.2.0.1 · mosip/mosip-infra · GitHub
     Update ca to selfsigned.
  3. Then continue with the installation.
     Basically, with these settings it will obtain self-signed certificates for the console IP and use them throughout the installation.
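To sanity-check what such a self-signed certificate looks like (and that its subject matches the console IP the installer will use), one can generate and inspect one locally with openssl; 10.0.0.5 below is a placeholder internal IP:

```shell
# Generate a throwaway self-signed cert for the placeholder console IP:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /tmp/console.key -out /tmp/console.crt \
  -subj "/CN=10.0.0.5" 2>/dev/null

# Inspect the subject; the CN should be the IP the installer expects:
SUBJ=$(openssl x509 -in /tmp/console.crt -noout -subject)
echo "$SUBJ"
```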

@ckm007 Thank you for your response.

However, after updating the all.yml file, we are getting the below errors while performing the installation:

TASK [k8scluster/cni : Check flannel daemonset is working] *********************
fatal: [mzmaster.sb -> mzmaster.sb]: FAILED! => {"changed": false, "cmd": "kubectl --kubeconfig=/etc/kubernetes/admin.conf get ds --all-namespaces | grep flannel", "delta": "0:00:00.213149", "end": "2022-09-05 10:33:50.686372", "msg": "non-zero return code", "rc": 1, "start": "2022-09-05 10:33:50.473223", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring

TASK [Disable postgres taint] **************************************************
fatal: [console.sb]: FAILED! => {"changed": true, "cmd": "kubectl --kubeconfig /home/mosipuser/.kube/mzcluster.config taint nodes mzworker0.sb postgres:NoSchedule-", "delta": "0:00:00.083458", "end": "2022-09-05 10:43:25.916422", "msg": "non-zero return code", "rc": 1, "start": "2022-09-05 10:43:25.832964", "stderr": "error: taint \"postgres:NoSchedule\" not found", "stderr_lines": ["error: taint \"postgres:NoSchedule\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [Disable hdfs taint] ******************************************************
fatal: [console.sb]: FAILED! => {"changed": true, "cmd": "kubectl --kubeconfig /home/mosipuser/.kube/mzcluster.config taint nodes mzworker1.sb hdfs:NoSchedule-", "delta": "0:00:00.077036", "end": "2022-09-05 10:43:26.493075", "msg": "non-zero return code", "rc": 1, "start": "2022-09-05 10:43:26.416039", "stderr": "error: taint \"hdfs:NoSchedule\" not found", "stderr_lines": ["error: taint \"hdfs:NoSchedule\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [Disable minio taint] *****************************************************
fatal: [console.sb]: FAILED! => {"changed": true, "cmd": "kubectl --kubeconfig /home/mosipuser/.kube/mzcluster.config taint nodes mzworker1.sb minio:NoSchedule-", "delta": "0:00:00.081137", "end": "2022-09-05 10:43:27.064528", "msg": "non-zero return code", "rc": 1, "start": "2022-09-05 10:43:26.983391", "stderr": "error: taint \"minio:NoSchedule\" not found", "stderr_lines": ["error: taint \"minio:NoSchedule\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [k8scluster/cni : Check flannel daemonset is working] *********************
fatal: [dmzmaster.sb -> dmzmaster.sb]: FAILED! => {"changed": false, "cmd": "kubectl --kubeconfig=/etc/kubernetes/admin.conf get ds --all-namespaces | grep flannel", "delta": "0:00:00.210418", "end": "2022-09-05 10:46:43.111048", "msg": "non-zero return code", "rc": 1, "start": "2022-09-05 10:46:42.900630", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring

@pranav15 All the errors except the one mentioned in the screenshot can be ignored.
Also, we presently have a few known issues in the sandboxV2 installation following the 1.2.0.1 branch, as it is still in the development phase. So, while the team fixes these issues, you can start with the V3 deployment architecture or continue with the 1.1.5.5 MOSIP installation. Attaching the links for the installation below:
1.1.5.5 sandboxV2: mosip-infra/deployment/sandbox-v2 at 1.1.5.5 · mosip/mosip-infra · GitHub
1.2.0.1 V3: mosip-infra/deployment/v3 at 1.2.0.1 · mosip/mosip-infra · GitHub

@ckm007 Thank you for your quick support.

We tried both solutions as per your suggestion.

However, we got the following error:

TASK [helm : Install chart idrepo] ***********************************************************************************************************
fatal: [console.sb]: FAILED! => {"changed": true, "cmd": "/home/mosipuser/bin/helm --kubeconfig /home/mosipuser/.kube/mzcluster.config install idrepo /home/mosipuser/mosip-infra/deployment/sandbox-v2/helm/charts/idrepo -f /home/mosipuser/mosip-infra/deployment/sandbox-v2/helm/charts/idrepo/values.yaml -n default --create-namespace --set-string dummy=value, --wait --timeout 20m0s --debug > /home/mosipuser/mosip-infra/deployment/sandbox-v2/tmp//yaml/idrepo.yaml", "delta": "0:21:38.907911", "end": "2022-09-07 13:50:31.439665", "msg": "non-zero return code", "rc": 1, "start": "2022-09-07 13:28:52.531754", "stderr": "install.go:159: [debug] Original chart version: \"\"\ninstall.go:176: [debug] CHART PATH: /home/mosipuser/mosip-infra/deployment/sandbox-v2/helm/charts/idrepo\n\nclient.go:108: [debug] creating 1 resource(s)\nclient.go:108: [debug] creating 1 resource(s)\nclient.go:467: [debug] Watching for changes to Job idrepo-salt-generator with timeout of 20m0s\nclient.go:495: [debug] Add/Modify event for idrepo-salt-generator: ADDED\nclient.go:534: [debug] idrepo-salt-generator: Jobs active: 0, jobs failed: 0, jobs succeeded: 0\nclient.go:495: [debug] Add/Modify event for idrepo-salt-generator: MODIFIED\nclient.go:534: [debug] idrepo-salt-generator: Jobs active: 1, jobs failed: 0, jobs succeeded: 0\nclient.go:495: [debug] Add/Modify event for idrepo-salt-generator: MODIFIED\nclient.go:258: [debug] Starting delete for \"idrepo-salt-generator\" Job\nclient.go:108: [debug] creating 9 resource(s)\nwait.go:53: [debug] beginning wait for 9 resources with timeout of 20m0s\nwait.go:225: [debug] Deployment is not ready: default/idrepo-credential-request-generator. 0 out of 1 expected pods are ready
[the wait.go:225 "Deployment is not ready" line repeats for default/idrepo-credential-request-generator until the 20m0s timeout; log truncated]

PLAY RECAP ***********************************************************************************************************************************
console.sb : ok=63 changed=28 unreachable=0 failed=1 skipped=8 rescued=0 ignored=0
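When helm times out like this with "Deployment is not ready", the next step is usually to look at the stuck pods directly. A sketch of the kubectl commands one might run on the console (the kubeconfig path is the sandbox default seen in the log above; pod names are placeholders):

```shell
# Kubeconfig used by the sandbox's mzcluster, per the failing task above:
KUBECONFIG_PATH=/home/mosipuser/.kube/mzcluster.config

# Inspect the deployment that never became ready:
#   kubectl --kubeconfig "$KUBECONFIG_PATH" get pods | grep idrepo
#   kubectl --kubeconfig "$KUBECONFIG_PATH" describe pod <pod-name>
#   kubectl --kubeconfig "$KUBECONFIG_PATH" logs <pod-name>
echo "using kubeconfig: $KUBECONFIG_PATH"
```

`describe` typically shows scheduling or image-pull problems, while `logs` shows crashes inside the container; between them the reason a pod stays unready is usually visible.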

@ckm007 We have completed the MOSIP installation.

But whenever we try to access the Keycloak admin console, it redirects to the private IP instead of the public IP.

We also can't access the Keycloak admin console via the public IP.
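One possible workaround to try while debugging this (assuming the console URLs resolve to the node's private IP): map the sandbox domain to the public IP on the machine you browse from. The domain and IP below are placeholders, not real values from this install:

```shell
# Client-side hosts override (run on the browsing machine, not the cluster):
#   echo "203.0.113.10 sandbox.example.net" | sudo tee -a /etc/hosts
#
# Illustrative helper that formats such an /etc/hosts entry:
hosts_entry() { echo "$1 $2"; }
hosts_entry 203.0.113.10 sandbox.example.net
```

If the redirect itself still goes to the private IP, the base URL configured during installation (see the site settings in all.yml) may need to use the public address instead.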