By: Applications Support Account Admin 19 Dec 2023 at 1:29 a.m. CST

The servers in Site 2 are up but not working. The expected behavior is that we run both Site 1 and Site 2 in a load-balanced model. The log is attached, and the affected servers are shown below:

- 10.33.33.37 Gluu App1
- 10.33.33.38 Gluu App2
- 10.33.33.39 (cluster server)

The Redis servers are 10.1.1.224, 10.1.1.226 and 10.1.1.39, all running Redis Sentinel on port 26379.
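
A quick way to sanity-check those Sentinels from the Gluu hosts is sketched below; it assumes `redis-cli` is installed there, which the report does not confirm:

```
# Ping each Sentinel and list the masters it monitors.
# Assumption: redis-cli is available on the host running the check.
for host in 10.1.1.224 10.1.1.226 10.1.1.39; do
    echo "--- ${host}:26379 ---"
    redis-cli -h "${host}" -p 26379 ping
    redis-cli -h "${host}" -p 26379 sentinel masters | head -n 6
done
```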

By Mohib Zico staff 19 Dec 2023 at 1:45 a.m. CST

Hi, from the screenshot it seems OpenDJ is down on that server.

By Applications Support Account Admin 19 Dec 2023 at 3:02 a.m. CST

The nodes could not connect to Redis. Kindly provide feedback.

By Mohib Zico staff 19 Dec 2023 at 3:16 a.m. CST

I think first you need to resolve that screenshot issue (can't connect to directory server / OpenDJ); it _should_ bring your servers UP. The second question is: have you had any changes lately that triggered this situation? Any update? Any reboot or network outage?
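
A minimal sketch for confirming whether OpenDJ is really down, using only generic process and port checks; paths follow the default Gluu CE chroot layout, so adjust if yours differs:

```
# Inside the container (after /sbin/gluu-serverd login):
ps -ef | grep -i "[o]pendj"            # is the OpenDJ JVM running at all?
netstat -plnt | grep -E '1636|4444'    # LDAPS (1636) and admin (4444) listeners
service opendj status                  # init-script view of the service state
```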

By Applications Support Account Admin 19 Dec 2023 at 4:54 a.m. CST

How do we go about resolving the screenshot error? That is why we escalated to you. Secondly, the situation was triggered by a network outage at Site 2 on the 16th of December 2023.

By Mohib Zico staff 19 Dec 2023 at 6:26 a.m. CST

>> Secondly, the situation was triggered by a network outage at Site 2 on the 16th of December 2023.

Thanks.

>> How do we go about resolving the screenshot error? That was why we escalated to you.

Sure, here is how you can try. Log / SSH into that server, then:

- Log into the container as root: `/sbin/gluu-serverd login`
- Restart the LDAP service: `service opendj restart`
- Restart the oxauth service: `service oxauth restart`
- Restart the identity service: `service identity restart`
- Restart the IDP service: `service idp restart`

Now check the latest log to see whether the above error is still appearing.
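
The same sequence as a small loop, in case it has to be repeated; a sketch only, and the log path at the end is an assumption based on the default Gluu CE layout:

```
# Run inside the container as root. OpenDJ is restarted first because
# the web services depend on it.
for svc in opendj oxauth identity idp; do
    echo "--- restarting ${svc} ---"
    service "${svc}" restart || echo "WARN: ${svc} did not restart cleanly"
done
# Assumed default log location; adjust if your install differs.
tail -n 50 /opt/gluu/jetty/oxauth/logs/oxauth.log
```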

By Applications Support Account Admin 21 Dec 2023 at 3:03 a.m. CST

Dear Gluu, we have carried out the steps on the server but the issue still persists. Please see the attached errors. How do we move forward?

By Mohib Zico staff 21 Dec 2023 at 3:31 a.m. CST

Hi, it seems your replication is broken. Can you please share the replication logs (`/opt/opendj/logs/`)?
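
A sketch of how those logs could be collected in one go from inside the container; exact file names under `/opt/opendj/logs/` vary a little between OpenDJ versions:

```
# Inside the container: inspect and bundle the OpenDJ logs.
ls -l /opt/opendj/logs/
tail -n 200 /opt/opendj/logs/errors    # server errors, incl. replication failures
tar -czf /tmp/opendj-logs.tgz /opt/opendj/logs/
# Copy /tmp/opendj-logs.tgz out of the container and attach it to the ticket.
```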

By Applications Support Account Admin 28 Dec 2023 at 2:11 a.m. CST

Kindly find attached.

By Mohib Zico staff 28 Dec 2023 at 4:05 a.m. CST

Thanks. Let us check the logs, please.

By Mohib Zico staff 28 Dec 2023 at 6:48 a.m. CST

One quick question:

- What is the difference between the "Prod_Gluu_1", "Prod_Gluu_2", "DR_Gluu_1" and "DR_Gluu_2" servers? Do you have four clustered Gluu servers?

By Applications Support Account Admin 28 Dec 2023 at 8:58 a.m. CST

"Prod_Gluu_1", "Prod_Gluu_2", are site1 App server while "DR_Gluu_1" and "DR_Gluu_2" are site2 app servers. Apart from the servers above we have two cluster servers one in site 1 and one in site 2.

By Mohib Zico staff 28 Dec 2023 at 9:13 a.m. CST

>> "Prod_Gluu_1", "Prod_Gluu_2", are site1 App server while "DR_Gluu_1" and "DR_Gluu_2" are site2 app servers. So, there are "two" clusters here? - Cluster_1 ( which you call Site1 app ): Prod_Gluu_1 + Prod_Gluu_2 - Cluster_2 ( which you call Site2 app ): DR_Gluu_1 + DR_Gluu_2 >> Apart from the servers above we have two cluster servers one in site 1 and one in site 2. You mean, "Cluster Manager"?

By Applications Support Account Admin 29 Dec 2023 at 1:43 a.m. CST

yes

By Mohib Zico staff 29 Dec 2023 at 2:01 a.m. CST

Apologies for asking a lot of questions, but I am just trying to understand your whole structure and find the reason for the failure. :-) Two more questions:

- What are the hostnames / FQDNs of `Cluster_1` and `Cluster_2`?
- For the cluster that is failing: can you please share the `gluu-ldap.properties` file from that server? The file lives inside the container, at `/etc/gluu/conf/`.
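
The file can also be read from the host side of the chroot without logging in; note that the `servers:` key named in the comment is how Gluu CE 4.x lists its LDAP hosts, so treat that detail as an assumption for other versions:

```
# On the failing node, the container filesystem lives under /opt/gluu-server:
cat /opt/gluu-server/etc/gluu/conf/gluu-ldap.properties
# The 'servers:' entry should point at a reachable OpenDJ host:port.
```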

By Applications Support Account Admin 29 Dec 2023 at 4:28 a.m. CST

Kindly find attached and below.

- Cluster_1: the names are vi-gluuapp01 and vi-gluuapp02
- Cluster_2: the names are og-gluuapp01 and og-gluuapp02

By Mohib Zico staff 29 Dec 2023 at 6:39 a.m. CST

Thanks. So there is no problem with identity and/or oxauth individually; it's only about OpenDJ. Let's concentrate there. Can you please share a screenshot of the "Replication" page from Cluster Manager?

By Ariyo Omolade Account Admin 29 Dec 2023 at 7:30 a.m. CST

Dear Bob, kindly take this as an escalation. We have been going back and forth trying to get our Site 2 up and working after a downtime. The speed of resolution is a major concern and we are not getting adequate support. We propose a joint call with your team to get this issue resolved urgently. Please give us your feedback on what time to fix this today and who should be invited. Thank you.

Ariyo Omolade
Ag. Head, Identity Operations Management | Business Operations
NIGERIA INTER-BANK SETTLEMENT SYSTEM
aomolade@nibss-plc.com.ng | www.nibss-plc.com.ng

By Mohib Zico staff 29 Dec 2023 at 7:38 a.m. CST

Sure, let's book a call here: http://gluu.org/book-support. In the meantime, please share what we asked [here](https://support.gluu.org/outages/11635/downtime-in-site-2-on-the-censent-management-app/#at87697).

By Mohib Zico staff 29 Dec 2023 at 7:45 a.m. CST

Or, if you want, we can start in 15 minutes?

By Applications Support Account Admin 29 Dec 2023 at 8:11 a.m. CST

Yes, we can have it. Kindly join with the link below:

https://teams.microsoft.com/l/meetup-join/19%3ameeting_N2NjNTQwNDItNTBkNy00NjZlLTg2MDEtYTAzMzUxODdlYzlh%40thread.v2/0?context=%7b%22Tid%22%3a%22c5cd8359-1172-457c-b3a6-38831701ec57%22%2c%22Oid%22%3a%22f63e0738-a3bd-43c2-a0f1-a6764c68b951%22%7d

By Mohib Zico staff 29 Dec 2023 at 8:16 a.m. CST

Joining...

By Mohib Zico staff 29 Dec 2023 at 8:18 a.m. CST

We are waiting in the room...

By Mohib Zico staff 29 Dec 2023 at 8:26 a.m. CST

Anyone coming? We have been in the waiting room for more than 10 minutes. :-)

By Mohib Zico staff 29 Dec 2023 at 12:33 p.m. CST

Hi Ariyo, though we will meet on the coming Tuesday, in the meantime I would like to get some help from my colleague on the container login issue. Let's try the steps below, and please share the output:

- Run: `ls -l /opt/gluu-server/etc/ssh`
- Run: `netstat -plunt | grep 60022`
- Stop the gluu-server container: `/sbin/gluu-serverd stop`
- Run: `chroot /opt/gluu-server`
- Run: `service sshd restart`
- Copy the error message.
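
For convenience, here is the same sequence as one host-side script, so the whole output lands in a single file for sharing; a sketch that assumes it is run as root:

```
#!/bin/sh
# Combined version of the steps above; output goes to /tmp/gluu-ssh-diag.txt.
{
    echo "== host key permissions =="
    ls -l /opt/gluu-server/etc/ssh
    echo "== container sshd listener on 60022 =="
    netstat -plunt | grep 60022 || echo "nothing listening on 60022"
    echo "== stopping container, retrying sshd inside the chroot =="
    /sbin/gluu-serverd stop
    chroot /opt/gluu-server service sshd restart
} > /tmp/gluu-ssh-diag.txt 2>&1
```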

By Applications Support Account Admin 01 Jan 2024 at 12:28 p.m. CST

Dear Zico, kindly find the logs below:

```
Last login: Fri Dec 29 16:27:47 WAT 2023 on pts/1
[root@og-gluuapp01 ~]# ls -l /opt/gluu-server/etc/ssh
total 612
-rwxr-xr-x 1 root root  581843 Aug  3  2022 moduli
-rwxr-xr-x 1 root root    2276 Aug  3  2022 ssh_config
-rwxr-xr-x 1 root root    4362 Aug  3  2022 sshd_config
-rwxr-xr-x 1 root root    3907 Aug  3  2022 sshd_config.rpmnew
-rwxr-xr-x 1 root input    227 Jul 23 20:30 ssh_host_ecdsa_key
-rwxr-xr-x 1 root root     162 Jul 23 20:30 ssh_host_ecdsa_key.pub
-rwxr-xr-x 1 root input    387 Jul 23 20:30 ssh_host_ed25519_key
-rwxr-xr-x 1 root root      82 Jul 23 20:30 ssh_host_ed25519_key.pub
-rwxr-xr-x 1 root input   1675 Jul 23 20:30 ssh_host_rsa_key
-rwxr-xr-x 1 root root     382 Jul 23 20:30 ssh_host_rsa_key.pub
[root@og-gluuapp01 ~]# netstat -plunt | grep 60022
[root@og-gluuapp01 ~]# /sbin/gluu-serverd stop
[root@og-gluuapp01 ~]# chroot /opt/gluu-server
[root@og-gluuapp01 /]# service sshd restart
cat: /proc/cmdline: No such file or directory
Redirecting to /bin/systemctl restart sshd.service
Failed to get D-Bus connection: Operation not permitted
[root@og-gluuapp01 /]#
```

By Applications Support Account Admin 01 Jan 2024 at 12:30 p.m. CST

Kindly find the log attached.

By Applications Support Account Admin 01 Jan 2024 at 12:33 p.m. CST

Dear Zico, kindly find below the logs from the second server:

```
Last login: Fri Dec 29 16:53:00 WAT 2023 on pts/0
[root@og-gluuapp02 ~]# ls -l /opt/gluu-server/etc/ssh
total 612
-rwxr-xr-x 1 root root  581843 Aug  3  2022 moduli
-rwxr-xr-x 1 root root    2276 Aug  3  2022 ssh_config
-rwxr-xr-x 1 root root    4362 Aug  3  2022 sshd_config
-rwxr-xr-x 1 root root    3907 Aug  3  2022 sshd_config.rpmnew
-rwxr-xr-x 1 root input    227 Jul 25 15:42 ssh_host_ecdsa_key
-rwxr-xr-x 1 root root     162 Jul 25 15:42 ssh_host_ecdsa_key.pub
-rwxr-xr-x 1 root input    387 Jul 25 15:42 ssh_host_ed25519_key
-rwxr-xr-x 1 root root      82 Jul 25 15:42 ssh_host_ed25519_key.pub
-rwxr-xr-x 1 root input   1679 Jul 25 15:42 ssh_host_rsa_key
-rwxr-xr-x 1 root root     382 Jul 25 15:42 ssh_host_rsa_key.pub
[root@og-gluuapp02 ~]# netstat -plunt | grep 60022
tcp        0      0 127.0.0.1:60022    0.0.0.0:*    LISTEN    69177/sshd
[root@og-gluuapp02 ~]# /sbin/gluu-serverd stop
[root@og-gluuapp02 ~]# chroot /opt/gluu-server
[root@og-gluuapp02 /]# service sshd restart
cat: /proc/cmdline: No such file or directory
Redirecting to /bin/systemctl restart sshd.service
Failed to get D-Bus connection: Operation not permitted
[root@og-gluuapp02 /]#
```

By Applications Support Account Admin 01 Jan 2024 at 12:35 p.m. CST

Hi Zico, kindly find attached the logs from the second server.

By Mohib Zico staff 01 Jan 2024 at 10:51 p.m. CST

Thanks. Please change the permissions of the three keys as below:

- `cd /opt/gluu-server/etc/ssh`
- `chmod 600 ssh_host_ecdsa_key`
- `chmod 600 ssh_host_ed25519_key`
- `chmod 600 ssh_host_rsa_key`

Then stop the gluu-server container and start it again. After that, try the command below and please share the full output:

`/sbin/gluu-serverd login`
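
Equivalently, as one copy-pasteable block (run on the host, not inside the container); the comment reflects why the fix works, based on the 755 permissions visible in the listings above:

```
# sshd refuses to load private host keys that are group/world-readable,
# which appears to be why container login was failing here.
cd /opt/gluu-server/etc/ssh
for key in ssh_host_ecdsa_key ssh_host_ed25519_key ssh_host_rsa_key; do
    chmod 600 "${key}"
done
/sbin/gluu-serverd stop
/sbin/gluu-serverd start
/sbin/gluu-serverd login
```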

By Applications Support Account Admin 02 Jan 2024 at 4:54 a.m. CST

Kindly find below for the first server:

```
[aomolade.vi-computejmp5] ➤ ssh 10.33.33.37
********************************************************************
*  This system is restricted to authorized users only. Usage of
*  this system may be monitored and recorded by system personnel.
*
*  All information generated by, processed by or stored on this
*  information system is the property of NIBSS.
*
*  Anyone using this system expressly consents to such monitoring
*  and is advised that if such monitoring reveals possible evidence
*  of criminal activity, system personnel may provide such
*  evidences to law enforcement officials.
********************************************************************
X11 forwarding request failed on channel 0
Last login: Tue Jan  2 11:48:16 2024 from 192.168.201.75
[aomolade@og-gluuapp01 ~]$ sudo su -
Last login: Mon Jan  1 19:23:34 WAT 2024 on pts/1
[root@og-gluuapp01 ~]# cd /opt/gluu-server/etc/ssh
[root@og-gluuapp01 ssh]# chmod 600 ssh_host_ecdsa_key
[root@og-gluuapp01 ssh]# chmod 600 ssh_host_ed25519_key
[root@og-gluuapp01 ssh]# chmod 600 ssh_host_rsa_key
[root@og-gluuapp01 ssh]# /sbin/gluu-serverd stop
[root@og-gluuapp01 ssh]# /sbin/gluu-serverd start
[root@og-gluuapp01 ssh]# /sbin/gluu-serverd login
Last login: Wed Dec 20 10:11:03 2023 from localhost
Welcome to the Gluu Server!
[root@id ~]#
```

By Mohib Zico staff 02 Jan 2024 at 4:57 a.m. CST

```
[root@og-gluuapp01 ssh]# /sbin/gluu-serverd login
Last login: Wed Dec 20 10:11:03 2023 from localhost
Welcome to the Gluu Server!
[root@id ~]#
```

Good! Your container issue is fixed. We will do the rest in today's call.

By Applications Support Account Admin 02 Jan 2024 at 4:58 a.m. CST

Kindly find below for the second server:

```
[aomolade.vi-computejmp5] ➤ ssh 10.33.33.38
********************************************************************
*  This system is restricted to authorized users only. Usage of
*  this system may be monitored and recorded by system personnel.
*
*  All information generated by, processed by or stored on this
*  information system is the property of NIBSS.
*
*  Anyone using this system expressly consents to such monitoring
*  and is advised that if such monitoring reveals possible evidence
*  of criminal activity, system personnel may provide such
*  evidences to law enforcement officials.
********************************************************************
X11 forwarding request failed on channel 0
Last login: Mon Jan  1 19:16:02 2024 from 192.168.201.75
[aomolade@og-gluuapp02 ~]$ sudo su -
Last login: Mon Jan  1 19:30:28 WAT 2024 on pts/0
[root@og-gluuapp02 ~]# cd /opt/gluu-server/etc/ssh
[root@og-gluuapp02 ssh]# chmod 600 ssh_host_ecdsa_key
You have new mail in /var/spool/mail/root
[root@og-gluuapp02 ssh]# chmod 600 ssh_host_ed25519_key
[root@og-gluuapp02 ssh]# chmod 600 ssh_host_rsa_key
[root@og-gluuapp02 ssh]# /sbin/gluu-serverd stop
[root@og-gluuapp02 ssh]# /sbin/gluu-serverd stop
You have mail in /var/spool/mail/root
[root@og-gluuapp02 ssh]# /sbin/gluu-serverd start
[root@og-gluuapp02 ssh]# /sbin/gluu-serverd login
Last login: Wed Dec 20 10:52:35 2023 from localhost
Welcome to the Gluu Server!
[root@id ~]#
```

By Mohib Zico staff 02 Jan 2024 at 5 a.m. CST

```
[root@og-gluuapp02 ssh]# /sbin/gluu-serverd login
Last login: Wed Dec 20 10:52:35 2023 from localhost
Welcome to the Gluu Server!
[root@id ~]#
```

The container login issue is fixed on server 2 / Site 2 as well. Now we will need Donovan to access Cluster Manager, and from there we will try to fix the replication issue.

By Mohib Zico staff 02 Jan 2024 at 7:02 a.m. CST

Guys, where are we going to meet: our Zoom chat or your Microsoft Teams page?

By Applications Support Account Admin 02 Jan 2024 at 7:04 a.m. CST

We are in Zoom waiting to be admitted.

By Mohib Zico staff 02 Jan 2024 at 7:54 a.m. CST

I just sent an invitation for Friday; please check whether the timing is okay for you all.

By Aliaksandr Samuseu staff 04 Jan 2024 at 6:53 a.m. CST

Hi, everybody. Here is our plan for Friday.

1. Stop all incoming traffic to all cluster nodes (we'll hopefully have a 2-hour maintenance window arranged, as previously requested in the call).
2. Create a backup of one of the current, fully functional nodes that has the most complete data in it. We could try to estimate which one that is using the output of the `# dsreplication status` command, or with a command like this executed on each node:
   `# /opt/opendj/bin/ldapsearch -T -h 127.0.0.1 -p 1636 -Z -X -s sub -D 'cn=directory manager' -b 'o=gluu' -j /tmp/.dpw '&(objectclass=gluuPerson)' 1.1 | grep -v '^$' | wc -l`
3. Stop replication on all still-functioning nodes with:
   `# /opt/opendj/bin/dsreplication disable --disableAll -I 'admin' -w 'REPLICATION_ADMIN_PASS' --trustAll --no-prompt`
4. Decommission nodes 3 and 4, where OpenDJ isn't able to start (either remove the Gluu Server package from them manually and restart them, or, even better, spin up two fresh VMs of the same size).
5. Re-enable replication between the two still-running nodes (nodes 1 and 2). Hopefully we'll be able to achieve that with Cluster Manager's UI, but just in case I'm dropping the manual commands in here too:
   - `# /opt/opendj/bin/dsreplication enable -I 'admin' -w 'REPLICATION_ADMIN_PASS' -b 'o=gluu' -h hostname.or.ip.of.this.node -p 4444 -D 'cn=directory manager' --bindPassword1 'LDAP_PASS_OF_INSTANCE_ON_THIS_NODE' -r 8989 -O hostname.or.ip.of.the.other.node --port2 4444 --bindDN2 'cn=directory manager' --bindPassword2 'LDAP_PASS_OF_INSTANCE_ON_THE_OTHER_NODE' -R 8989 --secureReplication1 --secureReplication2 -X -n`
   - `# /opt/opendj/bin/dsreplication initialize --baseDN "o=gluu" --adminUID admin -w 'REPLICATION_ADMIN_PASS' --hostSource 172.31.90.19 --portSource 4444 --hostDestination 172.31.27.183 --portDestination 4444 -X -n`
6. Plan A: we could stop here and let it run for a couple of days, to make sure it's stable with just two nodes, before adding more.
7. Plan B: add the other two nodes, one by one, using CM's web UI (Gluu Server will be installed there at the same time); that could be done without additional downtime windows if done later.
8. Put the nodes behind the LB again and do some smoke testing of the most critical flows.

To create the backup we will export all data from under the "o=gluu" branch ("o=site" as well, if Cache Refresh is used there) with `export-ldif` or `ldapsearch`; a sketch follows below.
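
For step 2, the backup export might look like the sketch below. The `userRoot` backend ID is the usual default but is an assumption here; confirm it first with `/opt/opendj/bin/list-backends`:

```
# Offline export: run while OpenDJ is stopped on the chosen node.
/opt/opendj/bin/export-ldif \
    --backendID userRoot \
    --includeBranch "o=gluu" \
    --ldifFile /tmp/gluu-backup-$(date +%F).ldif
# Repeat with --includeBranch "o=site" if Cache Refresh is in use.
```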

By Applications Support Account Admin 04 Jan 2024 at 7:07 a.m. CST

Thank you.

By Applications Support Account Admin 05 Jan 2024 at 8:45 a.m. CST

This has been approved for implementation. However, unfortunately, a new date was approved for execution: Tuesday, the 9th of January 2024, at the same time. We await your feedback.

By Mohib Zico staff 05 Jan 2024 at 10:18 a.m. CST

Sure, fine with us.

By Mohib Zico staff 14 Jan 2024 at 9:22 p.m. CST

**Status**

All servers were restored on last Friday's call (Jan 12, 2024); they are operational now.

We need another cleanup in that cluster. If I remember correctly, NIBSS mentioned that they have around 4,000 real users in Gluu production, but we saw vastly more entries under `o=gluu`, so there must be unused data/sessions that need to be cleaned up. Otherwise they will continuously hammer OpenDJ and you will see another outage pretty soon. Thanks!

By Aliaksandr Samuseu staff 15 Jan 2024 at 8:30 a.m. CST

Thanks, Zico. Correct, we were able to bring the cluster up. In addition to the above, a brief summary of what was done there, per the NIBSS team's request from the last session:

1. We started with two nodes still running and replicating.
2. The inability to log in to the still-running container was solved by restoring the file system's default permissions on SSHD's keys and config files.
3. On the two other nodes, which were broken beyond repair (cause still unknown), we removed the Gluu Server package and installed a new one from Cluster Manager's web UI.
4. On the two new nodes, replication was then configured to couple them with the two running nodes.

A quick analysis of the LDAP db conducted during the last session showed about 1.7 million entries, about half of which are under the "ou=people" branch and appear to be user entries. The other half is probably temporary, transient entries which should be purged on a regular basis, but probably aren't. Mustafa will try to find the script we used in a similar case previously, to purge them on demand.

By Aliaksandr Samuseu staff 23 Jan 2024 at 10:31 a.m. CST

Hi. Just a quick status update (sorry for being silent for a while): we have the cleanup script we were talking about in the call. After a few more quick tests we'll share it with you. Meanwhile, how is the situation on your side? Could you update us on the current state of the setup? Sharing the current replication state (from CM's web UI, or the output of the `# dsreplication status` command, spelled out below) would be enough for now.
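
For reference, a fully spelled-out form of that status command; the flags are the standard OpenDJ ones, and the admin password placeholder is yours to substitute:

```
# Run inside the container on any replicating node:
/opt/opendj/bin/dsreplication status \
    -h localhost -p 4444 \
    -I admin -w 'REPLICATION_ADMIN_PASS' \
    -X -n    # trust all certificates, no interactive prompts
```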

By Aliaksandr Samuseu staff 23 Jan 2024 at 5:35 p.m. CST

Here is the script, as promised (attached). You'll need to upload it inside the container and make it executable (`# chmod +x`). Then you'll be able to run it like this:

`# ./clean_tokens.py -ldap_bind_pw 'LDAP_ADMIN_PASS'`

This way it will use the defaults for host and bind user ("localhost" and "cn=Directory Manager"), so you'll only need to provide the password. It should clean all tokens with an "exp" attribute in the past that are marked as deletable, and will also delete the corresponding session entries.

Please consider arranging a maintenance window during which to run the script: as you presumably have a lot of token entries, it may generate significant load on your LDAP servers, due to replication propagating all these changes.

After it completes, please provide the following data to us:

1. The current output of the `dsreplication status` command
2. The output of the next two commands (this assumes you put your LDAP password into `/tmp/.dpw`):
   - `# /opt/opendj/bin/ldapsearch -h 127.0.0.1 -p 1636 -Z -X -s base -D 'cn=directory manager' -b 'ou=tokens,o=gluu' -j /tmp/.dpw '&(objectclass=*)' numsubordinates`
   - `# /opt/opendj/bin/ldapsearch -h 127.0.0.1 -p 1636 -Z -X -s base -D 'cn=directory manager' -b 'ou=sessions,o=gluu' -j /tmp/.dpw '&(objectclass=*)' numsubordinates`
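
A small wrapper combining the cleanup run and the follow-up checks above; a sketch only, which assumes the attached script name and `-ldap_bind_pw` flag as given, and adds the password-file handling:

```
#!/bin/sh
# Run inside the container during the maintenance window.
printf '%s' 'LDAP_ADMIN_PASS' > /tmp/.dpw && chmod 600 /tmp/.dpw

./clean_tokens.py -ldap_bind_pw "$(cat /tmp/.dpw)"

# How many entries remain under the tokens and sessions branches?
for branch in 'ou=tokens,o=gluu' 'ou=sessions,o=gluu'; do
    /opt/opendj/bin/ldapsearch -h 127.0.0.1 -p 1636 -Z -X -s base \
        -D 'cn=directory manager' -b "${branch}" -j /tmp/.dpw \
        '&(objectclass=*)' numsubordinates
done
```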

By Aliaksandr Samuseu staff 31 Jan 2024 at 5:48 a.m. CST

Hi. Any news worth reporting? Have you had a chance to run the script?

By Aliaksandr Samuseu staff 07 Feb 2024 at 4:15 p.m. CST

**Status update:** During the last call we were able to narrow the cause down to a Redis server failure. The NIBSS team insisted that no further action be taken during the call; instead, they decided to work on restoring the previous configuration with the team responsible for that machine.

By Mohib Zico staff 16 Feb 2024 at 7:42 a.m. CST

**Status**

No status update from NIBSS. We are going to close this ticket for now. Please feel free to reopen it if required. Thanks!

By Bob Sirwaitis Account Admin 23 Apr 2024 at 4:11 p.m. CDT

Simileoluwa - I have created a renewal invoice and you should have received it in a preceding email. I have also attached a PDF copy to this email thread. Please let me know if you have any questions. Thanks!