By: Shikha Mishra Account Admin 30 Aug 2017 at 7:38 a.m. CDT

10 Responses
How many Gluu Server nodes does Cluster Manager support? Is there a particular limit, like 2, 4 or 8, or can it be any number of nodes? I found https://gluu.org/docs/cm/ as the document for Cluster Manager configuration. Is this the correct document? If we choose not to set up a cluster and use only a load balancer for multiple Gluu Server nodes, what are the steps to enable replication between the OpenLDAP instances of the multiple Gluu Server nodes?

By Aliaksandr Samuseu staff 30 Aug 2017 at 10:27 a.m. CDT

Hi, Shikha. At the moment our docs cover only the case of 2 nodes in mirror mode; that's what we have tested.

By Aliaksandr Samuseu staff 30 Aug 2017 at 10:31 a.m. CDT

Regarding this part:

> If we choose not to set up a cluster and use only a load balancer for multiple Gluu Server nodes, what are the steps to enable replication between the OpenLDAP instances of the multiple Gluu Server nodes?

Community support doesn't usually cover non-standard and complex scenarios. You can check Cluster Manager's docs for all the steps needed to prepare it for replication, then check the logs it generates when following the standard procedure to see exactly what is performed on the nodes to set up replication. That may help you understand how to set it up with more nodes. Its sources are also available on GitHub, for more thorough study.
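For orientation, the heart of what Cluster Manager configures on each node is OpenLDAP mirror-mode replication. Below is a minimal sketch of the kind of slapd.conf fragment involved; the provider URL, bind DN and credentials are placeholders, and the exact values Gluu uses will show up in the Cluster Manager logs mentioned above:

```
# Sketch of the additions on node 1; all values are placeholders
serverID 1

# Pull changes from the other mirror
syncrepl rid=001
  provider=ldaps://node2.example.org:1636
  bindmethod=simple
  binddn="cn=directory manager,o=gluu"
  credentials=<replication password>
  searchbase="o=gluu"
  type=refreshAndPersist
  retry="60 +"

mirrormode on
```

Node 2 would carry the mirror image of this (serverID 2, provider pointing back at node 1), and every extra writeable node would add one more syncrepl stanza per peer on each node, which is exactly the part the tested two-node procedure doesn't cover yet.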

By Vipin Jain named 31 Aug 2017 at 2:56 a.m. CDT

Thanks for the update. Can you please let us know what is recommended on the HA side, as we are looking at potentially having millions of users accessing the system. Thanks

By Aliaksandr Samuseu staff 31 Aug 2017 at 12:38 p.m. CDT

No special approach is needed for Gluu in this regard, as it's still a regular Java-based web service, so the usual conditions apply. Internally, Gluu consists of several JVMs run by the Jetty engine, plus an OpenLDAP server (in a clustered setup the LDAP servers on the different nodes are bound by replication), so whatever HA best practices exist for both of them can be used, including better hardware (more memory, faster CPUs and disk drives).

On the other hand, high availability is a complex subject with a lot of pitfalls and peculiarities unique to each specific environment, so without proper benchmarking/load testing it's impossible to predict what bottlenecks your setup will have or to suggest something in particular. For tasks like this we usually recommend getting help from one of our partners dealing with system integration. For our customers we may also lend a helping hand with our internal resources (depending on the type of support plan and the complexity of the task).
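To make the "better hardware" point concrete: extra RAM only helps if the Jetty services' JVM heaps are raised to use it. A minimal sketch, assuming a CE 3.0.x layout where each Jetty service reads a defaults file under /etc/default/ inside the container; treat the exact path and values as assumptions to verify on your own install:

```
# Inside the Gluu container; the defaults path is an assumption for CE 3.0.x,
# so confirm it exists on your install before editing
vi /etc/default/oxauth
#   e.g. raise the maximum heap to use added RAM:
#   JAVA_OPTIONS="-server -Xms512m -Xmx2048m"
service oxauth restart
```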

By Aliaksandr Samuseu staff 31 Aug 2017 at 12:51 p.m. CDT

The number of Gluu nodes that can be used simultaneously shouldn't be limited by any obvious factor, except that the current Cluster Manager setup may not yet be ready to create more than two mirrors (writeable LDAP nodes), so the steps to add more nodes would have to be figured out manually. No actual testing of setups with more than 2 nodes has been conducted so far, though, so there is no 100% certainty. Feel free to share your results and any issues with us if you decide to take this road.

For your information, in Gluu CE clustered setups each node does not differ in any way (in its core code and functions) from a standalone Gluu CE node. You could say the Gluu nodes are not aware they are part of a cluster; nothing changes in the behavior of an individual node. oxAuth (Gluu's core component) is cluster-ready by design and uses LDAP heavily to store all its session data, so through LDAP replication all the other nodes get access to it. From their perspective, the sessions replicated from other nodes look exactly like sessions created locally, so session handoffs should happen without issues.

There may be additional difficulties if you intend to use SAML flows in your cluster, as those are handled by a third-party module (not oxAuth) and may need additional tweaking.
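If you do experiment with more nodes, a quick way to check that the mirrors have converged is to compare the contextCSN of the replicated suffix on every node; once replication has caught up, the values should match. A sketch, where host, port, bind DN and password are placeholders for your setup:

```
# Run against each LDAP node and compare the output; matching contextCSN
# values mean the mirrors are in sync (connection details are placeholders)
ldapsearch -H ldaps://node1.example.org:1636 \
  -D "cn=directory manager,o=gluu" -w <password> \
  -b "o=gluu" -s base contextCSN
```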

By Shikha Mishra Account Admin 05 Sep 2017 at 6:25 a.m. CDT

Can Gluu Cluster Manager be installed on CentOS 7? The document mentions only Ubuntu 14.04 (Trusty) and Ubuntu 16.04 (Xenial).

By Chris Blanton user 05 Sep 2017 at 9:46 a.m. CDT

Shikha, The cluster-manager alpha program is still early in development, but we've developed a manual method that should work for your needs. Please see the documentation here: https://github.com/GluuFederation/cluster-mgr/tree/master/manual_install

By Shikha Mishra Account Admin 07 Sep 2017 at 1:56 a.m. CDT

Hi Chris,

OS detail: CentOS Linux release 7.2.1511. Gluu Server version: 3.0.1.

In step 3 of the document (https://github.com/GluuFederation/cluster-mgr/tree/master/manual_install), we are facing an issue with Gluu Server startup:

```
Job for systemd-nspawn@gluu_server_3.0.1.service failed because the control process exited with error code.
See "systemctl status systemd-nspawn@gluu_server_3.0.1.service" and "journalctl -xe" for details.
```

```
● systemd-nspawn@gluu_server_3.0.1.service - Container gluu_server_3.0.1
   Loaded: loaded (/usr/lib/systemd/system/systemd-nspawn@gluu_server_3.0.1.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2017-09-07 02:51:49 EDT; 3min 45s ago
     Docs: man:systemd-nspawn(1)
  Process: 9736 ExecStart=/usr/bin/systemd-nspawn --quiet --boot --link-journal=try-guest --machine=%I (code=exited, status=1/FAILURE)
 Main PID: 9736 (code=exited, status=1/FAILURE)
   Status: "Terminating..."

Sep 07 02:51:49 localhost.localdomain systemd[1]: Starting Container gluu_server_3.0.1...
Sep 07 02:51:49 localhost.localdomain systemd-nspawn[9736]: No image for machine 'gluu_server_3.0.1': No such file or directory
Sep 07 02:51:49 localhost.localdomain systemd[1]: systemd-nspawn@gluu_server_3.0.1.service: main process exited, code=exited, status=1/FAILURE
Sep 07 02:51:49 localhost.localdomain systemd[1]: Failed to start Container gluu_server_3.0.1.
Sep 07 02:51:49 localhost.localdomain systemd[1]: Unit systemd-nspawn@gluu_server_3.0.1.service entered failed state.
Sep 07 02:51:49 localhost.localdomain systemd[1]: systemd-nspawn@gluu_server_3.0.1.service failed.
```

By Chris Blanton user 07 Sep 2017 at 2:27 p.m. CDT

Shikha,

I tried to replicate the process with 2 CentOS 7 servers, and the only issue I had was logging in to the Gluu container. I needed the same keys from /etc/gluu/keys/ on the first server, since the install was tied to them: the login function of `/sbin/gluu-serverd-3.0.1 login` uses SSH with a key pair to connect to the Gluu container on the local host, at least on CentOS. I'll add this to the documentation.

Now, as far as your issue: did you make sure to run `# /sbin/gluu-serverd-3.0.1 stop` before `tar -cvf gluu.gz /opt/gluu-server-3.0.1/`?

If so, make sure your gluu.gz unpacked properly. The file structure should be /opt/gluu-server-3.0.1/, but I've accidentally unpacked it as /opt/opt/gluu-server-3.0.1/, and then it wasn't able to find the proper directory. This seems to match your issue:

`Sep 07 02:51:49 localhost.localdomain systemd-nspawn[9736]: No image for machine 'gluu_server_3.0.1': No such file or directory`

I even reproduced the mistake on purpose and received the same error. Let me know if this fixes your issue.
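To put the whole sequence in one place, here's a sketch of the move using the commands above (how you copy the archive between hosts is up to you; the key point is extracting from / so the archive's opt/ prefix lands in the right place):

```
# On the source node: stop the container before archiving it
/sbin/gluu-serverd-3.0.1 stop
# tar strips the leading "/", storing paths as opt/gluu-server-3.0.1/...
tar -cvf gluu.gz /opt/gluu-server-3.0.1/

# Copy gluu.gz to the target node (scp, rsync, etc.), then on the target:
cd /                             # extract from the root so paths land under /opt
tar -xvf gluu.gz
ls -d /opt/gluu-server-3.0.1     # should exist; /opt/opt/... means a wrong extract dir

# Also copy /etc/gluu/keys/ from the source node so the login wrapper's
# SSH key pair matches the install
```

-Chris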

By Chris Blanton user 07 Sep 2017 at 2:50 p.m. CDT

Looking at the 3.0.1 replication documentation, I realized I didn't update it properly. It should now direct the user to the proper locations to send and extract the gluu.gz file.