By marc caron user 05 Oct 2016 at 10:31 a.m. CDT

31 Responses
Hello, so we've been struggling with the initial clustering setup, following the directions [here](https://www.gluu.org/docs/cluster/) and reviewing this [ticket](https://support.gluu.org/installation/dsreplication-initialize-fails-3027/) as well. This is a new install. The first box, set up with Cache Refresh, all went very well. The second box install with the setup file and so on, per the directions from the link above, also seemed fine. I can't get past:

> "Run ldapGeneralConfigInstall.py in host-1. This script will prepare the host-1 LDAP server to accept various configurations such as allow-pre-encoded-passwords or applying the host and port for LDAP Server."

First off: as far as I can tell, the ldapGeneralConfigInstall.py and replicationSetup.py scripts are NOT included with the app, so I created them manually from the links on the config page. When running ldapGeneralConfigInstall.py, all I get is the output below. The password used is the one dumped out as the admin password at install. This is not the first attempt to run this file, so it's no longer asking for machine IPs or anything.

```
[root@op-dev ~]# python ./ldapGeneralConfigInstall.py
Password for 'cn=Directory Manager':
Setting Global properties...
Unable to connect to the server at "localhost" on port 4444
Setting Default Password Policy properties...
Unable to connect to the server at "localhost" on port 4444
[root@op-dev ~]#
```

There's also this comment in the referenced ticket:

> "When you will supply your replication nodes to script, use ip addresses, not dns names"

I believe I used DNS names the first time :-/ It was over a week ago now and I didn't save the results, so I don't have good info. Where is that stored, so I could check/change it? I also now get the error below when trying to run dsreplication manually, as shown in the support ticket mentioned earlier:

```
The provided credentials are not valid in server 127.0.0.1:4444.
Details: [LDAP: error code 49 - Invalid Credentials]
```

Is there a way to "fix" this without having to dump and re-install?

Random bits: Tomcat is set to 3 GB on this, the NON-production instance. We'll probably up that to 6 GB on the prod box.

```
# Initial Java Heap Size (in MB)
wrapper.java.initmemory=2048
# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=3072
```

Other testing of connecting to the internal LDAP (I was going to attempt logging into the second host as admin, so I was looking at how to revert back to internal LDAP auth, per [this](https://support.gluu.org/installation/revert-back-to-default-auth-module-1998/) ticket):

```
[root@op-dev ~]# /opt/opendj/bin/ldapsearch -h localhost -p 1389 -D "cn=directory manager" -j ~/.pw -b "ou=appliances,o=gluu" -s one "objectclass=*" oxAuthenticationMode
Connect Error
Result Code: 91 (Connect Error)
[root@op-dev ~]#
```

I'm honestly thinking this may be the result of a messed-up password. When ldapGeneralConfigInstall.py asks for the password on its initial run, is it *creating* a password or just using one? It's very unclear to me, and if it's creating one I may have typed it wrong, which I'm guessing would have screwed up the whole thing. Thanks!
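In case it helps, here are the quick checks I've been running first, just to see whether anything is even listening on the admin port (tool availability varies by install; `netstat` may need the net-tools package):

```shell
# Quick sanity checks before re-running the script (run inside the container).
# If nothing is listening on 4444, the script's connection errors are expected.
netstat -tlnp 2>/dev/null | grep 4444 || echo "nothing listening on 4444"
# Is the OpenDJ java process up at all?
ps -ef | grep '[o]pendj' || echo "no opendj process found"
```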

By Mohib Zico staff 05 Oct 2016 at 10:39 a.m. CDT

Hi Marc, please allow me to ask a quick question before we move on to replication: have you configured the 2nd node of the cluster on CentOS 7.2?

By marc caron user 05 Oct 2016 at 1:05 p.m. CDT

The [Preparing VMs](https://www.gluu.org/docs/cluster/csync-installation/) section of the clustering documentation was applied on host-2. Cache Refresh was set to disabled and 255.255.255.255 before host-1 was shut down and setup.properties.last was copied off. No other modifications have been done on host-2.

One question here: csync2 was set up on the host OS, not inside the Gluu container. The documentation is unclear about which level it should be at. We actually found it confusing in many parts of the instructions that it's not specified which actions are host-level vs. container-level. We assume most things are container-level, but the csync2 setup seems host-level.

By Mohib Zico staff 05 Oct 2016 at 1:40 p.m. CDT

> Preparing VMs section in the Clustering documentation was applied on host-2. Cache Refresh was set to disabled and 255.255.255.255 before host-1 was shut down and setup.properties.last copied off. No other modifications have been done on host-2.

Can you please check one thing for me? See if the data are identical on the two machines. You can compare the data this way:

- Take an ldif backup of 'o=gluu' from node1; output it to backup_node1.ldif.
- Do the same for node2.
- Run a 'diff' between the two files.

> One question here. csync2 was setup on the Host OS not inside the Gluu container.

The first bullet point is: _1. Log into Gluu-Server container_. :-)

> We found that actually confusing on many parts of the instructions that it's not specified what is host level vs container level actions. We assume most things are container level. But the csync2 setup seems a host level.

Created a GitHub [issue](https://github.com/GluuFederation/docs/issues/109) for the enhancement.
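The comparison could look roughly like this. The `export-ldif` flags are assumptions based on a default Gluu-OpenDJ 3.x layout (the 'o=gluu' data in backend 'userRoot'); adjust for your install:

```shell
# Rough sketch of the comparison, on node1 as the 'ldap' user inside the
# container. If the server is running you may need the task-based form of
# export-ldif with connection options instead of the offline form shown here.
/opt/opendj/bin/export-ldif --backendID userRoot --ldifFile /tmp/backup_node1.ldif

# Repeat on node2 (writing /tmp/backup_node2.ldif), copy both files to one
# box, then compare:
diff /tmp/backup_node1.ldif /tmp/backup_node2.ldif | head -50
```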

By Mohib Zico staff 05 Oct 2016 at 5:17 p.m. CDT

> [root@op-dev ~]# python ./ldapGeneralConfigInstall.py
> Password for 'cn=Directory Manager':
> Setting Global properties...
> Unable to connect to the server at "localhost" on port 4444
> Setting Default Password Policy properties...
> Unable to connect to the server at "localhost" on port 4444
> [root@op-dev ~]#

Another thing: you need to run these scripts as user 'ldap' from inside the container. Here is the test result:

```
[ldap@clustertest ~]$ chmod +x ldapGeneral.py
[ldap@clustertest ~]$ python ldapGeneral.py
Password for 'cn=Directory Manager':
Setting Global properties...
Setting Default Password Policy properties...
[ldap@clustertest ~]$ exit
logout
[root@clustertest opt]# cat /etc/*-release
CentOS Linux release 7.2.1511 (Core)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
CentOS Linux release 7.2.1511 (Core)
CentOS Linux release 7.2.1511 (Core)
```

By marc caron user 06 Oct 2016 at 11:19 a.m. CDT

Hello. I should be able to focus on this more today.

ldifs: definitely different. node1 is 23M, node2 is 1M. So it would seem nothing actually copied over, though of course there are some similarities between them.

I assume the below is the ldapGeneralConfigInstall.py file. Execution below:

```
[ldap@op-dev ~]$ python ./ldapGeneralConfigInstall.py
Password for 'cn=Directory Manager':
Setting Global properties...
An error occurred while parsing the command-line arguments:
Argument "single-structural-objectclass-behavior:accept" does not start with one or two dashes and unnamed trailing arguments are not allowed
See "dsconfig --help" to get more usage help
Setting Default Password Policy properties...
An error occurred while parsing the command-line arguments:
Argument "Default Password Policy" does not start with one or two dashes and unnamed trailing arguments are not allowed
See "dsconfig --help" to get more usage help
[ldap@op-dev ~]$
```

By Mohib Zico staff 06 Oct 2016 at 11:47 a.m. CDT

Hi Marc,

> An error occurred while parsing the command-line arguments: Argument "single-structural-objectclass-behavior:accept" does not start with one or two dashes and unnamed trailing arguments are not allowed

Can you please check whether there is an unexpected space somewhere in the 'ldapGeneralConfigInstall.py' file you are running? As you can see in our previous output, it ran smoothly.
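For comparison, the kind of dsconfig call the script wraps looks like this when it parses cleanly; if a line-continuation backslash has a trailing space after it, the next token arrives as a bare argument and produces exactly that error. The connection flags below are assumptions for a default install:

```shell
# Hypothetical reconstruction of the call the script makes. A space after a
# trailing '\' breaks the continuation and causes the parse error above.
/opt/opendj/bin/dsconfig set-global-configuration-prop \
  --set single-structural-objectclass-behavior:accept \
  --hostname localhost --port 4444 \
  --bindDN "cn=directory manager" -j ~/.pw \
  --trustAll --no-prompt
```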

By marc caron user 06 Oct 2016 at 1:19 p.m. CDT

Fixed that, but now I'm back to the port 4444 error:

```
[ldap@op-dev ~]$ python ./ldapGeneralConfigInstall.py
Password for 'cn=Directory Manager':
Setting Global properties...
Unable to connect to the server at "localhost" on port 4444
Setting Default Password Policy properties...
Unable to connect to the server at "localhost" on port 4444
```

By Mohib Zico staff 06 Oct 2016 at 1:32 p.m. CDT

Marc, let's have a call tomorrow afternoon sometime, or Monday? I think we will be interested in taking a look at your configuration. You can book your call [here](http://www.gluu.org/book-support/).

By marc caron user 06 Oct 2016 at 2:46 p.m. CDT

I created one for tomorrow @ 1300 CST but haven't received any email about it. I probably mistyped my email or something.

By Mohib Zico staff 06 Oct 2016 at 2:51 p.m. CDT

Ok, I just sent an invitation to the 'marc.caron@pioneer.com' email address.

By marc caron user 07 Oct 2016 at 8:14 a.m. CDT

I had an idea this morning about what may be the cause. Because of how our proxy is set up, and the fact that I missed the comment to use IPs rather than DNS names, I likely need to modify opendj or the file-sync tool to bypass the proxy for our internal domains. I had to do the same for Catalina, but I'm used to dealing with that for Catalina because of some other tools we have that need the same. So how do I set proxy settings for csync2? Or change the config to use IPs instead of DNS? It might save us a call later.
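For reference, the Catalina-side bypass I mentioned is just the standard JVM proxy-exclusion property; the hostname pattern below is a placeholder for our internal domains:

```shell
# Standard JVM proxy-bypass flag (hostname pattern is a placeholder).
# Patterns are pipe-separated, '*' wildcards are allowed, and this single
# http.nonProxyHosts property also covers https connections.
JAVA_OPTS="$JAVA_OPTS -Dhttp.nonProxyHosts=localhost|*.internal.example.com"
export JAVA_OPTS
```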

By Mohib Zico staff 07 Oct 2016 at 9:11 a.m. CDT

Hi Marc, if you want to change to IPs instead of hostnames, it's still better to move to a reinstallation. And one more thing: you need root access to the VM where you install your Gluu Server. I saw your comment about that in the meeting's agenda.

By marc caron user 07 Oct 2016 at 9:13 a.m. CDT

So you're thinking that fixing the clustering will require a reinstall? I can't wipe what was set so far and reconfigure just the clustering aspects? The single host is working fine and was obviously set up with the "public" DNS of the load balancer. But for the clustering I used machine DNS names instead of IPs, thinking I was being smart ;)

By Mohib Zico staff 07 Oct 2016 at 9:16 a.m. CDT

> So you're thinking to fix the clustering it'll require a reinstall? i can't wipe what it set so far and reconfigure the clustering aspects?

Yes, that's better. It's definitely possible to change the configuration, but that requires a steep learning curve. :)

> But for the clustering I used machine dns names instead of ip thinking I was being smart ;)

I think on the call we can share some prerequisites for a clustering setup; they might be helpful for your next journey. :-)

By marc caron user 07 Oct 2016 at 9:19 a.m. CDT

Sounds good. Will chat with you later then.

By Mohib Zico staff 07 Oct 2016 at 5:51 p.m. CDT

Marc, my apologies for the delayed response. Here are a couple of things we can check.

- `java -version`

```
Version from my setup:
GLUU.root@test:~# java -version
java version "1.7.0_95"
OpenJDK Runtime Environment (IcedTea 2.6.4) (7u95-2.6.4-0ubuntu0.14.04.2)
OpenJDK 64-Bit Server VM (build 24.95-b01, mixed mode)
GLUU.root@test:~#
```

- ldapsearch to localhost:4444

```
My sample command....
ldap@test:~$ /opt/opendj/bin/ldapsearch -h localhost -p 4444 -Z -X -D "cn=directory manager" -j /tmp/.pw -b 'o=gluu' -z 5 'objectclass=*'
dn: o=gluu
objectClass: organization
objectClass: top
o: gluu

dn: ou=appliances,o=gluu
objectClass: organizationalUnit
objectClass: top
ou: appliances
....
....
```

- The output of 'status' of my ldap server:

```
ldap@test:/opt/opendj/bin$ ./status

>>>> Specify OpenDJ LDAP connection parameters

Administrator user bind DN [cn=Directory Manager]:

Password for user 'cn=Directory Manager':

          --- Server Status ---
Server Run Status:        Started
Open Connections:         14

          --- Server Details ---
Host Name:                test.gluu.org
Administrative Users:     cn=directory manager
Installation Path:        /opt/opendj
Version:                  Gluu-OpenDJ 3.0.0-gluu
Java Version:             1.7.0_95
Administration Connector: Port 4444 (LDAPS)

          --- Connection Handlers ---
Address:Port : Protocol : State
-------------:----------:---------
--           : LDIF     : Disabled
0.0.0.0:1389 : LDAP     : Disabled
0.0.0.0:1636 : LDAPS    : Enabled
0.0.0.0:1689 : JMX      : Enabled
0.0.0.0:8080 : HTTP     : Disabled

          --- Data Sources ---
Base DN:     o=gluu
Backend ID:  userRoot
Entries:     2899
Replication:

Base DN:     o=site
Backend ID:  site
Entries:     2
Replication:

ldap@test:/opt/opendj/bin$
```

If ldapsearch on port 4444 doesn't work for you:

- Stop opendj with /opt/opendj/bin/stop-ds
- Clean all logs from /opt/opendj/logs/
- Start opendj with /opt/opendj/bin/start-ds
- Tarball all logs from /opt/opendj/logs and please share them with us.
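Those last four steps as one sequence (paths assume the default /opt/opendj layout; run as the 'ldap' user inside the container):

```shell
# Restart OpenDJ with a clean log directory so the fresh logs capture only
# the failure, then bundle them for sharing.
/opt/opendj/bin/stop-ds
rm -f /opt/opendj/logs/*
/opt/opendj/bin/start-ds
# ...reproduce the failure, then:
tar -czf /tmp/opendj-logs.tar.gz -C /opt/opendj logs
```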

By marc caron user 11 Oct 2016 at 4:43 p.m. CDT

So I'm finally getting back to this. Doing some quick checking on what you provided above.

**So as root...**

```
[root@op-dev ~]# java -versoin
Unrecognized option: -versoin
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
[root@op-dev ~]#
```

**As ldap...** a quick sanity check:

```
[ldap@op-dev ~]$ echo $PATH
/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/home/ldap/.local/bin:/home/ldap/bin
```

```
[ldap@op-dev ~]$ java -version
java version "1.7.0_95"
OpenJDK Runtime Environment (rhel-2.6.4.0.el7_2-x86_64 u95-b00)
OpenJDK 64-Bit Server VM (build 24.95-b01, mixed mode)
[ldap@op-dev ~]$
```

It doesn't appear to like the password for ldap:

```
[ldap@op-dev tmp]$ /opt/opendj/bin/ldapsearch -h localhost -p 4444 -Z -X -D "cn=directory manager" -j /tmp/.pw -b 'o=gluu' -z 5 'objectclass=*'
The simple bind attempt failed
Result Code: 49 (Invalid Credentials)
```

And the ldap status:

```
[ldap@op-dev bin]$ ./status

>>>> Specify OpenDJ LDAP connection parameters

Administrator user bind DN [cn=Directory Manager]:

Password for user 'cn=Directory Manager':

          --- Server Status ---
Server Run Status:        Started
Open Connections:         0

          --- Server Details ---
Host Name:                op-dev.phibred.com
Administrative Users:     cn=directory manager
Installation Path:        /opt/opendj
Version:                  Gluu-OpenDJ 3.0.0-gluu
Java Version:             <not available> (*)
Administration Connector: Port 4444 (LDAPS)

          --- Connection Handlers ---
Address:Port : Protocol : State
-------------:----------:---------
--           : LDIF     : Disabled
0.0.0.0:1389 : LDAP     : Disabled
0.0.0.0:1636 : LDAPS    : Enabled
0.0.0.0:1689 : JMX      : Enabled
0.0.0.0:8080 : HTTP     : Disabled

          --- Data Sources ---
Base DN:     o=gluu
Backend ID:  userRoot
Entries:     <not available> (*)
Replication:

Base DN:     o=site
Backend ID:  site
Entries:     <not available> (*)
Replication:

* Information only available if you provide valid authentication information
  when launching the status command.

[ldap@op-dev bin]$
```

So I'm going to say the password is messed up? I have the original admin password and that's what I've been trying to use; I don't know of another. Is that the password that "should" work? Also, I'm guessing we're at the point where we should blow it away and start over now?

By Mohib Zico staff 11 Oct 2016 at 4:59 p.m. CDT

Hi Marc,

> [root@op-dev ~]# java -versoin

Syntax error; it should be '-version'.

> /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/home/ldap/.local/bin:/home/ldap/bin

Where is that /home/ldap/.local/bin coming from?

> The simple bind attempt failed Result Code: 49 (Invalid Credentials)

You need to put your password in /tmp/.pw first. :-)

> Java Version: <not available> (*)
> Entries: <not available> (*)
> * Information only available if you provide valid authentication information when launching the status command.

Either OpenDJ is broken or the password is wrong.

> Also I'm guessing we're to a point where we should blow it away and start over now?

Yes, that's better. We added a few checkpoints to the [Cluster doc](https://gluu.org/docs/cluster/). But if you want, we can have an hour-long call sometime on Thursday or Friday? You'll just move forward with your new installation and I'll be there with you.
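For example, one way to create that file (the password string below is a placeholder for your real Directory Manager password):

```shell
# Write the Directory Manager password to the file that ldapsearch's -j
# option reads. printf avoids adding a trailing newline; the password here
# is a placeholder.
printf '%s' 'REPLACE-WITH-YOUR-PASSWORD' > /tmp/.pw
chmod 600 /tmp/.pw   # restrict it to the current (ldap) user
```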

By marc caron user 11 Oct 2016 at 5:11 p.m. CDT

> You need to put your password there in /tmp/.pw first. :-)

Haha, yes, I did that part :) I got the error the first time because the file wasn't there.

> versoin

Typos... yeah, I'm smart some days.

Ok, I'll get it prepped for a call Friday and get it scheduled. I'll need assistance from one of the admins as well, so I'll get that lined up. I'll also review the updated docs.

By Mohib Zico staff 11 Oct 2016 at 5:29 p.m. CDT

> Ok I'll get it prepped to do a call friday. I'll get it scheduled. I'll need assistance from one of the admins as well so will get that lined up. Also I'll review the updated docs.

Sure, let's team up and make this happen. For Friday, I am free almost all day other than 10:30 - 11:00 AM Central.

By Mohib Zico staff 13 Oct 2016 at 1:58 p.m. CDT

Hi Marc, received your booking info. One thing before we move forward with setting up the 2nd node and the cluster: do you think you can complete the base installation on node1? It will save some time. Just make sure the base installation on node1 is completed and that you can reach it and log in.

By marc caron user 13 Oct 2016 at 2:01 p.m. CDT

Yes, we're just discussing/working on that now. My plan is not to run the setup script before the call, so we would do that during the call as well; but both boxes would already have the base yum install done, and then stop there.

By Mohib Zico staff 13 Oct 2016 at 2:10 p.m. CDT

Ok. At least don't run setup.py on the 2nd node :-) We will do that on the call. Plus, I just increased the meeting time by 30 minutes, so it will be a 1-hour call.

By Mohib Zico staff 14 Oct 2016 at 4:13 p.m. CDT

**Csync2 with CentOS/RHEL 7.x**

- After performing [these](https://gluu.org/docs/cluster/csync-installation/#centos-7x) eight steps, go [here](https://gluu.org/docs/cluster/#csync2-installation).
- Run `make cert` inside the csync2 source directory (where you compiled csync2 from).
- Run `csync2 -k csync2.key`.
- Copy this key into /usr/local/etc/csync2/ on node1, and scp the file to node2, putting it in the same location (/usr/local/etc/csync2/) there. This is the key the nodes will use to recognize each other.
- Move forward from number 5 of [this](https://gluu.org/docs/cluster/#csync2-installation) doc.
- One note: make sure you give the proper path (/usr/local/sbin/csync2) in the configuration files. The doc was written for CentOS 6, and the examples there use the CentOS 6.x path (/usr/sbin/csync2).
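Put together, the key steps look roughly like this; the source directory and the node2 hostname below are placeholders for your environment, and this runs on node1:

```shell
# Generate the csync2 certificate and pre-shared key, then distribute the
# key to node2. Placeholder paths/hostnames; adjust for your setup.
cd /path/to/csync2-source            # where csync2 was compiled
make cert                            # generate the SSL certificate/key
csync2 -k csync2.key                 # generate the shared pre-shared key
cp csync2.key /usr/local/etc/csync2/
scp csync2.key idp2.example.org:/usr/local/etc/csync2/
```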

By marc caron user 17 Oct 2016 at 11:45 a.m. CDT

Hello. A question on step 5 of [this](https://gluu.org/docs/cluster/#csync2-installation) page. They are doing this:

```
192.168.6.1 idp.gluu.org idp1.gluu.org
192.168.6.2 idp2.gluu.org
```

But if we have a load balancer in front of this, shouldn't it be more like the below, assuming .3 is the IP of the load balancer?

```
192.168.6.1 idp1.gluu.org
192.168.6.2 idp2.gluu.org
192.168.6.3 idp.gluu.org
```

By Mohib Zico staff 17 Oct 2016 at 11:55 a.m. CDT

Hi Marc, the names 'idp1.gluu.org' and 'idp2.gluu.org' are only for csync2's recognition. And generally the /etc/hosts mappings are not for the LB; they're for the specific VMs. But you can try it; no harm.

By Mohib Zico staff 25 Oct 2016 at 8:27 a.m. CDT

Hi Marc, How is it going? Need any assistance anywhere?

By marc caron user 25 Oct 2016 at 8:29 a.m. CDT

Hello. So this ended up not working for us when we tried to finish it up last week. For the moment I'm getting single servers going, and I have another individual internally who is better suited to solving this issue from our side, so we will probably tackle this again in November or December. For now I just need it running, and shutting down one of the nodes is sufficient to get it working behind our load balancer right now.

By Mohib Zico staff 25 Oct 2016 at 8:44 a.m. CDT

Thanks for the status, Marc. Maybe our tutorial videos will help you with this. Is there any specific topic that needs a descriptive explanation in a video tutorial?

By marc caron user 25 Oct 2016 at 8:56 a.m. CDT

The clustering as a whole is the part giving us fits, especially the csync2 configuration/documentation. Having never used csync2 before, it is not clear to me, and even going through the steps it didn't seem to fully work. I can't say what is wrong, but it has become a secondary problem for me now due to timelines and other challenges in getting this fully implemented.

By Mohib Zico staff 25 Oct 2016 at 9:02 a.m. CDT

Alrighty... Thanks, Marc.