By: Alejandro Calderon user 02 Mar 2021 at 8:03 p.m. CST

7 Responses
I am trying to install Gluu 4.2 using Docker, but I run into issues almost immediately. In the beginning it was an error with my Vault credentials, but I fixed that and now there is a new error. I am using AWS EC2 for Docker on an Amazon Linux 2 instance. This is how I edited the `gcp_kms_stanza.hcl` file (with my own data, of course):

```
seal "gcpckms" {
  credentials = "/vault/config/creds.json"
  project     = "vault-project-xxxxxx"
  region      = "us-east1"
  key_ring    = "vault-keyring"
  crypto_key  = "vault-key-2"
}
```

The `gcp_kms_creds.json` file:

```
{
  "type": "service_account",
  "project_id": "vault-project-xxxxxx",
  "private_key_id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "private_key": "-----BEGIN PRIVATE KEY-----all of my key-----END PRIVATE KEY-----\n",
  "client_email": "vault-service@vault-project-xxxxxx.iam.gserviceaccount.com",
  "client_id": "xxxxxxxxxxxxxxxxxxxxxxx",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/vault-service%40vault-project-xxxxxx.iam.gserviceaccount.com"
}
```

and my `settings.py` looks like this:

```
SVC_OXPASSPORT = True
SVC_OXSHIBBOLETH = True
SVC_SCIM = True
ENABLE_OVERRIDE = True
SVC_VAULT_AUTOUNSEAL = True
SVC_CR_ROTATE = True
```

Since I set `SVC_CR_ROTATE` to true, I created my `docker-compose.override.yml` file with the data shown in the Gluu docs:

```
version: "2.4"
services:
  oxauth:
    container_name: my-oxauth
```

Finally, the output when I run the setup script is:

```
[ec2-user@ip-172-31-21-147 gluu-docker]$ ./pygluu-compose.pyz up
[I] Attempting to gather external IP address
[I] Using 172.31.21.147 as external IP address
1.6: Pulling from library/consul
05e7bc50f07f: Pull complete
637e1a6d2f2e: Pull complete
813801266086: Pull complete
36f6e3a4d488: Pull complete
8301a486e7cf: Pull complete
03b0368ea3a8: Pull complete
Digest: sha256:d6ad849a61667789cb4fcd20ec7c250f7bef5adb5dd82ecd1e3d041faf437e38
Status: Downloaded newer image for consul:1.6
Creating consul ... done
1.0.1: Pulling from library/vault
cd784148e348: Pull complete
212f40d6b450: Pull complete
c3009ff51254: Pull complete
d8862af4a2e3: Pull complete
03dae74736fd: Pull complete
Digest: sha256:1fe0c0b482c458ee3f8953299eb77c1c7d8c1f71db13c45109c5acdb40bc76a5
Status: Downloaded newer image for vault:1.0.1
Creating vault ... done
[I] Checking Vault status
[W] Unable to get seal status in Vault; retrying ...
[I] Initializing Vault with 1 recovery key and token
[I] Vault recovery key and root token saved to vault_key_token.txt
[I] Unsealing Vault manually
Traceback (most recent call last):
  File "/usr/lib64/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "./pygluu-compose.pyz/__main__.py", line 3, in <module>
  File "./pygluu-compose.pyz/_bootstrap/__init__.py", line 241, in bootstrap
  File "./pygluu-compose.pyz/_bootstrap/__init__.py", line 36, in run
  File "/home/ec2-user/.shiv/pygluu-compose_15a115e2abce351caa305e2ff63a0e024bc02d15713e1753d224514abbfd44c1/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/ec2-user/.shiv/pygluu-compose_15a115e2abce351caa305e2ff63a0e024bc02d15713e1753d224514abbfd44c1/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/ec2-user/.shiv/pygluu-compose_15a115e2abce351caa305e2ff63a0e024bc02d15713e1753d224514abbfd44c1/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/ec2-user/.shiv/pygluu-compose_15a115e2abce351caa305e2ff63a0e024bc02d15713e1753d224514abbfd44c1/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/ec2-user/.shiv/pygluu-compose_15a115e2abce351caa305e2ff63a0e024bc02d15713e1753d224514abbfd44c1/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/ec2-user/.shiv/pygluu-compose_15a115e2abce351caa305e2ff63a0e024bc02d15713e1753d224514abbfd44c1/site-packages/click/decorators.py", line 33, in new_func
    return f(get_current_context().obj, *args, **kwargs)
  File "/home/ec2-user/.shiv/pygluu-compose_15a115e2abce351caa305e2ff63a0e024bc02d15713e1753d224514abbfd44c1/site-packages/pygluu/compose/cli.py", line 74, in up
    app.up()
  File "/home/ec2-user/.shiv/pygluu-compose_15a115e2abce351caa305e2ff63a0e024bc02d15713e1753d224514abbfd44c1/site-packages/pygluu/compose/app.py", line 495, in up
    self.prepare_config_secret()
  File "/home/ec2-user/.shiv/pygluu-compose_15a115e2abce351caa305e2ff63a0e024bc02d15713e1753d224514abbfd44c1/site-packages/pygluu/compose/app.py", line 411, in prepare_config_secret
    secret.setup()
  File "/home/ec2-user/.shiv/pygluu-compose_15a115e2abce351caa305e2ff63a0e024bc02d15713e1753d224514abbfd44c1/site-packages/pygluu/compose/app.py", line 156, in setup
    self.unseal()
  File "/home/ec2-user/.shiv/pygluu-compose_15a115e2abce351caa305e2ff63a0e024bc02d15713e1753d224514abbfd44c1/site-packages/pygluu/compose/app.py", line 107, in unseal
    self.container.exec("vault operator unseal {}".format(self.creds["key"]))
  File "/home/ec2-user/.shiv/pygluu-compose_15a115e2abce351caa305e2ff63a0e024bc02d15713e1753d224514abbfd44c1/site-packages/pygluu/compose/app.py", line 70, in creds
    key = self.UNSEAL_KEY_RE.findall(txt)[0]
IndexError: list index out of range
```

The logs for Consul say:

```
[ec2-user@ip-172-31-21-147 ~]$ docker logs f63e6b6afc0e
==> Found address '172.18.0.2' for interface 'eth0', setting bind option...
==> Found address '172.18.0.2' for interface 'eth0', setting client option...
bootstrap = true: do not enable unless necessary
==> Starting Consul agent...
        Version: 'v1.6.10'
        Node ID: '690799a2-b4cc-fd2e-4aaf-56806384de37'
      Node name: 'consul-1'
     Datacenter: 'dc1' (Segment: '<all>')
         Server: true (Bootstrap: true)
    Client Addr: [172.18.0.2] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: 8600)
   Cluster Addr: 172.18.0.2 (LAN: 8301, WAN: 8302)
        Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false

==> Log data will now stream in as it occurs:

2021/03/03 01:18:00 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:690799a2-b4cc-fd2e-4aaf-56806384de37 Address:172.18.0.2:8300}]
2021/03/03 01:18:00 [INFO] raft: Node at 172.18.0.2:8300 [Follower] entering Follower state (Leader: "")
2021/03/03 01:18:00 [INFO] serf: EventMemberJoin: consul-1.dc1 172.18.0.2
2021/03/03 01:18:00 [INFO] serf: EventMemberJoin: consul-1 172.18.0.2
2021/03/03 01:18:00 [INFO] consul: Adding LAN server consul-1 (Addr: tcp/172.18.0.2:8300) (DC: dc1)
2021/03/03 01:18:00 [INFO] consul: Handled member-join event for server "consul-1.dc1" in area "wan"
2021/03/03 01:18:00 [INFO] agent: Started DNS server 172.18.0.2:8600 (udp)
2021/03/03 01:18:00 [INFO] agent: Started DNS server 172.18.0.2:8600 (tcp)
2021/03/03 01:18:00 [INFO] agent: Started HTTP server on 172.18.0.2:8500 (tcp)
2021/03/03 01:18:00 [INFO] agent: started state syncer
==> Consul agent running!
2021/03/03 01:18:07 [WARN] raft: Heartbeat timeout from "" reached, starting election
2021/03/03 01:18:07 [INFO] raft: Node at 172.18.0.2:8300 [Candidate] entering Candidate state in term 2
2021/03/03 01:18:07 [INFO] raft: Election won. Tally: 1
2021/03/03 01:18:07 [INFO] raft: Node at 172.18.0.2:8300 [Leader] entering Leader state
2021/03/03 01:18:07 [INFO] consul: cluster leadership acquired
2021/03/03 01:18:07 [INFO] consul: New leader elected: consul-1
2021/03/03 01:18:07 [INFO] consul: member 'consul-1' joined, marking health alive
2021/03/03 01:18:07 [INFO] agent: Synced node info
2021/03/03 01:18:08 [INFO] agent: Synced service "vault:172.18.0.3:8200"
2021/03/03 01:18:08 [INFO] agent: Synced check "vault:172.18.0.3:8200:vault-sealed-check"
2021/03/03 01:18:11 [INFO] agent: Synced check "vault:172.18.0.3:8200:vault-sealed-check"
2021/03/03 01:18:11 [INFO] agent: Synced service "vault:172.18.0.3:8200"
2021/03/03 01:18:11 [INFO] agent: Synced check "vault:172.18.0.3:8200:vault-sealed-check"
==> Newer Consul version available: 1.9.3 (currently running: 1.6.10)
```

and the logs for Vault are:

```
[ec2-user@ip-172-31-21-147 gluu-docker]$ docker logs 7b1d122f9f7d
Using eth0 for VAULT_REDIRECT_ADDR: http://172.18.0.3:8200
Using eth0 for VAULT_CLUSTER_ADDR: https://172.18.0.3:8201
==> Vault server configuration:

GCP KMS Crypto Key: vault-key-2
  GCP KMS Key Ring: vault-keyring
   GCP KMS Project: vault-project-306305
    GCP KMS Region: us-east1
         Seal Type: gcpckms
       Api Address: http://172.18.0.3:8200
               Cgo: disabled
   Cluster Address: https://172.18.0.3:8201
        Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
         Log Level: (not set)
             Mlock: supported: true, enabled: true
           Storage: consul (HA available)
           Version: Vault v1.0.1
       Version Sha: 08df121c8b9adcc2b8fd55fc8506c3f9714c7e61

2021-03-03T01:18:08.144Z [INFO] core: stored unseal keys supported, attempting fetch
2021-03-03T01:18:08.144Z [WARN] core: stored unseal key(s) supported but none found

==> Vault server started! Log data will stream in below:

2021-03-03T01:18:10.337Z [INFO] core: autoseal: seal configuration missing, but cannot check old path as core is sealed: seal_type=recovery
2021-03-03T01:18:10.625Z [WARN] core: stored keys supported on init, forcing shares/threshold to 1
2021-03-03T01:18:10.626Z [INFO] core: security barrier not initialized
2021-03-03T01:18:10.641Z [INFO] core: security barrier initialized: shares=1 threshold=1
2021-03-03T01:18:10.757Z [INFO] core: post-unseal setup starting
2021-03-03T01:18:10.774Z [INFO] core: loaded wrapping token key
2021-03-03T01:18:10.774Z [INFO] core: successfully setup plugin catalog: plugin-directory=
2021-03-03T01:18:10.775Z [INFO] core: no mounts; adding default mount table
2021-03-03T01:18:10.780Z [INFO] core: successfully mounted backend: type=kv path=secret/
2021-03-03T01:18:10.780Z [INFO] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2021-03-03T01:18:10.781Z [INFO] core: successfully mounted backend: type=system path=sys/
2021-03-03T01:18:10.781Z [INFO] core: successfully mounted backend: type=identity path=identity/
2021-03-03T01:18:10.802Z [INFO] core: successfully enabled credential backend: type=token path=token/
2021-03-03T01:18:10.802Z [INFO] core: restoring leases
2021-03-03T01:18:10.803Z [INFO] rollback: starting rollback manager
2021-03-03T01:18:10.804Z [INFO] expiration: lease restore complete
2021-03-03T01:18:10.810Z [INFO] identity: entities restored
2021-03-03T01:18:10.810Z [INFO] identity: groups restored
2021-03-03T01:18:10.810Z [INFO] core: post-unseal setup complete
2021-03-03T01:18:10.811Z [INFO] core: starting listener: listener_address=0.0.0.0:8201
2021-03-03T01:18:10.811Z [INFO] core: serving cluster requests: cluster_listen_address=[::]:8201
2021-03-03T01:18:10.914Z [INFO] core: root token generated
2021-03-03T01:18:10.914Z [INFO] core: pre-seal teardown starting
2021-03-03T01:18:10.914Z [INFO] core: stopping cluster listeners
2021-03-03T01:18:10.914Z [INFO] core: shutting down forwarding rpc listeners
2021-03-03T01:18:10.914Z [INFO] core: forwarding rpc listeners stopped
2021-03-03T01:18:11.311Z [INFO] core: rpc listeners successfully shut down
2021-03-03T01:18:11.311Z [INFO] core: cluster listeners successfully shut down
2021-03-03T01:18:11.311Z [INFO] rollback: stopping rollback manager
2021-03-03T01:18:11.311Z [INFO] core: pre-seal teardown complete
2021-03-03T01:18:11.311Z [INFO] core: stored unseal keys supported, attempting fetch
2021-03-03T01:18:11.395Z [INFO] core: vault is unsealed
2021-03-03T01:18:11.395Z [INFO] core: entering standby mode
2021-03-03T01:18:11.396Z [INFO] core: successfully unsealed with stored key(s): stored_keys_used=1
2021-03-03T01:18:11.406Z [INFO] core: acquired lock, enabling active operation
2021-03-03T01:18:11.446Z [INFO] core: post-unseal setup starting
2021-03-03T01:18:11.447Z [INFO] core: loaded wrapping token key
2021-03-03T01:18:11.447Z [INFO] core: successfully setup plugin catalog: plugin-directory=
2021-03-03T01:18:11.448Z [INFO] core: successfully mounted backend: type=kv path=secret/
2021-03-03T01:18:11.449Z [INFO] core: successfully mounted backend: type=system path=sys/
2021-03-03T01:18:11.449Z [INFO] core: successfully mounted backend: type=identity path=identity/
2021-03-03T01:18:11.449Z [INFO] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2021-03-03T01:18:11.453Z [INFO] core: successfully enabled credential backend: type=token path=token/
2021-03-03T01:18:11.454Z [INFO] core: restoring leases
2021-03-03T01:18:11.454Z [INFO] rollback: starting rollback manager
2021-03-03T01:18:11.454Z [INFO] expiration: lease restore complete
2021-03-03T01:18:11.455Z [INFO] identity: entities restored
2021-03-03T01:18:11.455Z [INFO] identity: groups restored
2021-03-03T01:18:11.456Z [INFO] core: post-unseal setup complete
2021-03-03T01:18:11.456Z [INFO] core: starting listener: listener_address=0.0.0.0:8201
2021-03-03T01:18:11.456Z [INFO] core: serving cluster requests: cluster_listen_address=[::]:8201
```

Am I doing anything wrong?

UPDATE: I went through the same process, this time in a VM with Ubuntu 20.04 under VMware Workstation, and the result was exactly the same...

By Isman Firmansyah staff 04 Mar 2021 at 11:17 a.m. CST

Hi Alejandro, What does the error say when you run the `pygluu-compose.pyz up` command?

By Isman Firmansyah staff 04 Mar 2021 at 11:50 a.m. CST

My bad, I see now that the `up` command throws an `IndexError`. I will check it as soon as possible. Thanks.
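In the meantime, one way to narrow it down: the traceback shows the script failing to find an unseal key in Vault's output, but with GCP KMS auto-unseal Vault unseals itself from the stored key (your logs show `successfully unsealed with stored key(s)`), so there may be no `Unseal Key` line for the script to match. You can verify the seal state manually; this is a sketch assuming the Vault container is named `vault`, as in your `up` output:

```shell
# Ask Vault for its seal state from inside the container.
# With gcpckms auto-unseal it should report "Sealed: false"
# shortly after init, without any manual unseal step.
# VAULT_ADDR is set explicitly because the listener has TLS disabled.
docker exec -e VAULT_ADDR=http://127.0.0.1:8200 vault vault status

# Inspect what the setup script actually captured at init time
cat vault_key_token.txt
```

If `vault status` already reports unsealed, the manual unseal step (and the key parsing it depends on) is redundant in this setup.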

By Isman Firmansyah staff 04 Mar 2021 at 5:51 p.m. CST

Hi Alejandro, Please try the latest `pygluu-compose.pyz` release at https://github.com/GluuFederation/community-edition-containers/releases/tag/v1.4.1. You may need to remove the existing containers and volumes before running the `up` command again.
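If it helps, here is a minimal cleanup sketch before re-running. The container names (`consul`, `vault`) are the ones shown in your `up` output; volume names can vary between machines, so review the list before pruning:

```shell
# Remove the containers created by the previous run
docker rm -f consul vault

# List remaining volumes, then remove the unused ones.
# Note: prune deletes ALL dangling volumes, so check the list first.
docker volume ls
docker volume prune -f

# Re-run the setup with the new release
./pygluu-compose.pyz up
```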

By Alejandro Calderon user 05 Mar 2021 at 1:26 a.m. CST

Awesome! The installer now works perfectly and I was able to install Gluu, thank you so much! I appreciate it. Two last questions:

- How can I log in to the Gluu container from the command line?
- Is it normal that I set the SVC_OXSHIBBOLETH service to true during deployment, but when I log in to the dashboard in my browser I don't see the SAML section in the left navigation bar, even though I can see the SAML metadata at [hostname]/idp/shibboleth with no problem?

By Isman Firmansyah staff 08 Mar 2021 at 9:12 a.m. CST

Hi Alejandro,

> How can I log in to the Gluu container from the command line?

You can run `docker exec -ti $CONTAINER_NAME sh` to log in to each container individually.

> Is it normal that I set the SVC_OXSHIBBOLETH service to true during deployment, but when I log in to the dashboard in my browser I don't see the SAML section in the left navigation bar, even though I can see the SAML metadata at [hostname]/idp/shibboleth with no problem?

Please have a look at the latest documentation: https://gluu.org/docs/gluu-server/installation-guide/install-docker/#saml
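For example, using the `my-oxauth` name you set in your `docker-compose.override.yml` (any running container name from `docker ps` works the same way):

```shell
# List the names of all running containers
docker ps --format '{{.Names}}'

# Open an interactive shell inside the oxAuth container
docker exec -ti my-oxauth sh
```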

By Alejandro Calderon user 09 Mar 2021 at 6:35 p.m. CST

Alright, thank you so much guys! ...

By Isman Firmansyah staff 14 Mar 2021 at 7:02 p.m. CDT

Feel free to re-open if the issue persists. Thanks.