Cory,
> I am working on a Cluster with HA functionality (2 node, f5 load balancer) and I'm going through Cluster Documentation for Redis-Server, and understand that I would have to essentially comment out its listener parameter in the .conf file to enable external listening.
So if you're going to use stunnel, you should still have redis-server listen only on localhost, and let stunnel route localhost:6379 to <eth0>:<available_port>. This keeps redis-server from being exposed externally while still keeping the traffic encrypted.
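In other words, the `bind` line in redis.conf stays at its default rather than being commented out. For reference (the path varies by distro, often /etc/redis/redis.conf), the relevant lines would look something like:
```
# keep redis listening on localhost only; stunnel handles external access
bind 127.0.0.1
port 6379
```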
For a highly available system, we like to use [twemproxy](https://github.com/twitter/twemproxy) on the load balancing/proxy server. It works better for redundancy than the built-in methods in Gluu Server, because it can detect downed redis-servers and route cache traffic to whatever systems are still available, and its underlying configuration makes it very flexible. The routing itself is fairly simple, but it can get somewhat convoluted once stunnel is layered in. Conceptually it should [look like this.](https://raw.githubusercontent.com/GluuFederation/docs-clustermgr-prod/beta/beta/source/img/clusterarch/Encrypting_Cache.png)
To put it simply, we use stunnel to map localhost addresses to external addresses, listening on external ports so that traffic between servers is encrypted.
Both of our Gluu servers point to localhost:7000, which reaches twemproxy on the load balancer at localhost:2222. From there, twemproxy routes to whichever redis-servers are available in your Gluu instances, or elsewhere, depending on your configuration. That's the simple routing, before adding the stunnel hop between each connection. The colored segments in the diagram show which process connects to which server, for clarity.
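Traced end to end with the ports from the example configs below, the chain looks roughly like this (placeholder names match the configs):
```
Gluu node (oxAuth/oxTrust cache client)
 └─> 127.0.0.1:7000                      stunnel client on the node
      └── TLS ──> <load_balancer>:8888   stunnel server on the load balancer
           └─> 127.0.0.1:2222            twemproxy
                ├─> 127.0.0.1:7001       stunnel client "Redis1"
                │    └── TLS ──> <Gluu_Node_1>:7777 ──> 127.0.0.1:6379 (redis)
                └─> 127.0.0.1:7002       stunnel client "Redis2"
                     └── TLS ──> <Gluu_Node_2>:7777 ──> 127.0.0.1:6379 (redis)
```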
Also, here are some example stunnel configurations based on your specs:
Gluu_Node_1 stunnel.conf
```
cert = /etc/stunnel/cert.pem
pid = /var/run/stunnel.pid
output = /var/log/stunnel4/stunnel.log
[redis-server]
; server mode: accept encrypted connections from the load balancer,
; decrypt, and hand them to the local redis-server
client = no
accept = <Gluu_Node_1_ip>:7777
connect = 127.0.0.1:6379

[twemproxy]
; client mode: take plaintext cache traffic from Gluu on localhost,
; encrypt it, and send it to the stunnel in front of twemproxy on the LB
client = yes
accept = 127.0.0.1:7000
connect = <load_balancer_ip>:8888
```
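Gluu_Node_2's stunnel.conf is identical; just substitute its own IP in the `[redis-server]` accept line, since the load balancer's config below connects to `<Gluu_Node_2>:7777`.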
Load Balancer stunnel.conf
```
cert = /etc/stunnel/cert.pem
pid = /var/run/stunnel.pid
output = /var/log/stunnel4/stunnel.log
[Redis1]
; client mode: twemproxy connects here; stunnel encrypts and forwards to node 1
client = yes
accept = 127.0.0.1:7001
connect = <Gluu_Node_1>:7777

[Redis2]
; client mode: same as Redis1, but forwards to node 2
client = yes
accept = 127.0.0.1:7002
connect = <Gluu_Node_2>:7777

[twemproxy]
; server mode: decrypt incoming cache traffic from the Gluu nodes
; and pass it to twemproxy listening locally
client = no
accept = <Load_Balancer>:8888
connect = 127.0.0.1:2222
```
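If you don't already have a certificate for stunnel, a self-signed one is fine for this internal traffic. Since the configs above only set `cert` (no separate `key` option), stunnel expects the private key and the certificate in the same PEM file; one way to produce that:
```
# generate a self-signed key and certificate, then bundle them into one PEM for stunnel
openssl req -x509 -nodes -days 1095 -newkey rsa:2048 \
    -keyout /etc/stunnel/key.pem -out /etc/stunnel/cert.pem
cat /etc/stunnel/key.pem >> /etc/stunnel/cert.pem
chmod 600 /etc/stunnel/cert.pem
```
Then copy the same cert.pem to each machine running stunnel.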
And here's an example twemproxy configuration:
```
alpha:
  listen: 127.0.0.1:2222
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 30000
  server_failure_limit: 2
  timeout: 400
  preconnect: true
  servers:
    - 127.0.0.1:7001:1
    - 127.0.0.1:7002:1
```
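Once everything is up, you can sanity-check the whole chain from either Gluu node with redis-cli (assuming it's installed there), since twemproxy speaks the redis protocol:
```
# enters the node's stunnel at 7000, crosses the LB, and lands on an available redis-server
redis-cli -h 127.0.0.1 -p 7000 set cachetest hello
redis-cli -h 127.0.0.1 -p 7000 get cachetest
```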
Alternatively, you can use our [Cluster Manager](https://gluu.org/docs/cm/) tool to install and configure all of these components for you. The machine running Cluster Manager only needs passwordless root ssh access to the servers it's configuring; it automates all of these processes, including installing the "cloned" Gluu Servers, configuring the cache, and setting up LDAP replication. It can also configure a monitoring service viewable in the GUI, centralize logging, and handle key rotation.
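Setting up that access is just standard key-based ssh from the Cluster Manager machine, along the lines of:
```
# on the machine running Cluster Manager (skip ssh-keygen if you already have a key)
ssh-keygen -t rsa -b 4096
ssh-copy-id root@<gluu_node1>
ssh-copy-id root@<gluu_node2>
ssh-copy-id root@<load_balancer>
```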
The only portion you would have to do manually is changing the F5 configuration to point to all the correct Gluu Server endpoints, preferably similar to our NGINX configuration:
```
events {
    worker_connections 6500;
}
http {
    upstream backend_id {
        ip_hash;
        server <gluu_node1>:443 max_fails=2 fail_timeout=10s;
        server <gluu_node2>:443 max_fails=2 fail_timeout=10s;
    }
    upstream backend {
        server <gluu_node1>:443 max_fails=2 fail_timeout=10s;
        server <gluu_node2>:443 max_fails=2 fail_timeout=10s;
    }
    server {
        listen 80;
        server_name <load_balancer>;
        return 301 https://<load_balancer>$request_uri;
    }
    server {
        listen 443;
        server_name <load_balancer>;
        ssl on;
        ssl_certificate /etc/nginx/ssl/httpd.crt;
        ssl_certificate_key /etc/nginx/ssl/httpd.key;
        location ~ ^(/)$ {
            proxy_pass https://backend;
            proxy_redirect off;
            proxy_next_upstream error timeout invalid_header http_500;
            proxy_connect_timeout 2;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        location /.well-known {
            proxy_pass https://backend/.well-known;
            proxy_redirect off;
            proxy_next_upstream error timeout invalid_header http_500;
            proxy_connect_timeout 2;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        location /oxauth {
            proxy_pass https://backend/oxauth;
            proxy_redirect off;
            proxy_next_upstream error timeout invalid_header http_500;
            proxy_connect_timeout 2;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        location /identity {
            proxy_pass https://backend_id/identity;
            proxy_redirect off;
            proxy_next_upstream error timeout invalid_header http_500;
            proxy_connect_timeout 2;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        location /cas {
            proxy_pass https://backend/cas;
            proxy_redirect off;
            proxy_next_upstream error timeout invalid_header http_500;
            proxy_connect_timeout 2;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        location /asimba {
            proxy_pass https://backend/asimba;
            proxy_redirect off;
            proxy_next_upstream error timeout invalid_header http_500;
            proxy_connect_timeout 2;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        location /passport {
            proxy_pass https://backend/passport;
            proxy_redirect off;
            proxy_next_upstream error timeout invalid_header http_500;
            proxy_connect_timeout 2;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```
Note that we use `ip_hash` with the `backend_id` upstream for `/identity`, since oxTrust/Identity is a stateful application and each user's requests need to stick to the same node.
Hope this helps,
~ Chris