By Eric Williams user 20 Aug 2018 at 1:16 p.m. CDT

9 Responses
Following the instructions at https://github.com/GluuFederation/gluu-docker on Mac, and it fails on `kubectl apply -f opendj-repl.yaml` with this error message:

```
error: error converting YAML to JSON: yaml: line 71: did not find expected '-' indicator
```
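For reference, one quick way to reproduce the parse error locally and see the context around the reported line (a sketch; it assumes Python with the PyYAML package is available, and any `sed` will do):

```
# Parse the file locally to get the same error with line/column info
# (assumes Python and the PyYAML package are installed):
python -c "import yaml, sys; yaml.safe_load(open(sys.argv[1]))" opendj-repl.yaml

# Print the lines surrounding the reported line 71:
sed -n '66,76p' opendj-repl.yaml
```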

By Chris Blanton user 20 Aug 2018 at 1:19 p.m. CDT

Is this a minikube deployment? Can I see your opendj-repl.yaml?

By Eric Williams user 20 Aug 2018 at 1:35 p.m. CDT

No, this is GKE, and it actually has nothing to do with Mac; I got the same error running the command on Linux. Here is my opendj-repl.yaml:

```
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: opendj-repl
spec:
  serviceName: opendj
  replicas: 1
  selector:
    matchLabels:
      app: opendj
  template:
    metadata:
      labels:
        app: opendj
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - opendj
              topologyKey: "kubernetes.io/hostname"
      volumes:
        - name: opendj-config
          persistentVolumeClaim:
            claimName: opendj-config-volume-claim
        - name: opendj-ldif
          persistentVolumeClaim:
            claimName: opendj-ldif-volume-claim
        - name: opendj-logs
          persistentVolumeClaim:
            claimName: opendj-logs-volume-claim
        - name: opendj-db
          persistentVolumeClaim:
            claimName: opendj-db-volume-claim
      containers:
        - name: opendj
          image: 'gluufederation/opendj:latest'
          env:
            - name: GLUU_CONFIG_ADAPTER
              value: "kubernetes"
            - name: GLUU_LDAP_INIT
              value: "false"
            - name: GLUU_LDAP_PEERS_LOOKUP
              value: opendj
            - name: GLUU_RESOLVER_ADDR
              value: kube-dns.kube-system
            # the value must match serviceName `opendj` because other containers
            - name: GLUU_CERT_ALT_NAME
              value: "opendj"
          ports:
            - containerPort: 1636
              name: ldaps
            - containerPort: 1389
              name: ldap
            - containerPort: 8989
              name: replication
            - containerPort: 4444
              name: admin
          volumeMounts:
            - mountPath: /opt/opendj/config
              name: opendj-config
            - mountPath: /opt/opendj/ldif
              name: opendj-ldif
            - mountPath: /opt/opendj/logs
              name: opendj-logs
            - mountPath: /opt/opendj/db
              name: opendj-db
          readinessProbe:
            tcpSocket:
              port: 1636
            initialDelaySeconds: 25
            periodSeconds: 25
          livenessProbe:
            tcpSocket:
              port: 1636
            initialDelaySeconds: 30
            periodSeconds: 30
      nodeSelector:
        opendj-init: "false"
```

By Eric Williams user 20 Aug 2018 at 2:13 p.m. CDT

Looks like the config-init job failed as well. Also, when I go into the Google Cloud Console I see that the config-init job has status Error with exit code 1.

By Chris Blanton user 20 Aug 2018 at 2:22 p.m. CDT

Can you try the following? I believe it was a tabbing issue in the yaml. If it works, I'll update the GitHub repo:

```
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: opendj-repl
spec:
  serviceName: opendj
  replicas: 1
  selector:
    matchLabels:
      app: opendj
  template:
    metadata:
      labels:
        app: opendj
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - opendj
              topologyKey: "kubernetes.io/hostname"
      volumes:
        - name: opendj-config
          persistentVolumeClaim:
            claimName: opendj-config-volume-claim
        - name: opendj-ldif
          persistentVolumeClaim:
            claimName: opendj-ldif-volume-claim
        - name: opendj-logs
          persistentVolumeClaim:
            claimName: opendj-logs-volume-claim
        - name: opendj-db
          persistentVolumeClaim:
            claimName: opendj-db-volume-claim
      containers:
        - name: opendj
          image: 'gluufederation/opendj:latest'
          env:
            - name: GLUU_CONFIG_ADAPTER
              value: "kubernetes"
            - name: GLUU_LDAP_INIT
              value: "false"
            - name: GLUU_LDAP_PEERS_LOOKUP
              value: opendj
            - name: GLUU_RESOLVER_ADDR
              value: kube-dns.kube-system
            # the value must match serviceName `opendj` because other containers
            - name: GLUU_CERT_ALT_NAME
              value: "opendj"
          ports:
            - containerPort: 1636
              name: ldaps
            - containerPort: 1389
              name: ldap
            - containerPort: 8989
              name: replication
            - containerPort: 4444
              name: admin
          volumeMounts:
            - mountPath: /opt/opendj/config
              name: opendj-config
            - mountPath: /opt/opendj/ldif
              name: opendj-ldif
            - mountPath: /opt/opendj/logs
              name: opendj-logs
            - mountPath: /opt/opendj/db
              name: opendj-db
          readinessProbe:
            tcpSocket:
              port: 1636
            initialDelaySeconds: 25
            periodSeconds: 25
          livenessProbe:
            tcpSocket:
              port: 1636
            initialDelaySeconds: 30
            periodSeconds: 30
      nodeSelector:
        opendj-init: "false"
```
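If it is indeed a tabbing problem, a quick way to confirm before applying (a sketch; `grep -P` is GNU-only, so the `printf` form below is the safer bet on macOS):

```
# YAML forbids literal tab characters in indentation; list any lines containing one:
grep -n "$(printf '\t')" opendj-repl.yaml

# Client-side validation without creating anything on the cluster:
kubectl apply -f opendj-repl.yaml --dry-run
```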

By Chris Blanton user 20 Aug 2018 at 2:24 p.m. CDT

> Looks like the config-init job failed as well. Also, when I go into the Google Cloud Console I see that the config-init job has status Error with exit code 1.

Can you run `kubectl describe pod config-init-<hash>` and show me what the output is?
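If the exact pod name (the hash suffix) isn't handy, the Job's pods can also be selected by label (a sketch, assuming the Job was created with the name `config-init` as in the manifest):

```
# Jobs label their pods with job-name=<job>, so the hash suffix isn't needed:
kubectl get pods -l job-name=config-init
kubectl describe pods -l job-name=config-init

# The container logs usually show why it exited with code 1:
kubectl logs -l job-name=config-init
```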

By Eric Williams user 20 Aug 2018 at 7:21 p.m. CDT

Looks like that new yaml worked. Am I the first person to go through these instructions, lol? As for config-init, I don't see it in the list of pods. Here's what I get when I run `kubectl get pods`:

```
NAME                     READY     STATUS    RESTARTS   AGE
opendj-init-0            0/1       Running   1          3h
opendj-repl-0            0/1       Running   1          7m
redis-68fcbff78f-kd4jh   1/1       Running   0          3h
```

It would be good if, after each step, we could verify that our status matches up with what is expected.
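A rough per-step check along those lines might look like this (a sketch; the resource names are taken from the manifests discussed above):

```
# After the config job: it should report a successful completion
kubectl get job config-init

# After each StatefulSet: current replicas should reach the desired count
kubectl get statefulset opendj-init opendj-repl

# Watch pods until READY shows 1/1 and STATUS shows Running
kubectl get pods -w
```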

By Eric Williams user 20 Aug 2018 at 7:32 p.m. CDT

When I try to run `kubectl apply -f generate-config.yaml` again, I get the error below. I assume this is because I already ran it before.

```
kubectl apply -f generate-config.yaml
The Job "config-init" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"controller-uid":"d3033a2a-a4ab-11e8-aaef-42010a8e016e", "job-name":"config-init", "app":"config-init"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:core.PodSpec{Volumes:[]core.Volume{core.Volume{Name:"config", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(nil), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(0xc4295dab80), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil)}}}, InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"config-init", Image:"gluufederation/config-init:latest", Command:[]string{"python", "/opt/config-init/scripts/entrypoint.py", "generate", "--admin-pw", "$(ADMIN_PW)", "--email", "$(EMAIL)", "--domain", "$(DOMAIN)", "--org-name", "$(ORG_NAME)", "--country-code", "$(COUNTRY_CODE)", "--state", "$(STATE)", "--city", "$(CITY)", "--ldap-type", "$(LDAP_TYPE)"}, Args:[]string(nil), WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:"GLUU_CONFIG_ADAPTER", Value:"kubernetes", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"GLUU_CONFIG_ADAPTER", Value:"kubernetes", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"ADMIN_PW", Value:"secret", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"EMAIL", Value:"ericwjr@gmail.com", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"DOMAIN", Value:"", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"ORG_NAME", Value:"Native Labs, LLC", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"COUNTRY_CODE", Value:"US", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"STATE", Value:"GA", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"CITY", Value:"Atlanta", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"LDAP_TYPE", Value:"opendj", ValueFrom:(*core.EnvVarSource)(nil)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount{core.VolumeMount{Name:"config", ReadOnly:false, MountPath:"/opt/config-init/db/", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil)}}, VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc438bbf758), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc4361b84b0), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*core.PodDNSConfig)(nil)}}: field is immutable
```
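That `field is immutable` error is what Kubernetes returns when `kubectl apply` tries to change the pod template of a Job that already exists. The usual workaround is to delete the Job and re-apply (a sketch; note the overwrite caveat raised in the next reply):

```
# A Job's spec.template cannot be updated in place; delete the old Job first.
# NOTE: re-running config-init regenerates (overwrites) the existing configuration.
kubectl delete job config-init
kubectl apply -f generate-config.yaml
```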

By Chris Blanton user 21 Aug 2018 at 2:25 p.m. CDT

> Looks like that new yaml worked. Am I the first person to go through these instructions, lol?

We updated the yaml with liveness and readiness probes and must have chosen a file that didn't have the proper tabbing.

> When I try to run `kubectl apply -f generate-config.yaml` again, I get the error below. I assume this is because I already ran it before.

Can I see your generate-config.yaml? There's a formatting error somewhere, from what I can tell; I'm able to run the default one with no issue. Also, config-init will overwrite old configurations if it's run again, by the way. It's an ongoing issue: [0](https://github.com/GluuFederation/docker-config-init/issues/8)
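Given that overwrite behavior, it may be worth exporting the generated configuration before re-running config-init (a sketch; it assumes the `kubernetes` config adapter stores the config in ConfigMaps in the current namespace, so adjust the namespace and names as needed):

```
# Dump all ConfigMaps so the generated Gluu config can be restored if needed:
kubectl get configmaps -o yaml > gluu-config-backup.yaml
```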

By Chris Blanton user 23 Aug 2018 at 1:07 p.m. CDT

Are you still experiencing issues, Eric?