OpenShift Origin htpasswd auth not working - unable to log in to the web console

Question (votes: 0, answers: 1)

I successfully installed OpenShift Origin 3.9. I have 2 masters, 2 etcd, 2 infra, and 2 nodes. I cannot log in through the web console, although logging in with the CLI works fine (oc login -u system:admin).

I have already run "oc adm policy add-cluster-role-to-user cluster-admin system", but nothing changed.
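For what it's worth, one way to double-check whether that grant actually landed is to look at the cluster role bindings (a rough sketch; the exact output columns vary by release):

# list cluster role bindings and who they grant cluster-admin to
oc get clusterrolebindings | grep cluster-admin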

Running "oc get users" says no resources found. I have htpasswd authentication set up. The system:admin account works without a problem, but creating any other user does not make them show up in "oc get users". It is almost as if nothing is being read from the htpasswd file at all. I can add users to the htpasswd file manually, but logging in with those IDs/passwords does not work in either the CLI or the web console.
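For reference, I was adding users to the file with something like this (assuming the htpasswd utility from httpd-tools; test1/MyPassword are placeholders):

# -b takes the password on the command line; use -c only when creating a new file
htpasswd -b /etc/origin/master/htpasswd test1 MyPassword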

Some details:

[root@master1 master]# oc get identity
No resources found.

[root@master1 master]# oc get user
No resources found.

When I try to create a new user, it is created without any identity provider:

[root@master1 master]# oc create user test1
user "test1" created
[root@master1 master]# oc get users
NAME      UID                                    FULL NAME   IDENTITIES
test1     c5352b4a-92b0-11e8-99d1-42010a8e0003               
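As far as I understand it, oc create user only creates the User object, which is why the IDENTITIES column stays empty; normally the Identity and the mapping are created automatically on first login through the provider (with mappingMethod add or claim). A sketch of wiring it up by hand, assuming the provider name htpasswd_auth from the master-config.yaml below:

# create an Identity for the provider and map it to the existing user
oc create identity htpasswd_auth:test1
oc create useridentitymapping htpasswd_auth:test1 test1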

The identity provider configuration in master-config.yaml:

oauthConfig:
  assetPublicURL: https://X.X.X.X:8443/console/
  grantConfig:
    method: auto
  identityProviders:
  - challenge: true
    login: true
    mappingMethod: add
    name: htpasswd_auth
    provider:
      apiVersion: v1
      file: /etc/origin/master/htpasswd
      kind: HTPasswdPasswordIdentityProvider
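Since the provider reads a file on each master's local disk, every master in a multi-master setup needs its own copy. A quick sanity check with an ad-hoc Ansible command (a sketch; "inventory" stands in for whatever inventory file is in use):

ansible masters -i inventory -m stat -a "path=/etc/origin/master/htpasswd"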

Below is my Ansible inventory:

# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=timebrk
openshift_deployment_type=origin
ansible_become=yes

# Cloud Provider Configuration
openshift_cloudprovider_kind=gce
openshift_gcp_project=emerald-ivy-211414
openshift_gcp_prefix=453007126348
openshift_gcp_multizone=False 

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider. (See the note after this inventory about
# seeding the htpasswd file itself on every master.)
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=X.X.X.X
openshift_master_cluster_public_hostname=X.X.X.X

# apply updated node defaults
openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}

# enable ntp on masters to ensure proper failover
openshift_clock_enabled=true

# host group for masters
[masters]
master1.c.emerald-ivy-211414.internal openshift_ip=X.X.X.X
master2.c.emerald-ivy-211414.internal openshift_ip=X.X.X.X

# host group for etcd
[etcd]
etcd1.c.emerald-ivy-211414.internal
etcd2.c.emerald-ivy-211414.internal

# Specify load balancer host
[lb]
lb.c.emerald-ivy-211414.internal openshift_ip=X.X.X.X openshift_public_ip=X.X.X.X

# host group for nodes, includes region info
[nodes]
master[1:2].c.emerald-ivy-211414.internal
node1.c.emerald-ivy-211414.internal openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.c.emerald-ivy-211414.internal openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.c.emerald-ivy-211414.internal openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.c.emerald-ivy-211414.internal openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
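
One thing worth noting about the openshift_master_identity_providers line above: it only configures the provider; it does not create or populate /etc/origin/master/htpasswd on the masters. openshift-ansible can also seed the file on every master, a sketch assuming the 3.9 variable names:

# either copy a pre-built htpasswd file from the Ansible control host to all masters...
openshift_master_htpasswd_file=/path/to/local/htpasswd
# ...or define users inline as name/hashed-password pairs
openshift_master_htpasswd_users={'user1': '<hashed-password>', 'user2': '<hashed-password>'}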
Tags: openshift, .htpasswd
1 Answer (score: 2)

It turned out that the htpasswd file did not exist on my master2 node for some reason. Once I copied it over from master1, I was able to log in to the web console with the system:admin credentials.

I still don't know why the password file was not synchronized across the master nodes, but my original problem is solved.
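For anyone hitting the same thing, the fix itself was just a one-off copy (a sketch; the hostname comes from the inventory above):

scp /etc/origin/master/htpasswd master2.c.emerald-ivy-211414.internal:/etc/origin/master/htpasswd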
