OVN-Raft activated even when flag is set to False #338

Open
astoycos opened this issue Jul 24, 2020 · 0 comments
I am spinning up a cluster with a single master node and two worker nodes to run ovn-kubernetes on. However, the flags don't seem to be applied correctly, as follows:

For reference, my all.yml looks like the following:

---
# --------------------------- -
# Changes for bare metal    - -
# Name of inventory file    - -
# --------------------------- -
all_inventory: "all.local.generated"

# What container runtime do we use?
# valid values:
# - docker
# - crio
container_runtime: docker

# --------------------------- -
# docker vars               - -
# --------------------------- -
docker_install_suppress_newgrp: true

# --------------------------- -
# crio vars                 - -
# --------------------------- -
# Which version of crio?
# (doesn't matter if docker is container runtime)
crio_build_version: v1.11.1
crio_build_install: False
crio_use_copr: True
crio_baseurl: https://cbs.centos.org/repos/paas7-crio-115-candidate/x86_64/os

# Network type (2nics or default)
network_type: "default"
# Pod network CIDR
pod_network_cidr: "10.244.0.0"

# General config

# At 1.7.2 you need this because of a bug in kubeadm join.
# Turn it off later, or, try it if a join fails.
skip_preflight_checks: true

# Stable. (was busted at 1.6 release, may work now, untested for a couple months)
kube_baseurl: http://yum.kubernetes.io/repos/kubernetes-el7-x86_64

# Unstable
# kube_baseurl: http://yum.kubernetes.io/repos/kubernetes-el7-x86_64-unstable

# Kube Version
# Accepts "latest" or the version part of an RPM (typically based on the kubelet RPM).
# For example if you were to look at `yum search kubelet --showduplicates`
# You'd see things like "kubelet-1.7.5-0.x86_64"
# You'd use "1.7.5-0" here, such as:
# kube_version: 1.7.5-0
# The default is... "latest"
kube_version: "latest"

# Binary install
# Essentially replaces the RPM installed binaries with a specific set of binaries from URLs.
# binary_install: true
# binary_install_force_redownload: false

images_directory: /home/images
system_default_ram_mb: 4096
system_default_cpus: 4

# Define all VMs that need to be created and their respective roles.
# There are three roles a user can define:
#  - master: Kubernetes primary master node
#  - master_slave: Kubernetes secondary master nodes that join the primary master
#  - nodes: Kubernetes worker nodes
virtual_machines:
  - name: kube-singlehost-master
    node_type: master
    system_ram_mb: 12288
    system_cpus: 2
  - name: kube-node-1
    node_type: nodes
  - name: kube-node-2
    node_type: nodes
# Uncomment following (lb/master_slave) for k8s master HA cluster
#  - name: kube-lb
#    node_type: lb
#  - name: kube-master2
#    node_type: master_slave
#  - name: kube-master3
#    node_type: master_slave

#  - name: builder
#    node_type: builder
#    system_ram_mb: 24576
#  - name: my-support-node
#    node_type: other
#    system_ram_mb: 8192
#    system_cpus: 8

# Kubectl proxy.
kubectl_proxy_port: 8088

# Allow the kubernetes control plane to listen on all interfaces
#control_plane_listen_all: true

# ----------------------------
# ovn vars.
# ----------------------------
ovn_image_repo: "docker.io/ovnkube/ovn-daemonset-u:latest"

# OVN Kubernetes repo and branch
ovn_kubernetes_repo: https://github.com/ovn-org/ovn-kubernetes
ovn_kubernetes_branch: master
# Setup ovn-kubernetes in clustered HA mode (Raft based)
enable_ovn_raft: False

# Set logging parameters for different OVN components
# Log level for ovnkube master
# ovnkube_master_loglevel: "5"

# Log level for ovnkube node
# ovnkube_node_loglevel: "5"

# Log config for ovn northd
# ovn_loglevel_northd: "-vconsole:info -vfile:info"

# Log config for OVN Northbound Database
# ovn_loglevel_nb: "-vconsole:info -vfile:info"

# Log config for OVN Southbound Database
# ovn_loglevel_sb: "-vconsole:info -vfile:info"

# Log config for OVN Controller
# ovn_loglevel_controller: "-vconsole:info"

# Log config for OVN NBCTL daemon
# ovn_loglevel_nbctld: "-vconsole:info"

# ----------------------------
# virt-host vars.
# ----------------------------

# Allows one to skip the steps to initially set up a virthost;
# convenient when iterating quickly.
skip_virthost_depedencies: false

# Enables a bridge to the outside LAN
# (as opposed to using virbr0)
bridge_networking: false
bridge_name: virbr0
bridge_physical_nic: "enp1s0f1"
bridge_network_name: "br0"
bridge_network_cidr: 192.168.1.0/24

# ----------------------------
# device plugins
# ----------------------------
enable_device_plugins: false

# ----------------------------
# builder vars
# ----------------------------

# NOTE: these builder vars are here and not in the group_vars/builder.yml file
# because these values are used across different types of nodes, and not just
# directly on the builder server itself.

# artifact paths
artifacts_sync_path: /opt/k8s/artifacts

# builder archive list
archive_list:
  - rpms/kubeadm-x86_64.rpm
  - rpms/kubectl-x86_64.rpm
  - rpms/kubelet-x86_64.rpm
  - rpms/kubernetes-cni-x86_64.rpm
  - cloud-controller-manager.tar
  - kube-apiserver.tar
  - kube-controller-manager.tar
  - kube-proxy.tar
  - kube-scheduler.tar

First, I start the VMs with:

ansible-playbook -i inventory/virthost/ -e 'network_type=2nics' -e ssh_proxy_enabled=true playbooks/virthost-setup.yml

And the VMs seem to spin up correctly:

[root@localhost group_vars]# virsh list 
 Id   Name                     State
----------------------------------------
 11   kube-singlehost-master   running
 12   kube-node-1              running
 13   kube-node-2              running

Then I try to start Kubernetes and OVN-Kubernetes with the existing YAMLs:

ansible-playbook -i inventory/vms.local.generated -e 'enable_ovn_raft=False' -e 'ovn_image_repo=docker.io/astoycos/ovn-kube-f' playbooks/kube-install-ovn.yml
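
One thing worth noting: extra vars passed with -e in key=value form arrive in Ansible as strings, so 'enable_ovn_raft=False' yields the non-empty string "False", which is truthy in a bare Jinja2 when: test. If the playbook checks the flag without a | bool cast, that alone could explain the raft tasks running. Passing the override as JSON produces a real boolean; a possible alternative invocation (everything else unchanged) would be:

ansible-playbook -i inventory/vms.local.generated -e '{"enable_ovn_raft": false}' -e 'ovn_image_repo=docker.io/astoycos/ovn-kube-f' playbooks/kube-install-ovn.yml

(That said, enable_ovn_raft: False in all.yml above is already a proper YAML boolean, so the in-file default should not be affected by this.)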

However, based on the error logs, it seems Ansible is still trying to spin up a Raft-enabled cluster even though enable_ovn_raft is explicitly set to False:

TASK [ovnkube-setup : Add label to master node for ovn raft mode] ****************************************************
fatal: [kube-singlehost-master]: FAILED! => {"msg": "'dict object' has no attribute 'master_slave'"}

PLAY RECAP ***********************************************************************************************************
kube-node-1                : ok=46   changed=30   unreachable=0    failed=0    skipped=83   rescued=0    ignored=2   
kube-node-2                : ok=46   changed=30   unreachable=0    failed=0    skipped=83   rescued=0    ignored=2   
kube-singlehost-master     : ok=67   changed=46   unreachable=0    failed=1    skipped=90   rescued=0    ignored=2   
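
The "'dict object' has no attribute 'master_slave'" message suggests the failing task dereferences groups['master_slave'], which doesn't exist in a single-master inventory (the master_slave entries in all.yml are commented out), so the missing group is hit before, or instead of, any check on the flag. For comparison, this is roughly the guard I would expect on that task; the task name is taken from the log above, but the body, label, and condition are a hypothetical sketch, not the playbook's actual code:

- name: Add label to master node for ovn raft mode
  # Skip entirely unless raft mode was requested; the | bool cast also
  # handles the string "False" that -e key=value passing produces.
  command: kubectl label node {{ item }} k8s.ovn.org/ovnkube-db=true
  # default([]) keeps the loop safe when no master_slave group is defined.
  loop: "{{ groups['master_slave'] | default([]) }}"
  when: enable_ovn_raft | bool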