Deploying a 4-Node OpenShift Cluster on GCE

  • Published: 18 Oct 2024
  • Openshift Ansible Installer
    github.com/ope...
    GCE inventory gist (modify to your liking)
    gist.github.co...
    0:39 Setup openshift-ansible
    5:54 Create instances in GCE
    9:40 Configure Cloud DNS
    11:46 Run openshift-ansible playbook
    12:50 SSH into the master
    14:07 Log into the web console and deploy a sample application
    Openshift Origin Docs
    docs.openshift...
    Google Compute Engine (free trial available)
    cloud.google.c...
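
The linked inventory gist is truncated above, so here is a minimal sketch of what a 4-node openshift-ansible inventory for GCE could look like. Hostnames and variable values are illustrative placeholders, not the actual gist contents:

```ini
# Hypothetical sketch of an OSEv3 inventory for a 4-node cluster on GCE.
# All hostnames and values are placeholders -- adapt to your project.
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=cloud-user
ansible_become=true
deployment_type=origin
openshift_master_default_subdomain=apps.example.com

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com openshift_schedulable=false
node1.example.com
node2.example.com
node3.example.com
```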

Comments • 16

  • @jfchevrette
    @jfchevrette 8 years ago

    Very nice and straightforward demo!

  • @crmarquesjc
    @crmarquesjc 7 years ago

    Amazing, man! Thank you!

  • @coolich76
    @coolich76 7 years ago

    gr8 tutorial, thanks

  • @xendorkevin
    @xendorkevin 4 years ago

    Just downloaded it, and there seem to be many different files in the 3.6 branch in git compared to your recording. Any suggestions?

  • @maury83
    @maury83 6 years ago

    Hi,
    I'm new to this field and have some questions about OpenShift v3 vs. v2:
    - Does the "district" concept exist in OSE3? If not, is there an alternative?
    - Is it possible to define profiles for pods? And can we associate them with districts?
    - How can we create a set of "site 1" nodes and make an application instance run only on the "site 1" nodes?
    Thanks

  • @mkraochirumamilla
    @mkraochirumamilla 7 years ago +2

    Awesome demo. Please share the GitHub URL for the ansible hosts file. Thanks

  • @hussainaljamri5788
    @hussainaljamri5788 6 years ago

    Great video. The playbook fails for me with the message
    fatal: [localhost]: FAILED! => {"failed": true, "msg": "The conditional check 'g_etcd_hosts is not defined and g_new_etcd_hosts is not defined' failed.
    I can't make sense of this message.

  • @apra143
    @apra143 7 years ago

    What credentials are needed to log in to the Web Console as administrator?

    • @apra143
      @apra143 7 years ago

      Figured it out: 1. Update the /etc/origin/master/htpasswd file (e.g. with vi or htpasswd) to create a user called 'admin', then 2. on the master run: 'oadm policy add-cluster-role-to-user cluster-admin admin'

    • @SethJennings
      @SethJennings 7 years ago

      You would need to ssh into the master and add the cluster-admin role to the demo user with:
      oadm policy add-cluster-role-to-user cluster-admin demo

  • @tapasjena684
    @tapasjena684 7 years ago

    Nice video, but it's not working anymore. I'm getting the exception below: ```Running etcd as an embedded service is no longer supported. If this is a new install please define an 'etcd' group with either one or three hosts. These hosts may be the same hosts as your masters. If this is an upgrade you may set openshift_master_unsupported_embedded_etcd=true until a migration playbook becomes available. ```

    • @SethJennings
      @SethJennings 7 years ago

      Tapas Jena, yes, that is a very recent change to the installer. If you check out the release-3.6 branch in git, it should work again.
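
The "embedded etcd" error quoted above is triggered by inventories that lack an [etcd] group, which later installer branches require. A sketch of the relevant inventory change (the hostname is a placeholder; the etcd host may simply be the master itself):

```ini
[OSEv3:children]
masters
nodes
etcd

# etcd may run on the same host as the master
[etcd]
master.example.com
```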

    • @tapasjena684
      @tapasjena684 7 years ago

      Thanks Seth Jennings, it worked!

    • @tapasjena684
      @tapasjena684 7 years ago

      Hi Seth Jennings, after cluster creation I am not able to provision storage. Created storage always has "Pending" status, and in Monitoring->Events I receive the error message "no persistent volumes available for this claim and no storage class is set." Could you please guide me on what the issue could be?

    • @tapasjena684
      @tapasjena684 7 years ago

      Found the solution for the above issue: you need to add "openshift_cloudprovider_kind=gce" to the inventory/hosts file to enable dynamic PV creation.
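
The fix described above goes in the [OSEv3:vars] section of the inventory; a sketch:

```ini
[OSEv3:vars]
# Tell OpenShift it is running on GCE so PersistentVolumeClaims
# can be dynamically provisioned against GCE persistent disks.
openshift_cloudprovider_kind=gce
```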

  • @dani305p8
    @dani305p8 7 years ago

    Amazing demo, I owe you a 🍺