LXD
  • 114 videos
  • 278,370 views
A look into the LXD 5.21.0 LTS release
The LXD team is happy to announce the newest 5.21.0 LTS release, supported until 2029. The release is packed with new features we have been developing over the past two years. In this video, we’ll go over some general changes in this LTS, as well as demo some of the most recent features. Please share your feedback with the team on our discourse page.
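One of the changes covered in the video is the new version numbering scheme and default snap track. A minimal sketch of moving an existing snap installation onto the new LTS track (channel name taken from the release; instance data survives the refresh):

```shell
# Switch the LXD snap to the 5.21 LTS track (assumes LXD is installed as a snap)
sudo snap refresh lxd --channel=5.21/stable

# Confirm which channel the snap now tracks
snap list lxd
```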
Timestamps:
00:00 Introduction
01:07 Version numbering scheme change
04:00 Default snap track change
04:50 UI enabled by default
06:18 Legacy removal and snap changes
08:03 Update on the image server
09:00 LTS overview
10:43 Demos intro
11:40 Ceph RBD optimized refresh
17:11 Dell PowerFlex storage driver
21:08 CephFS remote filesystem creation
24:27 Inst...
2,419 views

Videos

LXD and MicroCloud Roadmap until April 2024
1.6K views · 1 year ago
Let's look at what the team at Canonical is expecting to work on between November 2023 and April 2024 in preparation for the next LTS release. RESOURCES: - LXD Webpage: ubuntu.com/lxd - MicroCloud Webpage: canonical.com/microcloud - Forum: discourse.ubuntu.com/c/lxd/ - LXD Github: github.com/canonical/lxd - MicroCloud Github: github.com/canonical/microcloud - Specifications: discourse.ubuntu.co...
Terraform and LXD
6K views · 1 year ago
Terraform is a widely used infrastructure as code solution used to configure and deploy cloud environments. A Terraform provider for LXD exists and allows for the configuration of LXD as well as creation of profiles, volumes and instances. RESOURCES: - Terraform: www.terraform.io - Terraform LXD provider: registry.terraform.io/providers/terraform-lxd/lxd/latest/docs - Forum: discourse.ubuntu.co...
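The workflow described above can be sketched with a minimal config and the standard Terraform commands. The provider source matches the registry link above; the exact resource name (`lxd_instance` in newer provider versions, `lxd_container` in older ones) depends on the provider version you get, so treat this as a sketch:

```shell
# Write a minimal Terraform config using the terraform-lxd/lxd provider
cat > main.tf <<'EOF'
terraform {
  required_providers {
    lxd = {
      source = "terraform-lxd/lxd"
    }
  }
}

provider "lxd" {}

# Instance name and image are illustrative
resource "lxd_instance" "demo" {
  name  = "tf-demo"
  image = "ubuntu:22.04"
}
EOF

terraform init   # download the provider
terraform plan   # preview the instance that would be created
```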
LXD backup and disaster recovery
2.2K views · 1 year ago
Backups are something very easily overlooked until it's too late, so let's talk about how to back up and restore LXD instances and storage volumes. We'll cover the different strategies available, as well as disaster recovery should the worst happen. RESOURCES: - Forum: discourse.ubuntu.com/c/lxd/ - Github: github.com/canonical/lxd - Documentation: documentation.ubuntu.com/lxd/en/latest/backup/
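The basic backup commands look like this (instance, pool and path names here are illustrative):

```shell
# Point-in-time snapshot, kept alongside the instance
lxc snapshot c1 before-upgrade

# Full backup tarball, portable across hosts
lxc export c1 /backups/c1.tar.gz

# Restore it on this (or another) LXD server
lxc import /backups/c1.tar.gz

# Custom storage volumes can be exported the same way
lxc storage volume export default myvol /backups/myvol.tar.gz
```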
LXD REST API
1.7K views · 1 year ago
Everything you do with LXD is driven through our REST API. This video tries to go through its general structure and how to easily interact with it. RESOURCES: - Forum: discourse.ubuntu.com/c/lxd/ - Github: github.com/canonical/lxd - Documentation: documentation.ubuntu.com/lxd/en/latest/rest-api/
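A quick way to poke at the API locally is over the Unix socket (socket path shown is for the snap package; adjust for other installs):

```shell
# Root endpoint: server info, including the API version
curl -s --unix-socket /var/snap/lxd/common/lxd/unix.socket lxd/1.0 | jq .metadata.api_version

# List instance URLs
curl -s --unix-socket /var/snap/lxd/common/lxd/unix.socket lxd/1.0/instances | jq .metadata
```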
LXD roadmap for late 2023
1.3K views · 1 year ago
Let's look at what the team at Canonical is expecting to work on between May and October 2023. RESOURCES: - Forum: discourse.ubuntu.com/c/lxd/ - Github: github.com/canonical/lxd - Specifications: discourse.ubuntu.com/c/lxd/specifications/147
MicroCloud, now with OVN!
9K views · 1 year ago
The full LXD MicroCloud is here, now with the addition of OVN for distributed networking! Let's build a MicroCloud with 3 machines, with local and distributed storage as well as distributed networking. RESOURCES: - MicroCloud: https://microcloud.is - Github: github.com/canonical/microcloud - LXD Website: ubuntu.com/lxd - Community forum: discourse.ubuntu.com/c/lxd/microcloud/145
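The bootstrap described above boils down to a few commands; this is a sketch (snap channel flags omitted, and the interactive prompts will ask about disks and network uplinks):

```shell
# On each of the three machines, install the components as snaps
sudo snap install lxd microceph microovn microcloud

# Then, from one machine, run the interactive bootstrap, which discovers the
# other nodes and sets up clustered LXD, Ceph and OVN
sudo microcloud init
```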
Early look at the LXD web UI
20K views · 1 year ago
Something that's been requested since the beginning of the LXD project, and we finally have it: a built-in LXD web interface. Let's take a look and see what it can do today and talk a bit about where it's headed. RESOURCES: - LXD UI: github.com/canonical/lxd-ui - Github: github.com/canonical/lxd - Website: ubuntu.com/lxd - Community forum: discourse.ubuntu.com/c/lxd/
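At the time of this video the UI shipped with the snap but was off by default; enabling it looked like this (it is on by default in 5.21+):

```shell
# Turn the UI on in the snap and reload the daemon
sudo snap set lxd ui.enable=true
sudo systemctl reload snap.lxd.daemon

# Expose the API/UI over HTTPS, then browse to https://<host>:8443
lxc config set core.https_address :8443
```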
LXD nic devices
1.7K views · 1 year ago
The last device in the series, nic devices are the most versatile of LXD devices, supporting anything from simple bridging all the way to fully offloaded OVN connectivity. RESOURCES: - NIC devices: documentation.ubuntu.com/lxd/en/latest/reference/devices_nic/ - Github: github.com/canonical/lxd - Website: ubuntu.com/lxd - Community forum: discourse.ubuntu.com/c/lxd/
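The simplest case, a bridged NIC, looks like this (instance and device names are illustrative; lxdbr0 is the bridge from a default LXD setup):

```shell
# Attach instance c1 to the default lxdbr0 bridge as eth1
lxc config device add c1 eth1 nic nictype=bridged parent=lxdbr0

# Inspect the result
lxc config device show c1
```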
LXD proxy devices
1.7K views · 1 year ago
A very versatile LXD device, the proxy device can be used to forward all kinds of traffic, including across protocols. RESOURCES: - Proxy devices: documentation.ubuntu.com/lxd/en/latest/reference/devices_proxy/ - Github: github.com/canonical/lxd - Website: ubuntu.com/lxd - Community forum: discourse.ubuntu.com/c/lxd/126
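Two common uses, sketched with illustrative names (container c1, device names web/sock):

```shell
# Forward TCP port 8080 on the host to port 80 inside container c1
lxc config device add c1 web proxy \
    listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80

# Cross-protocol forwarding also works, e.g. a host Unix socket to a TCP port
lxc config device add c1 sock proxy \
    listen=unix:/run/myapp.sock connect=tcp:127.0.0.1:9000
```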
LXD infiniband devices
673 views · 1 year ago
A somewhat uncommon device type as few have Infiniband hardware and even fewer are using that hardware in actual Infiniband mode. But here is how to configure Infiniband devices and pass them to containers and virtual machines, including SR-IOV support! RESOURCES: - Infiniband devices: documentation.ubuntu.com/lxd/en/latest/reference/devices_infiniband/ - Github: github.com/canonical/lxd - Webs...
LXD none devices
444 views · 1 year ago
Probably our shortest video yet, this one is about the "none" device. It does absolutely nothing and is just used to prevent inheritance from a profile. Wouldn't normally have given it its own video, but it's April 1st after all, so why not. RESOURCES: - None type devices: documentation.ubuntu.com/lxd/en/latest/reference/devices_none/ - Github: github.com/canonical/lxd - Website: ubuntu.com/lxd...
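The whole trick fits in one line: if a profile gives every instance an eth0, a "none" device of the same name masks it on one instance.

```shell
# Override the profile-inherited eth0 with a device that does nothing
lxc config device add c1 eth0 none
```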
LXD pci devices
887 views · 1 year ago
LXD usb devices
1.7K views · 1 year ago
LXD tpm devices
1K views · 1 year ago
LXD unix devices
781 views · 1 year ago
LXD disk devices
1.6K views · 1 year ago
The LXD team at FOSDEM 2023
598 views · 2 years ago
LXD LTS releases
802 views · 2 years ago
LXD cluster groups
1.3K views · 2 years ago
LXD roadmap for early 2023
2.3K views · 2 years ago
Introducing MicroCloud
15K views · 2 years ago
Introducing MicroCeph
7K views · 2 years ago
Running LXD in production
1.8K views · 2 years ago
LXD security
1.1K views · 2 years ago
Migrating systems into LXD
3.4K views · 2 years ago
Overview of LXD projects
1.6K views · 2 years ago
LXD's development process
424 views · 2 years ago
LXD's S3 API
1.3K views · 2 years ago
BSD in a LXD VM
2.7K views · 2 years ago

Comments

  • @anilgargsfo · 6 days ago

    Can this be used on Debian 12?

  • @David-re8bi · 10 days ago

    My Windows 11 VM doesn't have internet and it sucks. I've tried everything to no avail.

  • @sebastian96s · 19 days ago

    Hi, thanks for the video. I've been migrating some virtual machines and containers, but I don't get internet access in the new containers. Can someone point out what to do after the migration?

  • @SaiChandraRapolu · 2 months ago

    Do we have any API to connect with the VGA console?

  • @Zizaco · 4 months ago

    This is gold. Thanks!

  • @fio_mak · 5 months ago

    720p? Really, Canonical?

  • @Rich-Ard. · 6 months ago

    Heads up, the ZFS version matters for disaster recovery. My OS hard drive died and I had a separate ZFS disk for containers. I thought I would upgrade from Ubuntu jammy to Ubuntu noble, but recovery didn't work at all: zpool threw errors, and even after fixing those errors, recovery didn't pick up the containers. I reinstalled Ubuntu jammy and recovery worked perfectly.

  • @cheebadigga4092 · 7 months ago

    Thank you for all of your work!! This channel is such a gem!!

  • @cheebadigga4092 · 7 months ago

    I'm starting to love LXD more and more every day!

  • @coccolesto · 7 months ago

    What an amazing video! Just a question: what if I have an existing OVN from an OpenStack Juju installation and I want to use it also for my LXD cluster? How do I configure the LXD network properly?

  • @connoy2rex · 7 months ago

    This is a great tutorial and it got me most of the way there! I'm sure something has changed within Windows since this video, though. I created a default bridge network when I initialized LXD, but the Windows VM can't find a network and won't let me past the setup stage without it.

  • @radhwanbasher · 7 months ago

    Thanks, can you give some tutorials about container live migration in LXD?

  • @G.Y.-bw2no · 8 months ago

    Doesn't work anymore. Error message: Only Incus-managed disks are allowed with migration.stateful=true.

    • @G.Y.-bw2no · 8 months ago

      My bad. The migration.stateful=true flag is not necessary anymore. Anyhow: drop the migration.stateful=true flag in the above video, and you're good to go.

  • @SergioAlonso-pancutan · 8 months ago

    You rock, man

  • @BYAZIT · 8 months ago

    It was a quick help, thanks!

  • @ccaiuss · 8 months ago

    How can I enable authentication? User, password...

  • @jairunet · 8 months ago

    Excellent demo, I have been looking for a quick real scenario demo on MAAS and finally got a very good one, I appreciate it very much as always @stgraber 🙇

  • @RichardBuckerCodes · 9 months ago

    I bought 3 brand new mini-PCs with an i7, 64GB RAM, 2TB NVMe and 512GB SATA. I installed Ubuntu 24.04 and then installed via the snap and init commands... the installation started but never finished. I tried both the 24.04 minimized and full server. One error was a qemu error, and another was some sort of network error.

  • @torbenm6381 · 9 months ago

    Hey, I love you guys' videos. I'm struggling to set up VLAN Q-in-Q; normal VLANs work!!! 1. I have set up Q-in-Q on my pfSense firewall. 2. How do I set that up on Ubuntu 22.04/netplan and LXD/LXC? I would like to see a profile config similar to this. This works for my normal VLANs:

    name: vlan30
    description: ''
    devices:
      eth1:
        nictype: macvlan
        parent: vlan30
        type: nic
      root:
        path: /
        pool: local
        size: 5GiB
        type: disk
    config:
      cloud-init.user-data: |
        #cloud-config
        # Enables SSH password authentication
        ssh_pwauth: yes
        # ubuntu:ubuntu
        users:
          - name: ubuntu
            passwd: "/6UPE2u.4GHYp3Mb8eu81Sy9srZf5sVzHRNpHP99JhdXEVeN0nvjxXVmoA6lcVEhOOqWEd3Wm0"
            lock_passwd: false
            groups: lxd
            shell: /bin/bash
            sudo: ALL=(ALL) NOPASSWD:ALL

    -----------------------------------------------------------

    network:
      version: 2
      renderer: NetworkManager
      ethernets:
        enp1s0:
          dhcp4: no
          dhcp6: no
          optional: true
      bridges:
        br0:
          interfaces: [enp1s0]
          addresses: [10.0.3.4/24]
          routes:
            - to: default
              via: 10.0.3.1
          nameservers:
            addresses: [10.0.3.1]
            search: []
          dhcp4: no
          dhcp6: no
      vlans:
        vlan30:
          id: 30
          link: br0

  • @ozxbt · 9 months ago

    Does it support GPU passthrough, and how do I convert a VirtualBox image to LXD?

  • @ewenchan1239 · 9 months ago

    I just tested this and it doesn't really work. When I tried to add the Infiniband SR-IOV to a VM, this is the error message that I get when I try to start up the VM: "Failed setting up device via monitor: Failed setting up device "ib0": Failed adding NIC device: Monitor is disconnected". When I passed the SR-IOV VF as a physical Infiniband NIC, it doesn't recognise the VF, and the graphical console is stuck on the EFI/UEFI boot loading page (with no UEFI prompt).

  • @ewenchan1239 · 9 months ago

    For those that are wondering, this does NOT work with Infiniband. Maybe for ethernet adapters (or if you change the VPI port type to 2 (ETH)), but for IB, I just confirmed that this does NOT work. SR-IOV is enabled via my Proxmox 7.4-17 (Debian 11) host. Installed snap and then installed LXD. Virtual functions are confirmed operational in `ip link` (I created I think either 8 or 16 virtual functions - I can't remember now), and they show up in `ip link`.

    Create the SR-IOV network:
    root@node1# lxc network create sriov1 --type=sriov parent=ibp8s0f0
    Network sriov1 created
    root@node1# lxc launch ubuntu:22.04 c3 --network sriov1
    Creating c3
    Starting c3
    Error: Failed to start device "eth0": Failed clearing MAC for VF "0": Failed to run: ip link set dev ibp8s0f0 vf 0 mac 00:00:00:00:00:00: exit status 1 (Invalid address length 6 - must be 20 bytes)

  • @olegbrigmann378 · 9 months ago

    thank you

  • @ewenchan1239 · 9 months ago

    "100, 200, 400 gigabit NICs, most people run them in ethernet mode. Infiniband really isn't a thing anymore." Unfortunately, that statement is not entirely correct. In the HPC space, Infiniband with 200 Gbps and/or 400 Gbps (which is where high speed system interconnects started like 30-40 years ago), is still VERY much a thing. With my Mellanox ConnectX-4 dual port 100 Gbps VPI, one of the port is set using IB whilst the other port was set to use ETH. My two compute nodes were then connected point-to-point via a DAC and a Linux Network bridge was created for the 100 GbE. Host to host, using iperf, using 8 parallel streams, I was able to get 96.9 Gbps out of a possible 100 Gbps. But when I am using a pair of CentOS 7.7.1908 VMs, the best that it could do VM to VM over ethernet is 23.4 Gbps (single stream), and goes down to 23.0 Gbps (with 8 parallel streams), which is a far cry from what the NIC is actually capable of. (The CentOS VMs were using the virtio NIC. During the test, according to htop, none of the 5950X cores would go > 25% CPU usage.) After I completed this test, I tried to run one of my FEA models over this 23-ish GbE network, and the FEA simulation failed to start. (But it would start on the local host without any issues.) Infiniband didn't have this issue. So...IB is still VERY much a thing, and this is definitely true for 100, 200, 400 Gbps networks.

  • @ewenchan1239 · 10 months ago

    Great video! Two questions:

    1) In your video, you mentioned that the virtual functions were "tied to the host driver", but then the SR-IOV VFs weren't available to containers. However, in the demo, you showed that you were able to attach them as either type=physical (which is the virtual function) or as type=sriov, so I'm a little bit confused by your statement about them not being usable by the container. Can you clarify this a little further?

    2) You also mentioned that you were using Ubuntu 22.04 LTS and that you had to install or enable the Infiniband kernel driver because it isn't installed/enabled (in the Linux kernel) by default. Would you happen to drop the commands that you used to install said IB kernel driver?

    Your help is greatly appreciated. Thank you.

  • @SimonCopsey · 10 months ago

    "The Linux container community stopped providing access to LDX ...." errrr … not much transparency and a shed load of spin on that statement. LXD’s new masters have changed the license under which LXD is made available and forced all community contributors to sign away their rights if they wish to continue contributing to LDX. For those interested in understanding in more detail, check out Stéphane Graber’s own web site that explains what’s gone on in a manner I’m far more inclined to believe. We won’t be seeing Stéphane again on the LDX channel I doubt. It is a very significant loss for the project.

  • @nuccitheboss · 10 months ago

    Excellent work folks! Keep up the good work 👏

  • @devops-show · 10 months ago

    Happy to hear the image server is almost ready. The OIDC stuff is really neat, really happy about that!

  • @ShaneHolloman · 10 months ago

    It would be good to improve the video resolution to at least 1080p; 720p is soft.

  • @TheLovinator · 10 months ago

    Stéphane Graber looks different today

  • @TimLF · 10 months ago

    Can it run on LXC instead of KVM though?

  • @TechHotLine · 11 months ago

    Hi, quick question: when I try to connect to the web interface of my local LXD system, all I get is this message in the web browser (Brave; tried Chrome and Edge also): {"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":["/1.0"]}. Anything you can point me to to fix this?

  • @clikcspeed · 11 months ago

    Amazing 🎉🎉

  • @ruslanbruma3687 · 11 months ago

    Thank you, Stéphane, for your hard work. A few questions regarding BGP: 1. Is there a way to see the BGP status in LXD? 2. Are there any limitations regarding BGP between the LXD host and LXD inside a VM? I successfully established the BGP session between the physical router and LXD on the host, but can't establish BGP between the host and a VM that also runs LXD BGP.

    • @NC_Sketchy · 9 months ago

      (I could be wrong but:) At around the 2:45 - 3:00 mark it sounds like it does not do any configuration changes, just bgp announcements, which is unfortunate since that's what I was hoping for. You can check the status with lxc query /internal/testing/bgp

  • @mazarsmikael898 · 11 months ago

    very good video

  • @haider3701 · 11 months ago

    Dropping some commands that resolved some issues for me:
    sudo snap set lxd ui.enable=true
    sudo systemctl reload snap.lxd.daemon
    lxc config set win11 raw.qemu="-cpu host"

  • @skwailab · 11 months ago

    Awesome. How easy is it to install MicroK8s on the nodes of a MicroCloud?

  • @b14ckh4wk3 · 11 months ago

    Hellow

  • @sebastian96s · 1 year ago

    Hi, and thank you for the video. If I have 2 networks, my WiFi and a wired one, can one container use WiFi and another use wired?

  • @Joseph-q4y5y · 1 year ago

    What is the name of the package manager? Thanks

  • @felipemateo · 1 year ago

    Is that possible with a WAN public IP interface? I am trying this configuration and the server(s) get disconnected.

    • @felipemateo · 1 year ago

      OK, I did it; it was a misconfiguration with a <local-ip> on one of the instances. Set the correct IP and it pings outside.

  • @mercurial0ne · 1 year ago

    Booooooooooooo!!!! 👎👎👎👎👎👎

  • 1 year ago

    Thanks for the info, much appreciated :-) Will there be a 'non snap' way to install LXD and the 'suite' on Ubuntu? Like a plain 'deb'? (When running an apt repo, for example.) Would one still need to use Debian repos, or switch to Incus?

  • @pedrophmg · 1 year ago

    Hi, I'm trying to build a home lab, and with the new update MicroCloud got my attention. The problem is: I have a six-core machine with 12TB (16TB RAID 5) as storage and a 2x 10-core machine to use as compute, but I don't have a third machine, and I don't even get why I would need it. Is it possible for me to use MicroCloud, or would I need to get a third machine for this? Would it work if I install Ceph and OVN on the 6-core while I keep LXD on the physical 2x10-core machine?