Nice update
Great update! Some feedback about the video: could you Ctrl-+ (zoom) the webpage a bit during recording, to make the text easier to read? Anyway, excited to be running Incus on all my hosts at this point. :)
I'll try to remember to zoom the web browser too next time.
I usually think of it for the terminal but not the web browser.
I'm not sure if I should switch to Incus! Maybe after the first minor release of the major. I love the work Stéphane is doing, but I can't imagine the development can keep up with Canonical's due to the number of contributors. I haven't compared the number of developers/contributors yet, but it will be hard to keep up as an open source project.
In the long term, LXD and Incus will be massively different and in the end one will survive! I hope it's Incus, because I don't like the direction LXD has gone, even without knowing the full background.
Stéphane, thank you for all you've done! Keep up the good work!
I can definitely understand the sentiment even though I don't agree with your assessment :)
My disagreement is probably because of a better understanding of the size, structure and general time allocation of the LXD team at Canonical. The overall LXD team has around 10 engineers but of those, only 3 or so work on LXD itself, the rest work on MicroCloud, Juju deployment stuff and other currently undisclosed initiatives.
Worth noting that of those 3 folks, only one has been at Canonical for more than two years and he's the only senior engineer on the entire LXD team. That guy is amazing and has been doing a great job running most of LXD since my departure, but that means a lot of reviews, meetings and paperwork, so he has limited time to work on LXD itself.
On the Incus side, things are obviously a bit different as I'm the only full-time developer on it, though having created LXD and implemented a lot of its major features over the years, I do have a tendency to get things done rather quickly ;)
But it's also worth noting that I'm aided by an active team of maintainers, all of whom were senior engineers working on LXD in the past.
So the collective knowledge about the code base is far greater in the Incus team than it is in the LXD team at this point.
That's obviously just the core team; then you have the occasional contributors, most of whom have migrated from LXD to Incus since they can keep contributing without having to enter a legal agreement with Canonical.
@TheZabbly Thank you very much for your assessment and the more detailed information!
I had no major concerns as I really appreciate your work and your presence and support!
I will migrate to incus soon!
Thank you!!!
Another great update. One thing I didn't catch was whether virtiofs works as a mechanism for attaching a disk device to a VM instance. It wasn't added in this release, but as you enumerated all of the disk device types, I didn't hear it listed. Is it supported?
We treat block volumes and filesystem volumes separately.
A block volume shows up as a full disk on the SCSI, NVMe or virtio-blk bus.
A filesystem volume or a shared path on the host is exposed to the VM over both 9p and virtiofs.
So for example `incus config device add MY-VM home disk source=/home/foo path=/mnt/foo` will result in /mnt/foo inside of the VM showing you /home/foo on the host, and this will typically be using virtiofs unless your VM doesn't support it, in which case it will fall back to 9p.
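For anyone wanting to try the distinction above, here's a rough sketch of both paths (the `MY-VM`, `default` and `blockvol` names are placeholders, not from the video):

```shell
# Filesystem share: a host path exposed into the VM over virtiofs (9p fallback).
incus config device add MY-VM home disk source=/home/foo path=/mnt/foo

# Block volume: a custom volume that shows up as a whole disk on the
# virtio-blk/SCSI/NVMe bus inside the VM instead of as a mounted path.
incus storage volume create default blockvol --type=block size=10GiB
incus storage volume attach default blockvol MY-VM

# Inside the VM, the first appears as an already-mounted filesystem at
# /mnt/foo; the second appears as a new block device (e.g. /dev/sdb)
# that you partition and format yourself.
```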
Newb here, can this be installed alongside Docker yet?
It can, but you need to understand and configure Docker's firewalling as otherwise it will prevent Incus containers from getting any networking.
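A minimal sketch of that configuration, assuming the default `incusbr0` bridge name and an iptables-based Docker setup (adjust interface names for your system):

```shell
# Docker flips the iptables FORWARD chain policy to DROP, which also drops
# forwarded traffic from Incus-managed bridges. Docker leaves the DOCKER-USER
# chain for user rules, so explicitly allow the Incus bridge there:
iptables -I DOCKER-USER -i incusbr0 -j ACCEPT
iptables -I DOCKER-USER -o incusbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

With those two rules in place, containers on the Incus bridge can reach the outside world again while Docker's own isolation rules stay intact.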
Trying to get my head around how you could use NVMEoTCP to access shared storage - is it a replacement for NFS/iSCSI?
Yeah, NVMEoTCP works in much the same way as iSCSI does.
You have a server or a storage enclosure expose physical NVME drives over the network and then have one or more servers connect to that over the network.
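On the client side it looks a lot like iSCSI too. A rough sketch using nvme-cli (the address and NQN below are placeholders for your target's actual values):

```shell
# Load the NVMe/TCP transport module, then ask the target what it exposes.
modprobe nvme-tcp
nvme discover -t tcp -a 192.0.2.10 -s 4420

# Connect to a specific subsystem by its NQN; the remote drive then shows up
# locally as a regular /dev/nvmeXnY block device.
nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2000-01.example.storage:drive1
```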
Hello!