Semaphor a day ago

So it looks like a Proxmox alternative; this [0] goes into some reasons to switch. The main selling point seems to be that it's fully OSS with no enterprise version.

[0]: https://tadeubento.com/2024/replace-proxmox-with-incus-lxd/

  • hardwaresofton a day ago

    It’s more like a Kubernetes alternative

    • moondev a day ago

      Proxmox feels like a more apt comparison, as they both act as a control plane for KVM virtual machines and LXC containers across one or multiple hosts.

      If you are interested in running Kubernetes on top of Incus (that is, your Kubernetes cluster nodes will be made up of KVM or LXC instances), I highly recommend the cluster-api provider for Incus: https://github.com/lxc/cluster-api-provider-incus

      This provider is really well done and well maintained, including ClusterClass support and an array of pre-built machine images for both KVM and LXC. It also supports pivoting the management cluster onto a workload cluster, enabling the management cluster to upgrade itself, which is really cool.

      I was surprised to come across this provider by chance, as for some reason it's not listed on the CAPI documentation's provider list: https://cluster-api.sigs.k8s.io/reference/providers

      • neoaggelos 5 hours ago

        Hey! Long-time lurker, never really posted before, author of cluster-api-provider-incus here, did not really expect it to come up on hackernews.

        Thanks for the good comments! Indeed, adding it to the list of CAPI providers is on the roadmap (I did not want to do that before discussing with Stéphane about moving the project under the LXC org, but that is now complete). I'm also working on a few other niceties, like a "kind"-style script that would allow easily managing small k8s clusters without the full CAPI requirements (while, at the same time, documenting everything it takes to run Kubernetes under LXC in general).

        You can expect more from the project, and any feedback is welcome!

    • loloquwowndueo a day ago

      Not really; Kubernetes does a lot of things that are out of scope for Incus or LXD, or Docker Compose for that matter, or any hypervisor, or …

danofsteel32 a day ago

Incus is great when developing Ansible playbooks. The main benefit for me over Docker/Podman is that systemd works out of the box in Incus containers.
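
For example, a rough sketch of that workflow (image alias, instance name and playbook names are placeholders): launch a throwaway system container, grab its IP with `incus list`, and point your inventory at it.

  $ incus launch images:debian/12 ansible-test
  $ incus list ansible-test
  $ ansible-playbook -i inventory site.yml

(community.general also ships an lxd connection plugin that avoids SSH entirely; something similar may work for Incus, but treat that as an assumption and check the docs.)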

  • mekster a day ago

    Not to mention the easy-to-use web UI.

  • anonfordays a day ago

    What makes it better than Vagrant for this use-case?

    • fuzzy2 a day ago

      Doesn't Vagrant spin up full VMs? Incus/LXD/LXC is about system containers. So it's like Docker, but with a full distro, including an init system, running inside the container. They are much faster to spin up than VMs and have the best resource sharing possible.

      • goku12 a day ago

        Vagrant has a provider plugin for lxc. Please check my reply to the parent commenter for more explanation.

    • goku12 a day ago

      Vagrant is not the right comparison to Incus for this use case. Vagrant is used to spin up VM or system-container instances that are configured for software development and testing. But Vagrant doesn't create those VMs or containers by itself. Instead, it depends on virtualization/container providers like VMware, VirtualBox, libvirt or lxc. In fact, you could create a provider plugin for Vagrant to use Incus as its container/VM back end. (I couldn't find any, though.)
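
      For reference, using the lxc provider typically looks roughly like this (a sketch, assuming the third-party vagrant-lxc plugin and an lxc-compatible box):

        $ vagrant plugin install vagrant-lxc
        $ vagrant up --provider=lxc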

tcfhgj a day ago

The only tool I've found that allows easily spinning up pre-configured VMs without any GUI hassle.

actinium226 a day ago

I went through the online tutorial, but I'm not really seeing how it's different from docker?

  • CoolCold 17 hours ago

    If your question is genuine, then the simple answer would be: a full system inside an LXD/Incus container (I prefer the term VE, for Virtual Environment, to distinguish these from bare-bones Docker-style containers).

    A full system means all your standard stuff works in the expected way: crons, systemd units, sshd, multiple users, your regular backup solutions and so on. That "system" can also be dumped/exported/backed up/snapshotted as a whole, very much like you would do with vSphere/QEMU or whatever hypervisor you use in your datacenter.

    Foreseeing the question: yep, you can run Docker inside LXD/Incus VEs. In practical terms, that makes things much simpler when you need to give some dev team (who of course are not exactly known for doing sane things) access to an environment with Docker access (which in 99% of cases means the host's root-level access is exposed).
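
    For example, a minimal sketch of snapshotting and exporting a whole VE (instance, snapshot and file names are placeholders; subcommand syntax follows the current incus CLI, so double-check it against the docs):

      $ incus snapshot create web1 clean
      $ incus export web1 web1-backup.tar.gz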

  • skydhash a day ago

    Instead of ephemeral containers, you have instances that are like VMs (and Incus can also manage VMs via QEMU), so pretty much everything you would use a VM for, when you don't need the kernel separation. It's more similar to FreeBSD jails than to Docker.

  • Levitating a day ago

    It's a difference between system containers and application containers.

    The LXC containers used in Incus run their own init; they act more like a VM.

    However, Incus can also run actual VMs (via QEMU) and, since recently, even OCI containers like Docker images.

gavinray a day ago

Can someone explain the usecase for this?

Is this for people who want to run their own cloud provider, or that need to manage the infrastructure of org-owned VM's?

When would you use this over k8s or serverless container runtimes like Fargate/Cloudrun?

  • goku12 a day ago

    > Can someone explain the usecase for this?

    Use cases are almost the same as Proxmox. You can orchestrate system containers or VMs. Proxmox runs lxc container images, while incus is built on top of lxc and has its own container images.

    System vs application containers: both share the host's kernel. Application containers usually run only a single application, like a web app (e.g. OCI containers). System containers are more like VMs, with systemd and multiple apps managed by it. Note: this distinction is often ambiguous.

    > Is this for people who want to run their own cloud provider, or that need to manage the infrastructure of org-owned VM's?

    Yes, you could build a private cloud with it.

    > When would you use this over k8s or serverless container runtimes like Fargate/Cloudrun?

    You would use it when you need traditional application setups inside VMs or system containers, instead of containerized microservices.

    I actually use Incus containers as host nodes for testing full-fledged multi-node K8s setups.
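
    For example (a rough sketch; image alias and instance names are placeholders), spinning up a few system containers to act as K8s nodes is just:

      $ incus launch images:ubuntu/24.04 k8s-cp-1
      $ incus launch images:ubuntu/24.04 k8s-worker-1
      $ incus launch images:ubuntu/24.04 k8s-worker-2
      $ incus exec k8s-cp-1 -- bash

    (Running Kubernetes inside system containers typically needs extra instance config such as security.nesting, so treat this as the instance-lifecycle part only.)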

  • throwaway2056 a day ago

    I know a web-hosting provider that used one VM per user. They have now moved to this. First, low resource usage. Second, if you use ZFS or btrfs, you can save storage, since common bits are not duplicated across system containers. Note that these are system containers, not traditional containers: an instance can be rebooted and comes back with its previous state. It is not ephemeral.

  • pxc a day ago

    System container tech like Incus powers efficient Linux virtualization environments for developers, so that you can have only one VM but many "machines". OrbStack machines on macOS work like this, and the way WSL works is similar (one VM and one Linux kernel, many guests sharing that kernel via system containers).

    • CoolCold 17 hours ago

      Just in case: I'm using LXD inside my WSL and it's working great. BTRFS-backed storage via a loop file saves $$$.

      For others, here's why it may be useful in a regular sysadmin job:

      * say, doing Ansible scripting against the LOCAL network is a hell of a lot faster than against 300+ ms remote machines

        * note that because you can use VE snapshots, it's very easy to ensure your playbook works fine without guessing what you have modified while testing things - just roll back to the "clean" state and start over (see the sketch below)

      * creating a test 3-node MariaDB cluster - easy peasy

      * multiple distros available - need to debug HAProxy on, say, Rocky Linux 8? Check!
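
      Roughly, that snapshot/rollback loop (instance, snapshot and playbook names are placeholders; with LXD the equivalents are `lxc snapshot` / `lxc restore`):

        $ incus snapshot create test1 clean
        $ ansible-playbook -i inventory site.yml
        $ incus snapshot restore test1 clean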

      • gavinray 9 hours ago

        These are pretty clever use cases I wouldn't have thought of. Maybe a little complex for what I do, but clever nonetheless.

    • gavinray a day ago

      Thanks -- though I'm not sure I fully grok how this is different than something like Firecracker?

      • pxc a day ago

        Firecracker has some elements in common -- namely, images meant to run on Firecracker have a non-standard init system so that they can boot more quickly than machines that have to deal with real hardware and a wider variety of use cases. That's also typically what is used for the guest VMs that host containers in systems like WSL and OrbStack.

        But Firecracker is fundamentally different because it has a different purpose: Firecracker is about offering VM-based isolation for systems that have container-like ephemerality in multitenant environments, especially the cloud. So when you use Firecracker, each system has its own kernel running under its own paravirtualized hardware.

        With OrbStack and WSL, you have only one kernel for all of your "guests" (which are container guests, rather than hardware-paravirtualized guests). In exchange you're working with something that's simpler in some ways, more efficient, has less resource contention, etc. And it's easier to share resources between containers dynamically than across VMs, so it's very easy to run 10 "machines" but only allocate 4GB of RAM or whatever, and have it shared freely between them with little overhead. They can also share Unix sockets (like the socket for Docker or a Kubernetes runtime) directly as files, since they share a kernel -- no need for some kind of HTTP-based socket forwarding across virtualized network devices.

        I imagine this is nice for many use cases, but as you can imagine, it's especially nice for local development. :)

  • Levitating a day ago

    There's no particular use case, though I do know of a company whose entire infrastructure is maintained within Incus.

    I personally use it mostly for deploying a bunch of system containers and some OCI containers.

    But anyone who uses LXC, LXD, docker, libvirt, qemu etc. could potentially be interested in Incus.

    Incus is just an LXD fork btw, developed by Stephane Graber.

    • clvx a day ago

      Who also developed LXD and contributed to LXC. I wouldn’t say it’s just a fork but a continuation of the project without Canonical.

      • Levitating a day ago

        You're right, I should've worded it differently.

      • anonfordays a day ago

        >a continuation of the project without Canonical.

        This being a big plus considering Canonical just outed itself as a safe space for pedophiles. Not being a pedo-bar is a good thing.

mathfailure 9 hours ago

Their site doesn't open; that's a poor sign.

  • mdaniel 6 hours ago

    Only if they run their website on Incus

63stack a day ago

How do you handle updating the machine that Incus itself runs on? I imagine you have to be super careful not to introduce any breakage, because then all the VMs/containers go down.

What about kernel updates that require reboots? I have heard of ksplice/kexec, but I have never seen them used anywhere.

  • CoolCold 16 hours ago

    I'm not quite sure what your question is here. It's very much like any other system that needs to reboot: you get ready for these reboots in advance.

    Of course, things like vSphere/Virtuozzo, and even LXD/Incus, and even simple QEMU/virsh systems, can do live migration of VMs, so you may care less about preparing things inside the VMs to be fault tolerant - but only to some extent.

    I.e. if your team runs PostgreSQL, you run it in a cluster with Patroni and VIPs and all that lovely industry-standard magic, and tell the dev teams to use that VIP as the entry point (in reality things are a bit more complicated, with HAProxy/PgBouncer on top, but that's enough to express the idea).

    • 63stack 6 hours ago

      I missed the part about Incus supporting clustering; for some reason I thought it was single-node only.

      • CoolCold 4 hours ago

        I'm not sure that clustering goes beyond "multiple hosts with a single API to rule them all" - thus I assume that when a physical node needs maintenance, it won't magically migrate/restart VEs on other cluster members. I may be wrong here.

        P.S. Microcloud tries to achieve this AFAIR, but it's from Canonical, so on LXD.

  • dsr_ a day ago

    As with any such system, you need a spare box. Upgrade the spare, move the clients to it, upgrade the original.

    • loloquwowndueo a day ago

      But then the clients have downtime while they’re being moved.

      • pezezin a day ago

        I don't know about Incus, but on Proxmox the downtime when moving a VM is around 200 ms.

      • pylotlight a day ago

        Isn't that the exact problem that k8s workloads solve by scaling onto new nodes first etc? No downtime required.

        • loloquwowndueo a day ago

          Right, but Incus is not k8s. You can stand up spares and switch traffic, but it's not built-in functionality and requires extra orchestration.

          • goku12 a day ago

            It is built-in functionality [1] and requires no extra orchestration. In a cluster setup, you would be using virtualized storage (Ceph-based) and a virtualized network (OVN). You can replace a container/VM on one host with another on a different host with the same storage volumes, network and address. This is what k8s does with pod migrations too (edit: except the address).

            There are a couple of differences though. The first is the pet vs cattle treatment of containers by Incus and k8s respectively. Incus tries to resurrect dead containers as faithfully as possible. This means that Incus treats container crashes like system crashes, and its recovery involves systemd bootup inside the container (kernel too in case of VMs). This is what accounts for the delay. K8s on the other hand, doesn't care about dead containers/pods at all. It just creates another pod, likely with a different address and expects it to handle the interruption.

            Another difference is the orchestration mechanism behind this. K8s, as you may be aware, uses control loops on controller nodes to detect the crash and initiate the recovery. The recovery is mediated by the kubelets on the worker nodes. Incus seems to have the orchestrator on all nodes. They take decisions based on consensus and manage the recovery process themselves.

            [1] https://linuxcontainers.org/incus/docs/main/howto/cluster_ma...
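
            For reference, the node-maintenance flow in a cluster looks roughly like this (member name is a placeholder; check the linked docs for the exact syntax on your version):

              $ incus cluster evacuate node1
              # ...do the maintenance / reboot...
              $ incus cluster restore node1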

            • mdaniel a day ago

              > and address. This is what k8s does with pod migrations too.

              That's not true of Pods; each Pod has its own distinct network identity. You're correct about the network, though, since AFAIK Service and Pod CIDR are fixed for the lifespan of the k8s cluster

              You spoke to it further down, but guarded it with "likely", and I can say with certainty that it's not "likely", it unconditionally does. That's not to say address re-use isn't possible over a long enough time horizon, but that bookkeeping is delegated to the CNI

              ---

              Your "dead container" one also has some nuance, in that kubelet will for sure restart a failed container, in place, with the same network identity. When fresh identity comes into play is if the Node fails, or the control loop determines something in the Pod's configuration has changed (env-vars, resources, scheduling constraints, etc) in which case it will be recreated, even if by coincidence on the same Node

              • moondev a day ago

                > I can say with certainty that it's not likely, it unconditionally does. That's not to say address re-use isn't possible over a long enough time horizon, but that bookkeeping is delegated to the CNI

                You are 100% wrong then. The kube-ovn CNI enables static address assignment and "sticky" IPAM on both pods and KubeVirt VMs.

                https://kubeovn.github.io/docs/v1.12.x/en/guide/static-ip-ma...

                • mdaniel a day ago

                  Heh, I knew I was going to get in trouble since the CNI could do whatever it likes, but felt safe due to Pods having mostly random identities. But at that second I had forgotten about StatefulSets, which I agree with your linked CNI's opinion would actually be a great candidate for static address assignment

                  Sorry for the lapse and I'll try to be more careful when using "unconditional" to describe pluggable software

                  • moondev a day ago

                    All good, and I'll cheer on the composability of k8s for sure.

              • goku12 a day ago

                I agree with everything you pointed out. They were what I had in my mind too. However, I avoided those points on purpose for the sake of brevity. It was getting too long winded and convoluted for my liking. Thanks for adding a separate clarification, though.

manosyja 2 days ago

What can this work with? It says „Containers and VMs“ - I guess that’s LXCs and QEMU VMs?

  • nrabulinski 2 days ago

    Yes, it uses QEMU under the hood for VMs and runs LXC containers. But also, since recently, you can run Docker images in it. Very handy, especially since it has first-class remote support, meaning you can install only the Incus client and, when doing `incus launch` or whatever, it will transparently start the container/VM on your remote host.
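
    A rough sketch of that remote workflow (remote name, image alias and instance name are placeholders; verify the exact flags against the docs):

      $ incus remote add myserver https://myserver.example.com:8443
      $ incus launch images:debian/12 myserver:web1
      $ incus exec myserver:web1 -- hostname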

sigmonsays a day ago

The features worth mentioning IMHO are the different storage backends and their capabilities. Using btrfs, LVM or ZFS, there is some level of support for thin-copy provisioning and snapshotting. I believe btrfs/ZFS have parity in terms of supported operations. Cheap snapshots and provisioning of both containers and VMs using the same tool is pretty awesome.

I personally use LXD for running my homelab VMs and containers.
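
For example, a minimal sketch assuming a ZFS-backed pool (pool, dataset and instance names are placeholders; the LXD equivalent uses the `lxc` command with near-identical syntax):

  $ incus storage create fast zfs source=tank/incus
  $ incus launch images:debian/12 c1 --storage fast
  $ incus snapshot create c1 before-upgrade
  $ incus snapshot restore c1 before-upgrade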

burnt-resistor a day ago

Nothing about resource (net, IO, disk, CPU) isolation, limits, priorities, or guarantees. Not the same as a type 1 hypervisor. These qualities are needed to run things safely and predictably in the real world™, at scale. Also, accounting and multitenancy if it's going to be used as some sort of VAR or VPS offering.

  • tok1 a day ago

    Fun fact: Incus is being used as the underlying infrastructure for the NorthSec CTF, i.e. in an "as hostile as it can get" environment. If you have close to a hundred teams of hackers on your systems trying to break stuff, I think that speaks for Incus and its capabilities regarding isolation and limits.

    In case you are interested, Zabbly has some interesting behind-the-scenes videos on YouTube (not affiliated).

    • maple3142 17 hours ago

      If being used in a CTF counts, then running the latest Docker with no extra privileges and a non-root user on a reasonably up-to-date kernel meets the definition of secure, I think. At least from what I have seen, that kind of infrastructure is pretty common in CTFs.

  • goku12 a day ago

    Incus supports QEMU/KVM VMs. And KVM is arguably a Type 1 hypervisor since it's part of the Linux kernel. So I guess it qualifies?

pm2222 a day ago

Should lxc users migrate to Incus?

  • goku12 a day ago

    Short answer: no. Long answer: it depends on what you use lxc for.

    Incus is not a replacement for lxc. It's an alternative to LXD (the LXD project is still active). Both Incus and LXD are built upon liblxc (the library version of lxc) and provide a higher-level user interface than lxc (e.g. projects, cloud-init support, etc.). However, lxc gives you fine-grained control over container options (this is sort of like the relationship between Flatpak and Bubblewrap).

    So, if you don't need the fine-grained control of lxc, Incus may be a more ergonomic solution.

    PS: Confusingly enough, LXD's CLI is also named lxc.

  • Levitating a day ago

    An LXD user should.

    • jenders a day ago

      LXD is actually pronounced “lex-d” by the community and similarly, LXC is “lex-c”.

oulipo a day ago

Is there some kind of Terraform/Pulumi integration to make it easy to deploy stuff to some VM running Incus? Or am I missing the point of what Incus is for?

  • belthesar a day ago

    There is a Terraform provider that is actively maintained, in addition to Ansible integration. https://linuxcontainers.org/incus/docs/main/third_party/

    I'm a Pulumi user myself, and I haven't seen a Pulumi provider for Incus yet. Once I get further into my Incus experiments, if someone hasn't made an Incus provider yet, I'll probably go through the TF provider conversion process.

    • joeduffy an hour ago

      Note that you can now use TF providers "on the fly" from Pulumi: https://www.pulumi.com/blog/any-terraform-provider. No provider bridging/conversion necessary.

      I just tried and it seems to have worked (though I haven't tested any specific resources yet):

      $ pulumi package add terraform-provider lxc/incus

    • oulipo 13 hours ago

      That would be quite useful indeed!

  • goku12 a day ago

    Incus is like cloud management software, especially in cluster mode. It has a management API, like many cloud services do. So, yes, there's a Terraform provider for Incus, which can be used to configure and provision instances. Guest setup can be managed using cloud-init; Ansible is an alternative option for this.
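
    For instance, a sketch of passing cloud-init data at launch time (image alias, instance name and user-data.yml are placeholders; the cloud-init.user-data config key is per the current Incus docs, so verify it for your version):

      $ incus launch images:ubuntu/24.04 web1 --config=cloud-init.user-data="$(cat user-data.yml)"

    where user-data.yml is a normal #cloud-config file (e.g. installing packages and adding a user).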

    • oulipo 13 hours ago

      Very clear! And would it make sense to run it on a GCP VM and use it as a "nicer docker-compose"? Or would that entirely miss the point?

      • goku12 11 hours ago

        Incus is sort of like GCP's own resource management software, though GCP does a lot more than what Incus does. So you'd often be using Incus like a self-hosted alternative to GCP.

        Meanwhile, running Incus inside GCP VM(s) should be possible, though I haven't tried it and can't confirm it. Incus can manage system containers - containers that behave like VMs running full distros, except for the kernel.

        But keep in mind that Incus is more like Docker than docker-compose. You will need a different tool to provision and configure all those containers over Incus's API (docker-compose does this for application containers over the Docker API). As mentioned before, that could be Terraform/OpenTofu, cloud-init and Ansible. Or you could even write a script to do it over the API. I have done this using Python.
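
        For a feel of that API: it's a plain REST API that is also reachable over a local Unix socket (the socket path below is an assumption based on a default install; adjust for your packaging):

          $ curl -s --unix-socket /var/lib/incus/unix.socket http://incus/1.0/instances

        The same endpoints are what the Terraform provider, or a custom Python script, talk to.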