• 2 Posts
  • 110 Comments
Joined 2 years ago
Cake day: August 10, 2023


  • Which means my distro-morphing idea should work in theory with OpenStack

    I also don’t recommend doing a manual install though, as it’s extremely complex compared to automated deployment solutions like kolla-ansible (OpenStack in Docker containers), openstack-ansible (host OS/LXC containers), or openstack-helm/genestack/atmosphere (OpenStack on Kubernetes). They make the install much simpler and less time-consuming, while still being intensely configurable.
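
    For a sense of scale, an all-in-one kolla-ansible deployment is roughly this (a sketch following the upstream quickstart; paths assume a system-wide pip install, so treat it as illustrative rather than copy-paste):

    pip install kolla-ansible
    cp -r /usr/local/share/kolla-ansible/etc_examples/kolla /etc/kolla
    cp /usr/local/share/kolla-ansible/ansible/inventory/all-in-one .
    kolla-genpwd                                     # generate service passwords
    # edit /etc/kolla/globals.yml (network interface, VIP address, etc.)
    kolla-ansible -i ./all-in-one bootstrap-servers  # prepare the host
    kolla-ansible -i ./all-in-one prechecks          # sanity checks
    kolla-ansible -i ./all-in-one deploy             # pull and start the containers
    kolla-ansible -i ./all-in-one post-deploy        # write admin credentials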


  • Personally, I think Proxmox is somewhat insecure too.

    Proxmox is unique among similar projects, in that it’s much more hacky, and much of the stack is custom rather than standard. For example: for networking, they maintain a fork of an older Linux network configuration stack, called ifupdown2, whereas similar projects, like OpenStack or Incus, use either the standard Linux kernel networking or a project called Open vSwitch.

    I think Proxmox is definitely secure enough, but I don’t know if I would really trust it for higher-value use cases, due to some of their stack being custom rather than standard and maintained by the wider community.

    If I end up wanting to run Proxmox, I’ll install Debian, distro-morph it to Kicksecure

    If you’re interested in deploying a hypervisor on top of an existing operating system, I recommend looking into Incus or OpenStack. They have packages/deployments that can be done on Debian or Red Hat distros, and I would argue that they are designed in a more secure manner (since they include multi-tenancy) than Proxmox. In addition to that, they also use standard tooling for networking: for example, both can use Linux bridges (in-kernel networking) for networking operations.

    I would trust Openstack the most when it comes to security, because it is designed to be used as a public cloud, like having your own AWS, and it is deployed with components publicly accessible in the real world.
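
    To give a sense of how lightweight the Incus path is, here is a minimal sketch of getting a VM running on a stock Debian host (assumes Incus is available from the distro or Zabbly repos; the image alias and instance name are illustrative):

    apt install incus                          # Debian 13+; older releases need the Zabbly repo
    incus admin init --minimal                 # non-interactive setup with sane defaults
    incus launch images:debian/12 testvm --vm  # boot a KVM-backed virtual machine
    incus list                                 # show the instance and its address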



  • This is moving the goalposts. You went from “ssh is not fine to expose” to “VPNs add security”. While the second is true, it’s not what was being argued.

    Never expose your SSH port on the public web,

    Linux was designed as a multi-user system. My college, Cal State Northridge, has an ssh server you can connect to and put your site up on. Many colleges continue to have a similar setup, and by putting files in your home directory you can have a website at no cost.

    There are plenty of use cases that involve exposing ssh to the public internet.

    And when it comes to raw vulnerabilities, ssh has had vastly fewer than stuff like Apache httpd, which powers WordPress sites everywhere but has had so many path traversal and RCE vulns over the years.
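
    If you do expose it, the usual hardening is to turn off password logins entirely. A minimal sketch, assuming a modern OpenSSH that reads drop-ins from sshd_config.d:

    # /etc/ssh/sshd_config.d/50-hardening.conf
    PasswordAuthentication no        # keys only; password brute-forcing becomes pointless
    KbdInteractiveAuthentication no
    PermitRootLogin no               # log in as a normal user, escalate afterwards
    # then reload sshd (the service name varies by distro)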


  • Firstly, Xen is considered secure by Qubes — but that’s mainly the security of the hypervisor and virtualization system itself. They make a very compelling argument that escaping a Xen-based virtual machine is going to be more difficult than escaping a KVM virtual machine.

    But threat model matters a lot. Qubes aims to be the most secure OS ever, for use cases like high profile journalists or other people who absolutely need security, because they will literally get killed without it.

    Amazon moved to KVM because, despite the security trade-offs, it’s “good enough” for their use case, and KVM is easier to manage because it’s in the Linux kernel itself, meaning you get it if you install Linux on a machine.

    In addition to that, security is about more than just the hypervisor. You noted that Proxmox is Debian, and XCP-NG is CentOS or a RHEL rebuild similar to Rocky/Alma, I think. I’ll get to this later.

    Xen (and by extension XCP-NG) was better known for security whilst KVM (and thus Proxmox)

    I did some research on this, and was planning to make a blogpost, but never got around to it. I still have the draft saved, though.

    | Name | Summary | Full Article | Notes |
    | --- | --- | --- | --- |
    | Performance Evaluation and Comparison of Hypervisors in a Multi-Cloud Environment | Compares WSL (kind of Hyper-V), VirtualBox, and VMWare Workstation. | springer.com, html | Not an honest comparison, since WSL is likely using inferior drivers for filesystem access, to promote integration with the host. |
    | Performance Overhead Among Three Hypervisors: An Experimental Study using Hadoop Benchmarks | Compares Xen, KVM, and an unnamed commercial hypervisor, simply referred to as CVM. | pdf | |
    | Hypervisors Comparison and Their Performance Testing (2018) | Compares Hyper-V, XenServer, and vSphere. | springer.com, html | |
    | Performance comparison between hypervisor- and container-based virtualizations for cloud users (2017) | Compares Xen, native, and Docker. Docker and native have negligible performance differences. | ieee, html | |
    | Hypervisors vs. Lightweight Virtualization: A Performance Comparison (2015) | Docker vs LXC vs native vs KVM. Containers have near-identical performance; KVM is only slightly slower. | ieee, html | |
    | A component-based performance comparison of four hypervisors (2015) | Hyper-V vs KVM vs vSphere vs Xen. | ieee, html | |
    | Virtualization Costs: Benchmarking Containers and Virtual Machines Against Bare-Metal (2021) | VMWare Workstation vs KVM vs Xen. | springer, html | Most rigorous and in-depth on the list. Workstation, not ESXi, is tested. |

    The short version is: it depends, and they can fluctuate slightly on certain tasks, but they are mostly the same in performance.

    default PROXMOX and XCP-NG installations.

    What do you mean by hardening? Are you talking about hardening the management operating system (Proxmox’s Debian or XCP’s RHEL-like), or the hypervisor itself?

    I agree with the other poster about CIS hardening and generally hardening the base operating system used. But I will note that XCP-NG is designed more as an “appliance” and you’re not really supposed to touch it. I wouldn’t be surprised if it’s immutable nowadays.

    For the hypervisor itself, it depends on how secure you want things, but I’ve heard that at Microsoft Azure datacenters, they disable hyperthreading because it becomes a security risk. In fact, several vulnerabilities in the Spectre/Meltdown family (like L1TF and MDS) can be mitigated by disabling hyperthreading. Of course, there are other ways to mitigate those vulnerabilities, but by disabling hyperthreading, you can eliminate that entire class of vulnerabilities — at the cost of performance.
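
    On Linux this is easy to check and toggle at runtime (a sketch; the sysfs knob exists on any reasonably recent kernel and needs root):

    cat /sys/devices/system/cpu/smt/active           # 1 = hyperthreading is on
    echo off > /sys/devices/system/cpu/smt/control   # disable until next reboot
    # or boot with the "nosmt" kernel parameter to make it permanent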



  • I despise the way Canonical pretends discourse forum posts by their team members* are documentation.

    I’ve noticed they have been a bit better lately, and have migrated many of the posts to their documentation, but it seems they are doing it again.

    As this is developed, we will update this post to link to the new documentation and feature release notes.

    Pro tip: You could have just made the documentation directly, with the content of this post. Or maybe a blog post. But please stop with the forum posts. They are very confusing for people not used to these… unique locations.

    *Not that people can easily find this out, since there’s no indication that the forum post is anything other than just another post by a rando. Actually, I’m just guessing here, based on the quoted reply; for all I know this could be a post by someone unrelated to Canonical. The account is 3 months old, and the post itself is identical to a regular forum post from a regular forum member…


  • It actually is a language issue.

    Although Rust can dynamically link with C/C++ libraries, it cannot dynamically link with other Rust libraries. Instead, they are statically compiled into the binary itself.

    But the GPL interacts differently with static linking than with dynamic linking. If you make a static binary with a GPL library or GPL code, your program must be GPL. If you dynamically link a GPL library, your program doesn’t have to be GPL. It’s partially because of this that the vast majority of Rust programs and libraries are permissively licensed — a GPL-licensed Rust library would see much less use than a GPL-licensed C library, because corporations wouldn’t be able to build proprietary code on top of it — not that I care about that, but the library makers often do.

    https://en.wikipedia.org/wiki/GNU_General_Public_License#Libraries — it’s complicated.

    EDIT: Nvm I’m wrong. Rust does allow dynamic linking

    Hmmmm. But it seems that people really like to compile static Rust binaries anyway, due to their portability across Linux distros.

    EDIT2: Upon further research, it seems that Rust’s dynamic linking implementation lacks a “stable ABI”, as compared to other languages such as Swift or C. So I guess we are back to “it is a language issue”. Thankfully, this seems easier to fix than “Rust doesn’t support dynamic linking at all”.
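
    For the curious, the switch is a one-liner in Cargo.toml, and the stable-ABI caveat is exactly the dylib/cdylib split (a sketch; the crate is hypothetical):

    # Cargo.toml of a hypothetical library crate
    [lib]
    name = "mylib"
    crate-type = ["dylib"]    # dynamically linked, but with Rust's unstable ABI:
                              # consumers must be built with the exact same compiler
    # crate-type = ["cdylib"] # stable C ABI instead, but only extern "C" items survive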




  • [moonpie@osiris ~]$ du -h $(which filelight)
    316K    /usr/bin/filelight
    

    K = kibibytes (du -h reports powers of 1024).

    [moonpie@osiris ~]$ pacman -Ql filelight | awk '{print $2}' | xargs du | awk '{print $1}' | paste -sd+ | bc
    45347740
    

    45347740 bytes is about 43 mebibytes. That is to say, the entire install of filelight is only around 43 megabytes.

    KDE packages have many shared dependencies, which lets the individual packages be extremely tiny. By sharing a ton of code via libraries, they save a lot of space.
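
    As a sanity check, pacman can also report a package’s installed size directly, without summing file-by-file (like the pipeline above, this counts only the package itself, not its dependencies):

    pacman -Qi filelight | grep 'Installed Size'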


  • I always wonder how Docker works on macOS with a more UNIX-style kernel than Linux

    It doesn’t. macOS also uses a virtual machine for Docker.

    but is it really that hard to do Docker/OCI out of Linux?

    Yes. The runtimes containers use depend on cgroups, seccomp, namespaces, and a few other Linux-kernel-specific features.
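
    You can poke at those primitives directly from a shell; this is essentially what a container runtime does under the hood (a sketch using util-linux’s unshare):

    # enter fresh user, PID, and mount namespaces, remounting /proc so the
    # command sees itself as PID 1 with no other processes visible
    unshare --user --map-root-user --pid --fork --mount-proc ps aux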

    You could implement a Wine-like project to run the Linux binaries that containers contain, and then add some sandboxing to make it a proper container, but no: virtual machines, or virtual machine container runtimes*, are easier.

    Linuxulator, a FreeBSD project, does the above.

    https://people.freebsd.org/~dch/posts/2024-12-04-freebsd-containers/

    *These are much lighter than a normal VM; I’ll need to check if this is what macOS does. I know for a fact Docker on Windows uses a full Linux VM though.




  • No. Netplan uses its own YAML format, which people would have to learn and use. I don’t want to do that; I would rather just configure my existing NetworkManager setup, rather than learning another abstraction layer over what is already an abstraction layer.
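
    For reference, this is roughly what that extra layer looks like: a minimal Netplan file that just hands an interface to NetworkManager with DHCP (the file and interface names are illustrative):

    # /etc/netplan/01-example.yaml
    network:
      version: 2
      renderer: NetworkManager   # Netplan generates NetworkManager config from this
      ethernets:
        enp1s0:
          dhcp4: true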

    I understand that Cockpit (and similar tools) are “the whole kitchen sink” of utilities, and it may seem like they come with more than you need. But that doesn’t change the fact that they get the job done, and in some use cases, are better than dedicated tools.





  • Here’s my commentary on the options you listed in the image:

    Anaconda: They changed the licensing so that it’s not really fully FOSS, as the repos have restrictions on them. There are also other issues, like the dark pattern that is their download page.

    But, forgetting about the licensing or problematic company practices: the software itself is trash. Worst thing I’ve ever used. It’s sooooo slow to install packages when it’s doing the “solver” thing. You can use something faster like mamba or miniconda, but then you still have to deal with poor package availability, as the anaconda repos don’t have everything, and much of what they have is often too old.

    Docker Desktop: It’s proprietary. I mean, you can use it, but you seem to be interested in open source stuff. Also see the caveats to Podman Desktop below.

    Podman Desktop: Technically this will work. But Podman Desktop is really designed more for development of containerized applications, rather than developing in containers (see the sketch after this list for what the latter usually looks like).

    Nix: Nix doesn’t work on Windows, so you would have to use WSL or something like that.

    Fedora VM: I recommend Enlightenment as a desktop environment. Very small, but also modern and clean-looking. You’ll have to configure it to be a bit more similar to Windows, but it’s a lot more intuitive to use than i3.
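
    On “developing in containers”: the common approach is a devcontainer.json that editors like VS Code pick up, so every contributor gets the same toolchain. A minimal sketch (the image and its tag are assumptions, not a vetted recommendation):

    // .devcontainer/devcontainer.json
    {
      "name": "game-dev",
      // a prebuilt image with the .NET SDK; pin whatever version the team needs
      "image": "mcr.microsoft.com/devcontainers/dotnet:8.0"
    }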

    There are some other caveats to your environment. You mention needing “the right .Net Sdks version”; however, the best extensions for C# development are proprietary and cannot be freely used in the fully FOSS builds of VS Code.

    it also requires users to learn i3wm and possibly use the command line, which may not be ideal for everyone.

    Yeah, don’t do this. I agree with @[email protected], work with them, rather than forcing them to work with you. Collaboration goes both ways.

    Another recommendation I have is to just see how people in a similar circumstance do what you do. There are plenty of people who do software and game development on Twitch, and you can just go on their streams and ask how they collaborate. One method I saw is using Trello, a task management tool, where artists upload models as deliverables. They already have their own workflow, which they probably work efficiently with. And it’s not really the job of an artist to integrate models and art into the game; that’s the programmer’s job.