• Jakeroxs@sh.itjust.works · 2 months ago

    I have been in for a couple of months now: a Proxmox cluster with two machines.

    1. Self-built PC that was my daily driver for a while: RTX 3080 Ti, 32GB RAM, Ryzen 7 3700X. Runs the heavy stuff like a Mac VM, LLM stuff, and game servers.
    2. Random open-box mini PC I picked up on a whim from Best Buy, an Intel 300 (didn’t even know these existed…) with iGPU and 32GB of RAM. Hosts my DHCP/DNS, my main Traefik instance, and all the light services like Dozzle and such.

    Works out nicely, as I crash the first one too often and the DHCP going down was unacceptable. I wish I’d gotten a slightly better CPU for the mini PC, but meh, maybe I can upgrade it later.

  • truxnell@infosec.pub · 2 months ago

    I have literally been on this exact journey. Mind you, I’m on NixOS across two boxes, so not quite a RasPi… Perhaps my downsizing is not yet complete.

  • Buelldozer@lemmy.today · 2 months ago

    I’ve been enjoying Jeff Geerling’s ongoing experiments with his 10" Raspberry Pi mini rack.

    It doesn’t work for me since all of my network equipment is 19" and there’s no point in having two racks, but having a 10" standard is still a great idea!

  • RatzChatsubo@lemm.ee · 2 months ago

    I’m actually just about to start up my server again on an RPi 4. It’s been like 5 years since I’ve used it. Is DietPi still the best way to go about making a Plex media server / bare-bones desktop environment that I can access with NoMachine?

    I swear NoMachine just broke my autoboot setup one day and I never got around to fixing it. What do you nerds think?

    I’m not interested in video streaming, just hosting my music collection and audiobooks. I remember FTP being a pain for transferring music files from my phone.

  • utopiah@lemmy.world · 2 months ago

    Same; in fact, you can also go down in RPi models. Basically, the more you know, the less you need, e.g. going from Plex to Kodi to minidlna…

  • Diurnambule@jlai.lu · 2 months ago

    So close. Started on a Raspberry Pi. Went for a cluster with Docker Swarm. Finished with a NAS and a 10-year-old gaming computer as a media center. (It was the electricity bill which made me stop the cluster.)

  • Badabinski@kbin.earth · 2 months ago

    I spend all day at work exploring the inside of the k8s sausage factory so I’m inured to the horrors and can fix basically anything that breaks. The way k8s handles ingress and service discovery makes it absolutely worth it to me. The fact that I can create an HTTPProxy and have external-dns automagically expose it via DNS is really nice. I never have to worry about port conflicts, and I can upgrade my shit whenever with no (or minimal) downtime, which is nice for smart home stuff. Most of what I run tends to be singleton statefulsets or single-leader deployments managed with leases, and I only do horizontal for minimal HA, not at all for perf. If something gives me more trouble running in HA than it does in singleton mode then it’s being run as a singleton.
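
    A minimal sketch of what that looks like, assuming Contour’s HTTPProxy CRD and a hypothetical grafana Service (external-dns watches the fqdn and publishes the record on its own):

    ```yaml
    apiVersion: projectcontour.io/v1
    kind: HTTPProxy
    metadata:
      name: grafana                      # hypothetical app
      namespace: monitoring
    spec:
      virtualhost:
        fqdn: grafana.home.example.com   # external-dns picks this up and creates the DNS record
      routes:
        - services:
            - name: grafana              # backing Service
              port: 3000
    ```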

    k8s is a complex system with priorities that diverge from what is ideal for usage at home, but it can be really nice. There are certain things that just get their own VM (Home Assistant is a big one) because they don’t containerize/k8serize well though.

    • Justin@lemmy.jlh.name · 2 months ago

      Yup, same here. Being able to skip all the networking and DNS hassle and have it automated for you is so nice.

      Having databases fully managed with CNPG (CloudNativePG) is AMAZING.
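
      (For the curious, a CNPG cluster really is this small; a sketch with placeholder name and size:)

      ```yaml
      apiVersion: postgresql.cnpg.io/v1
      kind: Cluster
      metadata:
        name: app-db        # hypothetical database cluster
      spec:
        instances: 2        # primary plus one replica; failover is handled by the operator
        storage:
          size: 10Gi
      ```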

      I just have Renovate set to auto-update my Argo CD apps, so everything just runs itself with zero issues. The only exception is the occasional stateful container with breaking changes in a minor version.
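
      (Roughly this kind of renovate.json, as a sketch; the paths and rules are placeholders, and Renovate’s argocd manager needs an explicit fileMatch:)

      ```json
      {
        "$schema": "https://docs.renovatebot.com/renovate-schema.json",
        "extends": ["config:recommended"],
        "argocd": {
          "fileMatch": ["apps/.+\\.yaml$"]
        },
        "packageRules": [
          { "matchUpdateTypes": ["minor", "patch"], "automerge": true }
        ]
      }
      ```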

      If something OOMs or crashes, it all just self-heals; I never need to worry about it. I don’t have any HPAs (nor cluster scaling, obviously), though I do have some HA stuff set up just to reduce restart times and help keep the databases happy.

      The main issue with Kubernetes is that a lot of self-hosted software makes bad design decisions that actively make Kubernetes harder, e.g. SQLite instead of Postgres, and secrets stored in large config files. The other big issue is that documentation only covers Docker Compose and not Kubernetes 90% of the time, so you have to know how to write YAML and read documentation.

      Moving my HASS from a StatefulSet to KubeVirt sounds tempting. Did you get better reliability/ergonomics? I have been looking into moving my HASS automation to Node-RED, so that I can GitOps it all, since Node-RED supports git syncing.

  • targetx@programming.dev · 2 months ago

    Do you run Docker in a VM or on the host node? I’m running a lot of LXC at home on Proxmox but sometimes it’d be nice to run Docker stuff easily as well.

    • wildbus8979@sh.itjust.works · 2 months ago

      Just create an LXC container to run your Docker containers; all you have to do is make sure you run the LXC as privileged and enable nesting.
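
      On the Proxmox host that’s a one-liner (a sketch, assuming a hypothetical container ID of 101; add keyctl=1 as well if you ever try the unprivileged route):

      ```
      pct set 101 --features nesting=1
      ```

      This just writes features: nesting=1 into /etc/pve/lxc/101.conf.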

      • Dran@lemmy.world · 2 months ago

        There are security, performance, and capability concerns with that approach, AppArmor on the first-layer LXC probably being the most annoying.

        If you want to isolate your Docker sandbox from your main host, you should use a VM, not a container.

        • wildbus8979@sh.itjust.works · 2 months ago

          OP’s already running LXC on the host, so… Namespaces are namespaces…

          I don’t see what performance issues there would be with that.

      • targetx@programming.dev · 2 months ago

        Thanks for the tip; for some reason I assumed I couldn’t run Docker in LXC but never actually tried… I prefer to avoid the overhead of a full VM, and I find LXCs way easier to manage from the host system. Guess I’ll have something to test this weekend. Cheers!

  • teuto@lemmy.teuto.icu · 2 months ago

    See, I don’t pay for the electric bill to keep my collection of old enterprise equipment running because I need the performance. I keep them running because I have no resistance to the power of blinkenlights.

    • highball@lemmy.world · 2 months ago

      I use a DO droplet with Docker Compose. Filthy dev here too. Much cheaper over time than buying and hosting home server equipment.

  • kekmacska@lemmy.zip · 2 months ago

    I think the best choice is a cheap used PC, laptop, or server. Reduces electric waste. I also host my own server on a 19-year-old Dell Inspiron 1300.

      • Xanza@lemm.ee · 2 months ago

        This is why rack mounts were made. Hell, I’ve seen a lot of custom builds where people have mapped out the server on their wall and it takes up no floor space. Something like this: https://i.xno.dev/kG9Wx.jpg

        • merc@sh.itjust.works · 2 months ago

          A rack takes up as much space as a fridge though, and mounting things to the wall is risky. You better make sure you really got it into the stud in the wall. Also, don’t do that if you live in an earthquake zone.

          • Xanza@lemm.ee · 2 months ago

            Only full size racks. You don’t need to buy a full size rack. You can get very small racks these days that are smaller than a little chest cooler. And why are you under the impression that you have to mount it on the wall?

    • Valmond@lemmy.world · 2 months ago

      ThinkCentre Tiny here.

      Low consumption, two DDR4 slots, one 2.5" slot, and one NVMe slot! Lots of ports on the outside.

      Cost less used than a new Pi, too. They have gotten too expensive IMO.

        • Valmond@lemmy.world · 2 months ago

          Yesss, I have an M910q as my main, with (IIRC) a 6500T, 4 cores.

          And an M710 with the CD contraption for backup (the CD is just for fun, the PC is the backup) :-p

        • curbstickle@lemmy.dbzer0.com · 2 months ago

          Just add Dell Micro to the list and you have what I run: 9 Tiny/Mini/Micro PCs run everything here. Though I may move a few things to a VPS soon.

          Edit:

          • (4) Dell Micros
          • (3) Lenovo Tinys
          • (2) HP Minis
          • Valmond@lemmy.world · 2 months ago

            How would you rank them, if you think you could/would/should? I’m so impressed with the ThinkCentre Tiny that I wonder if it can get better at all.

            • curbstickle@lemmy.dbzer0.com · 2 months ago

              Mostly equivalent.

              I’ve had a slightly higher failure rate with the Dells, but the sample size is too small to be relevant.

              I’ve found the Lenovos outfitted with a dGPU more often than the others, which comes in handy in some scenarios, but I think that comes down more to which enterprises tend to purchase Lenovos and want the dGPU; it’s just what I’ve come across in the used/decommissioned territory.

              Short answer - they are basically all the same.

    • null@slrpnk.net · 2 months ago

      “Reduces electric waste”

      A lot of older equipment actually wastes more electricity.

      But it will cut down on electronic waste.

      • Possibly linux@lemmy.zip · 2 months ago

        Not necessarily.

        An i5-6500 has a TDP of 65W, while an i5-13600K has a 125W base power (181W max turbo).

        If you get something modern that matches the performance of an i5-6500, it will be a little more efficient. The key is that more performance uses more power.

        • Blue_Morpho@lemmy.world · 2 months ago

          “13600K”

          If you buy a high-wattage CPU, that’s on you. The Ryzen 7000 series also came out in 2022 and has plenty of 65-watt CPUs that can outperform an i5-6500.

        • Trainguyrom@reddthat.com · 2 months ago

          TDP ≠ power draw. TDP is literally the Thermal Design Power, i.e. the amount of thermal load a system designer should account for. Yes, it can give you a quick-and-dirty idea of maximum power draw, but real-world power draw can be entirely different because that depends on load.

          For example, if your i5-6500 runs at 50-70% load while the newer processor only runs at 20-30% load due to IPC and instruction-set improvements, the newer processor might very well use less power over the course of a month than the older one, despite being capable of drawing more.
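
          Back-of-the-envelope with made-up load figures, treating average draw as roughly proportional to load purely for illustration:

          ```
          i5-6500:  65 W × 0.60 load ≈ 39 W average  →  ~28 kWh/month
          newer:   125 W × 0.25 load ≈ 31 W average  →  ~23 kWh/month
          ```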

          You’re also comparing a 4c/4t part to one with 14c/20t, not to mention comparing a mass-market part to a gaming-specific part. The 6600K (which targeted the same market segment as the 13600K) has a 91W TDP. Go compare your 6500 to the i5-13500, except again, it’s still comparing apples to oranges when you just look at raw specs, and TDP ≠ real-world power consumption.

    • SkyNTP@lemmy.ml · 2 months ago

      Yes, but also no. Older hardware is less power-efficient, which is a cost in its own right, but it also decreases backup runtime during a power failure and generates more noise and heat. It also lacks modern accelerated computing, like AI cores or hardware video encoders/decoders, if you are running those apps. Not to mention the lack of NVMe support or a good NIC.

      For me a good compromise is to recycle hardware upgrades every 4-5 years. A 19-year-old computer? I would not bother.

      • Tja@programming.dev · 2 months ago

        I have a Lenovo M710q with an i3-7100T that uses 3W at idle. I’m not mining Bitcoin; the server is idle 23h a day, if not more.

      • kekmacska@lemmy.zip · 2 months ago

        My 19-year-old laptop runs the web server just fine and only needs 450MB of RAM, even with many security modules. It produces minimal noise.

    • Zink@programming.dev · 2 months ago

      Yeah what I’ve always done is use the previous gaming/workstation PC as a server.

      I just finished moving my basic stuff over to newer old hardware that’s only 6-7 years old, to have lots of room to grow and add to it. It’s a 9700K (8c/8t) with 32GB of RAM and even a GTX 1080 for the occasional video transcode. It’s obviously overkill right now, but I plan to make it last a very long time.

      • Xanza@lemm.ee · 2 months ago

        Docker is so bad. I don’t think a lot of you young bloods understand that. The ecosystem is incredibly fragmented. Tools like Portainer are great, but they’re a super pain in the ass to use with tools/software that ship a Dockerfile instead of a compose file. There’s no interoperability between the two, which makes certain projects insurmountably time-consuming and stupid to deal with, because they’re built for one specific build environment, which is just antithetical to good computing.

        Like right now, I have Portainer up. I want to test out Coolify. I check out the templates? Damn, not there. Now I gotta add my own template manually. Ok, cool. Halfway done. Oops, it expects a docker-compose.yml, and the exatorrent repository only has a Dockerfile. Damn, now I have to make a custom template. Oh well, not a big deal. Plop in the Dockerfile from the repository and click “deploy.” OOPS! ERROR: “failed to deploy a stack: service “soketi” has neither an image nor a build context specified: invalid compose project.” Well fuck… Ok, whatever. Not the biggest of deals. Let me search for a “soketi” image on Docker Hub. Well fuck. There are 3 images, none of which have been updated in several years. Awesome. Which one do I need? The echo-server? The network-watcher? PWS?

        Like, do you see the issue here? There’s nothing about Docker that’s straightforward at all. It fails in so many aspects that it’s insane it’s so popular.
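
        (For what it’s worth, the usual workaround for a Dockerfile-only repo is wrapping it in a minimal compose file with a build context; a sketch, with hypothetical ports:)

        ```yaml
        # docker-compose.yml placed next to the repo's Dockerfile
        services:
          exatorrent:
            build:
              context: .             # directory containing the Dockerfile
              dockerfile: Dockerfile
            ports:
              - "5000:5000"          # placeholder; use the port the app actually documents
        ```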

        • QuarterSwede@lemmy.world · 2 months ago

          Maybe I’m an idiot (high possibility), but I couldn’t even get the Homebridge Docker instance to work. However, it had zero problems running on the hypervisor. You aren’t alone.

          • Xanza@lemm.ee · 2 months ago

            I’m coming to appreciate Hyper-V more and more, to be honest. It’s a very mature virtualization environment. The only issue I have with it is the inability to do GPU passthrough. Once they figure that one out, I probably won’t bother with anything else.