why am i here if not to babble about things i'm doing?
right now i'm working on a rebuild of a home server: an HP DL580 Gen8. at the moment it's just single-tenant and has a couple of 3060s in it for AI shenanigans.
currently i'm taking a backup to an external drive, which is going slowly because it's limited to USB 2.0 speeds. but it'll be done eventually.
once it's backed up, i'll wipe it and install Proxmox and actually slice things into VMs properly.
at some point i should switch my internet over to the new fiber service that's available in my area so i can have a decent upstream. then i can do Plex and all that jazz.
That sounds cool! I'm curious what kind of AI shenanigans?
I also ideally (when I have the money which idk when) would like to have a home server too. Not quite versed in servers.
for AI shenanigans: i do text and image generation. ultimately i want to set up some autonomous AI agents to Do Stuff for me. as for what kind of stuff, i'm not sure. the idea of having automated stock traders running on my server amuses me. so maybe something like that, but with very small amounts of money because i'm not trying to go broke goofing around.
i should note i don't make anything to sell using AI tools. i have strong feelings about the ethics of such things. i think using them to make code and perform tasks is OK-ish, and the carbon footprint of doing it on my own hardware is much lower than ChatGPT (which is astronomically awful in that regard.) i would love to see it get more efficient, though. i would also love to see techbros stop thinking it can be used to eliminate all creative jobs, because it can't meaningfully do that (and nobody should even try.)
ok, monologue over.
oh yeah no worries. I was curious because I do want a computer strong enough to dabble in making my own AI stuff. I have done image generation for the purpose of brainstorming ideas or creating a reference, but of course I hire artists to do the real thing. I am interested in using it for coding things too. I think you may recall from a short while back my desire to automate editing for my podcast. I put that on the backburner for a bit but I do want to get back to that.
taking a crack at the server rebuild today. (not this server, my home one.)
backup is good. trying to get the top disk backplane working.
visual aid:
[attachment=22]
that was an incredible pain in the ass!!!
ok, to sum up:
- i had to get the upper backplane of disks to work. they had never worked, and i believe it's because they were never configured at the machine level (not OS level)
- so i needed the Smart Storage Administrator app, which was no longer on the server (for some reason)
- this was an astonishing pain in the ass to get going, but ultimately i had to get a several-years-old version of an ISO, extract a specific OS image from that, then write that image to a thumb drive to make it bootable
- once i did all that, i was able to see all the drives i added! woohoo!
- it took 5 fucking hours to get through this because of constant dead ends
i've now got Proxmox installed. i am working on configuring that. this is a piece of cake by comparison, though i had some network hiccups in getting it to work. i have a /24 where i let the router have the lower half for DHCP and the upper half always belongs to "me" for lab purposes, so i'll just give the upper half to Proxmox.
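the split works out cleanly since a /24 halves into two /25s. here's a quick sketch with Python's ipaddress module (192.168.1.0/24 is just a stand-in for my actual range):

```python
import ipaddress

lan = ipaddress.ip_network("192.168.1.0/24")  # stand-in for my actual LAN
lower, upper = lan.subnets(prefixlen_diff=1)  # split into two /25 halves

print(lower)  # 192.168.1.0/25   -> router keeps this for DHCP
print(upper)  # 192.168.1.128/25 -> mine, for Proxmox and lab stuff
```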
right now i'm just letting Proxmox update. next step after that is getting the AI tools working because that was the main thing i was doing before. i have everything backed up so it shouldn't be an issue to move it over, but setting up the OS side of things will no doubt be annoying.
i've seriously had it with how many times i've had to reboot this thing today

eventually figured out the issue at like 12:30am... there's a field during proxmox setup to specify its IP. there's also a CIDR field with it. i put /32 because why would i want it to have more than one IP?
well, turns out the CIDR is the subnet mask: it tells the box which addresses count as its local network. so by putting /32, i was telling it that it's alone on the network and can't reach anything else, not even the router. d'oh. changed it to a /24 and of course it's fine. might go as broad as a /22 but i don't know if i'll need that many addresses.
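for anyone else who trips on this, the difference is easy to see with Python's ipaddress module (the addresses here are hypothetical):

```python
import ipaddress

# with /32, the box's "network" contains exactly one address: itself
alone = ipaddress.ip_interface("192.168.1.200/32")
print(alone.network.num_addresses)  # 1

# with /24, the router (and everything else on the LAN) is on-network
fixed = ipaddress.ip_interface("192.168.1.200/24")
print(ipaddress.ip_address("192.168.1.1") in fixed.network)  # True
```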
i was trying to get a vGPU setup working, as well, but i think that may not be worth the trouble.
I hadn't posted about this for a couple days because I didn't have time to mess with it further. But at this point I've got Proxmox up and running. I set up a storage pool and now I'm going through the LXC script installers:
https://community-scripts.github.io/ProxmoxVE/scripts
These are little scripts you run on a Proxmox server to install specific things, such as particular types of server apps (Plex, etc.). They can also install what you might call turnkey appliance managers, like CasaOS, which run as their own sort of supervisory platform for installing various pre-packaged apps.
I'm not exposing any of this to the Internet and probably won't anytime soon, not until I'm comfortable with how it's all working. Ideally I wire this up to dynamic DNS and then have certbot ensuring everything gets certificates. Again, stretch goals.
Some of these tools may ultimately interact with the forum so it's all of a piece.
The learning experience continues: what I wanted to do, I can't, so I will have to compromise.
Long and short, I wanted to be able to share my two GPUs with VMs on the server. There are two ways to do this:
- A vGPU setup which virtualizes them so multiple VMs can share one card -- NVIDIA doesn't support vGPU on consumer cards like my 3060s, and the community workarounds are extremely finicky to get working in any case.
- Direct passthrough of a GPU to a VM -- this is doable but means the GPU is inaccessible to the host machine, and it requires a bunch of annoying extra setup (IOMMU groups, vfio binding) anyhow.
After learning the hard way the limitations of both, I think I will just run whatever needs the GPU (AI models, mainly) directly on the main host. I can use VMs to talk to the APIs those serve, so that'll be fine. But trying to use the GPUs within VMs is just too much damn hassle.
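The host-serves, VMs-call split is simple in practice: anything in a VM just POSTs to the model server's LAN address. A minimal sketch, assuming the host exposes an OpenAI-compatible endpoint (the address, port, and model name here are all made up):

```python
import json
import urllib.request

# hypothetical address: the host box serving a GPU-backed,
# OpenAI-compatible API that VMs can call over the LAN
HOST_API = "http://192.168.1.200:5000/v1/chat/completions"

def build_request(prompt: str) -> urllib.request.Request:
    # build (but don't send) the chat-completion request a VM would POST
    payload = {
        "model": "local-model",  # placeholder; depends on what the host serves
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        HOST_API,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("hello from a VM")
print(req.full_url)
```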
Any AI stuff I do that's purely CPU-bound would also work fine in VMs, so it's not as if I'm truly limited there. I have a terabyte of RAM to play with here...
So, I wasn't 100% sold on CasaOS as a way to deploy containerized apps on my server, but after finding out you can add third-party app stores... I now have like 600 apps to choose from. I think this is fine!
Writing out the current problem I'm trying to solve and how I think I'll solve it:
At this point the server is running Proxmox and I've settled on using a CasaOS LXC container to run most apps. This means installing most things is one-click. Everything gets set up on its own port, so there are no conflicts. I have dynamic DNS set up. But there are two problems:
- I want to be able to call the dynamic DNS name from inside my network; my router apparently doesn't support NAT hairpinning (looping a request back in through its own public address), and that can't easily be changed.
- I don't want to have to put everything on different ports. I'm just not gonna remember that.
So I think what I will need to do is two levels of nginx reverse proxies:
- Level 1: set up a reverse proxy on one of my existing public servers. There will be a list of hostnames. Each hostname will be for a different service. So, I'll have to set it up in DNS and in the reverse proxy config, but all it will really do is forward the request to my dynamic DNS host, passing along the original hostname in the header so the next layer will know what to do.
- Level 2: I'll run nginx proxy manager on my server. All requests that hit my public server hostname will be proxied to my router, which will port forward to my server's reverse proxy, which will use the provided hostname from level 1 to figure out which backend service to hit.
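A level-1 config might look roughly like this (all hostnames here are hypothetical stand-ins; the key part is passing the original Host header through so level 2 can route on it):

```nginx
# level 1: public server fans everything out to the home DDNS host
server {
    listen 443 ssl;
    # one server_name per service; DNS for each points at this box
    server_name plex.example.com jellyfin.example.com;

    location / {
        # forward everything to the home network's DDNS name...
        proxy_pass https://home.example-ddns.net:8443;
        # ...keeping the original hostname so level 2 knows which backend to hit
        proxy_set_header Host $host;
    }
}
```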
I've done one layer of reverse proxying before: I previously had a reverse proxy inside my network that routed to an nginx ingress on an internal Kubernetes cluster, which routed to backend services in that cluster. Theoretically, what I want to do should work, I just haven't done it this specific way before.
Hoping I can give this a shot today and have it all working. One of the big benefits of this is that I'll have TLS on everything, too, which is a requirement in this day and age!
OK, I confirmed I really can't make this work without the first layer, because trying to do it based on the dynamic DNS name alone results in the following behaviors:
- Calling the proxy manager from outside my network works (more or less; doesn't quite do what I want but I can definitely reach the proxy manager and get past my router)
- But calling it from inside my network via the DDNS name gets a straight refusal.
So, I'll have to contend with that to make this thing work, but should be doable.
Man I always suck at DNS stuff. Finally got it working how I want:
- Each service will require a separate hostname with a CNAME pointing to the backend hostname.
- I just put an entry into the nginx proxy manager to wire it up, and then it's good to go!
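Concretely, the DNS side is one record per service, something like this (names hypothetical; `proxy.example.com` stands in for the host running the reverse proxy):

```
; one CNAME per service, all resolving to the reverse proxy host
plex.example.com.      IN  CNAME  proxy.example.com.
jellyfin.example.com.  IN  CNAME  proxy.example.com.
```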
This was way more hassle than it needed to be because I'm kind of a dumbass, but hey, it works.
So I have a nice little dashboard now. I didn't mind CasaOS' built-in one, but it has some limitations on what it can usefully show me.
But this is nice and simple and works perfectly:
[attachment=62]
I plan to write up one or more blog posts about how I did all this because some of it was not very intuitive or obvious at all.