
The Lone Geek Blog

One geek in a sea of nerds.


Unexpected Systemd Gotchas

It can drive a person mad...


Today I learned systemd has a role in disk mounts. You can’t just add a disk to fstab, mount it, and expect things to be hunky dory. Noooo…

If you change a mount point to another disk, like I did while swapping disks around in my directory mounts, systemd notices. I was renaming and adjusting the mounts of a disk to move it up in the disk numbering scheme I use: from /media/DataArchive5 to /media/DataArchive3, so a new 5th disk could become DataArchive5. Disk #5 was originally meant to expand capacity until #3 died and #5 had to take its place, since it was empty. Now the newest disk will be #5.

Systemd didn’t like me changing the fstab file to match this new arrangement. I ended up having to use a workaround listed here: comment out the mount entry, run systemctl daemon-reload and systemctl reset-failed, uncomment the entry, then run systemctl daemon-reload again. Annoying, but it works.
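For future me, the workaround boils down to something like this (assuming the problem mount is /media/DataArchive3; adjust to taste):

```shell
# Comment out the offending entry in /etc/fstab first, then:
sudo systemctl daemon-reload    # regenerate mount units from fstab
sudo systemctl reset-failed     # clear the failed state of the stale mount unit

# Uncomment the fstab entry again, then:
sudo systemctl daemon-reload
sudo mount /media/DataArchive3  # now the mount should stick
```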

I discovered this snafu when I tried to mount the new disk to an existing mount point that was associated with the now-dead disk, because its fstab entry was still there and uncommented. The more you know, eh?

I suppose I knew this was a thing for some time now but didn’t really think it could get ya till it does. Like, “Why isn’t my disk mounting?”. checks dmesg log Logs show the disk is immediately unmounted when mounted. “huh? why u do dis, computer?”. To the googles, and that’s how I found that bug post. One might note I use Ubuntu for my server, but it’s still systemd that’s running the show. Normally it just works till you do something it doesn’t like and it silently blocks you. lol

Maybe things like this are what have some people hating on systemd. Oh well, it’s part of the system; might as well get used to it. I prefer it over init.d. hides from the pitchforks

Anyway, just thought I’d mention. Cheers.

I Am a Linux Gamer

and these are my thoughts...


I never really considered myself a gamer till I started playing more often, and not those cheesy web and mobile games. I play on a PC, a Linux-powered PC. Pretty much all my games are provided by Steam. I haven’t had much luck with the others. I can get the launchers working via Lutris but that’s about it. The games are hit or miss. Usually a miss. A game might launch, but the FPS sucked so badly it was unplayable. I assume those games would be just fine on Windows, their native platform, but I don’t want Windows. I don’t mind it in a VM for a few desktop apps I find myself needing, but it just doesn’t play nice with my hardware, so Linux it is. With the current economy and the rarity of new parts, I’m stuck with this.

I mean, I’ve been gaming on Linux since before the global shortage happened, but that’s beside the point. Linux was my solution for when I had no money to buy new hardware. Now it costs as much as one month’s rent on your average home to buy a graphics card. A card that MSRPs at a third of the selling price. That’s not counting the cost of the rest of the bits. While I can now buy a new PC, I don’t really want to. I would like a new GPU though. My current Nvidia GTX 1060 6G does its best to maintain 60 FPS @ 1080p on High or Ultra graphics. I could lower the quality, but I want pretty graphics too. :) Such is the life of a gamer I suppose. The pursuit of high frame rates and pretty imagery. My current target GPU is an Nvidia RTX 3060 12G. I could get an RTX 2060, but I want the newer tech. If I get the 3060, it’d likely motivate me into upgrading the rest of the PC. Whether I’d be better off doing that, I can’t say. I might even run Windows again for the games. Just not crazy about Windows 11. I’m not a fan of the pseudo-Mac look.

Anyway, Linux Gaming.

Steam.

Games I’ve played or have been playing in 2020–2021 include, in no particular order:

  • Satisfactory
  • Car Mechanic Simulator 2021
  • Industrial Petting
  • No Man’s Sky
  • BeamNG.Drive
  • Wreckfest
  • Tomb Raider
  • Mud Runner
  • Shadow of the Tomb Raider
  • Just Cause 3
  • Two Point Hospital
  • Subnautica Below Zero
  • Slime Rancher
  • Cities Skylines
  • Subnautica
  • American Truck Simulator
  • Astroneer, and more.

All of them Windows games, running on a Linux PC. All thanks to Steam’s Proton. Two of those have Linux-native ports, but the rest are Windows “only”.

There’s more on the list, you can see them on my Steam Profile.

There are some here and there that I’ve tried but failed to get going beyond just hitting the Play button and waiting for Steam to do its thing. A couple years ago, I had to do some stuff following a GitHub page for Space Engineers. It’s still not perfect, but it runs. I can’t really alt-tab out to do something real quick outside of the game; it doesn’t release the mouse and keyboard. Thankfully, I can just keep the Wine environment for the game intact and it’ll keep working, even through a distro change. (If you’re wondering what a distro is, I suggest hitting up the ol’ Google till I add a link for that.)

All the games I played are stable enough for me to have fun. There have been a few crashes here and there, but nothing that really stops a game. I can’t really say what the cause might be. The good news is, my desktop itself doesn’t crash. I’m not forced to power cycle my PC because the video driver had a hiccup, like I dealt with on Windows 10. Nothing spoils the fun like rebooting because Windows has a problem you can’t control. lol Some will argue it’s a hardware problem, and it could very well have been. I ended up changing my motherboard for another identical one when Linux started crashing too. Since then, it’s been stable. I’ve done nothing different software-wise beyond updates, but I’ve also hardly booted Windows since then.

I’ve been gaming almost exclusively on Linux since June of 2019, with periodic testing of something on Windows. For the most part, for my daily use, I’m on Linux. Started with Linux Mint, now I’m on KDE Neon. Both use the same Ubuntu LTS base but with different desktop environments.

My current PC specs;

  • Dell Precision T3610
  • Intel Xeon E5-1650v2
  • 24GB of DDR3 1600 ECC
  • EVGA Nvidia GTX 1060 6G
  • 1TB Samsung Evo 980 NVMe via PCIe adapter
  • 1TB Samsung Evo 860 SATA SSD
  • 480GB SanDisk SSD Plus
  • 512GB Samsung Evo 860 SATA SSD for Windows
  • 2TB WD Green HDD
  • 625W OEM PSU

And that’s it for now. Cheers!

I Bought a Managed Switch

Turns out, it's bottom tier but has just enough features and is fanless... :)


I bought an HP ProCurve 1800-24G (J9028B) to replace a gigabit switch, a 100Mbit switch, and an old router turned AP turned poor man’s switch (I forget the models and don’t really care). It only has a webui and no means of console management; I had kinda hoped for SSH access, but oh well. The fact that it’s fanless with 24 gigabit ports really sold me, and it being cheaper on eBay than a lot of the “normal” consumer gear helped. I didn’t want something else screeching in the background while I’m in my room, and I wasn’t sure about the fan mods for other, more power-hungry models, like say a 24-port one with PoE.

It looks like this

hp j9028b

I overestimated the ports I needed, but if I ever get a chance to use it to run a whole house of stuff, or at least part of it, then it’d be more occupied. At least until I start playing with PoE stuff, like maybe a small stack of Raspberry Pis with PoE hats, but I suppose then I could find a small, fanless 8-port switch that can do PoE.

I’m curious about putting some physical things on a VLAN, and not just my VMs. Like maybe an isolated VLAN for a rando computer I might fix: internet only and no access to anything else. Or stashing some Pis on the VLAN I use for VMs and LXC containers.

Update: Server Adventures


Reconstruction and Planning

My home servers need some attention and this is my thought process.

The plex box

I’ve noticed that my Plex server has been running low on RAM after I decided to deploy Radarr and Sonarr to help organize things. Between a small handful of Docker apps, Plex, and a 4GB transcoding ramdisk, things feel a little tight. I had to add some extra swap space to avoid crashing, and it winds up using almost 4GB of the 5-ish GB it has total; I added a 4GB swap on top of the 1 gig or so the Ubuntu installer added, or maybe I did. idk.

The system has about 7.67GB of usable RAM. I might be able to tweak the iGPU VRAM in the BIOS when I can be bothered to reboot it.

I’ve noticed that when a 1080p video is playing, the ramdisk approaches 75% full for one show. That alone tells me it could use more RAM.

I need to find a reasonably priced pair of 8GB DDR3 low-density RAM sticks.

I also did some tweaking and RAM usage seems more reasonable, but I’m not sure htop is measuring Docker’s RAM usage properly. Atm, Plex in Docker is using like 1.6GB, but htop says 2.92GB RAM, 1GB swap. More research required.
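For reference, a transcoding ramdisk like the one above is just a tmpfs mount. A minimal sketch, assuming /transcode as the mount point (the actual path in my setup may differ):

```shell
# One-off mount (gone after reboot):
sudo mkdir -p /transcode
sudo mount -t tmpfs -o size=4G tmpfs /transcode

# Or make it permanent with an fstab entry like this:
# tmpfs  /transcode  tmpfs  defaults,size=4G  0  0
```

Then point Plex’s transcoder temp directory at that path, and transcodes never touch the disk.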

The lab server

I have a 2.2TB raw ZFS volume across 4x 600GB 10K SAS disks set up as RAIDZ1, and research tells me that 4 disks in RAIDZ1 is not an ideal setup. Apparently z1 likes an odd number of disks. Maybe I’ll buy another one or two and rebuild the array: add a 5th disk for z1, or 2 disks for z2. Either way, I should gain some space while I’m at it. :)

In conjunction with this, I’ve learned that 32GB of the 64GB of RAM has gone to ZFS in the form of ARC, a sort of RAM cache. Which is neat, but I didn’t expect half the RAM to go to ZFS when forum posts suggest it generally wants 1GB per TB of space. I guess it’s that plus the cache, and by default on Proxmox the ARC must be capped at half the available RAM, because another post (or that same one) suggested it can potentially consume almost all of the RAM bar 1GB for the system. Oof. ZFS sounds cool but can be costly, it seems. Gonna keep using it for a bit, but it’s tempting me into buying more RAM to increase the cache. I’m thinking an extra 64GB in the form of 4x16GB DDR3 ECC sticks ought to do.
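If I ever want to cap the ARC instead of feeding it more RAM, it’s a ZFS module option. A sketch, assuming a 16GB limit (the value is in bytes):

```shell
# Persist the cap across reboots via a modprobe option:
# /etc/modprobe.d/zfs.conf
#   options zfs zfs_arc_max=17179869184    # 16 GiB in bytes

# Apply it to a running system without rebooting:
echo 17179869184 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

# On Proxmox, refresh the initramfs so the modprobe.d change takes effect at boot:
sudo update-initramfs -u
```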

I’m also expecting some neat little PCIe enterprise-class SSDs in the mail soon. I’m thinking of trying one as ZFS cache and the other as an LVM volume, and deleting the default LVM volume (local-lvm, I think) from the boot disk. It’s a lab box. I’m experimenting with what I can do on a single machine.

One day I wanna play with clusters, but that requires one or two more machines: one more compute node and a storage box to share data between them. And my parents don’t like that idea so much. Something about power consumption. But they could always run a dedicated power line into my room or set up a building outside to do this in. :/

Update to the server - Again. :) (August 13, 2020)

So, like a couple months ago, I ran across an ad on Reddit for some 900GB drives for like $25 each and I bought 4. 3 arrived functional but useless, as they refused to write any data; I couldn’t even partition them. So I waited a bit to think on it and decided to grab an H200 in IT mode off eBay for $45 to allow direct access to all the hard drives. After testing the 3 bad drives to try and get them to do anything, only to fail, I reached out to the Reddit seller and he agreed to send 3 replacements. Win. Dunno what to do with the bad ones atm.

The replacements came in, got installed, and now all 8 drives are in a big ZFS pool: 4 mirror sets of 2 drives each, giving me a total of 2.62TB of storage. Idk how optimal it is or if it’s even a good idea, but at least there’s redundancy.
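For the record, a pool of striped mirrors like that gets built something like this. The pool name and device paths below are placeholders (for real disks, /dev/disk/by-id paths are the safer bet):

```shell
# 4 mirror vdevs of 2 drives each, striped together into one pool.
# "tank" and sd[b-i] stand in for my actual pool name and disks.
zpool create tank \
  mirror /dev/sdb /dev/sdc \
  mirror /dev/sdd /dev/sde \
  mirror /dev/sdf /dev/sdg \
  mirror /dev/sdh /dev/sdi

zpool status tank   # verify the vdev layout
```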

I’m using the PCIe SSDs to run the VMs and containers with LVM thin provisioning, and the hard drives for backups, larger VM storage, and misc data.

I also added some spare DIMMs for an extra 8GB of RAM. Not that it needs it, but ehh. It does throw some warnings on POST but otherwise seems to work.

Also found out the H200 doesn’t work in the storage slot with its current firmware. Not sure how to modify it to work in the storage slot atm while keeping it in IT mode.

Software

I’m also experimenting with Grafana and figuring out how and what sort of data I want to feed into it. I’m a little confused on how to make dashboards and plan my own layout, so I’m using premade ones and piping data into them.

Adding Radarr and Sonarr prompted me to reorganize my shows on the video disks so they’re not so spread out. Entire shows on one disk, not seasons here and there. That way Sonarr can see all I have and help me find what I’m missing, as well as ingesting new content from a download folder and putting it wherever the show happens to live. These programs live in Docker.

Grafana and a few data collection apps: Varken, and something else. Using InfluxDB to store and retrieve the data. Still unsure how the DB doesn’t just grow forever as data comes in. Best way to learn is to watch it in action, I guess.

At least I can add a few buzz words to my resume soon.

Also have Proxmox set up to feed data into Grafana for funsies. And data from pfSense… also for funsies. :)


Stay safe my fellow nerds. :)

Nginx Proxy Sandwich With NAT

Nginx proxying to Docker inside NAT


Public Nginx Server using a proxy connection

V

Router using nat forwarding on a high port

V

Home Server running docker + nginx proxy

V

Docker container running codimd


My latest project: a proxy-NAT sandwich. It consists of an nginx proxy on both the public server and the local server sitting behind a router, and the traffic is encrypted end to end with standard SSL web certs.

If you don’t know what these are, here’s a primer. The proxy part just takes advantage of nginx’s native features, what it’s generally known for: proxying connections to frontend web apps and backend services. I’m not sure how best to describe the NAT part, so here’s a quote from Wikipedia.

In computer networking, port forwarding or port mapping is an application of network address translation that redirects a communication request from one address and port number combination to another while the packets are traversing a network gateway, such as a router or firewall.

The router part is simply a NAT port forward from some port on the WAN interface to a local server. I just picked a number between 1000 and 10000 that I didn’t expect to need for anything else.

The proxy on the local server takes that request and sends it on to a Docker container running a service: CodiMD for now, and likely others later, depending on what I want exposed to the public.

Below are some snippets of the nginx configs, and I’ll drop a link to port forwarding on pfSense to save you a Google. :)

This is part of my local nginx Docker config. It goes in the nginx.conf file due to the upstream bits. The whole config could probably be adjusted to allow each site to have its own config file. CodiMD is running in a container on the same network as nginx on my fileserver.

        # Codimd
        upstream codimd {
                server        codimd:3000;
        }

        server {
                listen        80;
                server_name   codimd.localservices;

                location / {
                        proxy_pass  http://codimd;
                }
        }

        server {
                listen        443 ssl;
                server_name   codimd.localservices;
                ssl_certificate     /config/keys/home-certificate.crt;
                ssl_certificate_key /config/keys/home-certificate.key;

                location / {
                        proxy_pass  http://codimd;
                }
        }

This part goes on the public nginx server. It serves as the relay and connects to an SSL-only port, with pre-established SSL certs on both servers. A port has to be forwarded and open on the firewall between them.

server {
        server_name codimd.mydomain.com;

        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_ssl_trusted_certificate /home/michael/certs/home-certificate.crt;
                proxy_ssl_verify       on;
                proxy_ssl_verify_depth 2;
                proxy_pass https://home.mydomain.com:12345; # not a valid location so no funny business.
        }

        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/codimd.mydomain.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/codimd.mydomain.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
        if ($host = codimd.mydomain.com) {
                return 301 https://$host$request_uri;
        } # managed by Certbot

        server_name codimd.mydomain.com;
        listen 80;
        return 404; # managed by Certbot
}

Purpose?

So I can set up some handy services that may not be feasible to run entirely on a public server. Something for family and friends, or just me, so I wouldn’t need to open a VPN back home just to use the app. I’m going to use Docker for most of it. Those are generally easy to set up, well, at least the ones I find easy to configure. lol. If Docker proves complicated for an app, then a Proxmox container or VM should do.

Yup, that should be it. Cheers my fellow nerds.

Scripting on Proxmox

A few scripts I put together to do some things on proxmox


I may add more later, either here or in another post.

I wrote a script to download all the available templates for Proxmox.

#!/bin/bash
# Script to download all proxmox templates

SAVEIFS=$IFS                                 # Save the current IFS
IFS=$'\n'                                    # Split on newlines only
list=$(pveam available | awk '{print $2}')   # Get a list of currently available templates
list=($list)                                 # Word-split the newline-separated list into a bash array
IFS=$SAVEIFS                                 # Restore IFS

# Do the loop
for (( i=0; i<${#list[@]}; i++ ))
do
    pveam download local "${list[$i]}"       # Download each template to the local storage volume
done
exit
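Side note for future me: the IFS save/split/restore dance above is a common bash idiom, but mapfile does the same job in one line. A quick sketch with a fake template list (not real pveam output):

```shell
#!/bin/bash
# mapfile reads lines from stdin straight into an array, no IFS fiddling needed
mapfile -t list < <(printf 'alpine-3.14\ndebian-11-standard\nubuntu-20.04-standard\n')

echo "${#list[@]} templates found"
for tmpl in "${list[@]}"; do
    echo "would run: pveam download local $tmpl"
done
```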

My New Moto G6


A few weeks ago I bought a new phone, the Moto G6. It runs Android 9 and has an 8-core Snapdragon CPU, 3GB of RAM, and 32GB of internal storage. All for like $80 on sale from TracFone.

I’ve got most of my apps installed, could be some I’m missing though. Far better than my old LG phone. :)

It seems to have some sort of OLED-like display; depending on the app, black is really black. It’s kinda amazing.

The specs are here.

Summary

  • OS - Android 8.0 (Oreo), currently running Android 9.0 (Pie)
  • Chipset - Qualcomm SDM450 Snapdragon 450 (14 nm)
  • CPU - Octa-core 1.8 GHz Cortex-A53
  • GPU - Adreno 506
  • Storage - 32GB - 64GB SD card installed
  • RAM - 3GB

Few weeks later

It’s turning out to be a good phone. It’s nice and responsive. My podcasts don’t stop playing when I want to check a website or social media. The camera pics are crisp if I manage to hold it steady long enough. lol. The battery lasts the entire day playing podcasts and audiobooks while it sits in its belt clip or anywhere near me, really. I use my Bluetooth transceiver so it’s completely wireless and doesn’t make things awkward when I pull it out to check something.

I don’t really have any complaints about it, other than my carrier won’t let me tether with it, but that’s not the phone’s fault. :/

5/5 would recommend.

Backing Up a Disk to Another With Rsync


Putting this here for future reference from future me if he thinks to look here. Heeey! It’sa me, your past. :)

I copied the contents of one XFS-formatted disk to another, bigger XFS disk using rsync.

rsync -arvluht -pog -P /media/Source/ /media/Destination/

Breakdown of the switches (note that -a alone implies -r, -l, -p, -t, -g, and -o, so several of these are technically redundant, but being explicit doesn’t hurt):

  • -a for archive mode
  • -r for recursive
  • -v for verbose
  • -l for copying symlinks as symlinks
  • -u for update (skip files that are newer on the destination)
  • -h for human-readable output
  • -t for preserving modification times
  • -p for preserving permissions
  • -o for preserving ownership
  • -g for preserving group
  • -P for showing progress and keeping partially transferred files

Trying Out Proxmox on My Lab Server


Yesterday I decided to try something different on my lab server. I just got tired of the limits of ESXi with its free license. I was gonna put it on another flash drive, but after trying and experiencing the excruciatingly slow install of Citrix XenServer onto a flash drive, only to be left with no webui to manage it, I ended up putting Proxmox on the 1.2TB 4-disk RAID-10 array. Felt faster that way, and more reliable.

First impressions

The webui looks nice. I like that it’s built on Debian because that makes it easier for me to manage. :) I notice it supports LXC containers, and that presents some interesting possibilities. Because it’s Debian, I can run Docker on the host. VMs, LXC, Docker… yes. 24 threads, 64GB of RAM, 1.2TB HDD array.

Networking

Ran into an annoyance with the networking side of things. I tried bonding two of my server’s physical NICs for failover, and was hoping to double the bandwidth (just because), but then none of the VMs and containers could connect to the network. I’ve no idea why, or how to troubleshoot it when the host had no issues connecting. Then there’s the whole needing-a-reboot-to-reload-the-config thing. I didn’t like that idea, especially once I get a bunch of stuff running on it. I found this post that describes manually setting the network config in /etc/network/interfaces, OR configuring it within the webui and then copying the new config over the old with cp /etc/network/interfaces.new /etc/network/interfaces and bringing the changed NICs down and up. That’s fine, I guess, but it seems prone to breakage and I’d rather not risk breaking the SSH session to the server. Hopefully that change isn’t needed very often, so, keep a monitor handy? idk, we’ll see.
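For reference, the apply-without-reboot dance from that post looks roughly like this (vmbr0 is a placeholder for whichever interface actually changed):

```shell
# After editing the network in the webui, Proxmox stages the result here:
sudo cp /etc/network/interfaces.new /etc/network/interfaces

# Then cycle only the interfaces that changed (risky over SSH!):
sudo ifdown vmbr0 && sudo ifup vmbr0
```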

Containers

LXC containers are interesting. They feel kinda like Docker in that they’re isolated, but weirdly, I can view the processes inside the LXC containers from htop on the host. Not sure if that’s normal, because I can’t do that with Docker containers. With Docker, you have your host commands and then everything inside the container is hidden from the host. The only way to look inside is with a docker exec or docker run command to get a shell for top and htop, or a one-off ps aux, or any of the typical Linux tools for doing that.

I don’t know much about LXC at the moment. I created a few containers from templates on the lab server, and there are quite a few templates to choose from. Obviously not the tons and tons of images Docker has, but still. There’s a good number of templates to play with, and probably more on the web somewhere.

Thoughts

I’m not sure I like the idea of Proxmox on bare metal, at least not for that machine. Maybe on something smaller. I’m gonna put ESXi back on my lab server and run Proxmox in a VM for further testing.

several hours later

Welp, even in a VM the networking for containers seems broken. :/ The VMs run, but they can’t even get an IP from my router. .-.

one day later

Hmm, a quick inquiry in a Facebook group brought up something I overlooked: allowing promiscuous mode on ESXi’s vSwitch. That, plus rebooting Proxmox just to be sure, finally allowed the containers to work.

one week later

So, the networking was resolved, but I still gotta work out how to get the containers to register their hostnames. I’m not sure why they won’t; seems like they would. idk. I’ve started just assigning static IPs to them for now and looking at some of the TurnKey templates it offers. I had to expand the data volume because apparently a 74GB LVM volume gets full pretty fast with a bunch of 8GB volumes for the containers. :) I’m gonna keep Proxmox in a VM for now.

Building a Pfsense Install on a Physical Disk for Deployment

with VirtualBox. :)


A few days ago, I had a brilliant idea to build pfSense on a hard drive from within VirtualBox, and it worked surprisingly easily once I figured out how to get the network interfaces to work with my config.

Hardware bit

The setup was something like this: a WD 250GB HDD attached to a SATA-to-USB bridge, with a VMDK placeholder file pointing at the block device. VirtualBox didn’t care; all it saw was the VMDK file. I don’t know the science behind how it works, just that it does. The VBox machine needed 5 NICs set up because the physical computer it’s meant for has that many: a quad NIC and the integrated one.

VBoxManage internalcommands createrawvmdk -filename physical_pfsense.vmdk -rawdisk /dev/sdd

Because of the way my router PC arranges the numbering of its NICs, I have to work in reverse, with em4 (nic5) being the WAN port: vboxmanage modifyvm "pfsense" --nic5 nat --nictype5 82543GC. For some reason, it’s just how the quad NIC enumerates its ports: the integrated port first, then numbered from the outside port to the inside. My modem is connected to the inside one, followed by the LAN. (I could have used a single NIC and used the integrated one for LAN or WAN, but I was thinking I’d have isolated physical LANs at some point.)

The VBox machine got 2GB of RAM and 2 cores to keep things happy (just in case). The first 4 NICs were pointed at independent internal networks to avoid any IP collisions caused by my config.

Software bit

I installed pfSense from the ISO, as you do, while paying attention to the partitioning; I didn’t want it to take the whole disk just yet, so it got 8GB for now. The flash drive it’s on right now is 8GB, but the partition is slightly smaller. I wanted to just copy the partition from the flash drive to the disk with dd, but I didn’t think that’d work out too well, so I exported the existing config and manually installed it with pfSense’s file editor in the webui, then rebooted the VM. I found that the restore function wouldn’t accept my config as a whole, so manual it was. I did create a separate package config to install the packages I use, and wouldn’t you know it, the shotgun package-installer button also didn’t work. smh. It took me a while to work out how to get that working, including manually copying the package files from a backup image of the existing install to the external disk from within a FreeBSD VMDK install from the FreeBSD website. (That didn’t work; maybe I missed something. idk.)

In the end, from the package manager in the webui, I installed the first package I knew I had, and the others just magically appeared in the list for me to reinstall one by one. Kinda annoyed at that, but whatever. I got it done.

My Reasoning

Because my pfSense box is running on a flash drive, I worry about that drive quitting on me, so I needed a more reliable solution. It started with a SanDisk SSD, but I guess from me trying out Squid and other disk-heavy apps, it just couldn’t take it. Then I installed a flash drive that also failed, so it’s on its second flash drive, because I didn’t have a suitable replacement solution at the time. The two failures were just the drives going into read-only mode, so at least I was able to save my config. The flash drive just needed a simple disk copy to another one and it was good to go. I couldn’t figure out how to recover from the SSD, so I reinstalled and copied the config. Partition size differences, and me being new to BSD. Now, with cloud backups, restoring should be easier. I need to test that in a VM at some point to get some ideas.

Hopefully this $20 hard drive from Amazon will hold up for a few years or more. I went with solid state thinking it’d be more reliable, but apparently the ones I chose just weren’t suitable for it. If I could justify the cost of Samsung SSDs in a router, I’d probably use them, but it’s hard to find one smaller than 250GB nowadays. pfSense doesn’t need much on its own unless I use Squid. Oh well, spinning rust it is. It’s even hard to find new small hard drives at decent prices too. Weird. shrug

If this made any sense, great. If not, well, I don’t know what to tell ya. Cheers.

TL;DR: I installed pfsense from within a VM to a physical disk for deployment on bare metal.

UPDATE 8/12: The hard drive was installed and booted with no problems, as if nothing changed. On the plus side, I now have plenty of space to try things without worrying about exceeding writes on flash media, and the webui loads and applies changes faster. :)