The Lone Geek Blog

One geek in a sea of nerds.

Update: Server Adventures

Reconstruction and Planning

My home servers need some attention and this is my thought process.

The plex box

I’ve noticed that my Plex server has been running low on RAM since I deployed Radarr and Sonarr to help organize things. Between a small handful of docker apps, Plex, and a 4GB transcoding ramdisk, things feel a little tight. I had to add some extra swap space to avoid crashing and wound up using almost 4GB of the roughly 5GB of swap it has total; I added a 4GB swap file on top of the 1GB or so the Ubuntu installer added. Or maybe I added that one too. idk.
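For future me, the swap-file steps look roughly like this. A hedged sketch: the demo below writes a small throwaway file under /tmp since the real steps need root, and on the actual box it was a 4G file at /swapfile.

```shell
# Demo swap file under /tmp; on the real box it was a 4G /swapfile made as root.
SWAPFILE=/tmp/swapfile.demo
dd if=/dev/zero of="$SWAPFILE" bs=1M count=64 status=none  # 64M just for the demo
chmod 600 "$SWAPFILE"   # swap files must not be world-readable
mkswap "$SWAPFILE"      # write the swap signature
# Root-only steps, printed here rather than run:
echo "swapon $SWAPFILE"
echo "$SWAPFILE none swap sw 0 0  # append to /etc/fstab to survive reboots"
```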

The system has about 7.67GB of usable RAM. I might be able to tweak the iGPU VRAM in the BIOS when I can be bothered to reboot it.

I’ve noticed that when a 1080p video is playing, the ramdisk approaches 75% full for a single show. That alone tells me the box could use more RAM.

I need to find a reasonably priced pair of 8GB DDR3 low-density RAM sticks.

I also did some tweaking and RAM usage seems more reasonable now, but I’m not sure htop is measuring Docker’s memory usage properly. At the moment, Plex in Docker reports using about 1.6GB, but htop says 2.92GB RAM and 1GB swap. More research required.
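One way to cross-check is `docker stats`, which reports per-container cgroup memory and can differ from what htop attributes to the processes. Since docker isn’t guaranteed to be around wherever this runs, here’s a sketch that parses a captured sample line the same way I’d parse the live output (the container name and numbers are just from my setup):

```shell
# On the real box: docker stats --no-stream --format '{{.Name}} {{.MemUsage}}'
# Sample of that output, parsed with awk to pull the name and current usage:
sample='plex 1.62GiB / 7.67GiB'
echo "$sample" | awk '{ print $1 " is using " $2 }'
```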

The lab server

I have a 2.2TB raw ZFS volume across 4x 600GB 10K SAS disks set up as RAIDZ1, and research tells me that 4 disks in RAIDZ1 is not an ideal setup. Apparently Z1 prefers an odd number of disks. Maybe I’ll buy another one or two and rebuild the array: add a 5th disk for Z1, or 2 more disks for Z2. Either way, I should gain some space while I’m at it. :)

In conjunction with this, I’ve learned that 32GB of the 64GB of RAM has gone to ZFS in the form of the ARC, a sort of RAM cache. Which is neat, but I didn’t expect half the RAM to go to ZFS when forum posts suggest it generally wants 1GB per TB of space. I guess it’s that plus the cache, and by default on Proxmox it must be set to consume only half the available RAM, because another post (or that same one) suggested it can potentially consume almost all of the RAM bar 1GB for the system. Oof. ZFS sounds cool but can be costly, it seems. Gonna keep using it for a bit, but it’s tempting me into buying more RAM to increase the cache. I’m thinking an extra 64GB in the form of 4x 16GB DDR3 ECC sticks ought to do.
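For the record, capping the ARC is just a kernel module option (`zfs_arc_max`, in bytes). A sketch of the arithmetic and the line I’d drop into /etc/modprobe.d/zfs.conf if I decide to cap it at, say, 16GiB; the 16GiB figure is just an example, not a recommendation:

```shell
# Cap the ZFS ARC at 16 GiB (the Proxmox default is roughly half of installed RAM).
ARC_MAX_GIB=16
ARC_MAX_BYTES=$((ARC_MAX_GIB * 1024 * 1024 * 1024))
echo "options zfs zfs_arc_max=$ARC_MAX_BYTES"
# Write that line to /etc/modprobe.d/zfs.conf, run `update-initramfs -u`,
# and reboot; or apply it live via /sys/module/zfs/parameters/zfs_arc_max.
```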

I’m also expecting some neat little PCIe enterprise-class SSDs in the mail soon. I’m thinking of trying one as ZFS cache and the other as an LVM volume, then deleting the default LVM volume (local-lvm, I think) from the boot disk. It’s a lab box; I’m experimenting with what I can do on a single machine.

One day I wanna play with clusters, but that requires one or two more machines: another compute node and a storage box to share data between them. And my parents don’t like that idea so much. Something about power consumption. But they could always run a dedicated power line into my room or set up a building outside to do this in. :/

Update to the server - Again. :) (August 13, 2020)

So a couple months ago I ran across an ad on reddit for some 900GB drives at about $25 each, and I bought 4. Three arrived functional but useless, as they refused to write any data; I couldn’t even partition them. So I waited a bit to think on it and decided to grab an H200 in IT mode off ebay for $45 to allow direct access to all the hard drives. After testing the 3 bad drives to try to get them to do anything, only to fail, I reached out to the reddit seller and he agreed to send 3 replacements. Win. Dunno what to do with the bad ones at the moment.

The replacements came in, got installed, and now all 8 drives are in one big ZFS pool: 4 mirror sets of 2 drives each, giving me a total of 2.62TB of storage. I don’t know how optimal it is, or if it’s even a good idea, but at least there’s redundancy.
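Quick sanity check on that number, assuming the pool pairs like with like: two 600GB mirrors plus two 900GB mirrors. Each two-way mirror contributes one drive’s worth of space, and drives are sold in decimal GB while zpool reports binary TiB:

```shell
# Usable space = one drive per mirror: 600 + 600 + 900 + 900 GB.
usable_gb=$((600 + 600 + 900 + 900))
# Convert decimal GB (10^9 bytes) to binary TiB (2^40 bytes).
awk -v gb="$usable_gb" 'BEGIN { printf "%.2f TiB\n", gb * 1e9 / (1024 ^ 4) }'
```

That comes out around 2.73 TiB before ZFS metadata and reserved space, which lands near the 2.62T the pool actually reports.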

I’m using the PCIe SSDs to run the VMs and containers with LVM thin provisioning, and the hard drives for backups, larger VM storage, and misc data.

I also added some spare DIMMs for an extra 8GB of RAM. Not that it needs it, but ehh. It does throw some warnings on POST but otherwise seems to work.

Also found out the H200 doesn’t work in the storage slot with its current firmware. Not sure yet how to modify it to work in the storage slot while keeping it in IT mode.

Software

I’m also experimenting with Grafana, figuring out how and what sort of data I want to feed into it. I’m a little confused about how to build dashboards and plan my own layouts, so I’m using premade ones and piping data into them.

Adding Radarr and Sonarr prompted me to reorganize my shows on the video disks so they’re not so spread out: entire shows on one disk, not seasons here and there. That way Sonarr can see everything I have and help me find what I’m missing, as well as ingest new content from a download folder and put it wherever the show happens to live. These programs live in Docker.
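The layout Sonarr wants is roughly one folder per show with season subfolders. A throwaway sketch of what I reorganized toward (the show and file names here are made up):

```shell
# Build a sample of the one-folder-per-show layout in a scratch dir.
LIBRARY=/tmp/media-demo
mkdir -p "$LIBRARY/Some Show/Season 01"
touch "$LIBRARY/Some Show/Season 01/Some Show - S01E01.mkv"
find "$LIBRARY" -mindepth 1 | sort   # show the resulting tree
```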

Grafana and a few data collection apps: Varken, and something else. Using InfluxDB to store and retrieve the data. Still unsure how the database doesn’t just grow forever as data comes in. Best way to learn is to watch it in action, I guess.
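On the growing-database question, the usual answer for InfluxDB 1.x is a retention policy: points older than the policy’s duration get expired automatically. A sketch in InfluxQL (the database name "varken" is an assumption from my setup; yours may differ):

```sql
-- Keep 30 days of data, dropping anything older automatically.
CREATE RETENTION POLICY "thirty_days" ON "varken" DURATION 30d REPLICATION 1 DEFAULT

-- See what policies a database already has.
SHOW RETENTION POLICIES ON "varken"
```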

At least I can add a few buzz words to my resume soon.

Also have Proxmox set up to feed data into Grafana for funsies. And data from pfSense… also for funsies. :)


Stay safe my fellow nerds. :)

Nginx Proxy Sandwich With NAT

Nginx proxying to Docker inside NAT

| Comments

Public Nginx Server using a proxy connection

V

Router using nat forwarding on a high port

V

Home Server running docker + nginx proxy

V

Docker container running codimd


My latest project: a proxy-NAT sandwich. It consists of an nginx proxy on both the public server and the local server sitting behind a router, with the traffic encrypted the whole way using standard SSL web certs.

If you don’t know what these are, here’s a primer. The proxy part just takes advantage of nginx’s native features, what it’s generally known for: proxying connections to frontend web apps and backend services. I’m not sure how best to describe the NAT part, so here’s a quote from Wikipedia.

In computer networking, port forwarding or port mapping is an application of network address translation that redirects a communication request from one address and port number combination to another while the packets are traversing a network gateway, such as a router or firewall.

The router part is simply a NAT rule forwarding some port on the WAN interface to the local server. I just picked a number between 1000 and 10000 that I didn’t expect to need for anything else.

The proxy on the local server takes that request and sends it on to a Docker container running a service: CodiMD for now, and likely others later, depending on what I want exposed to the public.

Below are some snippets of the nginx configs, and I’ll drop a link to port forwarding on pfSense to save you a Google. :)

This is part of my local nginx Docker config. It goes in the nginx.conf file due to the upstream bits. The whole config could probably be adjusted to give each site its own config file. CodiMD is running in a container on the same network as nginx on my fileserver.

# Codimd
upstream codimd {
    server codimd:3000;
}

server {
    listen        80;
    server_name   codimd.localservices;

    location / {
        proxy_pass  http://codimd;
    }
}

server {
    listen        443 ssl;
    server_name   codimd.localservices;
    ssl_certificate     /config/keys/home-certificate.crt;
    ssl_certificate_key /config/keys/home-certificate.key;

    location / {
        proxy_pass  http://codimd;
    }
}

This part goes on the public nginx server. It serves as the relay and connects to an SSL-only port with pre-established SSL certs on both servers. A port has to be forwarded and open on the firewall between them.

server {
    server_name codimd.mydomain.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_ssl_trusted_certificate /home/michael/certs/home-certificate.crt;
        proxy_ssl_verify       on;
        proxy_ssl_verify_depth 2;
        proxy_pass https://home.mydomain.com:12345; # not a valid location so no funny business.
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/codimd.mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/codimd.mydomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = codimd.mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name codimd.mydomain.com;
    listen 80;
    return 404; # managed by Certbot
}

Purpose?

So I can set up some handy services that may not be feasible to run entirely on a public server. Something for family and friends, or just me, so I wouldn’t need to open a VPN back home just to use an app. I’m going to use Docker for most of it; those are generally easy to set up, well, at least the ones I find easy to configure. lol. If Docker proves complicated for an app, then a Proxmox container or VM should do.

Yup, that should be it. Cheers my fellow nerds.

Scripting on Proxmox

A few scripts I put together to do some things on proxmox

I may add more at some point, either here or in another post.

I wrote a script to download all the available templates for Proxmox.

#!/bin/bash
# Script to download all proxmox templates

SAVEIFS=$IFS                                # Save current IFS
IFS=$'\n'                                   # Split on newlines only
list=$(pveam available | awk '{print $2}')  # Get the list of available templates
list=($list)                                # Word-split the newline-separated list into a bash array
IFS=$SAVEIFS                                # Restore IFS

# Download each template to the local volume
for (( i=0; i<${#list[@]}; i++ ))
do
    pveam download local "${list[$i]}"
done
exit

My New Moto G6

A few weeks ago I bought a new phone, the Moto G6. It runs Android 9 and has an 8-core Snapdragon CPU, 3GB of RAM, and 32GB of internal storage. All for like $80 on sale from TracFone.

I’ve got most of my apps installed, could be some I’m missing though. Far better than my old LG phone. :)

It seems to have some sort of oled display and depending on the app, black is really black. It’s kinda amazing.

The specs are here.

Summary

  • OS - Android 8.0 (Oreo), currently running Android 9.0 (Pie)
  • Chipset - Qualcomm SDM450 Snapdragon 450 (14 nm)
  • CPU - Octa-core 1.8 GHz Cortex-A53
  • GPU - Adreno 506
  • Storage - 32GB internal, 64GB SD card installed
  • RAM - 3GB

Few weeks later

It’s turning out to be a good phone. It’s nice and responsive. My podcasts don’t stop playing when I check a website or social media. The camera pics are crisp if I manage to hold it steady long enough. lol. The battery lasts the entire day playing podcasts and audiobooks while the phone sits in its belt clip, or anywhere near me really. I use my bluetooth transceiver, so it’s completely wireless and doesn’t make things awkward when I pull the phone out to check something.

I don’t really have any complaints about it, other than my carrier won’t let me tether with it, but that’s not the phone’s fault. :/

5/5 would recommend.

Backing Up a Disk to Another With Rsync

Putting this here for future reference, in case future me thinks to look here. Heeey! It’sa me, your past. :)

I copied the contents of one XFS-formatted disk to another, bigger XFS disk using rsync.

rsync -arvluht -pog -P /media/Source/ /media/Destination/

Breakdown of the switches (note that -a alone already implies -r, -l, -p, -t, -g, and -o, so several of these are redundant but harmless):

  • -a for archive mode
  • -r for recursive
  • -v for verbose
  • -l for links
  • -u for update (skip files that are newer on the destination)
  • -h for human-readable output
  • -t for preserving times
  • -p for preserving permissions
  • -o to preserve ownership
  • -g to preserve group
  • -P to show progress and keep partial transfers
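One more trick worth remembering: add -n (dry run) first to preview what rsync would copy before letting it loose on a whole disk. A tiny self-contained demo using scratch dirs and a throwaway file:

```shell
# Preview, then copy, a throwaway tree between two scratch dirs.
SRC=/tmp/rsync-demo-src; DST=/tmp/rsync-demo-dst
mkdir -p "$SRC" "$DST"
touch "$SRC/example.txt"
rsync -avn "$SRC/" "$DST/"   # -n: show what would happen, change nothing
rsync -av  "$SRC/" "$DST/"   # the real copy
ls "$DST"
```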

Trying Out Proxmox on My Lab Server

Yesterday I decided to try something different on my lab server. I just got tired of the limits of ESXi with its free license. I was gonna put it on another flash drive, but after the excruciatingly slow install of Citrix XenServer onto a flash drive, only to be left with no webui to manage it, I ended up putting Proxmox on the 1.2TB 4-disk RAID-10 array. Felt faster that way, and more reliable.

First impressions

The webui looks nice. I like that it’s built on Debian because that makes it easier for me to manage. :) I notice it supports LXC containers, and that presents some interesting possibilities. Because it’s Debian underneath, I can run Docker on the host. VMs, LXC, Docker… yes. 24 threads, 64GB of RAM, 1.2TB HDD array.

Networking

Ran into an annoyance with the networking side of things. I tried bonding two of my server’s physical NICs for failover, hoping to double the bandwidth too (just because), but then none of the VMs and containers could connect to the network. I’ve no idea why, or how to troubleshoot it when the host itself had no issues connecting. Then there’s the whole needing-a-reboot-to-reload-the-config thing. I didn’t like that idea, especially once I get a bunch of stuff running on the box. I found a post that describes manually setting the network config in /etc/network/interfaces, OR configuring it in the webui, then copying the new config over the old with cp /etc/network/interfaces.new /etc/network/interfaces and bringing the changed NICs down and up. That’s fine, I guess, but it seems prone to breakage and I’d rather not risk killing my SSH session to the server. Hopefully that change isn’t needed very often, so… keep a monitor handy? idk, we’ll see.
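For reference, the failover setup I was going for looks something like this in /etc/network/interfaces. A sketch, not my working config: the NIC names and addresses are made up, and active-backup mode only gives failover; actually doubling bandwidth would need balance-rr or 802.3ad plus a switch that cooperates. The guests attach to the bridge (vmbr0), which sits on the bond rather than on the NICs directly.

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```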

Containers

LXC containers are interesting. They feel kinda like Docker in that they’re isolated, but weirdly, I can view the processes inside the LXC containers from htop on the host. Not sure if that’s normal, because I cannot do that with my Docker containers. With Docker, you have your host commands, and everything inside the container is hidden from the host; the only way to look inside is a docker exec or docker run to get a shell for top and htop, a one-off ps aux, or any of the typical linux tools for that.

I don’t know much about LXC at the moment. I created a few containers from templates on the lab server, and there are quite a few templates to choose from. Obviously not the tons and tons of images Docker has, but still, there’s a good number to play with, and probably more on the web somewhere.

Thoughts

I’m not sure I like the idea of Proxmox on bare metal, at least not for that machine. Maybe on something smaller. I’m gonna put ESXi back on my lab server and run Proxmox in a VM for further testing.

several hours later

Welp, even in a VM the networking for the containers seems broken. :/ The VMs run, but they can’t even get an IP from my router. .-.

one day later

Hmm, a quick inquiry in a facebook group turned up something I overlooked: allowing promiscuous mode on ESXi’s vSwitch. That, plus rebooting Proxmox just to be sure, finally got the containers working.

one week later

So, the networking is resolved, but I still gotta work out how to get the containers to register their hostnames. I’m not sure why they won’t; seems like they would. idk. I’ve started just assigning static IPs to them for now and looking at some of the turnkey templates Proxmox offers. I had to expand the data volume, because apparently a 74GB LVM volume fills up pretty fast with a bunch of 8GB container volumes. :) I’m gonna keep Proxmox in a VM for now.

Building a Pfsense Install on a Physical Disk for Deployment

with VirtualBox. :)

A few days ago, I had a brilliant idea: build pfsense on a hard drive from within VirtualBox. It worked surprisingly well once I figured out how to get the network interfaces to match my config.

Hardware bit

The setup was something like this: a WD 250GB HDD attached to a SATA-to-USB bridge, with a VMDK placeholder file pointing at the block device. VirtualBox didn’t care; all it saw was the VMDK file. I don’t know the science behind how it works, just that it does. The VBox machine needed 5 NICs set up, because the physical computer it’s meant for has that many: a quad NIC plus the integrated one.

VBoxManage internalcommands createrawvmdk -filename physical_pfsense.vmdk -rawdisk /dev/sdd

Because of the way my router PC arranges the numbering of its NICs, I had to work in reverse, with em4 (nic5) being the WAN port: vboxmanage modifyvm "pfsense" --nic5 nat --nictype5 82543GC. For some reason, that’s just how the quad NIC enumerates its ports: the integrated port first, then from the outside port inward. My modem is connected to the inside one, followed by the LAN. (I could have used a single NIC and the integrated one for LAN or WAN, but I was thinking I’d want isolated physical LANs at some point.)

The VBox machine got 2GB of RAM and 2 cores to keep things happy (just in case). The first 4 NICs were pointed at independent internal networks to avoid any IP collisions from my config.

Software bit

I installed pfsense from the ISO, as you do, while paying attention to the partitioning; I didn’t want it to take the whole disk just yet, so it got 8GB for now. The flash drive it’s on right now is 8GB, but the partition is slightly smaller. I wanted to just copy the partition from the flash drive to the disk with dd, but I didn’t think that would work out too well, so I exported the existing config and manually installed it with pfsense’s file editor in the webui, then rebooted the VM. I found that the restore function wouldn’t accept my config as a whole, so manual it was. I did create a separate package config to reinstall the packages I use, and wouldn’t you know it, the shotgun package-installer button also didn’t work. smh. It took me a while to work out how to get that going, including manually copying the package files from a backup image of the existing install to the external disk from within a FreeBSD VMDK install from the FreeBSD website. (That didn’t work; maybe I missed something. idk.)

In the end, from the package manager in the webui, I installed the first package I knew I had, and the others just magically appeared in the list for me to reinstall one by one. Kinda annoyed at that, but whatever. I got it done.

My Reasoning

Because my pfsense box runs off a flash drive, I worry about that drive quitting on me, so I needed a more reliable solution. It started with a SanDisk SSD, but I guess between me trying out Squid and other disk-heavy apps, it just couldn’t take it. Then I installed a flash drive that also failed, so it’s on its second flash drive now, because I didn’t have a suitable replacement at the time. Both failures were just the drives going into read-only mode, so at least I was able to save my config. The flash drive just needed a simple disk copy to another one and it was good to go. I couldn’t figure out how to recover from the SSD, so I reinstalled and copied the config over; partition size differences and me being new to BSD. Now with cloud backups, restoring should be easier. I need to test that in a VM at some point to get some ideas.

Hopefully this $20 hard drive from Amazon will hold up for a few years or more. I went with solid state originally thinking it’d be more reliable, but apparently the ones I chose just weren’t suitable for it. If I could justify the cost of Samsung SSDs in a router, I’d probably use them, but it’s hard to find one smaller than 250GB nowadays. pfsense doesn’t need much on its own unless I use Squid. Oh well, spinning rust it is. It’s even hard to find new small hard drives at decent prices, too. Weird. shrug

If this made any sense, great. If not, well, I don’t know what to tell ya. Cheers.

TL;DR: I installed pfsense from within a VM to a physical disk for deployment on bare metal.

UPDATE 8/12: The hard drive was installed and booted with no problems, as if nothing changed. On the plus side, I now have plenty of space to try things without worrying about exceeding writes on flash media, and the webui loads and applies changes faster. :)

Moved to Linux

Giving up on Windows for now

Welp, after months of dealing with Windows crashing my PC, I’ve resorted to using Linux Mint 19.1. I’ve switched my drives from NTFS to EXT4, but left room to maybe install Windows just for my games when I can be bothered to try again. All I wanted was a stable system, and Windows wasn’t havin’ it. lol. No amount of troubleshooting helped; the symptom remained the same: a video driver crash at the most inconvenient of times, usually after gaming for a few hours.

Posting on reddit revealed some sort of issue between VT-d and Windows 10, but only after I had already wiped it. XD Something to consider when I try Windows again. I’m not sure why it’d cause my problem, but that’s something to explore in the future.

Redshiftin the Desktop

Using Redshift to automatically change the monitor's color temperature

Save your eyes: use Redshift on Linux, or Windows’ built-in night mode.

Description

Redshift adjusts the color temperature according to the position of the sun. A different color temperature is set during night and daytime. During twilight and early morning, the color temperature transitions smoothly from night to daytime temperature to allow your eyes to slowly adapt. At night the color temperature should be set to match the lamps in your room. This is typically a low temperature at around 3000K-4000K (default is 3700K). During the day, the color temperature should match the light from outside, typically around 5500K-6500K (default is 5500K). The light has a higher temperature on an overcast day.

Some tips

Use the config from the site if you encounter issues with the gtk applet. Or do so even if you don’t. :)

Set your location manually if Redshift has issues retrieving it from the web, using latitude and longitude values with 3 decimal places. I found mine by looking at the gtk applet’s info window when it did connect. OR you can use this URL I found in the geoclue.conf file: https://location.services.mozilla.com/v1/geolocate?key=geoclue.
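A sketch of the manual-location bits in ~/.config/redshift.conf (the coordinates here are placeholders; the option names come from Redshift’s sample config):

```
[redshift]
temp-day=5700
temp-night=3600
location-provider=manual

[manual]
lat=40.123
lon=-75.123
```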

If you want it to affect just one monitor, put this at the end of the config the site lists. Just omit the similar line listed on the site.

[randr]
crtc=0 #this is the primary monitor

Tech Update: Bluetooth Transceiver and Battery

Tech update:

Some time ago, I purchased a bluetooth audio transceiver that mostly went unused till I had the idea to pair it with my phone and run earbuds off it, since the headphone jack on my phone became iffy; the left audio channel would cut in and out if the plug moved ever so slightly. The device was sold by Anker on Amazon for, I dunno, 28 bucks. A tiny thing; fits in the mini right pocket of my jeans. The battery lasts about 8 hours fully charged. It lets out a loud beep-beep-beep when something starts playing from the phone. It’s loud and startling. I kinda wish I knew how to hack firmware, but I don’t think the computer even knows it’s connected; it doesn’t show up in any sort of device list. I mean, why would it, but still.

In comes a portable 10,000 mAh battery, also from Amazon, that recently sold for about 36 bucks. I tuck it in my right pocket next to the phone and the bluetooth device to charge it up when it gets low, or just to keep it topped off. I have no idea how many full charges the big battery will cover; I haven’t done the math yet, nor can I be bothered.


On top of that, I can top off my phone as needed for those long work days away from any power sockets. :)

I like to play podcasts and audio books while I work.

The product page is here, but the main seller, Amazon, no longer carries it, sadly.