CentOS 8

So a while ago I started looking at options to update my MariaDB servers & mail server to newer versions than the OS-provided ones they were running. Initially I was looking at using IUS like I did for PHP, but CentOS 8 hit RTM with updated versions of all of the above, so when an image became available on my hosting provider I went that route. I've converted the database cluster and have both front-end web servers (Nginx w/ caching) migrated over. I was planning on slowly working my way through the rest of the environment, and then I found out CentOS 8 is being discontinued at the end of 2021. Pretty damn short notice, considering CentOS 6 just went end-of-life on November 30th and CentOS 7's end-of-life isn't until sometime in 2024. Based on the description, the replacement, CentOS Stream, is going to serve as a beta for RHEL, which isn't exactly useful when you're looking for stability. Guess it's time to look for a new OS for my servers.

NAS Build 2.0?

So I built a new temporary NAS out of an old PowerEdge 2950 G3 I had in my recycle pile, an HBA with external ports, and a 15-bay external drive enclosure. This seems to have resolved the issue I was having with FreeNAS dropping the WD Red pool, even if I haven't pinpointed the exact cause. At some point I'd like to replace the 2950, as it's a bit of a potential bottleneck and its upgradability is limited (I upgraded it to 40 GB of RAM, but the system maxes out at 64). Plus, the drives installed in the system all run through the onboard RAID controller as single-disk RAID 0s, which is less than desirable. Since that pool contains my VMs, I'll need to move them elsewhere before I replace the box. I was looking at adding to the WD Red pool, but the matching drives aren't built for a more-than-8-drive use case, and the ones that are don't come in 3TB. So I'm looking at other options as well, considering building a new 5-disk pool with larger drives and adding additional 5-disk vdevs later. For the moment this will all have to remain hypothetical; hopefully drive prices come down further.
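
To put some rough numbers on that idea, here's a quick back-of-the-envelope sketch. The 8TB drive size and RAID-Z2 parity level are assumptions for illustration, not a decided plan:

```python
# Back-of-the-envelope usable capacity for the hypothetical pool layouts.
# Ignores ZFS metadata/padding overhead and the usual "keep it under ~80%
# full" guideline, so real numbers come in lower.

def raidz_usable_tb(drives: int, drive_tb: float, parity: int) -> float:
    """Rough usable space of one RAID-Z vdev: (drives - parity) * drive size."""
    return (drives - parity) * drive_tb

one_vdev = raidz_usable_tb(5, 8.0, 2)   # 5 x 8TB RAID-Z2 vdev
two_vdevs = 2 * one_vdev                # after adding a second matching vdev

print(one_vdev, two_vdevs)              # 24.0 48.0
```

The nice part of the vdev-at-a-time approach is that each added vdev also adds spindles, so performance grows along with capacity.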

Dead Ends

I was finding nothing but dead ends looking for a job before this started. I got laid off from my part-time job back in March, temporarily at the time, but it's now looking most likely indefinite. Guess I really, really screwed up somewhere along the line; I just don't know where.

Gluster Troubleshooting

So the backend storage for the stuff I'm serving to the internet resides on a pair* of CentOS 7 hosts running GlusterFS 6.7 with a single mirrored brick. Recently I've started getting disk space warnings on one node but not the other, and the cause appears to be the log file for the brick just exploding (>3GB) in size. I can't figure out what the issue is; so far, though, it doesn't appear to be preventing access.

(*Yes, I know it should probably be at least 3 nodes, or 2 + arbiter. At some point when I have a bigger budget I'll look at fixing it, but this isn't a business production environment either, so I can deal with an occasional split-brain issue.)
The logs all seem to be filling with the same repeating entries in the brick log.
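
Until I track down the cause, rotating the brick log would at least keep it from eating the disk. Below is a sketch of a logrotate stanza; the log path assumes the default CentOS location, and GlusterFS may already ship its own logrotate file that would be better to adjust instead:

```
# /etc/logrotate.d/glusterfs-bricks (sketch; check for an existing
# glusterfs logrotate config before adding this)
/var/log/glusterfs/bricks/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    # copytruncate avoids having to signal the brick process to reopen logs
    copytruncate
}
```

(GlusterFS also has a built-in `gluster volume log rotate <volname>` command for rotating brick logs on demand.)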

Lab Updates

So late last year I picked up a refurbished Dell PowerEdge R610 to use as a VM host to replace the 2950 Gen3. I still need to get more memory for it, as I only have it outfitted with 64 GB currently. I also want to get 1-2 more R610s so I can get rid of the R900, and I might get a 4th to dedicate to anything requiring super heavy lifting, should that come up. Originally I was going to scrap the 2950, but I've since decided to try doing a FreeNAS build around it. It only has 6 bays, which I've filled with 2TB drives, and I replaced a couple of the 4GB memory modules with 8GB modules, bringing the total up to 40GB (I'll probably replace the remaining 6 later in the year). So far it's handling my iSCSI storage for the XenServer boxes. I'd like to move everything else over too, but that would require a suitable controller and some sort of 3.5″ disk shelf/enclosure that I haven't located yet (at least not without paying $$$$$, short of going on eBay and hoping for the best). If I can get that done, I'll probably take the AMD system in my original NAS build and use it for a new workstation.

The Week Ahead..

Welcome to severe weather season, everybody. Make sure your weather radio is plugged in & has a good battery, along with a backup method (or methods) to receive warnings.

So I'd rather be in Florida the rest of the week, but that's obviously not happening. For anyone on my friends list headed down to MCO (have fun), here's a short forecast I tossed together. I meant to do this earlier in the week, but due to computer problems I didn't get much done this weekend. I may try to do an updated one tonight that runs out a little further if my meetings don't run late.

Today (5/16/19, all times EDT)
Dinner (5 PM): Low 70s
Evening (8 PM): Upper 60s
Overnight (1 AM): Mid 60s

Tomorrow (5/17/19)
Breakfast (8 AM): Mid 60s
Lunch (11 AM): Low – Mid 70s
Dinner (5 PM): Upper 70s
Evening (8 PM): Low 70s
Overnight: Around 70

Saturday (5/18/19)
Breakfast (8 AM): Upper 60s
Lunch (11 AM): Upper 70s – Low 80s
Dinner (5 PM): Mid – Upper 80s
Evening (8 PM): Upper 70s – Low 80s
Overnight: Low – Mid 70s

Tuesday (5/21/19)
Breakfast (8 AM): Upper 60s
Lunch: Upper 70s – Low 80s
Dinner (5 PM): Upper 80s

Projects II

So I discovered the CPU I was going to use to build a Hyper-V or VMware test box only has a single core. I think I'll end up using it for a pfSense or OPNsense box instead and do something else for the hypervisor test box, maybe using the hardware from my current NAS after I build its replacement.

Occasionally I’ve run into some bottlenecks with the NAS which I think are I/O rather than CPU, I just haven’t figured out if I need more RAM, more disks or some combination thereof. I’ve wondered if I need to add a SSD or two but I’m not sure if that would help or not either. Current config has most of my data, including the XenServer storage on a 6x3TB RAID-Z2 using the WD Red drives. Contemplating if adding SSDs for either SLOG and/or L2ARC would help. Thinking if I can build a new machine I’ll put in 1 6×3 vDev and migrate everything over and then add the 6×3 from the existing machine as a 2nd vdev?

Meet the new boss same as the old boss…

So the Madrid City Council punted the public safety building down the road yet again, again citing funds (while somehow having $7 MILLION for Public Works?). Guess instead of a new and improved mayor & city council, we got the same mayor & council we had, just with different bodies in the seats. Seriously, how many years does it take to build ONE BUILDING? The mayor and council need to just come clean and admit they never had any intention of building the new public safety building, because they keep letting the Public Works Director go on spending sprees every year. Save everyone the trouble and quit making empty promises you have no intention of ever keeping.

2020? Projects

(If you’re wondering about the title it’s that I doubt I’ll have the budget for any of this in the coming year so maybe next year..maybe.)

I think going forward I'm going to go back to building my PCs instead of buying machines, except obviously if I replace my going-on-10-year-old laptop. I might buy a refurbished machine for a server depending on the use case, but otherwise I'll probably just go back to custom builds. I could use the practice anyway, since I think my last build was my NAS, and that was in 2015. In the meantime I've been slowly upgrading the PC I use as my primary machine where I can; it still needs more memory and an upgraded network card, since for some strange reason the onboard NIC from HP is only 10/100 despite the board being new enough to support an AMD Phenom II X4 840T. At some point, though, I'd like to build a new machine that can support better graphics cards and more memory. (HP shipped it with Intel graphics, I think, and I added a 1GB Radeon 5570 PCI Express card to handle my 3-monitor setup.)

NAS Upgrades / FreeNAS Mk II
So in 2015 I built a new storage box to handle all my file storage and to be central storage for my home lab virtual machines, instead of chunks of storage spread out all over the place. However, I did this more on a "how fast/cheap can I get this done?" basis than with any planning for expandability or performance. I'm considering building a second unit to use just for my XenServer VMs and leaving the first for file storage, but I'm not sure if I will do that yet. I'm also not sure how much I can upgrade to increase performance other than maybe additional memory; I'm currently only running 16 GB, while the board supports 64 GB. I currently have all 12 bays filled: seven 3TB WD Reds, four 1TB Seagate Enterprise drives (migrated from the old fileserver), and a 400 GB drive I threw in since it was lying around. I don't have any SSDs installed, and none of the drives are assigned to L2ARC or ZIL, so maybe replacing 1-2 of the 1TB drives with SSDs for those could help. Any capacity increase would have to come from swapping drives out for larger ones. I should note I'm not saturating the NICs yet, so any bottleneck is likely in the storage itself (short of something in the switch – a Cisco 2948G-GE-TX – preventing use of the full gigabit links).
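
For reference, the ceiling on those gigabit links works out roughly as follows. The ~94% payload-efficiency figure is a rough assumption for TCP over standard 1500-byte Ethernet frames:

```python
# Line-rate math for a single gigabit link, to sanity-check where the
# bottleneck sits. Protocol efficiency is an assumption, not a measurement.

GIGABIT_BPS = 1_000_000_000
line_rate_mb_s = GIGABIT_BPS / 8 / 1_000_000  # 125.0 MB/s on the wire
payload_mb_s = line_rate_mb_s * 0.94          # ~117.5 MB/s after TCP/IP overhead

print(line_rate_mb_s, payload_mb_s)
```

So if transfers are sitting well under ~115 MB/s and the NIC isn't pegged, the disks (or cache misses) are the likelier culprit.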

Additional VM Host(s)
I currently have 2 Dell PowerEdge servers being used to run VMs; however, due to the time between purchases, they are far from matched: one is a 2950 and the other is an R900. At some point I'd like to build (or maybe buy) something closer to the R900 to give some headroom and allow for failover, as the R900 occasionally reboots itself. The 2950 doesn't have the memory to handle all of the VMs failing over from the R900 (32 GB vs 128 GB). The reboots might just be a bad memory module, but I haven't torn the machine apart to diagnose it. Perhaps I'll also add some higher-speed (10/40Gbit) network cards for future use (i.e. storage).

I’m sure I will probably come up with more going down the road but thought this would work for a post since the weather has not been noteworthy outside of being unusually warm for January.

(EDIT 1/6/19 12:55 AM – I realized I forgot a build)

VMware/Hyper-V Host

I should probably build a small-ish host for trying out Hyper-V and/or VMware, one I can blow up if need be, just to get some hands-on time, since my main environment is built on XenServer. I think I have some parts I can put toward this, so the main task is probably just finding a motherboard that fits.