Lab Update: Lab Version..3? I lost count..
So, I finally got around to finishing my plans for an HA setup in my lab, although I had to resort to eBay since the model of server I’d been using is no longer available from the refurb shop I used before (not surprising really, given its age..). I picked up 2 more R610s to add to the pool and a third to replace the 2950 as my OPNsense box. Initially I was looking for CPUs identical to the existing hosts (X5650s), but I stumbled upon a listing with 2 hosts sporting X5660s, so I opted for those and repurposed the RAM out of the remaining unit, as I don’t think I need 128+ GB of RAM for OPNsense. As luck would have it, that unit also included what appears to be a 10 Gbps NIC in one of the PCIe slots, so that could be handy someday.
The 2950 has been fully replaced, and the other new hosts have been added to my XCP-ng pool. I ended up tearing everything apart and redoing all of the port assignments on my switches, since I had only planned for three VM hosts, not four. Almost everything is up and running now, except I came up a few cables short to connect all of the NICs, at least while keeping my color coding straight. That should hopefully be resolved this weekend, provided my cable order arrives.
On a side note, the forecast for Friday suggests we may see some severe weather, at least in the far eastern part of the state. Unfortunately I’ll be at work all day, but I should probably break the gear out this weekend on the off chance something pops up while I’m off work, plus to get ready for the road trip later in April.
It’s Done?
Finally got around to filling out (mostly? apparently it might support more than I thought) the RAM in the TrueNAS box. However, once everything booted back up, I couldn’t get the XCP-ng hosts to connect to it over iSCSI for their shared storage. A simple reboot shouldn’t break things this badly, unless there was some glaring patch issue on the XCP-ng side. Since things are still down anyway, I’m patching TrueNAS and will see what happens. Unfortunately the documentation on some of these things seems a bit lacking..
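If the patching doesn’t sort it out, my next step will probably be checking whether the iSCSI SR’s connections just failed to re-plug after the reboot. A rough sketch of what I have in mind, assuming the standard xe CLI that ships with XCP-ng (the SR UUID below is a placeholder, not my actual one):

```python
# Sketch: re-plug any detached PBDs for the iSCSI SR once the TrueNAS target
# is reachable again. Run on the pool master; SR_UUID is a placeholder.
import subprocess

SR_UUID = "REPLACE-WITH-SR-UUID"

def xe(*args: str) -> str:
    """Run an xe command and return its stdout."""
    return subprocess.run(("xe", *args), capture_output=True, text=True, check=True).stdout

# Each PBD represents one host's connection to the SR.
pbd_uuids = xe("pbd-list", f"sr-uuid={SR_UUID}", "--minimal").strip()
for pbd in filter(None, pbd_uuids.split(",")):
    attached = xe("pbd-param-get", f"uuid={pbd}", "param-name=currently-attached").strip()
    if attached == "false":
        print(f"Re-plugging PBD {pbd}")
        xe("pbd-plug", f"uuid={pbd}")
```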
Short Storage Update
So, one of the WD Red drives in the pool hosting my VMs recently started showing errors in TrueNAS (pool degraded). Like on the previous NAS, the errors seemed to go away after a reboot, so I’m not sure whether this is truly a drive starting to fail, but to be on the safe side (the drives are getting old) I picked up a few more 6TB IronWolf drives from Newegg (this time the ‘Pro’ versions; I think the original five were standard IronWolf drives). Before they arrived, I built a new zvol on the other pool and migrated the VMs across, since I’d need to pull part of the WD pool to install the additional drives. So now I’m running two five-disk RAID-Z2 vdevs, and I’m thinking I’ll add one more at most (the JBOD is only 15 bays, so anything beyond that requires either 2.5″ drives or an additional JBOD). I haven’t decided whether I should start looking for SSDs to add to this as well. I also picked up 128GB of RAM for this system, since it has plenty of room (I think it maxes out at 768GB), but I haven’t gotten around to installing it yet.
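For reference, adding that third RAID-Z2 vdev later would look roughly like this; pool and device names are placeholders, not my actual layout, and the dry-run flag stays on until the output looks right, since vdev additions can’t be undone:

```python
# Sketch: extend the pool with a third five-disk RAID-Z2 vdev.
# Pool and disk names are placeholders; check `zpool status` before running.
import subprocess

POOL = "vmpool"                                        # placeholder pool name
NEW_DISKS = ["da10", "da11", "da12", "da13", "da14"]   # placeholder device names

# -n performs a dry run and prints the resulting layout without changing anything.
subprocess.run(["zpool", "add", "-n", POOL, "raidz2", *NEW_DISKS], check=True)
# Drop the -n once the dry-run output matches the intended layout.
```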
Storage Thoughts
So, I’m contemplating some additions to my TrueNAS box (previously discussed here), either in the form of additional RAM or drives. The current pool configuration is the five-drive RAID-Z2 of IronWolf drives, while the WD Reds have been rearranged into a three-vdev mirror, with one drive just floating unused. Unfortunately it appears the 6TB IronWolf drives are no longer available, so I’m not sure what I’m going to do about expanding that pool at the moment. I was considering adding a pair of drives to the other pool, which houses my VM store, but that pool is now showing one of its drives faulted, and I haven’t been able to determine whether the drive is actually having an issue or if it’s a repeat of the v1 build, where things just went back to normal after a reboot.
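Before buying anything I should probably pull the SMART data on the faulted drive and see whether it’s actually racking up bad sectors or just threw transient errors. Something along these lines, with the device name being a placeholder:

```python
# Sketch: dump the SMART attributes that usually distinguish a genuinely
# failing drive from a one-off hiccup. Device name is a placeholder.
import subprocess

DEVICE = "/dev/da5"  # placeholder: the drive TrueNAS flagged as FAULTED

result = subprocess.run(["smartctl", "-a", DEVICE], capture_output=True, text=True)
for line in result.stdout.splitlines():
    if any(attr in line for attr in ("Reallocated_Sector_Ct",
                                     "Current_Pending_Sector",
                                     "Offline_Uncorrectable")):
        print(line)
```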
I need to work out a backup mechanism for the VMs, but other than the IronWolf pool I don’t have a good option at the moment.
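The lowest-effort option is probably periodic snapshots of the VM dataset replicated to the IronWolf pool with zfs send/receive. A bare-bones sketch of the idea, with the dataset names being placeholders rather than my actual layout:

```python
# Sketch: snapshot the VM dataset and replicate it to the other pool.
# Dataset names are placeholders; a real setup would use incremental sends
# (zfs send -i <previous snapshot>) after the first full copy.
import subprocess
import datetime

SRC = "vmpool/vmstore"            # placeholder: dataset holding the VM disks
DST = "ironwolf/backups/vmstore"  # placeholder: target dataset on the other pool
snap = f"{SRC}@auto-{datetime.date.today():%Y%m%d}"

subprocess.run(["zfs", "snapshot", snap], check=True)

# Pipe the full send stream into receive on the destination pool.
send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
subprocess.run(["zfs", "receive", "-F", DST], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```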
Tuesday Night Project
This was my project on Tuesday night this week. I’ll possibly make another post with a photo or three once I’ve started installing equipment.
So this is a thing…
Yep, definitely did not see this one coming. First in December.
The Moderate Risk area carries a hatched 45% probability for damaging winds. Much of the area also has a 10% tornado probability.
You can walk across the river..
might be a bit of an understatement. The lack of rainfall has led to very low levels on area rivers; here is a look at the measurements from around 4 PM this afternoon:
| Location | Stage (Feet) | Flow Rate (CFS) |
| --- | --- | --- |
| Des Moines River / Saylorville Lake / Beaver Creek | | |
| Des Moines River near Stratford | 5.6 | 1000 |
| Saylorville Lake | 836.07 | |
| Des Moines River at Saylorville Tail (Johnston / Below Saylorville Dam) | 3.21 | 870 |
| Beaver Creek near Grimes | 3.03 | 44.1 |
| Des Moines River at Des Moines 2nd Avenue | 15.49 | 1370 |
| Des Moines River at Des Moines SE 6th Street (Below Raccoon River) | 10.23 | 1530 |
| Raccoon River / Walnut Creek | | |
| North Raccoon River near Perry | 3.34 | 181 |
| South Raccoon River at Redfield | 2.49 | 141 |
| Raccoon River at Van Meter | 2.61 | 377 |
| Raccoon River near West Des Moines | 26.5 | 323 |
| Raccoon River at Iowa 28 / 63rd Street | 20.25 | 391 |
| Walnut Creek at Des Moines 63rd Street | 3.02 | 6.80 |
| Raccoon River at Fleur Drive | 0.71 | 263 |
Saylorville Lake’s normal pool is 836.00.
Fourteen
Here we are, fourteen months later, and I’m still trying to make it make sense, but instead I’m finding only dead end after dead end. Trying to figure out where the mistake was that made everything go wrong. Was it leaving the call center job in 2015? Not leaving it sooner? Taking that job at all? Or was it something else entirely, not job related, that I missed before that, back in college?
Worse, I seem to have lost a friend in the middle of all this. Some other mistake I made unintentionally, presumably something I said, since we don’t see each other very often IRL without a pandemic, but now they seem to be gone and there’s no way for me to repair the damage.
It seems the only thing I can succeed at is being an utter failure.
Zpools
So my FreeNAS/TrueNAS box has gone through a couple of iterations now, driven by trying to resolve issues that came up with the original build from 2015 or so. Iteration 1.5, “Vault,” involved an old Dell PowerEdge 2950 III that I hadn’t taken to be recycled yet, which I outfitted with six 2TB drives (the max capacity for the PERC controller, according to the documentation) and 40GB of DDR2. Those drives were set up as a three-vdev mirror, primarily to house the VM data so the VMs wouldn’t have to be rebooted, while the rest resided on the original box. I eventually procured a 15-bay chassis and a PCI Express HBA, which was installed in the 2950. Unfortunately, when I joined Vault to Active Directory ahead of bringing over the original pool, it didn’t use the same user/group ID mappings, which resulted in permissions issues.

I was looking at expanding the pool, but WD’s documentation calls for a max of 8 drives in the enclosure for the 3TB CMR WD Reds, and the higher-end model doesn’t come in 3TB. So I picked up five 6TB Seagate IronWolf drives, which are configured in RAID-Z2, with the plan to eventually buy 10 more drives to add to the pool. I’ve since migrated everything to the 2.0 box, a refurbished R720xd with 128GB of RAM (which will eventually be added to). All of the data has largely been migrated off the WD Reds, with the exception of the system dataset. That’s where I’m now running into an issue: I get an error about Active Directory if I try to move the system dataset back to the boot pool (now a hard disk instead of thumb drives) or to the IronWolf pool.

To carry out my original plan I need to be able to remove the Red drives from the system, as they’re occupying about half the enclosure (the system’s own bays are all 2.5″, the idea being those are eventually used for SSDs). I’m not sure if there’s a way to get everything off the Reds, or if I should just alter the plan, replace the Reds a disk at a time, and add the IronWolf drives as additional vdevs (once I pick up a couple more to round that out, since the WD Red pool is a 7-disk). Since I’m also using this system as the storage for my VMs running in XCP-ng, I may end up having to build an additional mirrored pool for the VMs (the 2TB drives are currently unused, so I could reconstitute it with those once I add an additional enclosure). Things to ponder..
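If I do go the replace-in-place route, it’s basically one resilver per disk, along the lines of the sketch below; pool and device names are placeholders, not my actual layout:

```python
# Sketch: swap out the WD Reds one at a time with larger disks. Each replace
# kicks off a resilver; wait for it to finish before starting the next one.
# Pool and device names are placeholders.
import subprocess
import time

POOL = "redpool"
REPLACEMENTS = [
    ("gptid/old-red-1", "gptid/new-disk-1"),
    ("gptid/old-red-2", "gptid/new-disk-2"),
    # ...and so on for the rest of the vdev
]

def resilvering(pool: str) -> bool:
    """Return True while zpool status reports an in-progress resilver."""
    status = subprocess.run(["zpool", "status", pool], capture_output=True, text=True).stdout
    return "resilver in progress" in status

for old, new in REPLACEMENTS:
    subprocess.run(["zpool", "replace", POOL, old, new], check=True)
    while resilvering(POOL):
        time.sleep(300)  # check every 5 minutes
```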
CentOS 8
So a while ago I started looking at options to update my MariaDB servers and mail server to newer versions than the OS-provided ones they were running. Initially I was looking at using IUS, like I did for PHP, but CentOS 8 hit RTM with updated versions of all of the above, so when an image became available from my hosting provider, I went that route. I’ve converted the database cluster and have both front-end web servers (Nginx with caching) migrated over. I was planning on slowly working my way through the rest of the environment, and then I found out CentOS 8 is being discontinued at the end of 2021. That’s pretty damn short notice, considering CentOS 6 just went end-of-life on November 30th and CentOS 7’s end-of-life isn’t until sometime in 2024. Based on the description, the replacement, CentOS Stream, is going to serve as a beta for RHEL, which isn’t exactly useful when you’re looking for stability. Guess it’s time to look for a new OS for my servers.
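For what it’s worth, the CentOS 8 side of the upgrade was mostly just picking AppStream module streams instead of bolting on third-party repos. Roughly what that looked like, with the stream version here being an example rather than a recommendation:

```python
# Sketch: list the MariaDB module streams AppStream offers, enable one, and
# install the server package. The 10.3 stream is just an example; check what
# the repo actually lists before enabling anything.
import subprocess

def dnf(*args: str) -> None:
    subprocess.run(["dnf", "-y", *args], check=True)

subprocess.run(["dnf", "module", "list", "mariadb"], check=True)  # see available streams
dnf("module", "enable", "mariadb:10.3")  # example stream
dnf("install", "mariadb-server")
```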