Short Storage Update

So one of the WD Red drives in the pool hosting my VMs recently started displaying errors in TrueNAS (pool degraded). As with the previous NAS, the errors did seem to go away after a reboot, so I'm not sure whether this is truly a drive starting to fail, but to be on the safe side (the drives are getting old) I picked up a few more 6TB IronWolf drives from NewEgg (the 'Pro' versions this time; I think the original 5 were just standard IronWolf drives). Prior to their arrival I built a new zvol on the other pool and migrated the VMs across, since I would need to remove part of the WD pool to install the additional drives. So now I'm running two five-disk RAID-Z2 vdevs, and I'm thinking I'll add one more at most (as the JBOD is only 15 bays, anything else requires either 2.5″ drives or an additional JBOD). I haven't decided whether I should start looking for SSDs to add to this as well. I did pick up 128GB of RAM to add to this system too, since it has plenty of room (I think it maxes out at 768GB), but I haven't gotten around to installing it yet.
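For anyone wondering what those layouts work out to in usable space, here's a quick back-of-the-envelope sketch. RAID-Z2 gives up two drives' worth of space per vdev to parity; the numbers below ignore ZFS metadata and padding overhead, which shave off a bit more in practice.

```python
# Rough usable-capacity estimate for a pool of RAID-Z2 vdevs.
# RAID-Z2 loses two drives per vdev to parity (overhead ignored).

def raidz2_usable_tb(vdevs: int, drives_per_vdev: int, drive_tb: float) -> float:
    """Approximate usable capacity, in TB, of a pool of RAID-Z2 vdevs."""
    return vdevs * (drives_per_vdev - 2) * drive_tb

# Two 5-disk RAID-Z2 vdevs of 6TB drives, as described above:
print(raidz2_usable_tb(2, 5, 6.0))  # -> 36.0

# A third 5-disk vdev (filling the 15-bay JBOD) would bring that to:
print(raidz2_usable_tb(3, 5, 6.0))  # -> 54.0
```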



Storage Thoughts

So, I'm contemplating making some additions to my TrueNAS box (previously discussed here), either in the form of additional RAM or drives. The current pool configuration is the 5-drive RAID-Z2 of IronWolf drives, and the WD Reds have been rearranged into a 3-vdev mirror, with one drive left floating empty. Unfortunately, it appears the 6TB IronWolf drives are no longer available, so I'm not sure what I'm going to do about adding to that pool at the moment. I was considering adding a pair of drives to the other pool, which houses my VM store, but that pool is now showing one of its drives faulted. I haven't been able to determine whether the drive is actually having an issue or if it's a repeat of the v1 build, where it just went back to normal after a reboot.

I need to work out a backup mechanism for the VMs, but other than the IronWolf pool I don't have a good target for one at the moment.
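If the IronWolf pool does end up as the backup target, ZFS snapshot replication is the obvious mechanism. A minimal sketch, assuming the VM zvols live under a dataset I'll call tank/vms and the IronWolf pool is named ironwolf (both names are placeholders, not my actual layout):

```shell
# Placeholder dataset/pool names -- substitute the real ones.
# Take a recursive point-in-time snapshot of the VM dataset:
zfs snapshot -r tank/vms@backup-1

# Replicate it to the IronWolf pool (-u leaves the copy unmounted):
zfs send -R tank/vms@backup-1 | zfs receive -u ironwolf/vm-backups

# Later runs only need to send the delta between two snapshots:
zfs snapshot -r tank/vms@backup-2
zfs send -R -i tank/vms@backup-1 tank/vms@backup-2 \
    | zfs receive -u ironwolf/vm-backups
```

Wrapping that in a cron job with dated snapshot names would make it hands-off.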

Tuesday Night Project

This was my project on Tuesday night this week. I’ll possibly make another post with a photo or three once I’ve started installing equipment.

So this is a thing…

Yep, definitely did not see this one coming. First in December.

SPC Day 1 Severe Outlook IWN

Moderate risk area carries a 45% hatched for damaging winds. Much of the area also has a 10% tornado probability.

You can walk across the river..

might be a bit of an understatement. The lack of rainfall has led to very low levels on area rivers; here is a look at the measurements from around 4 PM this afternoon:

Gauge | Stage (Feet) | Flow Rate (CFS)
Des Moines River / Saylorville Lake / Beaver Creek
Des Moines River near Stratford | 5.6 | 1,000
Saylorville Lake | 836.07 | —
Des Moines River at Saylorville Tail (Johnston / Below Saylorville Dam) | 3.2 | 1,870
Beaver Creek near Grimes | 3.03 | 44.1
Des Moines River at Des Moines 2nd Avenue | 15.49 | 1,370
Des Moines River at Des Moines SE 6th Street (Below Raccoon River) | 10.23 | 1,530
Raccoon River / Walnut Creek
North Raccoon River near Perry | 3.34 | 181
South Raccoon River at Redfield | 2.49 | 141
Raccoon River at Van Meter | 2.61 | 377
Raccoon River near West Des Moines | 26.5 | 323
Raccoon River at Iowa 28 / 63rd Street | 20.25 | 391
Walnut Creek at Des Moines 63rd Street | 3.02 | 6.80
Raccoon River at Fleur Drive | 0.71 | 263
Stream gage data via NWS / USGS / US Army Corps of Engineers

Saylorville Lake's normal pool is 836.00 feet.


Here we are, fourteen months later, and I'm still trying to make it make sense, but instead I'm finding only dead end after dead end. Trying to figure out where the mistake was that made everything go wrong. Was it leaving the call center job in 2015? Not leaving it sooner? Taking that job at all? Or was it something else entirely, not job-related, that I missed even earlier, back in college?

Worse, I seem to have lost a friend in the middle of all this. Some other mistake I made unintentionally, presumably something I said, since we don't see each other very often IRL without a pandemic; now they seem to be gone and there's no way for me to repair the damage.

It seems the only thing I can succeed at is being an utter failure.



So my FreeNAS/TrueNAS box has gone through a couple of iterations now, due to trying to resolve issues that came up with the original build from 2015 or so. Iteration 1.5, "Vault," involved an old Dell PowerEdge 2950 III that I hadn't taken to be recycled yet, which I outfitted with six 2TB drives (the max capacity for the PERC controller, according to the documentation) and 40GB of DDR2. These drives were set up as a 3-vdev mirror, primarily to house the VM data so the VMs wouldn't have to be rebooted, while the rest resided on the original box. I eventually procured a 15-bay chassis and a PCI-Express HBA, which was installed in the 2950. Unfortunately, when I joined Vault to Active Directory ahead of bringing over the original pool, it didn't use the same user/group ID mappings, which resulted in permissions issues.

I was looking at expanding the pool, but WD's documentation specifies a max of 8 drives per enclosure for the 3TB CMR WD Reds, and the higher-end model doesn't come in 3TB. Instead I picked up five 6TB Seagate IronWolf drives, which are configured in RAID-Z2, with the plan to eventually buy 10 more drives to add to the pool. I've since migrated everything to the 2.0 box, a refurbished R720xd with 128GB of RAM, which will eventually be added to. All of the data has largely been migrated off of the WD Reds, with the exception of the system dataset. That is where I'm now running into an issue: I get an error about Active Directory if I try to move the system dataset back to the boot pool (which is now a hard disk instead of thumb drives) or to the IronWolf pool. To carry out my original plan I need to be able to remove the Red drives from the system, as they are occupying about half the enclosure (the system bays are all 2.5″, with the idea being that those eventually get used for SSDs).

I'm not sure if there's a way to get everything off the Reds, or if I should just alter the plan, replace the Reds a disk at a time, and add the IronWolf drives as additional vdevs (once I pick up a couple more, since the WD Red pool is a 7-disk). Since I'm also using this system as the storage for my VMs running in XCP-ng, I may end up having to build an additional mirrored pool for the VMs (the 2TB drives are currently unused, so I could reconstitute it with those once I add an additional enclosure). Things to ponder…
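The disk-at-a-time route would look roughly like this. Pool and device names here are placeholders, and each swap triggers a full resilver, so only one disk gets replaced at a time:

```shell
# Placeholder pool/device names -- check the actual layout first:
zpool status wdpool

# Swap one Red for an IronWolf, then wait for the resilver to
# finish before touching the next disk:
zpool replace wdpool /dev/da3 /dev/da9
zpool status wdpool   # watch for "resilver in progress" to clear

# Once every disk in a vdev has been replaced with a larger one,
# the extra capacity shows up automatically if autoexpand is set:
zpool set autoexpand=on wdpool
```

The catch with this approach is that the pool stays 7-wide, so the parity overhead of the original layout is locked in.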

CentOS 8

So a while ago I started looking at options to update my MariaDB servers and mail server to newer versions than the OS-provided ones they were running. Initially I was looking at using IUS, like I did for PHP, but CentOS 8 hit RTM including updated versions of all of the above, so when an image became available at my hosting provider, I went that route. I've converted the database cluster and have both front-end web servers (Nginx with caching) migrated over. I was planning on slowly working my way through the rest of the environment, and then I found out CentOS 8 is being discontinued at the end of 2021. Pretty damn short notice, considering CentOS 6 just went end-of-life on November 30th and CentOS 7's end-of-life isn't until sometime in 2024. Based on the description, the replacement, CentOS Stream, is going to serve as a beta for RHEL, which is not exactly useful when you're looking for stability. Guess it's time to look for a new OS for my servers.

NAS Build 2.0?

So I built a new temporary NAS out of an old PowerEdge 2950 III I had in my recycle pile, an HBA with external ports, and a 15-bay external drive enclosure. This seems to have resolved the issue I was having with FreeNAS dropping the WD Red pool, even if I haven't pinpointed the exact cause. At some point I would like to replace the 2950, as it is a bit of a potential bottleneck and its upgradability is limited (I upgraded it to 40GB of RAM, but the system maxes out at 64). Plus, the drives installed in the system all run through the onboard RAID controller as single-disk RAID 0s, which is less than desirable. As that pool contains my VMs, I'll need to move them elsewhere before I replace the box. I was looking at adding to the WD Red pool, but the matching drives aren't built for a more-than-8-drive use case, and the ones that are don't come in 3TB. So I'm looking at other options as well, considering building a new 5-disk pool with larger drives and adding additional 5-disk vdevs later. But for the moment this will all have to remain hypothetical; hopefully drive prices come down further.

Dead Ends

I was finding nothing but dead ends looking for a job before this started. I got laid off from my part-time job back in March, temporarily at the time, but now it's looking most likely indefinite. Guess I really, really screwed up somewhere along the line; I just don't know where.