You can walk across the river…

might be a bit of an understatement. The lack of rainfall has led to very low levels on area rivers; here is a look at the measurements around 4 PM this afternoon:

(All readings: stage in feet, flow rate in cubic feet per second.)

Des Moines River / Saylorville Lake / Beaver Creek
Des Moines River near Stratford: 5.6 ft, 1,000 CFS
Saylorville Lake: 836.07 ft (pool elevation)
Des Moines River at Saylorville Tail (Johnston / Below Saylorville Dam): 3.2 ft, 1,870 CFS
Beaver Creek near Grimes: 3.03 ft, 44.1 CFS
Des Moines River at Des Moines 2nd Avenue: 15.49 ft, 1,370 CFS
Des Moines River at Des Moines SE 6th Street (Below Raccoon River): 10.23 ft, 1,530 CFS

Raccoon River / Walnut Creek
North Raccoon River near Perry: 3.34 ft, 181 CFS
South Raccoon River at Redfield: 2.49 ft, 141 CFS
Raccoon River at Van Meter: 2.61 ft, 377 CFS
Raccoon River near West Des Moines: 26.5 ft, 323 CFS
Raccoon River at Iowa 28 / 63rd Street: 20.25 ft, 391 CFS
Walnut Creek at Des Moines 63rd Street: 3.02 ft, 6.80 CFS
Raccoon River at Fleur Drive: 0.71 ft, 263 CFS
Stream gage data via NWS / USGS / US Army Corps of Engineers

Saylorville Lake’s normal pool is 836.00 feet.

Fourteen

Here we are, fourteen months later, and I’m still trying to make it make sense, but instead I’m finding only dead end after dead end, trying to figure out where the mistake was that made everything go wrong. Was it leaving the call center job in 2015? Not leaving it sooner? Taking that job at all? Or was it something else entirely, not job related, that I missed before that, back in college?

Worse, I seem to have lost a friend in the middle of all this. Some other mistake I made unintentionally, presumably something I said, since we don’t see each other very often IRL even without a pandemic. Now they seem to be gone, with no way for me to repair the damage.

It seems the only thing I can succeed at is being an utter failure.


Zpools

So my FreeNAS/TrueNAS box has gone through a couple of iterations now, trying to resolve issues that came up with the original build from 2015 or so. Iteration 1.5, “Vault,” involved an old Dell PowerEdge 2950 III that I hadn’t taken to be recycled yet, which I outfitted with six 2 TB drives (the max capacity for the PERC controller, according to the documentation) and 40 GB of DDR2. These drives were set up as a pool of three mirrored vdevs, primarily to house the VM data so the VMs wouldn’t have to be taken down, while the rest of the data resided on the original box. I eventually procured a 15-bay chassis and a PCI Express HBA, which was installed in the 2950. Unfortunately, when I joined Vault to Active Directory ahead of bringing over the original pool, it didn’t use the same user/group ID mappings, which resulted in permissions issues.

I was looking at expanding the pool, but WD’s documentation specifies a max of 8 drives per enclosure for the 3 TB CMR WD Reds, and the higher-end model doesn’t come in 3 TB. Instead I picked up five 6 TB Seagate IronWolf drives, which are configured in RAID-Z2, with the plan to eventually buy 10 more drives to add to the pool. I’ve since migrated everything to the 2.0 box, a refurbished R720xd with 128 GB of RAM (which will eventually be added to). All of the data has largely been migrated off of the WD Reds, with the exception of the system dataset. That is where I’m now running into an issue: I get an error about Active Directory if I try to move the system dataset back to the boot pool (which is now a hard disk instead of thumb drives) or to the IronWolf pool. To carry out my original plan I need to be able to remove the Red drives from the system, as they are occupying about half the enclosure (the system bays are all 2.5″, the idea being that those eventually get SSDs).
I’m not sure if there’s a way to get everything off the Reds, or if I should just alter the plan: replace the Reds a disk at a time and add the IronWolf drives as an additional vdev (once I pick up a couple more, since the WD Red pool is 7 disks). Since I’m also using this system as the storage for my VMs running in XCP-ng, I may end up having to build an additional mirrored pool for the VMs (the 2 TB drives are currently unused, so I could reconstitute that with them once I add an additional enclosure). Things to ponder…
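For the disk-at-a-time route, the mechanics would look roughly like this. This is only a sketch; the pool name (redpool) and the gptid device labels are placeholders, not what’s actually on my box:

```shell
# Check which disks make up the WD Red pool and their health:
zpool status redpool

# Swap one Red out for an IronWolf, then wait for the resilver to
# finish before touching the next disk:
zpool replace redpool gptid/old-red-disk gptid/new-ironwolf-disk
zpool status redpool   # watch resilver progress

# With autoexpand on, the pool grows to the new disks' capacity once
# every member has been replaced:
zpool set autoexpand=on redpool
```

The catch is that this replaces capacity in place rather than freeing up enclosure bays, which is why pulling the Reds entirely is the more appealing (and more stuck) option.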

CentOS 8

So a while ago I started looking at options to update my MariaDB servers and mail server to newer versions than the OS-provided ones they were running. Initially I was looking at using IUS, like I did for PHP, but CentOS 8 hit RTM with updated versions of all of the above, so when an image became available from my hosting provider, I went that route. I’ve converted the database cluster and have both front-end web servers (Nginx with caching) migrated over. I was planning on slowly working my way through the rest of the environment, and then I found out CentOS 8 is being discontinued at the end of 2021. Pretty damn short notice, considering CentOS 6 just went end-of-life on November 30th and CentOS 7’s end-of-life isn’t until sometime in 2024. Based on the description, the replacement, CentOS Stream, is going to serve as a beta for RHEL, which is not exactly useful when looking for stability. Guess it’s time to look for a new OS for my servers.
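For anyone who does decide to ride it out on Stream rather than jump distributions, the in-place conversion the CentOS project published is short. Not the route I’m leaning toward, but for reference:

```shell
# Documented CentOS Linux 8 -> CentOS Stream 8 conversion:
sudo dnf install centos-release-stream
sudo dnf swap centos-{linux,stream}-repos
sudo dnf distro-sync
```

That keeps the machine on a supported repo past the 2021 cutoff, at the cost of tracking a rolling preview of RHEL instead of a rebuild of a released RHEL.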

NAS Build 2.0?

So I built a new temporary NAS out of an old PowerEdge 2950 G3 I had in my recycle pile, an HBA with external ports, and a 15-bay external drive enclosure. This seems to have resolved the issue I was having with FreeNAS dropping the WD Red pool, even if I haven’t pinpointed the exact cause. At some point I would like to replace the 2950, as it is a potential bottleneck and its upgradability is limited (I upgraded it to 40 GB of RAM, but the system maxes out at 64). Plus, the drives installed in the system itself all run through the onboard RAID controller as single-disk RAID 0s, which is less than desirable. Since that pool contains my VMs, I’ll need to move them elsewhere before I replace the box. I was looking at adding to the WD Red pool, but the matching drives aren’t built for a more-than-8-drive use case, and the ones that are don’t come in 3 TB. So I’m looking at other options as well, considering building a new 5-disk pool with larger drives and adding additional 5-disk vdevs later. For the moment this will all have to remain hypothetical; hopefully drive prices come down further.
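The grow-by-vdev plan maps onto ZFS pretty directly. A sketch, with a placeholder pool name (tank) and device names (da0–da9):

```shell
# Initial 5-disk RAID-Z2 pool (any two disks can fail):
zpool create tank raidz2 da0 da1 da2 da3 da4

# Later, capacity is added a whole vdev at a time; ZFS stripes new
# writes across vdevs. Individual disks can't be tacked onto an
# existing raidz vdev, which is why the plan is in 5-disk increments:
zpool add tank raidz2 da5 da6 da7 da8 da9
```

The main trade-off versus one wide vdev is a bit less usable space (two parity disks per vdev) in exchange for independent resilvers and an incremental upgrade path.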

Dead Ends

I was finding nothing but dead ends looking for a job before all this started. I got laid off from my part-time job back in March, temporarily at the time, but now it’s looking most likely indefinite. Guess I really, really screwed up somewhere along the line; I just don’t know where.

Gluster Troubleshooting

So the backend storage for the stuff I’m serving to the internet resides on a pair* of CentOS 7 hosts running GlusterFS 6.7 with a single mirrored brick. Recently I’ve started getting disk space warnings on one node but not the other, and the cause appears to be the brick’s log file exploding in size (>3 GB). I can’t figure out what the issue is; so far, though, it doesn’t appear to be preventing access.

(*Yes, I know it should probably be at least 3 nodes, or 2 plus an arbiter. At some point, when I have a bigger budget, I’ll look at fixing that, but this also isn’t a business production environment, so I can deal with an occasional split-brain issue.)
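For future reference, the 2-plus-arbiter layout is a supported volume type; the arbiter brick only stores metadata, so it can live on a small machine. A sketch with placeholder hostnames and brick paths:

```shell
# Replica 3 with the third brick as a metadata-only arbiter
# (quorum without a third full copy of the data):
gluster volume create gv0 replica 3 arbiter 1 \
  node1:/bricks/gv0 node2:/bricks/gv0 arbiter1:/bricks/gv0
gluster volume start gv0
```

An existing replica-2 volume can also be converted by attaching an arbiter brick with add-brick, rather than rebuilding from scratch.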
The logs all seem to be filling with the same repeated entries in the glusterfs brick log.
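In the meantime there are two stopgaps I can use to keep the disk from filling while I hunt for the root cause. A sketch, with gv0 standing in for the actual volume name:

```shell
# Rotate the brick log so the old 3+ GB file can be compressed
# or deleted out of band:
gluster volume log rotate gv0

# Quiet the brick log down to warnings and above, if the flood
# turns out to be INFO-level noise:
gluster volume set gv0 diagnostics.brick-log-level WARNING
```

Neither fixes whatever is actually spamming the log, but they buy time and make the remaining entries easier to read.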

Lab Updates

So late last year I picked up a refurbished Dell PowerEdge R610 to use as a VM host to replace the 2950 Gen3. I still need to get more memory for it, as I only have it outfitted with 64 GB currently. I also want to get 1-2 more R610s so I can get rid of the R900, and I might get a fourth, dedicated to the task, if I get into anything requiring super heavy lifting. Originally I was going to scrap the 2950, but I’ve since decided to try doing a FreeNAS build around it. It only has 6 bays, which I’ve filled with 2 TB drives, and I replaced a couple of the 4 GB memory modules with 8 GB modules, bringing the total up to 40 GB (I’ll probably replace the remaining six later in the year). So far it’s handling my iSCSI stuff for the XenServer boxes. I’d like to move everything else over, but that would require a suitable controller and some sort of 3.5″ disk shelf/enclosure that I haven’t located yet (at least not without paying $$$$$, short of going on eBay and hoping for the best). If I can get that done, I’ll probably take the AMD system that’s in my original NAS build and use it for a new workstation.

The Week Ahead…

Welcome to severe weather season, everybody. Make sure your weather radio is plugged in and has a good battery, along with a backup method (or methods) to receive warnings.

Today (5/16/19)

Tomorrow (5/17/19)

Saturday (5/18/19)

Tuesday (5/21/19)

Florida

So I’d rather be in Florida the rest of the week, but that’s obviously not happening. For anyone on my friends list who is headed down to MCO (have fun), here’s a short forecast I tossed together. I meant to do this earlier in the week, but due to computer problems I didn’t get much done this weekend. I may try to do an updated one tonight that runs out a little further, if my meetings don’t run late.

Today (All times EDT)
Dinner (5 PM): Low 70s
Evening (8 PM): Upper 60s
Overnight (1 AM): Mid 60s

Thursday
Breakfast (8 AM): Mid 60s
Lunch (11 AM): Low – Mid 70s
Dinner (5 PM): Upper 70s
Evening (8 PM): Low 70s
Overnight: Around 70

Friday
Breakfast (8 AM): Upper 60s
Lunch (11 AM): Upper 70s – Low 80s
Dinner (5 PM): Mid – Upper 80s
Evening (8 PM): Upper 70s – Low 80s
Overnight: Low – Mid 70s

Saturday
Breakfast (8 AM): Upper 60s
Lunch: Upper 70s – Low 80s
Dinner (5 PM): Upper 80s