Some explanation on what we're even up to. The process for this is fairly straightforward:
1. mount SSD storage
2. rsync w/ MariaDB running
3. stop MariaDB
4. rsync w/o MariaDB running
5. move /var/lib/mysql away
6. bind-mount SSD storage to there
7. start MariaDB
If we're lucky and no data changes between 2. and 3., step 4 is pretty much instant. That means almost no downtime. If we're unlucky, a bunch of stuff changed which leads to >20 minutes of downtime :(
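The steps above can be sketched as a short shell session. Mount point, device, and the exact rsync flags are assumptions for illustration, not our actual playbook:

```shell
# 1. mount the SSD storage somewhere temporary
mount /dev/disk/by-uuid/<your-ssd-uuid> /mnt/ssd

# 2. first rsync while MariaDB is still running;
#    -a preserves owners/permissions, --delete drops stale files on the target
rsync -a --delete /var/lib/mysql/ /mnt/ssd/

# 3. + 4. stop MariaDB, then the (hopefully near-instant) final sync
systemctl stop mariadb
rsync -a --delete /var/lib/mysql/ /mnt/ssd/

# 5. + 6. move the old datadir aside, bind-mount the SSD in its place
mv /var/lib/mysql /var/lib/mysql.old
mkdir /var/lib/mysql
mount --bind /mnt/ssd /var/lib/mysql

# 7. bring MariaDB back up
systemctl start mariadb
```

The second rsync only has to transfer whatever changed since the first one, which is why the downtime window depends entirely on how busy the database was in between.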
aaand about 7 hours ago at 4:30 reality kicked in.
Linux has got these /dev/sdb things you can use to talk to your disks, right? Well, their names can randomly change or reorder sometimes. If you're handling multiple disks, that's a thing you should know, right? Despite knowing that, I still managed to use /dev/sdb and .../sdc directly. Two colleagues even signed off on the related ansible playbook.
So I got the whole thing deployed and went to sleep at 2am.
Guess what happened next.
After a reboot at 4:30, sdb and sdc switched places on machholz.uberspace.de. MySQL was subsequently very unhappy about suddenly not seeing any data anymore. That made our monitoring very unhappy. Which in turn led to two of us trying to figure out what the heck happened... at 4:45am. So 30 minutes of debugging and MySQL downtime later, we fixed the problem on that host and all other ones by using UUIDs instead of /dev/sdc. The obvious way to go in the first place.
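For the record, the fix looks roughly like this (the UUID and mount point below are made up): look up the filesystem UUID with blkid, then reference that instead of the kernel-assigned device name, which survives any reordering across reboots.

```shell
# find the filesystem UUID of the partition
# (prints something like: /dev/sdc1: UUID="2d47f8a1-..." TYPE="ext4")
blkid /dev/sdc1

# mount by UUID instead of by device name:
mount UUID=2d47f8a1-0c4e-4b7d-9a2b-3f1c5e7d9b21 /mnt/ssd

# or the equivalent /etc/fstab entry:
# UUID=2d47f8a1-0c4e-4b7d-9a2b-3f1c5e7d9b21  /mnt/ssd  ext4  defaults  0  2
```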
So, what did we learn?
You can throw two seasoned and one kinda-seasoned admin at an almost trivial task and still manage to make rookie mistakes. And that's okay. Everyone missteps sometimes. Sometimes people even mess up really, really badly. Try not to beat yourself up about it.
We fixed the problem quickly, nobody was blamed and luckily at 4am hardly anyone cares anyway.
... aand I even managed to get some sleep after 4:30!
@dev same story with my linux installations. They didn't find the disks on boot, or took VEEEEERY long to resolve something. Then I turned on verbose boot, switched to UUIDs based on the log output, and have been a happy dev since. It can happen to everyone. And I'm still wondering why UUIDs aren't the default when creating the fstab (in this case).
@tux0r well, I wouldn't call it that. Referring to devices by UUID seems safer and more stable. The partitions have unique IDs, we should use them. I'm not sure if systemd made the change, but it changed at some point in the "recent" past, yes.