Double-entry accounting doesn’t have much in the way of rules. But one is paramount: all debits in a transaction must equal all credits. Beyond that, there is no limit to the number of debits or credits you can have. No limit to how many accounts you’re referencing.
Firefly III, though, imposes its own limits. Income and revenue transactions are many sources to one destination. Transfers are one source to one destination. (Splits are allowed, but they must all have the same source and destination.) Expense transactions are one source to many destinations.
The thing is…. anyone’s paycheck breaks this.
Your paycheck is your gross pay (income), deductions (expenses), and your net pay (asset). You can’t have all of that in a single revenue transaction in Firefly III. Firefly’s documentation says the source accounts for the splits in a revenue transaction must be revenue accounts, and the destination is an asset or liability account.
A paycheck is one income (possibly more) going to many expenses and at least one asset – and the asset side could include splitting the deposit between multiple accounts (e.g. direct deposit to savings as well as checking) and/or retirement account contributions.
The only way around this is several transactions involving an intermediate asset account, which we’ll call simply “Paycheck” for this example.
Gross pay. Revenue transaction – sources: your income accounts (salary, etc.), destination: “Paycheck”
Deductions. Expense transaction – source: “Paycheck”, destinations: accounts for your deductions (e.g., taxes, insurance, etc., but NOT including retirement account contributions)
Net pay. Transfer transaction – source: “Paycheck”, destination: bank account
Split deposit. Transfer transactions – source: “Paycheck”, destinations: other bank accounts
And whatever other transactions you’d need to account for everything. If you have employer-paid benefits or an employer 401(k) match, you could include that as separate splits on the main “gross pay” transaction.
In my case, my paycheck has three incomes: salary, employer 401(k) match, and employer-paid benefits.
Anything that breaks the one-to-many or many-to-one rule in Firefly III requires using intermediate accounts. And, as already mentioned, anyone’s paycheck is a ready example. And on the expense front, if you’ve ever split payment on an expense, such as using a gift card or gift certificate to cover part of it, you’ve broken the one-source, many-destinations rule for expense transactions.
This goes against double-entry accounting.
There is no rule in double-entry accounting that expense transactions must be only from a single source. There is no rule that revenue or income transactions must be only single destination. So Firefly III shouldn’t have this limitation if they’re going to say it “features a double-entry bookkeeping system”.
But I can… somewhat live with that. Cloning transactions means you really only need to enter those transactions once. But… why does cloning not open the transaction editor with everything pre-populated rather than creating a new transaction that you then have to edit, generating unnecessary audit entries?
The user interface, though, definitely leaves something to be desired.
I’ll admit I’ve been spoiled by GnuCash’s simple, straightforward, spreadsheet-like interface that makes it stupid-easy to enter transactions. It’s really easy to navigate through the transaction editor using the keyboard, much like Microsoft Money, which I used before GnuCash. And getting something like that in a mobile or web-based application is going to be hard to come by.
Firefly III’s transaction editor is far from “stupid-easy”.
One tenet of user interface design and crafting the user experience is to make your software easy to use and intuitive as best you can. Keyboard shortcuts are the easiest way to do this. The less someone has to use the mouse to perform an operation, the better. And with GnuCash, I can tab or arrow-key around the transaction editor. No mouse clicking required.
Sure learning keyboard shortcuts can be a pain in the ass. But once you learn them, you’ll never not use them again since not using them slows you down.
So why does Firefly III not have any keyboard shortcuts? If anything, that should be a priority. Usability is of paramount importance in any system. Doubly so with financial management. Consulting with a UI/UX professional for ways to improve the user interface, hopefully without having to gut it and start over, would be beneficial.
On the plus side, it is easy to set up. Especially if you use the script I provided in a previous article to set it up in a Docker container.
Firefly III is an open source financial management system. I’m trying it out in renewing my search for an accounting system I can use from a browser. Am I ultimately going to keep using it? I’m still kind of playing around with it, but it’s likely I’ll look for something else. I guess when it comes to ease of use, nothing really compares to GnuCash’s simplicity. And that’s largely what I’m looking for.
Firefly III, sorry to say, doesn’t even come close. I might write a review for it later, contrasting it with GnuCash and comparing it against the rules (or, largely, lack thereof) with double-entry accounting.
Anyway, copy the installation script to a .sh file on your Docker machine and give it the requisite permissions, then run it. Make sure to copy off everything from output at the end and store it off somewhere, preferably in a password manager like KeePass, since you’ll need it for running the update script.
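For example (the filename here is arbitrary – call it whatever you like):
nano install-firefly.sh      # paste the installation script below and save
chmod +x install-firefly.sh
./install-firefly.sh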
Like with the install script for Guacamole, pay attention to the subnet and IPs this will be using and change those if necessary. If you don’t select a port number at the first prompt, it’ll randomly select a port between 40,000 and 60,000. And unless you have a need for this to be on a specific port, I suggest just letting the script pick one at random.
Having a separate script that is run periodically – e.g. as a cron job – to back up the volumes for the MySQL and Firefly containers would also be a good idea.
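Something along these lines would do it – a minimal sketch, assuming the volume names from the installation script below and a backup destination of your choosing:
#!/bin/bash
# Back up the Firefly III and MySQL volumes to compressed tarballs.
# For a fully consistent MySQL backup, stop the containers first or use mysqldump.
backup_dir=/srv/backups/firefly
mkdir -p "$backup_dir"
for volume in firefly_iii_upload firefly_mysql_data; do
    sudo docker run --rm \
        -v $volume:/source:ro \
        -v "$backup_dir":/backup \
        busybox tar czf /backup/$volume-$(date +%F).tar.gz -C /source .
done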
Installation script
#!/bin/bash
read -p "Port number for Firefly web interface [Enter to pick random port]: " firefly_port_number
echo
if [ -z "$firefly_port_number" ]; then
    firefly_port_number=$(curl -s "https://www.random.org/integers/?num=1&min=40000&max=60000&col=1&base=10&format=plain&rnd=new")
fi
# Random passwords and app key generated using Random.org so you don't have to supply them
root_secure_password=$(curl -s "https://www.random.org/strings/?num=1&len=16&digits=on&upperalpha=on&loweralpha=on&unique=on&format=plain&rnd=new")
firefly_mysql_password=$(curl -s "https://www.random.org/strings/?num=1&len=16&digits=on&upperalpha=on&loweralpha=on&unique=on&format=plain&rnd=new")
firefly_app_key=$(curl -s "https://www.random.org/strings/?num=1&len=32&digits=on&upperalpha=on&loweralpha=on&unique=on&format=plain&rnd=new")
sudo docker pull mysql
sudo docker pull fireflyiii/core
# Creating the volumes for persistent storage
sudo docker volume create firefly_iii_upload
sudo docker volume create firefly_mysql_data
# Creating the network. This allows the containers to "see" each other
# without having to do some odd port forwarding.
#
# Change the subnet and gateway if you need to.
firefly_network_subnet=192.168.11.0/24
firefly_network_gateway=192.168.11.1
mysql_host_ip=192.168.11.2
firefly_host_ip=192.168.11.3
# Remove the ".local" if necessary. Override entirely if needed.
full_hostname=$(hostname).local
sudo docker network create \
    --subnet=$firefly_network_subnet \
    --gateway=$firefly_network_gateway \
    firefly-net
# Setting up the MySQL container
sql_create="\
CREATE DATABASE firefly; \
\
CREATE USER 'firefly_user'@'%' \
IDENTIFIED BY '$firefly_mysql_password'; \
\
GRANT ALL PRIVILEGES \
ON firefly.* \
TO 'firefly_user'@'%'; \
\
FLUSH PRIVILEGES;"
echo Creating MySQL container
sudo docker run -d \
    --name firefly-mysql \
    -e MYSQL_ROOT_PASSWORD=$root_secure_password \
    -v firefly_mysql_data:/var/lib/mysql \
    --network firefly-net \
    --ip $mysql_host_ip \
    --restart unless-stopped \
    mysql
# Sleep for 30 seconds to allow the new MySQL container to fully start up before continuing.
echo Let\'s wait about 30 seconds for MySQL to completely start up before continuing.
sleep 30
echo Setting up MySQL database
sudo docker exec firefly-mysql \
    mysql --user=root --password=$root_secure_password -e "$sql_create"
echo Creating Firefly-III container
sudo docker run -d \
    --name firefly \
    -v firefly_iii_upload:/var/www/html/storage/upload \
    -e DB_HOST=$mysql_host_ip \
    -e DB_DATABASE=firefly \
    -e DB_USERNAME=firefly_user \
    -e DB_PORT=3306 \
    -e DB_CONNECTION=mysql \
    -e DB_PASSWORD=$firefly_mysql_password \
    -e APP_KEY=$firefly_app_key \
    -e APP_URL=http://$full_hostname:$firefly_port_number \
    -e TRUSTED_PROXIES=** \
    --network firefly-net \
    --ip $firefly_host_ip \
    --restart unless-stopped \
    -p $firefly_port_number:8080 \
    fireflyiii/core
echo Done.
echo
echo MySQL root password: $root_secure_password
echo MySQL firefly password: $firefly_mysql_password
echo Firefly App Key: $firefly_app_key
echo Firefly web interface port: $firefly_port_number
echo
echo Store off these passwords as they will be needed for later container updates.
echo To access the Firefly III web interface, go to http://$full_hostname:$firefly_port_number
Update script
#!/bin/bash
read -s -p "MySQL Firefly user password: " firefly_mysql_password
echo
read -s -p "Firefly app key: " firefly_app_key
echo
read -p "Port number for Firefly web interface: " firefly_port_number
echo
# Make sure these match the IPs in the installation script
mysql_host_ip=192.168.11.2
firefly_host_ip=192.168.11.3
# Remove the ".local" if necessary
full_hostname=$(hostname).local
echo Stopping the containers.
sudo docker stop firefly-mysql
sudo docker stop firefly
echo Deleting the containers.
sudo docker rm firefly-mysql
sudo docker rm firefly
sudo docker pull mysql
sudo docker pull fireflyiii/core
echo Creating MySQL container
sudo docker run -d \
    --name firefly-mysql \
    -v firefly_mysql_data:/var/lib/mysql \
    --network firefly-net \
    --ip $mysql_host_ip \
    --restart unless-stopped \
    mysql
echo Creating Firefly-III container
sudo docker run -d \
    --name firefly \
    -v firefly_iii_upload:/var/www/html/storage/upload \
    -e DB_HOST=$mysql_host_ip \
    -e DB_DATABASE=firefly \
    -e DB_USERNAME=firefly_user \
    -e DB_PORT=3306 \
    -e DB_CONNECTION=mysql \
    -e DB_PASSWORD=$firefly_mysql_password \
    -e APP_KEY=$firefly_app_key \
    -e APP_URL=http://$full_hostname:$firefly_port_number \
    -e TRUSTED_PROXIES=** \
    --network firefly-net \
    --ip $firefly_host_ip \
    --restart unless-stopped \
    -p $firefly_port_number:8080 \
    fireflyiii/core
echo Done.
With the last update, I mentioned upgrading from 990FX to X99 with the intent of eventually adding an NVMe carrier card and additional NVMe drives so I could have metadata vdevs. And I’m kind of torn on the additional NVMe drives.
No more NVMe?
While the idea of a metadata vdev is enticing, since it – along with a rebalance script – aims to improve response times when loading folders, there are three substantial flaws with this idea:
1. it adds a very clear point of failure to your pool,
2. the code that allows that functionality hasn’t existed for nearly as long, and
3. the metadata is still cached in RAM unless you disable read caching entirely.
Point 1 is obvious. You lose the metadata vdev, you lose your pool. So you really need to be using a mirrored pair of NVMe drives for that. This will enhance performance even more, since reads should be split between devices, but it also means additional expense – especially if your mainboard doesn’t support bifurcation, since you’d then need a special carrier card at nearly 2.5x the cost of carrier cards that require bifurcation.
But even if it does, the carrier cards still aren’t inexpensive. There are alternatives, but they aren’t that much more affordable and only add complication to the build.
Point 2 is more critical. To say the special vdev code hasn’t been “battle tested” is… an understatement. Special vdevs are a feature first introduced to ZFS only in the last couple years. And it probably hasn’t seen a lot of use in that time. So the code alone is a significant risk.
Point 3 is also inescapable. Unless you turn off read caching entirely, the metadata is still cached in RAM or the L2ARC, substantially diminishing the benefit of having the special vdev.
On datasets with substantially more cache misses than hits – e.g., movies, music, television episodes – disabling read caching and relying on a metadata vdev kinda-sorta makes sense. Just make sure to run a rebalance script after adding it so all the metadata is migrated.
But rather than relying on a special vdev, add a fast and large L2ARC (2TB NVMe drives are 100 USD currently), turn primary caching to metadata only and secondary caching to all. Or if you’re only concerned with ensuring just metadata is cached, set both to metadata only.
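From the shell, that’s just two ZFS dataset properties – the pool and dataset names here are placeholders:
# Cache only metadata in RAM, everything in the L2ARC
sudo zfs set primarycache=metadata secondarycache=all tank/media
# Or cache only metadata in both
sudo zfs set primarycache=metadata secondarycache=metadata tank/media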
You should also look at the hardware supporting your NAS first to see what upgrades could be made to help with performance. Such as the platform upgrade I made from 990FX to X99. If you’re sitting on a platform that is severely limited in terms of memory capacity compared to your pool size – e.g. the aforementioned 990FX platform, which maxed out at 32GB dual-channel DDR3 – a platform upgrade will serve you better.
And then there’s tuning the cache. Or even disabling it entirely for certain datasets.
Do you really need to have more than metadata caching for your music and movies? Likely not. Music isn’t so bandwidth intense that it’ll see any substantial benefit from caching, even if you’re playing the same song over and over and over again. And movies and television episodes are often way too large to benefit from any kind of caching.
Photo folders will benefit from the cache being set to all, since you’re likely to scroll back and forth through a folder. But if your NAS is, like mine, pretty much a backup location and jumping point for offsite backups, again you’re likely not going to gain much here.
You can improve your caching with an L2ARC. But even that is still a double-edged sword in terms of performance. Like the SLOG, you need a fast NVMe drive. And the faster on random reads, the better. But like with the primary ARC, datasets where you’re far more likely to have cache misses than hits won’t benefit much from it.
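If you do go that route, attaching an L2ARC device is a one-liner – the pool name and device path here are placeholders:
sudo zpool add tank cache /dev/disk/by-id/nvme-YOUR_DRIVE_HERE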
So then with the performance penalty that comes with cache misses, is it worth the expense trying to alleviate it based on how often you encounter that problem?
For me, it’s a No.
And for most home NAS instances, the answer will be No. Home business NAS is a different story, but whether it’ll benefit from special devices or an L2ARC is still going to come down to use case. Rather than trying to alleviate any performance penalty reading from and writing to the NAS, your money is probably better spent adding an NVMe scratch disk to your workstation and just using your NAS for a backup.
One thing I think we all need to accept is simply that not every problem needs to be solved. And in many cases, the money it would take to solve a problem far overtakes what you would be saving – in terms of time and/or money – solving the problem. Breaking even on your investment would likely take a long time, if that point ever comes.
Sure pursuing adding more NVMe to Nasira would be cool as hell. I even had an idea in mind of building a custom U.2 drive shelf with one or more IcyDock MB699VP-B 4-drive NVMe to U.2 enclosures – or just one to start with – along with whatever adapters I’d need to integrate that into Nasira. Or just build a second NAS with all NVMe or SSD storage to take some of the strain off Nasira.
Except it’d be a ton of time and money acquiring and building that for very little gain.
And sure, having a second NAS with all solid-state storage for my photo editing that could easily saturate a 10GbE trunk sounds great. But why do that when a 4TB PCI-E 4.0×4 NVMe drive is less than 300 USD, as of the time of this writing, with several times the bandwidth of 10GbE? Along with the fact I already have that in my personal workstation. Even PCIE-3.0×4 NVMe outpaces 10GbE.
A PXE boot server is the only use case I see with having a second NAS with just solid-state storage. And I can’t even really justify that expense since I can easily just create a VM on Cordelia to provide that function, adding another NVMe drive to Cordelia for that storage.
Adding an L2ARC might help with at least my pictures folder. But the more I think about how I use my NAS, the more I realize how little I’m likely to gain adding it.
First I upgraded the SAS card from the 9201-16i to the 9400-16i. The former is a PCI-E 2.0 card, while the latter is PCI-E 3.0 and low-profile – which isn’t exactly important in a 4U chassis.
After replacing the drive cable harness I mentioned in the previous article, I still had drive issues that arose on a scrub. Thinking the issue was the SAS card, I decided to replace it with an upgrade. Turns out the issue was the… hot swap bay. Oh well… The system is better off, anyway.
Now given what I said above, that doesn’t completely preclude adding a second NVMe drive as an L2ARC – just using a U.2 to NVMe enclosure – since I’m using only 3 of the 4 connectors. The connectors are tri-mode, meaning they support SAS, SATA, and U.2 NVMe. And that would be the easier and less expensive way of doing it.
This also opens things up for better performance, specifically during scrubs, given the growing pool. I also upgraded the Mellanox ConnectX-2 to a ConnectX-3 around the same time as the X99 upgrade to also get a PCI-E 3.0 card there, and to swap from a 2-port card down to a single port.
The other change is swapping out the remaining 4TB pair for a 16TB pair. I don’t exactly need the additional storage now. Nasira wasn’t putting up any warnings about storage space running out. But it’s better to stay ahead of it, especially since I just added about another 1.5TB to the TV folder with the complete Frasier series and a couple other movies. And more 4K upgrades and acquisitions are coming soon.
One of the two 4TB drives is also original to Nasira, so over 7 years of near-continuous 24/7 service. The 6TB pairs were acquired in 2017 and 2018, so they’ll likely get replaced sooner than later as well merely for age. Just need to keep an eye on HDD prices.
To add the new 16TB pair, I didn’t do a replace and resilver on each drive. Instead I removed the 4TB pair – yes, you can remove mirror vdevs from a pool if you’re using TrueNAS SCALE – and added the 16TB pair as if I was expanding the pool. This cut the time to add the new pair to however long was needed to remove the 4TB pair. If I did a replace/resilver on each drive, it would’ve taken… quite a bit more than that.
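From the shell, the removal looks roughly like this – the pool and vdev names are placeholders, so check zpool status for yours:
sudo zpool status tank           # identify the mirror vdev to remove, e.g. mirror-1
sudo zpool remove tank mirror-1  # evacuates its data onto the remaining vdevs
sudo zpool status tank           # shows the removal/evacuation progress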
Obviously you can only do this if the remaining vdevs have more free space than is used on the vdev you’re removing – I’d say have at least 50% more. And 4TB for me was… trivial. But that obvious requirement is likely why ZFS doesn’t allow you to remove parity vdevs – i.e. RAID-Zx. It would not surprise me if the underlying code actually allows it, with a simple if statement cutting it off for parity vdevs. But how likely is anyone to have enough free space in the pool to account for what they’re removing? Unless the pool is brand new or nearly so… it’s very unlikely.
It’s possible they’ll enable that for RAID-Zx at some point. But how likely is someone to take advantage of it? Whereas someone like me who built up a pool one mirrored pair at a time is a lot more likely to use that feature for upgrading drives since it’s a massive time saver, meaning an overall lesser risk compared to replace/resilver.
But all was not well after that.
More hardware failures…
After installing the new 16TB pair, I also updated TrueNAS to the latest version only to get… all kinds of instability on restart – kernel panics, in particular. At first I thought the issue was TrueNAS. And that the update had corrupted my installation. Since none of the options in Grub wanted to boot.
So I set about trying to reinstall the latest. Except the install failed to start up.
Then it would freeze in the UEFI BIOS screen.
These are signs of a dying power supply. But they’re actually also signs of a dying storage device failing to initialize.
So I first replaced the power supply. I had more reason to believe it was the culprit. The unit is 10 years old, for starters, and had been in near-continuous 24/7 use for over 7 years – albeit connected to a UPS for much of that time. And it’s a Corsair CX750M green-label, which is known for being made from lower-quality parts.
But, alas, it was not the culprit. Replacing it didn’t alleviate the issues. Even the BIOS screen still froze up once. That left the primary storage as the only other culprit. An ADATA 32GB SSD I bought a little over 5 years ago. And replacing that with an Inland 128GB SSD from Micro Center and reinstalling TrueNAS left me with a stable system.
That said, I’m leaving the new RM750e in place. It’d be stupid to remove a brand. new. power supply and replace it with a 10 year-old known lesser-quality unit. Plus the new one is gold rated (not that it’ll cut my power bill much) with a new 7-year warranty, whereas the old one was well out of warranty.
I also took this as a chance to replace the rear 80mm fans that came with the Rosewill chassis with be quiet! Pure Wings 2 80mm PWM fans, since those have standard 4-pin PWM connectors instead of needing to be powered directly from the power supply. Which simplified wiring just a little bit more.
Next step is replacing the LP4 power harnesses with custom cables from, likely, CableMod, after I figure out measurements.
Add this code to the default.hbs file in your theme at the bottom before the {{ghost_foot}} helper. Don’t forget as well to include the fslightbox.js file. And to ensure the code is used only when needed, enclose both the line pulling in fslightbox.js and the below code block in the {{post}} helper.
// Lightbox: https://fslightbox.com/
// Adapted from: https://brightthemes.com/blog/ghost-image-lightbox
// Improved to make it so each gallery has its own lightbox
// unless the galleries are immediately adjacent to each other.
// Also removed using a lightbox for individual images since my
// current Ghost theme ("Edge") doesn't use a lightbox for
// individual images.
let galleries = document.querySelectorAll('.kg-gallery-card')
let galleryIdx = 0
let hasImages = false
let lastGallery = document

galleries.forEach(function (gallery)
{
    // Ghost has a limit of 9 images per gallery. So if two or more
    // galleries are placed one immediately after the other - no
    // other blocks between them - then treat the galleries as if
    // they are together.
    if(lastGallery.nextElementSibling != gallery)
        galleryIdx++

    lastGallery = gallery

    let images = gallery.querySelectorAll('img')
    images.forEach(function (image)
    {
        hasImages = true

        var wrapper = document.createElement('a')
        wrapper.setAttribute('data-no-swup', '')
        wrapper.setAttribute('data-fslightbox',
            'gallery_' + galleryIdx.toString())
        wrapper.setAttribute('href', image.src)
        wrapper.setAttribute('aria-label', 'Click for Lightbox')

        image.parentNode.insertBefore(wrapper,
            image.parentNode.firstChild)
        wrapper.appendChild(image)
    });
});

if(hasImages)
    refreshFsLightbox()
In a previous article, I described migrating Plex from one VM to another. In recently rebuilding my virtualization server, now called Cordelia, I decided against creating a VM for Plex. I have a few Docker containers running on it and decided to let Plex be another rather than constricting it to a set amount of memory and core count through a VM.
Migrating from a Plex VM to the Docker container is pretty straightforward. Just a few things to keep in mind. Along with creating a script you can run whenever there are server updates, since you can’t just… install a new version over the existing one like you could before.
Note: If you’re considering migrating Plex to Docker or running anything through Docker, make sure to install Docker CE from Docker’s repository. Don’t install the version of Docker from your distribution’s repositories. This will ensure you have the latest version – meaning also the latest security updates – and greatest compatibility.
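Docker’s convenience script is the quickest way to do that on most distributions – or follow the per-distribution repository instructions in Docker’s documentation:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh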
I’ll also presume in this article that you know your way around a Linux setup, particularly the bash command line. You don’t need to be great with Docker containers, but some knowledge there will be helpful as well.
Backing up the server
First step is to back up the library on the original server. As root or an administrator, after stopping Plex Media Server:
cd /var/lib
sudo tar cvf plexmediaserver.tar plexmediaserver/
sudo gzip plexmediaserver.tar
This should give you a .tar.gz backup of your Plex instance. I have a pretty large library – over 400 movies and specials, over 300 music albums, and 37 TV shows, most of which are complete series (and yes, I own physical copies or licenses to everything on it) – so my backup ended up being over 4GB. Compressed. Your mileage will vary.
My Plex server pulled from NFS shares on my NAS, so I made sure to also copy off the relevant fstab entries so I could restore them. Make note of however you have your media mounted to your Plex VM or physical server, the actual paths to the media. For example, on my Plex VM, I had the media mounted to these paths, meaning these paths are also what the Plex Docker container would be looking for:
/mnt/tv
/mnt/movies
/mnt/music
Transfer the backup file off the server somehow and shut it down.
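scp over SSH is one easy way to do that – the hostname here is a placeholder:
scp user@plex-vm:/var/lib/plexmediaserver.tar.gz .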
Mount points
Here is where things get a little tricky. I think it best I just illustrate this using my directory mounts. To recap, these were the paths to the media I had with my Plex VM, meaning these are the paths the container will want:
/mnt/tv
/mnt/movies
/mnt/music
Paths work far differently in containers compared to a virtual machine. When you install the Plex service on a virtual machine, it can see all paths it has permission to access.
Containers are a bit more isolated. This means you don’t have to worry about a container having access to more than you want it to, but it does mean you have to explicitly mount into the container whatever you want it to access.
There isn’t anything wrong, per se, with maintaining these mount points on the Docker host. It’s not like I’m going to have any other Docker containers using them. But I instead chose to consolidate those mount points under a subdirectory under /mnt on Cordelia:
/mnt/media/tv
/mnt/media/movies
/mnt/media/music
Why do this? It’s cleaner and means a simpler set of commands for creating the container.
Had I kept the same mount points as before – e.g., /mnt/tv, etc. – I would need a separate volume switch for each. Having everything under one subdirectory, though, means having just one volume switch that catches everything, as you’ll see in a little bit.
However you create the mount points, don’t forget to add them to your /etc/fstab file for your Docker host.
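For NFS shares like mine, the fstab entries would look something like this – the NAS hostname and export paths are placeholders, so match them to your own exports:
nasira.local:/mnt/pool/tv      /mnt/media/tv      nfs  defaults  0  0
nasira.local:/mnt/pool/movies  /mnt/media/movies  nfs  defaults  0  0
nasira.local:/mnt/pool/music   /mnt/media/music   nfs  defaults  0  0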
Your library
Now you’ll need another directory for your library files – i.e. the compressed archive you created above. Just find a suitable spot. You can even put it back at /var/lib/plexmediaserver if you want, following the restore commands in my previous Plex article. I have it on Cordelia’s NVMe drive.
Just remember that the archive you created above will create a directory called plexmediaserver when you extract it. And, obviously (hopefully), do NOT delete the archive until you confirm everything is working.
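As for creating and updating the container itself, the script runs along these lines – a sketch assuming the official plexinc/pms-docker image and host networking, with placeholder paths you’ll need to swap for your own:
#!/bin/bash
# Stop and remove the existing container (these two error out harmlessly on the first run),
# pull the latest image, then recreate the container with the same mounts.
sudo docker stop plex
sudo docker rm plex
sudo docker pull plexinc/pms-docker
sudo docker run -d \
    --name plex \
    --network host \
    -v /path/to/plexmediaserver:/config \
    -v /path/to/media:/mnt \
    --restart unless-stopped \
    plexinc/pms-docker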
Copy and paste the above script into a shell file on your server – e.g. “update-plex.sh” – and give it proper permissions. Whenever Plex tells you there’s a new version available, just run the above script. Obviously (hopefully) the first time you run this, the commands to stop and remove the Plex container will print out errors because… the container doesn’t exist yet.
/path/to/plexmediaserver is the path where you extracted your backup archive
/path/to/media is, in my instance, the /mnt/media directory I mentioned above
If I had kept the separate mount points, I’d need individual -v switches for each path – e.g. -v /mnt/movies:/mnt/movies. Having all of them consolidated under /mnt/media, though, means I need just the one -v switch in the above script.
The latter volume mount is what ensures the Plex container has the same path for the library files. So when the library says the episodes for Game of Thrones, for example, are at /mnt/tv/Game of Thrones, it can still find them even though the Docker host has that path mounted as /mnt/media/tv/Game of Thrones.
After you create the container for the first time, you’ll want to interact with the container to make sure your mount points are set up properly:
sudo docker exec -it plex bash
Under /config you should see just one directory: Library. Under Library should be… everything else Plex will be looking for. And check your media mount point to make sure the directories there look as they did on your previous VM.
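For example, from inside the container:
ls /config    # should show just: Library
ls /mnt       # should mirror the media layout from the old VM, e.g. movies  music  tv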
If any of the mount points don’t look right, you will need to massage the script above and re-run it to create a new container. Then just rinse and repeat with interacting with the container to validate the mount points.
Don’t forget to add the needed ports to your firewall: you must open 32400/TCP, and if you intend to use DLNA, you need to open 1900/UDP and 32469/TCP.
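With ufw, for example (adapt for firewalld or whatever your Docker host uses):
sudo ufw allow 32400/tcp
# Only needed if you use DLNA:
sudo ufw allow 1900/udp
sudo ufw allow 32469/tcp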
Making sure it all works
And lastly, of course, open one of your Plex clients and try playing something to verify everything works and that it doesn’t show the media item as “unavailable”. If anything is “unavailable”, you need to double-check your -v paths in your script. Use “Get Info” on the media item to see what path it’s trying to use to find it so you can double-check everything.
This quick post is for those with a 2.5Gb PoE device, such as a WiFi 6 access point, that either won’t run at 2.5GbE speeds at all or falls back to 1Gb or 100Mbps after a short time.
On my home network I have several TP-Link devices. Relevant to this article is my EAP670 WiFi access point and TL-SG3210XHP-M2 2.5GbE PoE+ switch. And for some reason the EAP670 wouldn’t run faster than 100Mbps.
Sound familiar?
Well there’s a simple solution I’m surprised I never thought of sooner: DON’T. USE. CAT5E! Don’t use Cat6 or Cat6A either.
To be sure your 2.5Gb PoE device will talk at 2.5GbE speed, use Cat7. When I switched out the Cat5E cable for Cat7, the access point had no problem staying at 2.5Gb. You might get away with Cat6A, but you’re better off using Cat7.
Cat5E will work fine for non-PoE 2.5GbE devices. But it won’t work for 2.5GbE PoE. Again, use Cat7.
First, it is generally NOT true that workers at a business create value. Most workers, instead, transform existing materials with their labor into new products, or leverage existing products and materials to provide a service.
It’s only in the creative industries that employees actually create value, create something that didn’t previously exist in any form. I’m talking graphic design, web site and software development, photography, videography, architecture, etc. And in all those cases, the work product is also considered a “work made for hire” under copyright law, meaning it’s the intellectual property of the employer. And like in most every other line of work, the employees in creative industries aren’t using their own equipment.
As an example, I’m a professional software engineer. My employer provides everything I need to work: the computer (laptop, specifically), access to needed cloud services at their expense, software licenses, etc. The only thing I’m bringing to the table is my skill and expertise.
I am permanently work from home, so I do provide additional equipment for my own benefit: two 4K televisions I use as monitors (with needed adapters to connect to the laptop), mechanical keyboard, mouse, and work space. But those additions are divorced from the laptop my employer provided, so won’t change if I change employers.
In very, very few lines of work is an employee bringing anything more to the table than their skills, experience, and, where required, strength. The employer is providing everything else at their expense, not the employee’s expense: machinery and equipment, tools, materials, the space to do the work, additional personnel so the employee doesn’t need to do everything, etc. Before the employee even shows up to add their part to the equation, the employer has already sunk a lot of cost into production.
Yet the image above pretends the employer – the “capitalist” – isn’t putting anything into the production. Only taking. Whereas the reality is the “capitalist” is providing everything except what the employee is adding.
The easiest illustration of this is food service. Specifically those who are preparing the food.
The line cooks don’t provide the ingredients, and they didn’t purchase the equipment needed to cook and serve the food. The owners of the establishment provided that. Along with paying to have it all installed properly, paying for maintenance when required, replacements where necessary, cleaning equipment and supplies, the utility service (e.g. electricity, natural gas or liquid propane, water, etc.), insurance on the entire establishment and employees, and… pretty much everything else.
The employee just… shows up and does their job as agreed. Receiving for their time a paycheck.
Throwaway cause some of my in-laws use reddit. We are pregnant with our rainbow baby and we couldn’t be happier. On Friday we had our 12 weeks sonogram and got plenty of pictures to take home.
My MIL and FIL came to visit so they could see them and, eventually, the pictures disappeared. I asked them for help to find them but they were just nowhere to be found. My MIL was pretty eager to leave and that didn’t sit well with me, after they left I couldn’t stop thinking about it.
So on Sunday we went to their place for lunch and, when I went to the bathroom, I went into their bedroom and found the pictures in her nightstand. I was fuming. We were planning to give each side of the family copies of the pictures and a framed one (and we told them), but of course she just wanted them all. I confronted her when I came back and she just said “she thought they were for her” which is clearly a lie (I asked them for help to find them for crying out loud, and was visibly upset).
It’s not the first time she pulls something like this and, while my husband defended me in front of them when she protested for my snooping, he then told me I’d crossed a line when I opened her drawers. I know it wasn’t the nicest thing to do, I regret having to stoop to her level but I was just so angry. So AITA?
Small Update: I had a talk with my husband, he apologized for “scolding” me and confessed that, ever since he was a boy, he’s been terrified of his mother’s meltdowns and does anything he can to avoid them, so my confrontation kind of triggered him. He also said that he’s realised that some things like me and our baby will always come first.
He’s willing to go to therapy to learn how to establish and maintain strong boundaries with his mom, so we’ll see how it goes. Thanks everyone for your kind opinions, even if you think I was TA (I know I was).
And unsurprisingly the result is NTA – Not the Asshole. There were some more reasonable heads in the group, though, labeling this as ESH – Everyone Sucks Here. And that’s my verdict.
On the one hand, the MIL stole from her. And I want to make it abundantly and explicitly clear that I in no way defend the MIL’s actions, and nothing I say herein should be interpreted as defending, excusing, or condoning what the MIL did.
But the OP then violated MIL’s privacy by snooping around. Sure it was with the intent to recover stolen property, but that can’t justify her actions. Why not? Simple. What if she didn’t find them in the nightstand? Would she have searched the bedroom? How much of the bedroom did she search before finding them in the nightstand? Would she have searched the entire house if she didn’t find them in the bedroom?
Finding the pictures doesn’t retroactively justify the search. Let me repeat that for those in the back: finding the pictures does NOT retroactively justify the search!
It’d be one thing if the pictures were in plain sight – though the MIL was clearly smart enough to ensure they weren’t. That OP had to go searching in closed drawers to find them puts her squarely in the wrong. Again, finding the pictures doesn’t retroactively justify the search!
* * * * *
“Defending principles is extremely difficult. But necessary.”
I said those words about 7 years ago with regard to Brock Turner, or rather the aftermath of his criminal trial. And herein those words ring true once again.
When I posted the above to Facebook, the feedback was… far from supportive, to say the least. And the responses, much like the responses on the Reddit thread, all took much the same position that the theft not only justified the search, but that the OP would’ve been in the right to show up at the MIL’s house, knock on the door or even just break in, and search the entire house till she found what she was looking for. That your protection against unreasonable searches – which the OP engaged in – means only the government is so enjoined.
And anyone who believes such really needs to study up on tort law.
Yes the Fourth Amendment enjoins the government. But underpinning the rights protected by the Bill of Rights (not created by them as many assert) are base principles without which the rights wouldn’t be a valid concept. The Second Amendment comes from the basic principle of self defense, that the only legitimate violence is to counter the immediate and illegitimate violence of others. And the Fourth Amendment comes from the basic principle that a person’s home is their castle, in which they have a very reasonable expectation of privacy. Inviting someone into your home doesn’t mean the invitee can just… snoop around, regardless of their rationale.
Rights govern how the government interacts with the people, but principles govern how we interact with each other. Something enshrined in criminal and civil law.
As such, the Fourth Amendment aside, the theft doesn’t justify the search. Finding the photos in the MIL’s bedroom does not retroactively justify her actions. If her actions can’t be justified by not finding the property, finding them doesn’t automatically make her actions justified. Again, it all comes down to base principles.
Starting with the Golden Rule when I said this in several variations:
As I said, how would you like it if someone just searched your home on the mere *allegation* you took something from them, even knowing you’re completely innocent? Would you just stand by and let them do it? Or would you demand they leave and call the police if they refused?
I think we all know the answer. But! it’s also clear you also expect to be able to freely search someone else’s home on the mere allegation they took something of yours… That’s why you don’t want to acknowledge that the OP is in the wrong. How dare someone invade your privacy, but Lord forbid if someone doesn’t let you invade theirs, or dares to say you’re in the wrong for doing so…
And the person to whom I made that comment said this in reply:
This isn’t a random “someone.” This is the MIL with a history of similar behavior. Perhaps you would have like for the DIL to call the cops on MIL, and have her arrested and the property searched pursuant to that arrest and warrant?
To which I said this:
Doesn’t matter that it’s the MIL, and it doesn’t matter that MIL has a prior history of similar behavior. None of that justifies the DIL’s invasion of the MIL’s privacy.
Again, would you let your neighbor search your home on the mere allegation you stole something from them? And would you search your neighbor’s home on the mere allegation they stole something from you?
Going through the police should only be a last resort. But DIL would’ve been free to threaten to bring in the police if the MIL didn’t return what she reasonably believed she had, then actually reporting the theft to the police if she failed to return the photos. The threat of and actually filing a lawsuit is also an option.
Searching the home is not a viable option, though. What if the pictures weren’t in the bedroom? Would OP have been justified in searching the house for them? Or is only searching the bedroom reasonable merely because she was able to do so somewhat covertly?
That they ultimately were found there is immaterial in determining if OP was in the right since, again, that OP found the photos in the nightstand cannot be used to retroactively justify her actions. You have to look at her actions in the moment, not in hindsight.
Often, unfortunately, the only reasonable action when you’ve been wronged in some way – e.g. something was stolen from you – is to just walk away from it. In this instance, it would’ve been to request new copies of the sonogram and the other photos from the physician, and then just cut ties with the ILs or, at the least, make sure they never again step foot in their home.
Part of the problem we all have is this desire to see all wrongs righted. Which unto itself is perfectly fine. The devil is in the details.
Reality, though, paints a far more bleak picture. There is no righting the vast majority of wrongs. No closure for the wronged. No justice for the wrongdoer. Crimes go unsolved. Many murder victims and missing individuals are never found, or forever remain unidentified when remains are recovered. Meaning family of the missing and unidentified are left with unanswered questions.
But let’s pull back from the bleak and think of the more common ways that we are “wronged” by someone.
Easily the most common anymore is being ghosted, or disconnected or outright blocked by someone on social media, regardless of the reason. For some reason it’s common to take that as a personal affront, where the blocked individual sees it as if they were insulted to their face. And as with everything else, there are healthy and unhealthy ways to respond.
And when someone blocks you on a social media platform, the healthy thing to do, the only reasonable thing to do, in my opinion, is… just walk away from it. Accept it and move on. Don’t contact the person in any other way. And as much as you’d want closure – to know why you were blocked – just walk away from it. Leave it one of the many unanswered questions of life.
And in the above case… the sonogram and other pictures can be reprinted. And the OP can deny the ILs access to the OP’s home since she apparently has a pattern of suspect behavior.
But searching their home for the pictures was beyond unreasonable. And it’s disgusting that many readily defend her behavior.
Seems kind of odd that just a few months after writing about giving my virtualization server a 2TB NVMe drive that I’m now writing about it again. And this time, it’s a platform upgrade. So what gives?
With taking pretty much everything else on my home network to X99 I decided to fast-track an upgrade on my virtualization server as well.
In terms of performance, I’ve tended to lean on the side of more cores and threads over single-threaded performance. Given the VMs I typically had running at any given time, there wasn’t much point in going for single-thread performance over thread count. With this X99 upgrade, though, I’m getting both more threads and better single-thread performance.
My first dedicated virtualization server was a refurbished HP Z600 with dual Xeon E5520 processors. This provided, overall, 8 cores and 16 threads. It had 3 memory slots per processor that could take up to 48 GB RAM max. It’s now completely retired, and I’ll be figuring out what to do with it later.
About 5 years ago I replaced that with the dual-Opteron 6278. This gave me double the threads – 32 overall, 16 per processor – and a lot more memory capacity. The mainboard I chose could take 16GB RDIMMs or 8GB UDIMMs, maxing out at 256GB or 128GB, respectively. At the time of the upgrade, it had 64GB (8x8GB) Registered ECC.
“Cordelia” is the name I gave this server after migrating it to Fedora Server with the NVMe installation to run VirtualBox and Docker CE.
Current specs
So to recap the jump: from the dual Opteron 6278 setup with 64GB of DDR3-1600 Registered ECC to a single Xeon E5-2697 v4 with DDR4-2400. Dual CPU to single CPU with slightly more threads overall. Slightly lower clocks on the Xeon, but a far newer platform. PCI-E 3.0. A lot more memory. And quad-channel!
Dual-CPU to single-CPU eliminates the NUMA node barrier and also reduces the server’s overall power consumption (dual 115W TDP vs single 140W TDP) – though adding in the GTX 1060 kind of offsets that.
Speaking of, while I am giving up onboard video for a dedicated video card, I’m actually not giving up much. The onboard video for the ASUS dual-Opteron board has only 8MB of VRAM. No, I didn’t make a typo. Only 8 megabytes. It works fine for a text console. Don’t try to use it for anything even remotely graphically intense.
I did consider the E5-2699 v4, which is 22 cores and 44 threads. But it’s also about 3x the price on eBay. For just 4 cores and 8 threads more. I paid just 85 USD for the E5-2697 v4. And at the time I bought it, the E5-2699 v4 was going for 250 USD minimum. So no thanks.
An interesting addition to this server, though, is a Google Coral AI PCI-E module, which allowed me to migrate my home camera monitoring to Frigate. Which can do object detection instead of merely detecting motion. Which should vastly reduce how many false positives I get. While the Google Coral module isn’t required for Frigate, it’s highly, highly recommended. And to further aid Frigate’s functionality with hardware video decoding/encoding, I added a GTX 1060 I had laying around rather than just any graphics card.
I also had to change this over from Fedora to Ubuntu.
Fedora 38 was newly released when I first installed it. So new that Docker hadn’t yet been released for it – that release only came on April 26. So while I considered going with Fedora 37, which is what I was using prior to the migration, with the plan to eventually in-place upgrade it back to Fedora 38, I opted to install Ubuntu 22.04 LTS instead to get everything up and running sooner.
About the AIO and Micro Center’s error
Before the ThermalTake AIO, I had an NZXT M22 mounted in this server. But the pump started making noise – likely due to coolant evaporation – and I needed to replace it. It was also… a week out of warranty, so I couldn’t RMA it.
So I went looking for a more-or-less direct replacement.
Micro Center had two options in stock: the ThermalTake TH120 and the CoolerMaster MasterLiquid ML120L. Both were listed on Micro Center’s website as supporting Intel 2011, 2011v3, and 2066 sockets. So I picked the TH120 since it was a little less expensive.
Only to discover when getting it home that there was no 2011v3 hardware included. And ThermalTake’s website does NOT list 2011v3 as one of the supported sockets.
But I was able to use the 2011v3 hardware from the NZXT M22 to mount this. And all indications are that it works fine. So the TH120 can support 2011v3. ThermalTake just isn’t including hardware for it. The CoolerMaster cooler, though, does support 2011v3 out of the box according to their website.
And I went with the M22 initially as I just had it lying around unused. I didn’t have anything else readily available for 2011v3 that would fit into a 4U chassis. It was only a couple days into service that it started making noise.
So what’s different with this over other methods of setting up Apache Guacamole?
The main thing is it’s entirely hands-off. It’ll pull the images, set up the network, create the containers, initialize the MySQL database… Everything. Including generating secure random passwords for you using Random.org and writing those to the console for you to store off for later updates. (See sections below.) Just copy the script to a .sh file and run it.
And speaking of later updates, the script sets up the containers on their own network with static IPs assigned to each rather than using the legacy “--link” option. This allows for very easy updates down the line since the containers – especially the MySQL container – can be recreated onto the same IP address as before.
Change what you need to avoid conflicts with any existing Docker networks or if you want the main Guacamole container to be accessible on a different port. Hopefully you won’t need to extend the 30-second wait for the MySQL container to initialize. Bear in mind as well that the guacd container takes a few minutes to fully start up and for its status to show as “Healthy”.
Once everything is running, the default admin login (as of this writing) for the Guacamole web interface is guacadmin/guacadmin.
#!/bin/bash
echo Pulling latest Docker images.
sudo docker pull guacamole/guacamole
sudo docker pull guacamole/guacd
sudo docker pull mysql
echo Creating volumes for MySQL data
sudo docker volume create guac-mysql-data
echo Creating network the containers will use.
sudo docker network create \
    --subnet=192.168.10.0/24 \
    --gateway=192.168.10.1 \
    guacamole-net
echo Contacting Random.org for new 16-character passwords for MySQL root and Guacamole users.
root_secure_password=$(curl -s "https://www.random.org/strings/?num=1&len=16&digits=on&upperalpha=on&loweralpha=on&unique=on&format=plain&rnd=new")
guac_secure_password=$(curl -s "https://www.random.org/strings/?num=1&len=16&digits=on&upperalpha=on&loweralpha=on&unique=on&format=plain&rnd=new")
sql_create="\
ALTER USER 'root'@'localhost' \
IDENTIFIED BY '$root_secure_password'; \
CREATE DATABASE guacamole_db; \
CREATE USER 'guacamole_user'@'%' \
IDENTIFIED BY '$guac_secure_password'; \
GRANT SELECT,INSERT,UPDATE,DELETE \
ON guacamole_db.* \
TO 'guacamole_user'@'%'; \
FLUSH PRIVILEGES;"
echo Creating MySQL container
sudo docker run -d \
    --name guac-mysql \
    -e MYSQL_ROOT_PASSWORD=$root_secure_password \
    -v guac-mysql-data:/var/lib/mysql \
    --network guacamole-net \
    --ip 192.168.10.2 \
    --restart unless-stopped \
    mysql
echo Let\'s wait about 30 seconds for MySQL to completely start up before continuing.
sleep 30
echo Initializing MySQL database
sudo docker exec guac-mysql \
    mysql --user=root --password=$root_secure_password -e "$sql_create"
sudo docker exec guac-mysql \
    mysql --user=root --password=$root_secure_password \
    --database=guacamole_db \
    -e "$(sudo docker run --rm guacamole/guacamole /opt/guacamole/bin/initdb.sh --mysql)"
echo Creating guacd container
sudo docker run -d \
    --name guacd \
    --network guacamole-net \
    --ip 192.168.10.3 \
    --restart unless-stopped \
    guacamole/guacd
echo Creating main Guacamole container
sudo docker run -d \
    --name guacamole \
    --network guacamole-net \
    --ip 192.168.10.4 \
    --restart unless-stopped \
    -e GUACD_HOSTNAME=192.168.10.3 \
    -e MYSQL_HOSTNAME=192.168.10.2 \
    -e MYSQL_DATABASE=guacamole_db \
    -e MYSQL_USER=guacamole_user \
    -e MYSQL_PASSWORD=$guac_secure_password \
    -p 8080:8080 \
    guacamole/guacamole
echo Done.
echo MySQL root password: $root_secure_password
echo MySQL guacamole_user password: $guac_secure_password
echo Store off these passwords as they will be needed for later container updates.
Update Guacamole containers
Just copy off this script and keep it on your server to update the container with the latest Guacamole images.
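It runs along these lines – a sketch that simply stops, removes, and recreates the containers with the same names, IPs, and volume as the installation script above:
#!/bin/bash
read -s -p "MySQL guacamole_user password: " guac_secure_password
echo
echo Stopping and removing the existing containers.
sudo docker stop guacamole guacd guac-mysql
sudo docker rm guacamole guacd guac-mysql
echo Pulling latest Docker images.
sudo docker pull guacamole/guacamole
sudo docker pull guacamole/guacd
sudo docker pull mysql
echo Creating MySQL container
sudo docker run -d \
    --name guac-mysql \
    -v guac-mysql-data:/var/lib/mysql \
    --network guacamole-net \
    --ip 192.168.10.2 \
    --restart unless-stopped \
    mysql
echo Let\'s wait about 30 seconds for MySQL to completely start up before continuing.
sleep 30
echo Creating guacd container
sudo docker run -d \
    --name guacd \
    --network guacamole-net \
    --ip 192.168.10.3 \
    --restart unless-stopped \
    guacamole/guacd
echo Creating main Guacamole container
sudo docker run -d \
    --name guacamole \
    --network guacamole-net \
    --ip 192.168.10.4 \
    --restart unless-stopped \
    -e GUACD_HOSTNAME=192.168.10.3 \
    -e MYSQL_HOSTNAME=192.168.10.2 \
    -e MYSQL_DATABASE=guacamole_db \
    -e MYSQL_USER=guacamole_user \
    -e MYSQL_PASSWORD=$guac_secure_password \
    -p 8080:8080 \
    guacamole/guacamole
echo Done.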