For a long time, I’ve installed MySQL within my development virtual machine (VM), loaded it with a minimal subset of production data, and pointed the database hostname to 127.0.0.1 through the VM’s hosts file.
However, working from home, I realized that my slower DSL connection did not affect how long it took to pull a fresh subset of production data into the VM.
I finally went to the trouble of installing MySQL on the host side, and configuring everything to work with that, so that filling it does not require the data to cross any hypervisor boundaries at all. And I learned some things!
- The installer sets it up to launch as the _mysql user
- Using the wrong CA certificate causes padding errors
- MySQL 8.0.14+ allows a separate "administrative management" port
- The AES_ENCRYPT() function makes the worst possible choice by default
- The speedup is dominated by MySQL's disk access
Users and permissions
I initially created /usr/local/mysql/etc with ownership of my user and group of staff, mode 2750. When I put a my.cnf file in there and enabled it through the control panel, the control panel could no longer start the instance.
I never did find the log specific to this problem. I just eventually realized that I had made it impossible for the _mysql user to read the configuration, so mysqld was crashing very early in startup.
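A minimal sketch of that failure mode in a scratch directory (the scratch paths are made up; the commented-out repair is one plausible fix and assumes the _mysql group from the macOS installer):

```shell
# A directory owned by my user with mode 2750 gives "other" accounts
# (such as _mysql) no access at all.
d=$(mktemp -d)
mkdir "$d/etc"
printf '[mysqld]\n' > "$d/etc/my.cnf"
chmod 2750 "$d/etc"        # setgid + rwxr-s---: no access for "other"
chmod 640 "$d/etc/my.cnf"  # rw-r-----: group-readable only
ls -ld "$d/etc"
# One plausible fix on the real install (needs sudo): hand the group to the
# service account so mysqld can read its own configuration:
#   sudo chgrp _mysql /usr/local/mysql/etc /usr/local/mysql/etc/my.cnf
```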
The wrong CA certificate
I chose to start using TLS to access our server in production; if a CA file exists at a specific path (which I chose and our library code knows), then our library code uses it to make a TLS connection to MySQL. This file may contain multiple CA certificates, and verification uses the first certificate whose name matches.
(A note if you’re building such a file: MySQL’s ca.pem contains a NUL byte, which prevents a simple cat from working to put the files together. I had to open it with Vim to delete the stray byte.)
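That cleanup can also be scripted. This sketch works on a stand-in file with an embedded NUL byte rather than a real ca.pem:

```shell
# Build a stand-in "ca.pem" with a stray NUL byte, like the one MySQL wrote:
printf -- '-----FAKE CERT-----\000\n' > ca.pem
# Strip NUL bytes instead of hand-editing in Vim; concatenation is then safe:
tr -d '\000' < ca.pem > ca-clean.pem
cat ca-clean.pem > ca-bundle.pem
```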
The existing CA file on my VM had the certificates for production and the VM installation. I initially added the certificate for the host installation to the end of it, but nothing could connect to the database. Attempts failed with an error like: “RSA_padding_check_PKCS1_type_1:invalid padding”. The Internet wasn’t very helpful.
I finally solved it when I split the file back into individual certificates, so that I could examine each with openssl x509 -in some.pem -noout -text | less and see the common names. That's when I discovered that the two local MySQL CAs had the same name.
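That examination loop is easy to script. This sketch builds a bundle of two throwaway self-signed certificates with colliding common names (mimicking MySQL’s version-based auto-generated CA naming), then splits the bundle and prints each subject:

```shell
# Generate two throwaway CAs whose common names collide, as happens when
# both MySQL installs are the same version:
for cn in 'MySQL_Server_8.0.21_Auto_Generated_CA_Certificate' \
          'MySQL_Server_8.0.21_Auto_Generated_CA_Certificate'; do
  openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
    -subj "/CN=$cn" -days 1 2>/dev/null
done > ca-bundle.pem
# Split the bundle into one file per certificate:
csplit -s -z -f cert- ca-bundle.pem '/BEGIN CERTIFICATE/' '{*}'
# Print each certificate's subject to spot duplicate names:
for f in cert-*; do
  grep -q 'BEGIN CERTIFICATE' "$f" && openssl x509 -in "$f" -noout -subject
done
```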
The CA certificate that MySQL generates uses a “common name” that varies only with the MySQL version. Since I happened to have 8.0.21 on both sides, with the VM’s certificate first in the file, clients were attempting to use the VM’s CA certificate to verify the host’s certificates. Removing the VM’s certificate from the bundle solved the issue.
The administrative interface
I’m not using it, but while reading documentation for other problems, I found out that an optional administrative port has existed since MySQL 8.0.14. It’s disabled by default, so that’s good.
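For reference, turning it on is a my.cnf fragment; per the MySQL documentation, the interface stays off until admin_address is set, and 33062 is the default administrative port:

```ini
[mysqld]
# The administrative interface is enabled only when admin_address is set.
admin_address = 127.0.0.1
admin_port    = 33062
```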
The astoundingly poor encryption default
AES is an acceptable cipher (especially on hardware with AES support). 128 bits is still a reasonable security level; attacks have reduced this to 126.1 bits for AES, but 120 bits and up are considered secure.
ECB is wrong, though. Purely, objectively, inarguably wrong. ECB shouldn’t even be an option in 2020. It should have been deprecated by 5.5; Sun should have known better. Every single team member developing for MySQL—perhaps even every open-source contributor—should know better.
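The objection isn’t abstract: under ECB, identical plaintext blocks encrypt to identical ciphertext blocks, so the structure of the input survives encryption. A quick sketch with openssl and a throwaway key:

```shell
key=00112233445566778899aabbccddeeff      # throwaway 128-bit key for the demo
printf 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA' |   # two identical 16-byte blocks
  openssl enc -aes-128-ecb -K "$key" -nopad |
  xxd -p -c 16
# The two hex lines printed are identical, so ECB leaked that the two input
# blocks matched. A mode with an IV or nonce (CBC, GCM) would not.
```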
Designing an API where critical crypto parameters are controlled by an external variable, and cannot be passed to the function, is another obvious mistake that should have been rectified a decade ago.
One should probably encrypt on the client side with something like Sodium or defuse/php-encryption instead of using AES_ENCRYPT(), to prevent the latter from using weak settings.
The speedup

Running the data-pull script on the VM, loading into MySQL on the VM, previously took 10 minutes, whether over cable or DSL internet.
Running the data-pull script on the VM, loading into MySQL on the host, takes four and a half minutes, and now saturates the upstream on my DSL connection. That upstream seems to be the bottleneck: running the data-pull script on the host, loading into MySQL on the host, wasn’t any faster.
I don’t anticipate having solid numbers on the faster connection for some time, but the incomplete results are encouraging.