Lesson learned: if you are using EFS on production systems, you want to be using Provisioned Throughput mode.
But, before we get into that, let’s go over the details of this implementation…
We use AWS EC2 instances to run multiple WordPress sites hosted in different directories. The configuration is fairly standard: two or more servers configured as part of a load-balanced cluster. The servers run from the same image, meaning they use the same underlying software stack.
Part of that image includes a mounted EFS (Elastic File System) directory, used to share WordPress resources between all nodes in the cluster. The original architecture was designed to host not only the typically-shared wp-content/uploads folder of WordPress via this EFS mount but also the code itself. The thought was that sharing the code in this way would allow a system admin to easily update WordPress core, plugins, or themes from the typical wp-admin web login. Any code updates would immediately be reflected across all nodes.
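As a rough sketch of that layout (the file system ID, region, and mount path below are placeholders, not our actual values), the shared mount amounts to an NFSv4.1 entry in /etc/fstab, with each node pointing its wp-content/uploads (and, in the original design, its code directories) at paths under it:

```
# /etc/fstab entry (placeholder file system ID and region) mounting
# the shared EFS volume over NFSv4.1 with the AWS-recommended options
fs-12345678.efs.us-east-1.amazonaws.com:/  /mnt/efs  nfs4  nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev  0  0
```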
EFS Web App Code Hosting – A Bad Idea
It turns out this is a bad idea for a few reasons. First, EFS volumes are mounted using the NFSv4 (Network File System, version 4) protocol, which defines how the operating system handles file read/write operations for a network-mounted drive. While NFSv4 is fairly robust, the throughput of ANY network drive, even on a high-speed AWS data center backbone, is much slower than a local drive such as an EBS volume.
The bigger problem, however, comes to light if you happen to choose the default EFS throughput mode, known as Bursting Throughput (the mode Amazon pushes as "the one to use").
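For reference, moving an existing file system off Bursting mode does not require a rebuild; it can be done in place from the console or the AWS CLI. A sketch (the file system ID and the 10 MiB/s figure are placeholders; size yours from your own CloudWatch throughput metrics):

```
# Switch an EFS file system from Bursting to Provisioned Throughput.
# fs-12345678 is a placeholder; choose a MiB/s value based on your
# workload's measured throughput in CloudWatch.
aws efs update-file-system \
    --file-system-id fs-12345678 \
    --throughput-mode provisioned \
    --provisioned-throughput-in-mibps 10
```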
Working with Varying Vagrant Vagrants today and having problems spinning up a new box? Don’t blame yourself. It appears that the PHP 7.2 libs… in fact ALL of the PHP libs for Ubuntu Trusty have gone away.
The ppa:ondrej/php repository that is cited everywhere has decided it is not going to serve up any PHP code to your Vagrant boxes today.
Maybe they’ll fix it soon. Maybe not. If anyone has a workaround, please comment here.
As the My Store Locator Plus® service continues to grow, we are finding it more important than ever to fine-tune our web server and PHP processes to better serve the larger data and network demands. A recent review of performance showed process timeouts happening during large data imports and side-loading, especially when the read and write endpoints hit the same server node. Here are some things we did to improve performance.
Get off faux sockets
PHP-FPM is typically installed with file-based (Unix domain) sockets. While this lessens traffic on the network hardware, most modern servers are equipped with fiber-ready network connections. These network ports, and the TCP stack that interfaces with them, can often handle a higher peak load of I/O requests than the file system can manage via the "pretend" sockets run through the operating system's file I/O layer.
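In practice the change is small: switch the FPM pool's listen directive from a socket file to a TCP address, then point the web server's FastCGI upstream at the same address. The paths and the 127.0.0.1:9000 address below are common defaults and may differ on your build:

```
; /etc/php/7.2/fpm/pool.d/www.conf (path varies by distro/PHP version)
; Before: listen = /run/php/php7.2-fpm.sock
listen = 127.0.0.1:9000
```

The matching nginx change is fastcgi_pass 127.0.0.1:9000; in place of the unix: socket path, followed by a reload of both services.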
I am completely baffled by this one and hope one of my techie friends can help.
I’m using a PHP class with magic methods to set and get the properties of that class. The idea is to use private properties so that the PHP magic methods can take over and decide whether to update a WordPress user meta entry, blog entry, or standard option based on which property of the class is being retrieved or stored.
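The general shape of that pattern looks like the sketch below. The class name, property names, and option prefix are illustrative only (not the actual code), and for brevity it shows only the "standard option" branch:

```php
class SettingsBridge {
    // Private, so outside reads/writes fall through to __get/__set.
    private $managed = array( 'map_zoom' => true, 'api_key' => true );

    public function __get( $property ) {
        if ( isset( $this->managed[ $property ] ) ) {
            // Illustrative: this property lives in the WP options table.
            return get_option( 'myplugin_' . $property );
        }
        return null;
    }

    public function __set( $property, $value ) {
        if ( isset( $this->managed[ $property ] ) ) {
            update_option( 'myplugin_' . $property, $value );
        }
    }
}
```

Because the managed properties are never declared as public members, any external `$settings->map_zoom` access is "inaccessible" to PHP and routes through the magic methods.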
One of the common processes that runs in Store Locator Plus is deleting locations. For sites with a few dozen locations the process runs smoothly. For sites with thousands of locations that delete only one or two at a time, it is not a big deal. But for sites deleting tens of thousands of locations at a time, the process becomes painfully slow. A mere 2,500 locations can take up to a full minute to be removed on a fairly high-performance server. That is not the type of performance I like to see from our product.
After digging into the performance of the PHP stack, initial indicators point to the custom post types as the primary culprit. It turns out that deleting a single custom post type entry from the wp_posts table runs through a dozen gyrations: multiple filters are called, associated taxonomies are deleted, and taxonomy meta is deleted. It is a TON of extra overhead. But even accounting for the removal of records from a half-dozen tables, the number of data queries seems out of control.
Removing just 9 locations generates over 190 data queries. If there is one thing that has not changed in decades of writing software, it is that data queries are costly. They may run on solid-state drives with advanced memory caching, but 190 data queries is still far slower than nearly any other part of the application.
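One direction for collapsing that overhead is to batch the per-post deletes into a couple of bulk queries through $wpdb. The sketch below is illustrative only, not what ships in the product: it deliberately bypasses the delete hooks, taxonomy cleanup, and per-post cache invalidation that wp_delete_post() performs, which is exactly where the query count comes from.

```php
// Sketch: bulk-delete location posts and their meta in two queries
// instead of running wp_delete_post() (and its many queries) per post.
// Skips taxonomy cleanup, filters, and per-post cache flushes.
function bulk_delete_locations( array $post_ids ) {
    global $wpdb;
    if ( empty( $post_ids ) ) {
        return;
    }
    $ids = implode( ',', array_map( 'intval', $post_ids ) );
    $wpdb->query( "DELETE FROM {$wpdb->postmeta} WHERE post_id IN ({$ids})" );
    $wpdb->query( "DELETE FROM {$wpdb->posts} WHERE ID IN ({$ids})" );
    wp_cache_flush(); // Blunt, but keeps object caches consistent.
}
```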