## Preparing the drives
Many modern storage drives can present different sector sizes (LBA formats) to the host system. Only one (or none) will be their internal, best-performing sector size. This is often the largest sector size they can natively support, e.g. "4Kn."[^1][^2][^3][^4] We currently use Intel NVMe drives, which have a changeable "Variable Sector Size."[^5] Intel's online documentation and specifications don't list the sector size options for the P4610 model we use, but scanning it showed us two possible values: 0 (512B) or 1 (4KB). flashbench[^6][^7] results strongly suggest that the internal sector size is 8KB.
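
To see which LBA formats a drive advertises and which one is active, the `nvme` CLI (from the nvme-cli package, not a tool this document otherwise relies on) can read the namespace identify data; the device path below is a placeholder:

```
# List the namespace's supported LBA formats; the one marked "(in use)" is active.
sudo nvme id-ns -H /dev/nvme0n1 | grep "LBA Format"
```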
### Implementation
We use the Intel Memory & Storage Tool[^8] to set the Variable Sector Size to 4,096, the best-performing of the available options.
**WARNING:** This erases all data.
```
for driveIndex in {0..23}; do
  -intelssd ${driveIndex}
done
```
[^5][^9]
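
After reformatting, the drives should present 4KB logical sectors to the host. A quick way to confirm (device name is a placeholder):

```
# Logical and physical sector sizes as seen by the kernel
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/nvme0n1

# Or read the logical block size straight from sysfs
cat /sys/block/nvme0n1/queue/logical_block_size
```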
## ZFS kernel module settings
* Almost all of the I/O to our datasets will be done by the InnoDB database engine, which has its own prefetching logic. Since ZFS's prefetching would be redundant and less well optimized, we disable it: `zfs_prefetch_disable=1`.[^1][^10]
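
For reference, a ZFS module parameter like this can be set persistently through a modprobe options file and adjusted immediately through sysfs (the `/etc/modprobe.d/` file name is arbitrary):

```
# Persist across reboots; applied when the zfs module is loaded
echo "options zfs zfs_prefetch_disable=1" | sudo tee /etc/modprobe.d/zfs.conf

# Apply on the running system without reloading the module
echo 1 | sudo tee /sys/module/zfs/parameters/zfs_prefetch_disable
```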
## Building the vdevs, pool & datasets
### Basic concepts, from the bottom up[^11]
* ZFS acts as both a volume manager and a filesystem.
### Vdevs & pool
* We match our drives' best-performing 8KB sector size: `ashift=13` (`ashift` is the base-2 logarithm of the sector size, so 2^13 = 8,192 bytes).[^1][^2][^3][^4]
* We want to automatically activate hot spare drives if another drive fails: `autoreplace=on`.[^3]
* We use `/dev/disk/by-id/` paths to identify drives, in case they're swapped around to different drive bays or the OS' device naming schema changes.[^3]
* We use RAID-1+0, in order to achieve the best possible performance without being vulnerable to a single-drive failure.[^3][^10][^12][^13][^14]
* We balance vdevs across controllers, buses, and backplane segments, in order to improve throughput and fault tolerance.[^3]
* We store data in datasets, not directly in pools, in order to allow easier management of properties, quotas, and snapshots.[^3]
#### Implementation
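
As a rough sketch of how these properties come together on the command line - the pool name, device paths, and vdev count below are illustrative placeholders, not our production layout:

```
# ashift=13 matches the 8KB internal sector size; autoreplace=on lets hot
# spares take over automatically. Mirrors are striped together (RAID-1+0).
sudo zpool create -o ashift=13 -o autoreplace=on tank \
  mirror /dev/disk/by-id/nvme-DRIVE_A /dev/disk/by-id/nvme-DRIVE_B \
  mirror /dev/disk/by-id/nvme-DRIVE_C /dev/disk/by-id/nvme-DRIVE_D \
  spare /dev/disk/by-id/nvme-DRIVE_E
```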
* These properties are inherited by child datasets, unless overridden.
* We've no need to incur the overhead of tracking when files were last accessed: `atime=off`.[^1][^15]
* We currently use LZ4 compression, which is extremely efficient and may even improve performance by reducing I/O to the drives: `compression=lz4`.[^1][^2][^3][^4][^11][^13][^14]
* The performance effects may be more mixed since our record size is only twice the sector size, meaning that compression can prevent relatively few sector writes. We might re-evaluate this choice. See <a href="https://github.com/letsencrypt/openzfs-nvme-databases/issues/9">#9</a>.
* Just like with prefetching, InnoDB has its own caching logic, so ZFS's caching would be redundant and less well optimized. We have ZFS cache only metadata: `primarycache=metadata`.[^1][^2][^10][^13]
* ZFS's default record size of 128KB is appropriate for medium-sequential writes, i.e. general use including database backups, which may also use this dataset. We set it explicitly - `recordsize=128k`[^1][^2][^10][^13][^14][^15] - on this parent dataset, and override it on the InnoDB child dataset.
* We store extended attributes in inodes, instead of hidden subdirectories, to reduce I/O overhead for SELinux: `xattr=sa`.[^1][^4][^16][^17] Use of this flag is further supported given that we rely on SELinux and POSIX ACLs in our systems. Without the flag, even the root user attempting to set an ACL on a folder/file on a ZFS mount will receive `Operation not permitted`. According to the zfs man page,[^21]
> The use of system attribute based xattrs is strongly encouraged for users of SELinux or POSIX ACLs. Both of these features heavily rely of extended attributes and benefit significantly from the reduced access time.
* We also allow larger dnodes, in order to accommodate this: `dnodesize=auto`.[^4] N.b. This does break our pools' compatibility with non-Linux ZFS implementations.
#### Implementation
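
A sketch of creating the parent dataset with the properties above (pool and dataset names are placeholders, and `acltype=posixacl` is inferred from the ACL discussion rather than listed there):

```
# Child datasets inherit these properties unless they override them.
sudo zfs create \
  -o atime=off \
  -o compression=lz4 \
  -o primarycache=metadata \
  -o recordsize=128k \
  -o xattr=sa \
  -o dnodesize=auto \
  -o acltype=posixacl \
  tank/mysql
```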
### InnoDB child dataset
* Although we're not using a ZIL device because all our drives are the same (fast) speed, we still hint to ZFS that throughput is more important than latency for our workload: `logbias=throughput`.[^1][^2][^14]
* ZIL may still have major benefits in this scenario. See <a href="https://github.com/letsencrypt/openzfs-nvme-databases/issues/7">#7</a>.
* InnoDB's default page size is 16KB. (This would be interesting to experiment with.) We know every write will be that size, and it's a multiple of the drives' sector size. So, we set the tablespace dataset's record size to match: `recordsize=16k`.[^1][^2][^10][^13][^14][^15]
* ZFS stores an *extra* copy of all metadata by default, beyond the redundancy provided by mirroring. Because we're prioritizing performance for a write-intensive workload, we lower this level of redundancy: `redundant_metadata=most`.[^2][^13]
#### Implementation
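
A sketch of the child dataset, which overrides the record size and adds the throughput-oriented settings while inheriting everything else from its parent (names are placeholders):

```
# InnoDB tablespace dataset: 16KB records to match the page size,
# throughput-biased logging, and one less copy of metadata.
sudo zfs create \
  -o recordsize=16k \
  -o logbias=throughput \
  -o redundant_metadata=most \
  tank/mysql/innodb
```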
## MariaDB settings
* ZFS has very efficient checksumming that's integral to its operation. So, we turn off InnoDB's checksums, which would be redundant: `innodb_checksum_algorithm=none`.[^1]
* Because ZFS writes are atomic and we've aligned page/record sizes, we disable the doublewrite buffer in order to reduce overhead: `innodb_doublewrite=0`.[^1][^2][^10][^14][^15]
* We store tables in individual files, for much easier backup, recovery, or relocation: `innodb_file_per_table=ON`.[^13]
* We reduce writes by setting the redo log's write-ahead block size to match the InnoDB dataset's record size, 16KB: `innodb_log_write_ahead_size=16384`.[^1] Some articles suggest using a larger block size for logs, but MySQL caps this value at the tablespace's record size.[^1][^18]
* We disable AIO, which performs poorly on Linux: `innodb_use_native_aio=0`, `innodb_use_atomic_writes=0`.[^2]
* We disable proactively flushing pages in the same extent, because group writes are not an issue with aligned page/record sizes: `innodb_flush_neighbors=0`.[^22][^23]
* We increase target & max IOPS above the defaults. We still use conservative values to avoid excessive SSD wear,[^24] but the defaults were tuned for spinning disks: `innodb_io_capacity=1000`, `innodb_io_capacity_max=2500`.[^23]
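
Gathered into a configuration snippet (where the file lives depends on the distribution; the values are the ones discussed above):

```
[mysqld]
# Redundant with ZFS's own checksumming and atomic record writes
innodb_checksum_algorithm = none
innodb_doublewrite = 0

# One file per table for easier backup, recovery, or relocation
innodb_file_per_table = ON

# Match the 16KB InnoDB dataset record size
innodb_log_write_ahead_size = 16384

# AIO and atomic-write handling disabled, as noted above
innodb_use_native_aio = 0
innodb_use_atomic_writes = 0

# No neighbor flushing with aligned pages on SSDs
innodb_flush_neighbors = 0

# Above the spinning-disk defaults, but conservative to limit SSD wear
innodb_io_capacity = 1000
innodb_io_capacity_max = 2500
```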
## Operations
* We'll run regular scrubs (integrity checks) of zpools.[^3][^11][^19]
* We'll monitor zpools' health using Prometheus' node_exporter.[^20]
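
A minimal way to schedule those scrubs and spot-check pool health (pool name and schedule are placeholders):

```
# /etc/cron.d/zpool-scrub: scrub the pool early every Sunday
0 2 * * 0  root  /usr/sbin/zpool scrub tank

# Ad hoc: report only unhealthy pools, or watch scrub progress
zpool status -x
zpool status tank
```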
0 commit comments