At the link ulovko shared, I found the original text. If my English weren't just middle-school level, I would never read this translated edition of the book:
Early IDE drives implemented a FLUSH CACHE call and were limited to 137 GB
in size. The ATA-6 specification added support for larger drives at the same time it
introduced the now mandatory FLUSH CACHE EXT call. That's the command you
now send to a drive to get the write-cache flushing that filesystems (and the
database) need. Any SATA drive on the market now will handle this call just fine;
some IDE and the occasional rare early SATA drives available many years ago did
not. Today, if you tell a drive to flush its cache out, you can expect it will do
so reliably.
SATA drives that support Native Command Queuing also can handle FUA. Note
that support for NCQ in Linux was added as part of the switch to the libata driver in
kernel 2.6.19, but some distributions (such as Red Hat) have backported this change
to their version of the earlier kernels they ship. You can tell if you're using libata
either by noting that your SATA drives are named starting with sda, or by running:
$ dmesg | grep libata
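If you don't have a live system handy, the same check can be sketched against captured output; the kernel messages below are hypothetical samples, not lines from a real boot log:

```shell
# Hypothetical dmesg excerpt; real output differs per kernel and hardware.
sample='libata version 3.00 loaded.
ata1: SATA max UDMA/133 cmd 0x170 ctl 0x374'
# Count lines mentioning libata; a nonzero count suggests libata is in use.
echo "$sample" | grep -c libata
```

On a real machine you would pipe `dmesg` itself into the grep, as shown above the sketch.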
The exact system calls used will differ a bit, but the effective behavior is that any
modern drive should support the cache flushing commands needed for the barriers
to work. And Linux tests the drives out to confirm that this is the case before letting
you enable barriers, so if they're on, they are expected to work.
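One way to confirm that barriers are actually enabled on an ext3/ext4 filesystem is to look for the `barrier` mount option in `/proc/mounts`. The entry below is a made-up sample for illustration, not output from a real system:

```shell
# Hypothetical /proc/mounts entry for an ext4 root filesystem.
sample='/dev/sda1 / ext4 rw,relatime,barrier=1,data=ordered 0 0'
# Extract the barrier setting; barrier=1 means write barriers are on.
echo "$sample" | grep -o 'barrier=[01]'
```

On a live system you would grep `/proc/mounts` directly instead of a sample string.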
It's rather hard to find a CPU benchmark that is more representative of
database performance than just using a database to do something
processor-intensive.
Both the insertion time and how long it takes to count each value are interesting
numbers. The latter also includes some CPU/memory-intensive work related
to updating the hint bit values PostgreSQL uses to track transaction visibility
information;
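The kind of test the passage describes can be sketched in plain SQL run through psql; the table name and row count here are arbitrary choices for illustration, not the book's exact benchmark:

```sql
-- Create a throwaway table and time a processor-heavy insert.
CREATE TABLE cpu_test (v integer);
\timing on
-- On a run where everything fits in cache, this insert exercises
-- CPU and memory more than disk.
INSERT INTO cpu_test SELECT generate_series(1, 1000000);
-- The first count also sets hint bits for the freshly inserted rows,
-- so it typically runs slower than a repeat count of the same table.
SELECT count(*) FROM cpu_test;
SELECT count(*) FROM cpu_test;
```

Comparing the first and second `count(*)` timings makes the hint-bit overhead mentioned above visible.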