Database Speed Comparison
(This page was last modified on 2002/08/24 18:24:58 UTC)
A series of tests was run to measure the relative performance of SQLite 2.7.0, PostgreSQL 7.1.3, and MySQL 3.23.41. The test environment and the results of each test are described below.
The platform used for these tests is a 1.6GHz Athlon with 1GB of memory and an IDE disk drive. The operating system is RedHat Linux 7.2 with a stock kernel.
The PostgreSQL and MySQL servers used were as delivered by default on RedHat 7.2. (PostgreSQL version 7.1.3 and MySQL version 3.23.41.) No effort was made to tune these engines. Note in particular that the default MySQL configuration on RedHat 7.2 does not support transactions. Not having to support transactions gives MySQL a big speed advantage, but SQLite is still able to hold its own on most tests. On the other hand, I am told that the default PostgreSQL configuration is unnecessarily conservative (it is designed to work on a machine with 8MB of RAM) and that PostgreSQL could be made to run a lot faster with some knowledgeable configuration tuning. I have not, however, been able to personally confirm these reports.
SQLite was tested in the same configuration that it appears on the website. It was compiled with -O6 optimization and with the -DNDEBUG=1 switch which disables the many "assert()" statements in the SQLite code. The -DNDEBUG=1 compiler option roughly doubles the speed of SQLite.
All tests are conducted on an otherwise quiescent machine. A simple Tcl script was used to generate and run all the tests. A copy of this Tcl script can be found in the SQLite source tree in the file tools/speedtest.tcl.
The times reported on all tests represent wall-clock time in seconds. Two separate time values are reported for SQLite. The first value is for SQLite in its default configuration with full disk synchronization turned on. With synchronization turned on, SQLite executes an fsync() system call (or the equivalent) at key points to make certain that critical data has actually been written to the disk drive surface. Synchronization is necessary to guarantee the integrity of the database if the operating system crashes or the computer powers down unexpectedly in the middle of a database update. The second time reported for SQLite is when synchronization is turned off. With synchronization off, SQLite is sometimes much faster, but there is a risk that an operating system crash or an unexpected power failure could damage the database. Generally speaking, the synchronous SQLite times are for comparison against PostgreSQL (which is also synchronous) and the asynchronous SQLite times are for comparison against the asynchronous MySQL engine.
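The synchronous/asynchronous distinction can be sketched in a few lines. This is not the original Tcl test script; it is an illustrative sketch using modern SQLite through Python's sqlite3 module, where the equivalent knob is the synchronous PRAGMA (SQLite 2.x exposed a similar pragma, though the exact syntax may have differed):

```python
# Illustrative sketch only: modern SQLite via Python, not the SQLite 2.7.0
# binary used in these tests. Table schema is the one from Test 1.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA synchronous=OFF")  # asynchronous: skip fsync() at commit
conn.execute("CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100))")
conn.execute("INSERT INTO t1 VALUES(1, 100, 'one hundred')")
conn.commit()
print(conn.execute("SELECT count(*) FROM t1").fetchone()[0])  # -> 1
```

With synchronous=OFF a power failure mid-commit can corrupt the database, which is exactly the trade-off described above.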
Test 1: 1000 INSERTs
CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100));
SQLite must close and reopen the database file, and thus invalidate its cache, for each SQL statement. In spite of this, the asynchronous version of SQLite is still nearly as fast as MySQL. Notice how much slower the synchronous version is, however. This is due to the necessity of calling fsync() after each SQL statement.
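The shape of this workload, 1000 stand-alone INSERTs each committed individually, can be sketched as follows (a modern-SQLite illustration, not the original test script; the row values are invented):

```python
# Sketch of the Test 1 workload: each INSERT is its own implicit
# transaction, so per-statement overhead dominates. Values are made up.
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100))")
for i in range(1000):
    # each statement commits on its own
    conn.execute("INSERT INTO t1 VALUES(?, ?, ?)", (i, i * 10, f"row {i}"))
print(conn.execute("SELECT count(*) FROM t1").fetchone()[0])  # -> 1000
```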
Test 2: 25000 INSERTs in a transaction
When all the INSERTs are put in a transaction, SQLite no longer has to close and reopen the database between each statement. It also does not have to do any fsync()s until the very end. When unshackled in this way, SQLite is much faster than either PostgreSQL or MySQL.
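Batching the statements inside an explicit BEGIN/COMMIT pair is all it takes. A sketch of the Test 2 shape (again a modern-SQLite illustration with invented values, not the original script):

```python
# Sketch of the Test 2 workload: one transaction around all 25000 INSERTs,
# so the journal is synced only once at COMMIT. Values are made up.
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t2(a INTEGER, b INTEGER, c VARCHAR(100))")
conn.execute("BEGIN")
for i in range(25000):
    conn.execute("INSERT INTO t2 VALUES(?, ?, ?)", (i, i, f"number {i}"))
conn.execute("COMMIT")
print(conn.execute("SELECT count(*) FROM t2").fetchone()[0])  # -> 25000
```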
Test 3: 100 SELECTs without an index
SELECT count(*), avg(b) FROM t2 WHERE b>=0 AND b<1000;
This test does 100 queries on a 25000 entry table without an index, thus requiring a full table scan. SQLite is about half the speed of PostgreSQL and MySQL. This is because SQLite stores all data as strings and must therefore call strtod() 5 million times in the course of evaluating the WHERE clauses. Both PostgreSQL and MySQL store data as binary values where appropriate and can forego this conversion effort.
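Runnable at a smaller scale, the full-scan aggregate looks like this (a modern-SQLite sketch with a 2500-row stand-in table rather than the 25000-row table used in the test):

```python
# Sketch of the Test 3 query against a small stand-in table. With no index
# on column b, the WHERE clause forces a full table scan.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t2(a INTEGER, b INTEGER, c VARCHAR(100))")
conn.executemany("INSERT INTO t2 VALUES(?, ?, ?)",
                 [(i, i, f"value {i}") for i in range(2500)])
row = conn.execute(
    "SELECT count(*), avg(b) FROM t2 WHERE b>=0 AND b<1000").fetchone()
print(row)  # -> (1000, 499.5)
```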
Test 4: 100 SELECTs on a string comparison
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%one%';
This set of 100 queries uses string comparisons instead of numerical comparisons. As a result, the speed of SQLite is comparable to or better than PostgreSQL and MySQL.
Test 5: Creating an index
CREATE INDEX i2a ON t2(a);
SQLite is slower at creating new indices. But since creating new indices is an uncommon operation, this is not seen as a problem.
Test 6: 5000 SELECTs with an index
SELECT count(*), avg(b) FROM t2 WHERE b>=0 AND b<100;
This test runs a set of 5000 queries that are similar in form to those in test 3. But now instead of being half as fast, SQLite is faster than both PostgreSQL and MySQL.
Test 7: 1000 UPDATEs without an index
Here is a case where MySQL is over 10 times slower than SQLite. The reason for this is unclear.
Test 8: 25000 UPDATEs with an index
In this case MySQL is slightly faster than SQLite, though not by much. The difference is believed to have to do with the fact that SQLite handles integers as strings instead of binary numbers.
Test 9: 25000 text UPDATEs with an index
When updating a text field instead of an integer field, SQLite is slightly faster than MySQL.
Test 10: INSERTs from a SELECT
The poor performance of PostgreSQL in this case appears to be due to its synchronous behavior. The CPU was mostly idle during the test run. Presumably, PostgreSQL was spending most of its time waiting on disk I/O to complete.
SQLite is slower than MySQL because it creates a temporary table to store the result of the query, then does an insert from the temporary table. A future enhancement that moves data directly from the query into the insert table should double the speed of SQLite.
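The exact SQL used in Test 10 is not shown above; the following sketch mirrors the same INSERT-from-a-SELECT pattern as the Test 13 statement "INSERT INTO t2 SELECT * FROM t1" (modern SQLite, invented data):

```python
# Sketch of an INSERT from a SELECT: all matching rows are copied in a
# single statement. Data values are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100))")
conn.execute("CREATE TABLE t2(a INTEGER, b INTEGER, c VARCHAR(100))")
conn.executemany("INSERT INTO t1 VALUES(?, ?, ?)",
                 [(i, i, str(i)) for i in range(100)])
conn.execute("INSERT INTO t2 SELECT * FROM t1")  # copy every row of t1
print(conn.execute("SELECT count(*) FROM t2").fetchone()[0])  # -> 100
```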
Test 11: DELETE without an index
DELETE FROM t2 WHERE c LIKE '%fifty%';
Test 12: DELETE with an index
DELETE FROM t2 WHERE a>10 AND a<20000;
Test 13: A big INSERT after a big DELETE
INSERT INTO t2 SELECT * FROM t1;
Earlier versions of SQLite would show decreasing performance after a sequence of DELETEs followed by new INSERTs. As this test shows, the problem has now been resolved.
Test 14: A big DELETE followed by many small INSERTs
Test 15: DROP TABLE
DROP TABLE t1;
SQLite is slower than the other databases when it comes to dropping tables. This is not seen as a big problem, however, since DROP TABLE is seldom used in speed-critical situations.