1/17/2024

Sqlite transaction increased speed

The main problem is that the insert is causing me a lot of IO (presumably due to the key), and it becomes super slow, to the point where it starts at around 30k rows per second and then slows all the way down to 300 rows per second over the course of an hour or so. Meanwhile my disks are spinning hard, with around 1 MB/s in reads and 90 MB/s (peaks up to 120 MB/s) in writes. Is there anything I can do to improve the insert speed?

One approach I tried: create the table without an index, insert all the data, then create a second table with an index and move the data over:

ALTER TABLE test1 RENAME TO t1_backup;
CREATE TABLE 'test1' ('mykey' TEXT PRIMARY KEY NOT NULL, 'myvalue' TEXT);
INSERT INTO test1 SELECT mykey, myvalue FROM t1_backup;

The indexing is super slow again, so nothing is gained from this approach. I also tried values from 50 to 300, and it neither speeds up nor slows down.

What I am currently testing: using value1 as the key, and not using the rowid:

CREATE TABLE test ('value1' TEXT PRIMARY KEY NOT NULL, 'value2' TEXT) WITHOUT ROWID;

The SQLite db size is 2 GB after 90 minutes, while server monitoring shows a constant 122 MB/sec in writes and 100 KB/sec in reads (so this means I had 643 GB in writes to create a 2 GB file - WTF?).

Edit: Just to clear it up a bit more, the key values are already unique prior to import.
Currently I am trying to insert quite a lot of data into a SQLite database. Part of the problem is that I have a table that will contain around 120 million rows. I use transactions (25k row inserts each).

By default, our SQLite database will perform an fsync at every insert to make sure the data has been written to disk. With syncing turned off, this resulted in an insertion time of 50 seconds on the emulator, compared to the 71 seconds we started with. Testing this on the device gave an encouraging 361 seconds, almost two minutes faster than what we started with.

Feeling I was on the right track and suspecting the filesystem was slowing down the inserts, I found various references around the net mentioning problems with the performance of the I/O on the Samsung Galaxy S, all related to the RFS file system. Make sure to test your application on an actual device, preferably more than one.

Reading up on SQLite, I discovered that with transactions I could keep the data in memory and commit the changes to the filesystem with one operation instead of 6,000. Combining transactions with compiled statements yielded a tremendous performance boost: from the original 71 seconds on the emulator down to an incredible 5 seconds! The performance on the Galaxy was even more shocking: from the original 478 seconds down to 1.5 seconds! I had to rerun the benchmarks and verify that all the data were actually being inserted, but luckily I was able to reproduce the results.

Transactions are useful for multiple reasons:
- when performing inserts into multiple tables and you want to make sure that either everything or nothing gets inserted;
- when doing a batch insert of multiple rows (like the example above shows).
Unless you are doing a single insert, or it is critical that rows get written to disk right away, use transactions.
Using compiled statements brought the total time of the insertion on the emulator down from 71 to 56 seconds. Testing the same operation on the device, however, resulted in a total insert time of 487 seconds, 9 seconds slower than before. I ran the test again and ended up with a similar result; I was clearly not on the right track here.

Digging further, I found a documentation page for SQLite telling me that "PRAGMA synchronous = OFF" would tell SQLite to continue without syncing as soon as it has handed the data off to the operating system.