There was a question about optimizing this operation, because the update takes at least 2 minutes.
Well, it's well known that by default SQLite wraps every single INSERT in its own transaction. Begin one transaction, do all the inserts, commit once, and you get tens of thousands of rows per second.
You started inventing exotic acceleration schemes without examining the root of the problem. The root is that for every transaction SQLite creates a rollback journal file, and with 40,000 separate transactions it re-creates that file 40,000 times. That is a heavy operation for the filesystem, and neither the CPU nor multithreading has anything to do with it. You could have figured this out yourselves, since 40,000 small records is not a large volume of data.
Google finds the answer in 20 seconds; it's written in the SQLite FAQ: www.sqlite.org/faq.html#q19
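A minimal sketch of the fix in Python's standard `sqlite3` module (the table name and row data here are made up for illustration): wrap all the inserts in one explicit transaction instead of letting each INSERT autocommit.

```python
import sqlite3

# In-memory DB for illustration; on a real file-backed DB the difference is
# dramatic, because each autocommitted INSERT cycles the rollback journal
# on disk.
conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

rows = [(i, f"row {i}") for i in range(40_000)]

# Slow pattern: every INSERT is its own transaction (one journal
# create/delete cycle per row):
#   for row in rows:
#       conn.execute("INSERT INTO t VALUES (?, ?)", row)

# Fast pattern: one explicit transaction around all 40,000 inserts,
# a single journal cycle, a single commit.
conn.execute("BEGIN")
conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
conn.execute("COMMIT")

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 40000
```

With the batched transaction the 40,000-row insert completes in well under a second, which matches what the FAQ entry describes.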