There is a MySQL database with an InnoDB table of 120 million rows, and it all somehow works.
I wrote a stored procedure: a cursor iterates over a chunk of data (1 million rows) from the large table; for each record it runs about 10 simple queries against the same table using indexed keys, and if a condition passes, a row is inserted into another table (on average 1 insert per 200 records).
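For reference, the procedure is roughly of this shape (a minimal sketch of the pattern described above; `big_table`, `other_table`, and all column names are made up, not the real schema):

```sql
-- Hypothetical sketch: cursor over a 1M-row chunk, ~10 indexed lookups
-- per row, conditional insert into another table.
DELIMITER //
CREATE PROCEDURE process_chunk()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE v_id BIGINT;
  DECLARE v_key INT;
  DECLARE cnt INT;
  -- cursor over a 1-million-row portion of the big table
  DECLARE cur CURSOR FOR
    SELECT id, some_key FROM big_table LIMIT 1000000;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO v_id, v_key;
    IF done THEN LEAVE read_loop; END IF;

    -- one of the ~10 simple indexed lookups against the same table
    SELECT COUNT(*) INTO cnt
      FROM big_table WHERE some_key = v_key;

    -- if the condition passes, write a row to the other table
    IF cnt > 1 THEN
      INSERT INTO other_table (src_id) VALUES (v_id);
    END IF;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;
```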
The first 1000 records go briskly, then it gets slower, and slooower, and sloooooower...
I don't understand what is causing this slowdown. I'm thinking it would be faster to rewrite it using JDBC, although I was sure that if everything stays native inside the database, it should be very fast. Maybe it's some quirk of cursors, or a lack of memory, or settings that need adjusting. I could split the large table into several smaller ones, although I think indexes are what really matter. I expected the bottleneck to be disk read speed, but in fact it has been running for 10 hours with mysqld fully loading one core.
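If memory or settings are the suspect, one cheap check before rewriting anything is whether the working set fits in the InnoDB buffer pool (these are standard MySQL commands; only the interpretation is a guess):

```sql
-- How big is the buffer pool relative to the 120M-row table?
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
-- General InnoDB state: buffer pool activity, pending I/O, etc.
SHOW ENGINE INNODB STATUS\G
```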
And does anyone know how Oracle compares to MySQL for data of this size?