HELP: How to work with large amounts of data? Oracle or MySQL?

0 like 0 dislike
25 views
There is a MySQL database with an InnoDB table of 120 million rows, and everything barely crawls along.


I wrote a stored procedure: a cursor walks through a portion of the data (1 million rows) from the large table, for each record it runs 10 simple queries against the same table using indexed keys, and if a condition passes, a row is inserted into another table (on average 1 insert per 200 records).
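
In outline, the procedure is structured roughly like the following sketch (the table and column names are made up, and the real procedure runs ten lookups per record instead of the single one shown):

DELIMITER //
CREATE PROCEDURE process_chunk(IN start_id BIGINT, IN chunk_size INT)
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE cur_id BIGINT;
  -- walk one chunk of the big table by primary key
  DECLARE cur CURSOR FOR
    SELECT id FROM big_table
    WHERE id BETWEEN start_id AND start_id + chunk_size - 1;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO cur_id;
    IF done THEN LEAVE read_loop; END IF;
    -- in reality about 10 indexed lookups happen here; only one is shown
    IF EXISTS (SELECT 1 FROM big_table b WHERE b.parent_id = cur_id) THEN
      INSERT INTO result_table (src_id) VALUES (cur_id);
    END IF;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;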

The first 1000 records go quickly, then it gets slower, and slower, and eventually it just crawls...


I cannot figure out what is causing the slowdown. I am starting to think it would be faster to rewrite this with JDBC, although I was sure that doing everything natively in the database would be super fast. Maybe it is some quirk of cursors, or not enough memory, or a setting that needs tuning. I could split the large table into several smaller ones, although I think indexes are what really matter. I assumed the limiting factor would be the speed of reading data from the hard disk. As it stands, the procedure has been running for 10 hours, with mysqld fully loading one CPU core.


And does anyone know how Oracle compares for data volumes like this?

7 Answers

0 like 0 dislike
Have you checked whether you are bottlenecked on I/O? For example: iostat -dkx 3. If the bottleneck is I/O (%util > 90), then Oracle will not save you.

In general, stored procedures in MySQL are rather weak. They are supposedly precompiled, but in practice they are stored as plain-text source...

0 like 0 dislike
I had the same problem with cursors in MySQL. In my case there were a lot of inserts, so I assumed it was the index updates on every insert. But in your case it is 1 insert per 200 records... so it must be something else.

0 like 0 dislike
I would not count on Oracle saving you. In my experience it can slow down just as well in cases like this :)

- You will have to spend a lot of time fiddling with it.
- And in the end it is still paid software; maybe spend that money on more RAM and SSD drives instead?

I don't know how much RAM the server has, but look at the size of the index files and, if possible, give MySQL an amount of memory for indexes slightly larger than the total size of those index files.

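A rough way to check those sizes and adjust the InnoDB buffer pool from SQL (the schema name and the 8 GB value are only placeholders; before MySQL 5.7 innodb_buffer_pool_size cannot be changed at runtime and has to be set in my.cnf):

-- Approximate data and index sizes per table
SELECT table_name,
       ROUND(data_length  / 1024 / 1024) AS data_mb,
       ROUND(index_length / 1024 / 1024) AS index_mb
FROM information_schema.TABLES
WHERE table_schema = 'mydb';

-- Current buffer pool size, then resize it (online resize works on 5.7+ only)
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;
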
Maybe combine the 10 queries into a single transaction or even a single query (if you have not already)?

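For example, the per-record checks folded into one conditional insert (the key columns and the condition are placeholders for whatever the procedure actually tests):

-- One statement per record instead of 10 SELECTs plus a separate INSERT
INSERT INTO result_table (src_id)
SELECT t.id
FROM big_table t
WHERE t.id = 42          -- the current cursor row
  AND EXISTS (SELECT 1 FROM big_table b WHERE b.key1 = t.key1)
  AND EXISTS (SELECT 1 FROM big_table b WHERE b.key2 = t.key2);
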
You could probably split the table up somehow, but the real benefit only appears if the different parts end up on different disks.

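For what it is worth, MySQL partitioning can place parts in different locations via DATA DIRECTORY; a sketch with invented paths and boundaries (the partitioning column has to be part of every unique key, and per-partition DATA DIRECTORY traditionally worked only for MyISAM, with InnoDB support coming later and requiring innodb_file_per_table):

ALTER TABLE big_table
PARTITION BY RANGE (id) (
  PARTITION p0 VALUES LESS THAN (60000000) DATA DIRECTORY = '/disk1/mysql',
  PARTITION p1 VALUES LESS THAN (MAXVALUE) DATA DIRECTORY = '/disk2/mysql'
);
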
Or maybe try Drizzle or some other MySQL fork?

0 like 0 dislike
Oh, and I almost forgot: if the table is mostly read from, maybe try MyISAM for that table? Just as an experiment, to see what happens.

0 like 0 dislike
A couple of hundred million rows is a very modest size for an Oracle database. I have not noticed any performance degradation on long-running operations, even very complex ones.

In any case, you should try to pack everything into standard DML statements with joins, without extra procedural code, cursor loops and so on. With a very large volume of changes you will want to split the work into several transactions to limit undo tablespace usage. And naturally, enable parallelism and carefully review the query execution plan.
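
In Oracle terms that rewrite might look something like the sketch below (names, hints and the chunk size are only examples; check the plan with EXPLAIN PLAN before running it for real):

-- One set-based insert per id range, committed separately so undo stays bounded;
-- the next chunk would cover 1000001..2000000, and so on
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(4) */ INTO result_table (src_id)
SELECT t.id
FROM big_table t
WHERE t.id BETWEEN 1 AND 1000000
  AND EXISTS (SELECT 1 FROM big_table b WHERE b.key1 = t.key1)
  AND EXISTS (SELECT 1 FROM big_table b WHERE b.key2 = t.key2);
COMMIT;
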
0 like 0 dislike
> The first 1000 records go quickly, then slower and slower...

That looks like the behaviour of one long transaction. Maybe just enable autocommit?
It is only a guess, though. If it does not help, you will have to dig into it properly.
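
A quick way to test that guess from the session that runs the procedure (alternatively, a COMMIT every few thousand rows inside the cursor loop achieves much the same thing):

-- Is autocommit off for this session?
SELECT @@autocommit;

-- Turn it on so each insert commits on its own
SET autocommit = 1;
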
0 like 0 dislike
Have you considered migrating to MS SQL Server?
