Which DB is best for storing per-minute cryptocurrency values (~1,500 records per minute)?


0 like 0 dislike
72 views
The system collects data from the CoinMarketCap API for further analysis. For now it just collects the data and displays graphs.
What should I choose as storage if 1,500 records are added every minute, i.e. more than 2 million per day?
I'm planning to store them in MySQL, but I'm sure there is a more appropriate solution for this task.
So I'd like advice from someone experienced with databases and BigData.
Thank you!
by

7 Answers

0 like 0 dislike
For example, InfluxDB, or any other time-series DB. They are most often used for metrics and monitoring, but if your task is tied to timestamps (e.g., for plotting graphs), one will fit perfectly.
by
0 like 0 dislike
ClickHouse, InfluxDB.
by
0 like 0 dislike
It all depends on the data structure and types, on the operations you plan to run, and on their frequency.
1,500 records per minute is not a scary number for MySQL; the question is what the records look like. With an unsigned 32-bit key,
the table can theoretically hold 4,294,967,295 records, which is about 5.5 years at your rate. But again, the row count alone tells us very little.
I think you will run out of hard-drive space before you run out of database resources.
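The 5.5-year figure follows from simple arithmetic, sketched here under the assumption that the key is an unsigned 32-bit integer (MySQL `INT UNSIGNED`, maximum 4,294,967,295):

```python
# Rough capacity check for an unsigned 32-bit auto-increment key.
MAX_ROWS = 2**32 - 1                   # 4,294,967,295
rows_per_year = 1500 * 60 * 24 * 365   # 1,500 inserts/minute, year-round
years = MAX_ROWS / rows_per_year
print(round(years, 1))                 # about 5.4 years
```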
by
0 like 0 dislike
1,500 rows/minute can easily be handled by any database if each insert is not performed in its own transaction, but in batches of several rows, possibly with a slightly delayed write to the database. Even SQLite can reach 100K rows/sec on writes.
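A minimal sketch of the batching idea, using SQLite; the table and column names (`prices`, `ts`, `symbol`, `price`) are made up for illustration:

```python
import sqlite3

def batch_insert(conn, ticks, batch_size=1000):
    """Insert price ticks in batches inside one transaction,
    instead of one transaction per row."""
    cur = conn.cursor()
    for i in range(0, len(ticks), batch_size):
        cur.executemany(
            "INSERT INTO prices (ts, symbol, price) VALUES (?, ?, ?)",
            ticks[i:i + batch_size],
        )
    conn.commit()  # single commit for the whole batch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (ts INTEGER, symbol TEXT, price REAL)")
# one minute's worth of ticks: 1,500 rows
ticks = [(1_600_000_000 + i, "BTC", 10000.0 + i) for i in range(1500)]
batch_insert(conn, ticks)
```

The single `commit()` is what matters: per-row commits force a disk sync for every insert, while one transaction amortizes that cost over the whole batch.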

The fastest option would be to append the values sequentially to a separate file per currency, without storing the date: since the data arrives at a fixed interval, a value's position in the file can be computed from its timestamp.
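A sketch of that fixed-width-record scheme, assuming one 8-byte float per minute and a known series start time (the file name and start timestamp are illustrative):

```python
import os
import struct
import tempfile

RECORD = struct.Struct("<d")   # one little-endian double per slot
START_TS = 1_600_000_000       # assumed series start (unix seconds)
INTERVAL = 60                  # one value per minute

def offset_for(ts):
    """Byte offset of the value recorded at unix timestamp ts."""
    return ((ts - START_TS) // INTERVAL) * RECORD.size

path = os.path.join(tempfile.mkdtemp(), "btc.bin")
with open(path, "wb") as f:    # append prices in arrival order
    for i in range(10):
        f.write(RECORD.pack(100.0 + i))

with open(path, "rb") as f:    # random access by timestamp, no index needed
    f.seek(offset_for(START_TS + 3 * INTERVAL))
    (price,) = RECORD.unpack(f.read(RECORD.size))
# price == 103.0
```

Because every record has the same size, a lookup is one `seek` plus one small read; no date column, index, or scan is required.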

If you don't want to go that far, simply keep the table physically ordered by its index — see clustered indexes (in PostgreSQL and MySQL) or Index-Organized Tables (Oracle).

You can also micro-optimize: if you know the data arrives at a one-minute interval, store the time as a full date (7 bytes) or as a unix epoch (4 bytes), or just the minute number.
by
0 like 0 dislike
Create a RAM drive and put the database on it, so the hard disk is not touched at all.
by
0 like 0 dislike
ClickHouse. It is built exactly for this kind of data and its analysis; Yandex Metrika runs on it.
ClickHouse is optimized for large batch inserts, on the order of 400-500 thousand rows per second.
It compresses large volumes of data well, so the database takes up less disk space than in other DBMSs, and aggregating over millions of rows is much faster than in MySQL and other row-oriented RDBMSs.
It also shards well.
And one of its main advantages is a query language close to SQL, with only minimal differences.
https://youtu.be/Ac2C2G2g8Cg
by
