How to determine which operations cause high disk load in Elasticsearch?

In a DC/OS cluster I have a container with Elasticsearch 5.6.13 (a single node) that receives data from Logstash: 300 indices, 1400 shards, with index sizes from 5 MB to 500 MB.
It runs correctly for a while, then the following starts appearing in the logs:

[o.e.c.m.MetaDataMappingService] [-n4ixxd] [name of index created today] update_mapping [name of index created today]

I'm not sure whether this event is related, but the number of processes inside the container grows sharply and they all read from and write to disk very actively. Load average hits 80; it keeps working for a couple of hours, then GC messages start appearing, memory is apparently slowly exhausted, and the container dies.

The Elasticsearch settings are almost default; the heap size is increased to 4 GB (the container is allocated 8 GB):
bootstrap.memory_lock: true
refresh_interval increased to 5 minutes
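
To see what is actually hitting the disk, Elasticsearch's own REST API is usually enough. Below is a minimal sketch (Python with `requests`; the `http://localhost:9200` address is an assumption, substitute the container's address) that dumps the hot threads and the per-node disk I/O counters:

```python
import requests

ES = "http://localhost:9200"  # assumption: adjust to the container's address

# Hot threads: which threads are busiest right now (merges, flushes, refreshes, ...)
print(requests.get(f"{ES}/_nodes/hot_threads", params={"threads": 10}).text)

# Per-node disk I/O counters (io_stats is populated on Linux)
stats = requests.get(f"{ES}/_nodes/stats/fs").json()
for node in stats["nodes"].values():
    io = node["fs"].get("io_stats", {}).get("total", {})
    print(node["name"],
          "reads:", io.get("read_operations"),
          "writes:", io.get("write_operations"))
```

The I/O counters are cumulative, so run the second query twice a few minutes apart; the delta shows the current read/write rate.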

2 Answers

Since Elasticsearch is written in Java, and Java has JMX, MBeans, and jconsole, there is no problem monitoring it. You can even pull these metrics out through Logstash and feed them back into Elasticsearch for viewing in Kibana. pavanrp1.blogspot.com
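
If attaching jconsole to the container is awkward, the same heap and GC numbers are also exposed by Elasticsearch's own stats API (not JMX, but the same underlying JVM metrics). A minimal sketch, again assuming `http://localhost:9200`:

```python
import requests

ES = "http://localhost:9200"  # assumption: adjust to the container's address

jvm = requests.get(f"{ES}/_nodes/stats/jvm").json()
for node in jvm["nodes"].values():
    heap = node["jvm"]["mem"]["heap_used_percent"]
    old = node["jvm"]["gc"]["collectors"]["old"]
    print(f"{node['name']}: heap {heap}%, "
          f"old-gen GC {old['collection_count']} runs, "
          f"{old['collection_time_in_millis']} ms total")
```

A steadily growing old-gen GC time with the heap stuck near 100% matches the symptoms described in the question: memory slowly exhausted until the container dies.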
Not quite on topic, but I'll say it anyway. Why do you need 1400 shards for a single Elasticsearch instance, especially with indices of 5 to 500 MB? According to the vendor's recommendations, the size at which a shard is worth sharding out (sorry for the tautology) is around 50 GB... Also, a 4 GB heap for 300 indices is very little, because Elasticsearch keeps a lot of information in memory per index and per shard. Optimize, or give it more resources.
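
For indices this small, one shard per index is plenty; the 5.x default of 5 shards per index is what produces 1400 shards from 300 indices. A hedged sketch of an index template that makes future Logstash indices single-shard (the `logstash-*` pattern and the template name are assumptions; note the 5.x `template` key, renamed to `index_patterns` in 6.x):

```python
import requests

ES = "http://localhost:9200"  # assumption: adjust to the container's address

template = {
    "template": "logstash-*",      # assumed index pattern; 5.x syntax
    "settings": {
        "number_of_shards": 1,     # 5 MB-500 MB indices don't need 5 shards
        "number_of_replicas": 0,   # replicas can't be allocated on a single node anyway
    },
}
r = requests.put(f"{ES}/_template/logstash-single-shard", json=template)
print(r.status_code, r.json())
```

The template only affects indices created after it is installed; existing indices would have to be shrunk with the _shrink API or reindexed.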
