Practices for regulating access to cached (rarely changing) data under multithreading?

The question title couldn't fit the clarification that there are specific conditions under which this needs to be implemented.

They are as follows:

You have a server which, given its tasks, is very simple and is written on top of the ordinary HttpListener. The average load it carries without critical delays is roughly 1000 requests per second, while the maximum load the customer actually needs is about 100 rps, so there is a comfortable safety margin. Requests are processed asynchronously.

To build a response to a particular request, the server often needs data that originally lives in the database and can be called shared, because the same data (or parts of it) is consumed unchanged when forming responses to different requests. Since the structure and types of all this data are strictly defined and do not change, on first load from the database the data is packed into specific entities represented as POCO classes. Naturally, these entities are cached by the server in a plain ConcurrentDictionary with a rolling schedule of entry lifetimes. Responses are 300-500 bytes long and are cached as well, because the vast majority of requests are duplicates that merely come from different clients. All of this has worked fast and reliably for 5 years.
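For concreteness, here is a minimal sketch of what such a ConcurrentDictionary cache with per-entry lifetimes might look like. The type and member names (ExpiringCache, GetOrLoad) are illustrative, not the original server's API:

```csharp
using System;
using System.Collections.Concurrent;

// Minimal sketch of a time-limited cache over ConcurrentDictionary.
// ExpiringCache and GetOrLoad are hypothetical names for illustration.
public sealed class ExpiringCache<TKey, TValue>
{
    private sealed class Entry
    {
        public TValue Value;
        public DateTime ExpiresAtUtc;
    }

    private readonly ConcurrentDictionary<TKey, Entry> _map =
        new ConcurrentDictionary<TKey, Entry>();
    private readonly TimeSpan _ttl;

    public ExpiringCache(TimeSpan ttl) { _ttl = ttl; }

    public TValue GetOrLoad(TKey key, Func<TKey, TValue> load)
    {
        Entry e;
        if (_map.TryGetValue(key, out e) && e.ExpiresAtUtc > DateTime.UtcNow)
            return e.Value;                      // fresh cache hit

        // Miss or expired: load (e.g. from the database) and replace the entry.
        var fresh = new Entry
        {
            Value = load(key),
            ExpiresAtUtc = DateTime.UtcNow + _ttl
        };
        _map[key] = fresh;
        return fresh.Value;
    }
}
```

Note this sketch allows two concurrent misses to both hit the loader; that is usually acceptable for read-mostly caches, and a Lazy-based variant can deduplicate the loads if needed.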

Not long ago a need arose, in the course of handling requests, not only to read this shared data but also occasionally to modify it. The entities carrying the data are stored in the dictionary (cache), so when a request fetches an entity it effectively gets a reference to the shared instance. Every class representing an entity exposes its fields and properties as read-only, and there are only two members that can change field values: an update method and a delete method. Those two are straightforward: to avoid conflicting access, update and delete take an asynchronous lock in write mode.
What is unclear is the fields and properties. For example, if an update or delete happens to be running at the moment of a read, should every field/property getter be wrapped in a read-mode lock? Or does the whole cache architecture, given the new requirements, demand a thorough review?
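One straightforward reading of "wrap each getter in a read-mode lock" would look like the sketch below, using ReaderWriterLockSlim (the entity shape and names here are invented for illustration):

```csharp
using System;
using System.Threading;

// Sketch: guard an entity's mutable state with a ReaderWriterLockSlim.
// Many readers proceed in parallel; Update takes the write lock.
// SharedEntity, Name, Price are hypothetical names.
public sealed class SharedEntity
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private string _name;
    private int _price;

    public SharedEntity(string name, int price)
    {
        _name = name;
        _price = price;
    }

    // Each getter takes the read lock so it never observes
    // a half-applied update of that one field.
    public string Name
    {
        get { _lock.EnterReadLock(); try { return _name; } finally { _lock.ExitReadLock(); } }
    }

    public int Price
    {
        get { _lock.EnterReadLock(); try { return _price; } finally { _lock.ExitReadLock(); } }
    }

    public void Update(string name, int price)
    {
        _lock.EnterWriteLock();
        try { _name = name; _price = price; }
        finally { _lock.ExitWriteLock(); }
    }
}
```

Worth noting: per-property locks still permit torn multi-property reads (a caller can read Name from before an update and Price from after it). A single method that returns all fields under one read lock, or an immutable-snapshot design, avoids that class of bug entirely.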

There is a related issue: in some entities the fields are lists (of type IList etc.). If an update modifies the contents of such a list (inserting and/or removing items) while someone else is iterating over its elements at that very moment, you can imagine what will happen...
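One common way out of this hazard, sketched below under invented names: readers never enumerate the live list, only a private snapshot taken under a lock, so a concurrent insert or remove can never invalidate their enumerator.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of a snapshot workaround for "list mutated during enumeration".
// ItemHolder and its members are hypothetical names.
public sealed class ItemHolder
{
    private readonly object _gate = new object();
    private readonly List<int> _items = new List<int> { 1, 2, 3 };

    // Readers get a copy; enumerating it is immune to later mutations.
    public IReadOnlyList<int> Snapshot()
    {
        lock (_gate) { return _items.ToArray(); }
    }

    public void Add(int item)
    {
        lock (_gate) { _items.Add(item); }
    }
}
```

The copy costs an allocation per read, which is usually negligible for small, read-mostly lists like the ones described here.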

To round out the picture: for various reasons the customer is extremely conservative, so I cannot use any third-party libraries, only .NET Framework 4.7 (to which the solution was ported a year ago) with its standard set of libraries. In principle, nothing more is required for the task.

Any comments are welcome, as are links to established practice (beyond what Google turns up). Please share your experience so I can put it all together and arrive at a simple solution. Thank you.

2 Answers

If a property is being read while an update or delete is in progress, should I wrap each field/property in a read-mode lock?

Something is off with this cache design. Perhaps, when changing an object, you should remove it from the cache only *after* the changes are made. Then subsequent threads will get the correct object. The Add and Remove operations on ConcurrentDictionary are thread-safe. In other words, the idea is: don't mutate the object sitting in the cache; only remove and add.
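The "replace, don't mutate" idea above can be taken one step further: make the entities immutable and swap whole instances in the dictionary with TryUpdate. A sketch under invented names (Product, UpdatePrice):

```csharp
using System;
using System.Collections.Concurrent;

// Sketch: entities are immutable; an update builds a new instance and
// atomically swaps it into the dictionary. Product/UpdatePrice are
// hypothetical names, not the original code's API.
public sealed class Product
{
    public string Name { get; }
    public decimal Price { get; }
    public Product(string name, decimal price) { Name = name; Price = price; }
}

public static class CacheOps
{
    public static void UpdatePrice(
        ConcurrentDictionary<int, Product> cache, int id, decimal newPrice)
    {
        Product current;
        while (cache.TryGetValue(id, out current))
        {
            var updated = new Product(current.Name, newPrice);
            // TryUpdate only swaps if nobody replaced the entry meanwhile;
            // otherwise loop and retry against the newer instance.
            if (cache.TryUpdate(id, updated, current)) return;
        }
    }
}
```

Readers that already hold a reference keep seeing their old (internally consistent) instance; new lookups see the new one. No read-side locking is needed at all.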

There is a related issue: in some entities the fields are lists ...

Well, for that there are ConcurrentQueue and ConcurrentStack, whose GetEnumerator works over a snapshot of the whole collection. Then you don't have to worry about someone changing the list under you.
So it can be done without third-party libraries. In general, it is hard to say anything concrete without seeing code.
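To illustrate the snapshot point: per the documented semantics of ConcurrentQueue<T>.GetEnumerator, enumeration sees a moment-in-time snapshot, so items enqueued after enumeration starts are invisible to that enumerator and no "collection was modified" exception can occur:

```csharp
using System;
using System.Collections.Concurrent;

// Demo of snapshot enumeration: mutating a ConcurrentQueue while
// iterating it is safe, and the iteration sees only the original items.
public static class SnapshotDemo
{
    public static int CountSeenWhileMutating()
    {
        var queue = new ConcurrentQueue<int>(new[] { 1, 2, 3 });
        int seen = 0;
        foreach (var item in queue)
        {
            queue.Enqueue(item + 10);   // mutation during enumeration: safe
            seen++;
        }
        return seen;                    // 3: the snapshot excludes new items
    }
}
```

Contrast this with ConcurrentDictionary, whose enumerator is explicitly *not* a snapshot, and with a plain List<T>, where the same loop would throw InvalidOperationException.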
If the data in the cache becomes invalid, the cache is flushed and re-read. So that we don't reset everything globally, the cache is split into layers and only the relevant layer is reset. Updating an individual object inside the cache is something I have not encountered.

If the cache layers live in Concurrent collections, it does not matter who is reading them at that moment. We use a Lazy cache.

My understanding is that the data, although it changes, does not change often; otherwise there would be no point in the cache. In our case, when new data arrives or we update it, we call _cache.Layer.Reset() by hand.
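A minimal sketch of such a layer-reset scheme, assuming the layer holds its data behind a Lazy<T> (CacheLayer and its members are illustrative names, not the answerer's actual code): Reset() swaps in a fresh Lazy so the next reader reloads, while in-flight readers keep the old snapshot.

```csharp
using System;
using System.Threading;

// Sketch of a resettable Lazy-backed cache layer.
// CacheLayer is a hypothetical name for illustration.
public sealed class CacheLayer<T>
{
    private readonly Func<T> _load;
    private Lazy<T> _lazy;

    public CacheLayer(Func<T> load)
    {
        _load = load;
        _lazy = new Lazy<T>(load, LazyThreadSafetyMode.ExecutionAndPublication);
    }

    // First access runs _load exactly once; later accesses are lock-free reads.
    public T Value { get { return _lazy.Value; } }

    // Called after new data arrives; only this layer reloads.
    public void Reset()
    {
        Interlocked.Exchange(ref _lazy,
            new Lazy<T>(_load, LazyThreadSafetyMode.ExecutionAndPublication));
    }
}
```

ExecutionAndPublication guarantees the loader runs at most once per Lazy instance even under concurrent first reads, which is why no extra locking is needed around Value.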

asked Apr 14, 2019 by justslipknot