Ensuring absolutely trouble-free operation of the server?

Situation 1:
The server runs a very important program that is demanding on the drives, both in speed and in capacity.
Then disaster strikes: the server is physically damaged or burns out, a disk in it dies, the software stops working. How do you avoid this?
Situation 2:
The same important server.
A storm rolls in: lightning fries one substation, and the line from the second is severed. Two resourceful guards pour half a tank of diesel into each generator, the UPS kicks in and keeps the servers alive, and now you pray the electricians restore power while watching the battery charge percentage drop rapidly...
How do you avoid this as well?
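The "watching the battery percentage drop" scenario comes down to simple arithmetic: battery energy divided by load. A minimal sketch, with purely illustrative battery and load figures (not from the question):

```python
# Rough UPS runtime estimate: usable battery energy (Wh) divided by
# load (W), derated for inverter efficiency. All numbers here are
# illustrative assumptions, not real hardware specs.

def ups_runtime_minutes(battery_wh: float, load_w: float,
                        efficiency: float = 0.9) -> float:
    """Return approximate runtime in minutes for a given load."""
    usable_wh = battery_wh * efficiency
    return usable_wh / load_w * 60

# Example: 2 kWh of batteries feeding a 1.2 kW rack -> 90 minutes
print(ups_runtime_minutes(2000, 1200))
```

In practice you would also derate for battery age, but even this back-of-the-envelope number tells you whether the generators must start within minutes or hours.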

5 Answers

Option # 1 - build a failover cluster of two physical servers working as a pair: one server does the work while the second stands by in reserve, continuously receiving an up-to-date copy of the data from the first (various tools can do this). If the first server dies, the second takes over the load.
Option # 2 - applies to web sites: user requests are routed across multiple servers according to certain rules, and if one server fails, the load shifts to the remainder.
Option # 3 - geographically distant duplication of services. This is the most reliable option, but building a cluster over long distances is very hard: there are problems with bandwidth, transmission delay, and temporary interruptions of communication, and not every protocol designed for a local network can cope with them.
In general, the problem is solved with known techniques, taking into account the specifics of the task and the existing architecture of the service.
There is no simple solution that works as a panacea for all problems.
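The routing idea in Option # 2 can be sketched in a few lines: rotate through the servers round-robin and skip any that fail a health check, so the load shifts to the survivors. The server names and the `is_alive()` check are hypothetical placeholders; real code would probe a TCP port or an HTTP health endpoint.

```python
# Toy round-robin router with a health check: dead servers are skipped
# and the surviving ones absorb the load, as in Option #2.
from itertools import cycle

SERVERS = ["srv-a", "srv-b", "srv-c"]
DOWN = {"srv-b"}  # pretend this one just died

def is_alive(server: str) -> bool:
    # Placeholder; a real check would probe the server over the network.
    return server not in DOWN

def pick_server(rotation) -> str:
    """Return the next healthy server, skipping dead ones."""
    for _ in range(len(SERVERS)):
        candidate = next(rotation)
        if is_alive(candidate):
            return candidate
    raise RuntimeError("all servers are down")

rotation = cycle(SERVERS)
print([pick_server(rotation) for _ in range(4)])  # srv-b never appears
```

Real load balancers (HAProxy, nginx) implement exactly this loop, plus connection draining and weighted rules.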
Synchronization and distribution across regions: high availability or fault tolerance of the service.
Either way, it requires at least double the capacity you would otherwise need.
Just put servers in different datacenters.
Grown-ups move the program (service) into a virtual machine. The VM runs on a cluster of physical hypervisor hosts, and if one physical server fails, the VM moves to another via vMotion. The disks live in a storage system connected over Fibre Channel. Naturally, each physical server is connected to different switches and routers. That covers a single room (data center); for geographically dispersed data centers you have to plan based on the infrastructure of those data centers. And if that is still not enough, decide how much downtime per month is acceptable and what the allowed recovery time is, and carry out a set of measures based on that.
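The closing advice, "decide how much downtime per month is acceptable," is usually framed as an availability target. A small sketch converting a target like 99.9% into a monthly downtime budget (30-day month assumed):

```python
# Convert an availability target (e.g. 99.9%) into the downtime budget
# it allows per month. Assumes a 30-day month for simplicity.

def downtime_budget_minutes(availability_pct: float, days: int = 30) -> float:
    minutes_in_period = days * 24 * 60  # 43 200 minutes in 30 days
    return minutes_in_period * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.99):
    print(f"{target}% -> {downtime_budget_minutes(target):.1f} min/month")
```

99.9% allows roughly 43 minutes of downtime a month; 99.99% allows about 4, which is why each extra "nine" demands the redundancy described above.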
With 1-2 servers you still will not achieve high availability, no matter what force majeure or disaster you are guarding against.
For fault tolerance you need to:
1. Decouple the software from the hardware using virtualization: pack the software into a container.
2. Put the container in the cloud, where such problems are solved automatically at the container "orchestration" level.