As a fast follow-on to the work completed for the critical database outage the previous evening (9/24), additional configuration of the database servers was required to ensure more reliable data replication moving forward.
The initial configuration of the new standby database and its replication method was suboptimal. Monitoring showed that load on the system was causing replication delay that needed immediate resolution; the misconfiguration was also producing excessive local WAL transfer and storage, with the potential for less reliable replication. The standby was brought back up to date, and the database configuration was updated to ensure more reliable replication and availability.
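As a reference for the kind of monitoring that surfaced the replication delay, the sketch below queries replication lag on the primary. It assumes a PostgreSQL-style setup (implied by the reference to WAL) running streaming replication on version 10 or later; the hostname, credentials, and alert threshold are placeholders, not values from the actual environment.

```python
# Minimal replication-lag check against a PostgreSQL primary (assumed setup).
# Hostname, user, and threshold below are hypothetical placeholders.
import psycopg2

LAG_ALERT_BYTES = 64 * 1024 * 1024  # example threshold: 64 MB of un-replayed WAL

conn = psycopg2.connect("host=primary-db.internal dbname=postgres user=monitor")
with conn, conn.cursor() as cur:
    # pg_stat_replication reports one row per connected standby;
    # pg_wal_lsn_diff gives the byte gap between the primary's current
    # WAL position and what the standby has replayed.
    cur.execute("""
        SELECT application_name,
               state,
               pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes,
               replay_lag
        FROM pg_stat_replication;
    """)
    for name, state, lag_bytes, replay_lag in cur.fetchall():
        status = "ALERT" if lag_bytes and lag_bytes > LAG_ALERT_BYTES else "ok"
        print(f"{status}: {name} state={state} lag={lag_bytes} bytes "
              f"replay_lag={replay_lag}")
conn.close()
```

A check along these lines, run on a schedule, is what makes a growing replication delay visible early enough to correct the standby configuration before WAL accumulation becomes a storage problem.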