Cross-Region Replication FAQ
You can run gstatusgraph from the GSQL shell on both the primary and DR clusters.
The vertex and edge counts should match if the data is in sync.
Note that if loading jobs are running, DR might show a lower count; in that case, check again after the loading jobs finish.
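As a quick sanity check, the comparison above can be scripted. This is a hypothetical sketch: the counts below are placeholder values, not real gstatusgraph output; in practice you would capture them from the command's output on each cluster.

```shell
# Placeholder counts standing in for gstatusgraph output collected
# from the primary and DR clusters.
primary_vertices=120000; primary_edges=450000
dr_vertices=120000;      dr_edges=450000

# The clusters are considered in sync only when both counts match.
if [ "$primary_vertices" -eq "$dr_vertices" ] && [ "$primary_edges" -eq "$dr_edges" ]; then
  status="in sync"
else
  status="out of sync - a loading job may still be replicating"
fi
echo "$status"
```

If the counts differ while a loading job is active, re-run the check after the job completes before concluding that replication is broken.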
Loading jobs themselves are not replicated to the DR cluster. However, the data loaded by these loading jobs is replicated to DR.
The DROP ALL command will stop cross-region replication (CRR); you will need to re-establish the feature afterwards.
Here is a list of all commands and operations that will stop CRR:
gsql drop all, which clears all data and the schema
gsql clear graph store, which clears only data
gsql --reset, which clears all data, schemas, and users, and resets the password of the default user tigergraph
gsql import graph
gsql export graph
It’s most likely that primary and DR have different passwords for the same TigerGraph user. This sometimes happens when you enable CRR without restoring the GBAR backup on DR (because DR had no data to restore) but DR was installed with a different password than primary. Make sure DR and primary have the same TigerGraph password before enabling CRR.
Nothing will happen.
As soon as DR is back online, Kafka MirrorMaker will resume replicating the Kafka topic, and GSQL will start replaying the replicated records from where it left off.
In order for DR to recover automatically, it must come back up within the Kafka topic retention time limit.
By default, this is set to 168 hours (7 days).
You can tune this parameter to fit your needs by running
gadmin config set Kafka.RetentionHours <value_in_hrs>
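For example, to allow DR to auto-recover from an outage of up to two weeks, you could raise the retention to 336 hours. This is a sketch assuming a standard TigerGraph installation; verify the exact apply/restart steps for your version.

```shell
# Extend Kafka topic retention from the default 168 hours (7 days)
# to 14 days, so DR can still auto-recover after a two-week outage.
gadmin config set Kafka.RetentionHours 336   # 14 days * 24 hours
gadmin config apply -y                       # persist the new setting
gadmin restart kafka -y                      # restart Kafka to pick it up
```

Keep in mind that longer retention increases disk usage on the Kafka brokers, since records are kept for the full retention window.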
Yes. You can set up multiple DR clusters to sync with one primary cluster. There is no hard limit to the number of DR clusters.
We suggest handling this with an application load balancer where you can configure the list of DR hosts (for example, with NGINX you can add the DR hosts to the upstream section). When the load balancer fails its health check against the current primary, it will re-route traffic to the DR host list. You should then manually fail over to the DR cluster.
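The upstream section described above might look roughly like this in NGINX. The host names and port are assumptions (14240 is TigerGraph's default external port; adjust for your deployment), and note that open-source NGINX uses passive health checks via max_fails/fail_timeout.

```nginx
# Hypothetical sketch of the upstream section described above.
upstream tigergraph {
    server primary-host:14240 max_fails=3 fail_timeout=10s;
    # DR hosts marked as backups: traffic is routed here only after
    # the primary is marked unavailable by the passive health check.
    server dr-host-1:14240 backup;
    server dr-host-2:14240 backup;
}

server {
    listen 80;
    location / {
        proxy_pass http://tigergraph;
    }
}
```

Even with automatic re-routing at the load-balancer level, promoting the DR cluster to primary remains a manual failover step.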