Multi-Provider Setup
Multi-provider replication uses Syncrepl to replicate data to multiple provider servers, making it suitable for automatic failover: if one provider fails, any other provider can continue to accept updates. A multi-provider setup does not provide load scaling and cannot be used in place of a load balancer, since every change must be processed by all of the servers.
Configuration
Each server must be configured as both a consumer and a provider.
- Give each server a unique ID in the global configuration section:

serverID <unique number>

- Enable the syncprov and accesslog modules in the global section:

moduleload syncprov.la
moduleload accesslog.la

- Create two database entries, one for the application data and one for the accesslog, to enable delta sync.
- The application database defines the accesslog and syncprov overlays. It is also configured as a consumer (see the syncrepl parameters below).
- Add the multiprovider flag set to on:
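Taken together, the global section might look like the following sketch. The hostnames are placeholders; slapd.conf also accepts a URL after each serverID, which lets every server share one identical configuration file (slapd selects the serverID matching its own listener at startup):

```
# Global section, identical on every server: each serverID line carries
# that server's URL (hostnames here are illustrative)
serverID 1 ldap://ldap1.example.com
serverID 2 ldap://ldap2.example.com
moduleload syncprov.la
moduleload accesslog.la
```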
### application db ###
database mdb
suffix "dc=example,dc=com"
rootdn "dc=example,dc=com"
rootpw <password>
## Whatever other configuration bits for the replica, like indexing
# syncrepl specific indices
index entryUUID eq
directory /var/symas/openldap-data/example
maxsize 1073741824
overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 100
syncprov-sessionlog-source cn=accesslog
overlay accesslog
syncrepl rid=001
provider=<URI>
searchbase="dc=example,dc=com"
bindmethod=simple
binddn="dc=example,dc=com"
credentials=<password>
type=refreshAndPersist
interval=00:00:01:00
retry="60 +"
multiprovider on

- The accesslog database defines the syncprov overlay:
database mdb
suffix "cn=accesslog"
rootdn "dc=example,dc=com"
directory /var/symas/openldap-data/accesslog
...
overlay syncprov

- Create the accesslog database directory, /var/symas/openldap-data/accesslog, before starting slapd.
- Define as many syncrepl stanzas as there are servers participating in the replication, including the server where the application database itself is stored:
syncrepl rid=1
provider=<URI1>
syncrepl rid=2
provider=<URI2>
syncrepl rid=3
provider=<URI3>

Set up the syncrepl parameters
For more details on configuration, see the slapd.conf(5) manual page.
- Mandatory parameters:

rid=<replica ID>, provider=ldap[s]://<hostname>[:port], searchbase=<base DN>

- If TLS is not used, add the binddn and credentials to the syncrepl entry:

binddn="dc=symas,dc=com", credentials=secret, bindmethod=simple

- Synchronization type:

type=refreshAndPersist/refreshOnly

- Search specification; the consumer slapd sends search requests to the provider slapd according to these:

searchbase, scope=sub, filter, attrs, attrsonly, sizelimit, and timelimit

- If type refreshOnly is used, an interval can be specified (1 day by default):

interval=00:00:01:00

- If an error occurs, a retry interval can be specified; by default the consumer retries every hour indefinitely:

retry="60 10 300 3" (retry every 60 seconds for the first 10 attempts, then every 300 seconds for the next 3 attempts)

- Delta syncrepl specification:

logbase="cn=accesslog", logfilter, syncdata=accesslog
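Putting the delta-sync parameters together, a consumer stanza might look like the following sketch. The provider URI and credentials are placeholders, and the logfilter shown is the usual accesslog filter for successful write operations:

```
syncrepl rid=001
  provider=ldap://ldap2.example.com
  searchbase="dc=example,dc=com"
  bindmethod=simple
  binddn="dc=example,dc=com"
  credentials=<password>
  type=refreshAndPersist
  retry="60 10 300 3"
  logbase="cn=accesslog"
  logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
  syncdata=accesslog
```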
- Load the servers with all data before starting slapd:
- Use slapcat to back up the database from the first provider.
- Use slapadd to load that copy into the second provider.
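As a sketch, assuming the application data lives in database number 1 and slapd is stopped on both servers while the copy is made:

```shell
# On the first (populated) provider: dump database 1 to LDIF
slapcat -n 1 -l example.ldif

# Transfer example.ldif to the second provider, then load it there
# before starting slapd:
slapadd -n 1 -l example.ldif
```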
Notes
In multi-provider replication each server is both a provider and a consumer: it consumes changes from every other provider it connects to and provides its own changes to them.
Various Topologies
- With only 2 servers, this is really A <—> B. Both are providers, both are consumers.
- With 3-4 servers, this is A <—> B <—> C (<—> D). Updating A updates B, which updates C (which updates D). Each provider is connected only to its neighbors; A and C are connected only via B.
- 4 servers each connected to all others (6 connections)
A <—> B
A <—> C B <—> C
A <—> D B <—> D C <—> D
Note: with more than 4 servers this becomes problematic, since the number of connections grows quadratically: a full mesh of n servers needs n(n-1)/2 connections.
It also means that each modified entry is transmitted as many times as there are connections.
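A quick back-of-the-envelope check of how a full mesh grows (illustrative arithmetic, not part of the configuration):

```shell
# In a full mesh, every pair of providers needs one connection:
# n servers require n*(n-1)/2 connections, and each modified entry
# is transmitted once per connection.
for n in 2 4 6 10; do
  echo "$n servers -> $(( n * (n - 1) / 2 )) connections"
done
```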