This isn't my idea; it's Steven Dake's, but I've chosen to "run with it",
as the saying goes.
The idea is to provide a TCP/IP implementation of the corosync IPC
system. This will allow remote clients to connect to corosync and do all
the things that a local user can do - though we might decide to restrict
this in some way.
This has a number of uses that I can see, and probably quite a few that
I can't. One is the use of corosync as a remote lock manager for nodes
that are not part of a cluster but need to share some resources, e.g. clvmd.
Another is remote management using the CFG interface. Or perhaps remote
monitoring of services for stretch clusters.
Whatever it gets used for, it's important to realise that such clients are
NOT part of the cluster. They are simply being allowed access to some
selected cluster resources. As such they will not be subject to fencing
and this will restrict the sort of resources that they should be allowed
to access. This restriction should be in the client code rather than as
a policy in corosync itself though.
One issue that needs to be addressed carefully is security. Remote
access must be properly secured, otherwise it simply becomes a hole
through which the cluster can be compromised. I propose that the
security key that corosync already uses for inter-node network
communications could be reused; the file holding this key can be
suitably protected on remote systems. I also think that it should be
possible to disable the feature entirely on the server.
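As a purely hypothetical illustration of such a server-side switch (no
such section or directive exists in corosync.conf today; the names are
invented for this sketch), disabling remote IPC might look like:

```
# Hypothetical corosync.conf fragment -- illustrative only.
# A server-side switch to refuse remote (TCP) IPC connections
# while leaving local IPC untouched:
remote_ipc {
        enabled: no
}
```

The point is only that the admin of the cluster node, not the remote
client, gets the final say on whether remote access is offered at all.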
Whether to use TCP or local communications would be selected at
run-time using an environment variable. At least one variable would be
needed to name the remote node and (perhaps) the port, so the presence
of that variable would be the trigger to use remote communications. Maybe we
should register a URL type: corosync://myhost anyone ??
I think there are a huge number of potential uses for this. There is
also the potential for abuse, I realise. But if carefully done I think
it will be a great addition to the clustering software.