lab 90 - Multicast DNS and Zeroconf

NAME

lab 90 - Multicast DNS and Zeroconf (client portion)

NOTES

While using synthetic file systems to publish services works great, you still need to know where your server is. This information can be provided by a shared ndb/local or even DHCP, but I have run into plenty of scenarios where I don't control the DHCP server and distributing ndb/local is tedious. In working with Blue Gene the problem gets a bit worse: we don't know a priori which portion of the machine (and therefore which IP address) we will get. Further complicating things, our front-end node (where we run Inferno) is assigned by a load balancer, and there are potentially five of us running our own file servers. So not only do we need to know how to get to a front-end node, we need to get to the right front-end node.

While some form of simple broadcast service discovery might have been sufficient, I decided to take the time to see what it would take to add multicast DNS and service discovery to the Plan 9 and Inferno DNS services. Multicast DNS and Service Discovery (aka Bonjour, aka Rendezvous, aka Zeroconf) is documented in plenty of places online.

Multicast DNS resolution is fairly straightforward: it just involves sending the DNS request to a multicast address (224.0.0.251) on port 5353 instead of port 53. The first thing I did was modify /appl/cmd/ndb/dns.b to use this address and port when looking up any domain ending in .local, per the zeroconf convention.
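
As a language-neutral illustration (the actual change lives in the Limbo source of /appl/cmd/ndb/dns.b), here is a minimal Python sketch of what "same DNS request, different address and port" amounts to; the hostname is taken from the example below:

```python
import socket
import struct

MDNS_ADDR, MDNS_PORT = "224.0.0.251", 5353  # vs. unicast DNS on port 53

def build_query(name: str, qtype: int) -> bytes:
    """Build a minimal DNS query packet: 12-byte header plus one question."""
    header = struct.pack(">HHHHHH", 0, 0, 1, 0, 0, 0)  # id 0, one question
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # qclass 1 = IN

# A .local name goes to the multicast group instead of a unicast server.
pkt = build_query("ericvh-desktop.local", 1)  # qtype 1 = A record
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    sock.sendto(pkt, (MDNS_ADDR, MDNS_PORT))
except OSError:
    pass  # no multicast-capable network available
```

The packet format itself is unchanged from unicast DNS; only the destination differs, which is why the modification to dns.b is small.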

Service discovery is a bit more problematic, as it involves sending a slightly different type of request. A typical DNS request is a type A query, which retrieves an IP address for a hostname. Service discovery instead uses a PTR query, which returns four kinds of records: a PTR record containing the instance of the service (a more specific name for the service location), an SRV record containing the port number of the service, a TXT record containing some protocol-specific information, and an A record containing the IP address.

So, for example, if I send a PTR request for _presence._tcp.local, I get four records in response:

  1. ptr record -> crazyjim@arl137._presence._tcp.local
  2. srv record -> port: 5298
  3. txt record -> last=Peterson 1st=James msg=Away status=dnd
  4. A record -> 9.3.61.137
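
For reference, the wire-level type codes involved are A = 1, PTR = 12, TXT = 16, and SRV = 33 (per RFC 1035 and RFC 2782). A small Python sketch of how the port is pulled out of SRV record data; only the fixed-width fields are shown, and decompressing the target name is omitted:

```python
import struct

# DNS type codes used by service discovery
TYPE_A, TYPE_PTR, TYPE_TXT, TYPE_SRV = 1, 12, 16, 33

def parse_srv_rdata(rdata: bytes):
    """SRV rdata begins with three 16-bit fields: priority, weight, port.
    The target host name follows (it needs DNS name decompression, not shown)."""
    return struct.unpack(">HHH", rdata[:6])

# e.g. the srv record above: priority 0, weight 0, port 5298 (0x14b2)
priority, weight, port = parse_srv_rdata(b"\x00\x00\x00\x00\x14\xb2")
```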

I added a flag (-z) to ndb/dnsquery which forces sending a PTR request to DNS. Using the Plan 9 DNS service as a model, I modified the Inferno dns to parse SRV and TXT records. The one thing I changed is the handling of TXT responses, which can contain several key=value pairs: the Plan 9 DNS service just strings these together (with no separator, mind you), so instead I print the number of key=value pairs and then list them one per line, prefixed with two tabs to make it look nice.
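
The TXT handling can be sketched in Python (the real change is in the Limbo dns, but the rdata format is the same): TXT rdata is a run of length-prefixed strings, one per key=value pair, so splitting them apart is just a matter of walking the length bytes:

```python
def parse_txt_rdata(rdata: bytes) -> list[str]:
    """Split TXT rdata, a run of length-prefixed strings, into its pairs."""
    pairs, i = [], 0
    while i < len(rdata):
        n = rdata[i]                      # 1-byte length prefix
        pairs.append(rdata[i + 1:i + 1 + n].decode("utf-8", "replace"))
        i += 1 + n
    return pairs

# Printing the count and then one pair per line, as described above:
pairs = parse_txt_rdata(b"\x09txtvers=1\x081st=Eric")
print(len(pairs))
for p in pairs:
    print("\t\t" + p)
```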

The final remaining problem is that the current DNS daemon returns only a single response for each request, whereas a multicast DNS query may elicit several responses. This required a much larger set of changes to dns.b: DNS now accumulates responses and returns them in one big set.
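
The shape of that change can be sketched in Python (a hypothetical helper, not the dns.b code): rather than returning after the first datagram, keep reading until a deadline passes and hand back everything that arrived:

```python
import socket
import time

def collect_responses(sock: socket.socket, deadline: float = 1.0):
    """Accumulate every datagram that arrives within `deadline` seconds,
    instead of returning after the first one."""
    sock.settimeout(0.2)                      # poll in short steps
    responses, stop = [], time.monotonic() + deadline
    while time.monotonic() < stop:
        try:
            pkt, addr = sock.recvfrom(9000)   # mDNS packets can be large
            responses.append((addr, pkt))
        except socket.timeout:
            continue                          # nothing yet; keep waiting
    return responses
```

The trade-off is latency: the caller always waits out the deadline, since there is no way to know in advance how many hosts will answer a multicast query.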

I probably need to do some work to get cs.b to play nicely with this information. The whole dnsquery interface, and even the cs and ndb front-ends to it, seem decidedly anti-Plan 9 in their layout -- particularly for something with rich attribute information like zeroconf. So I'm likely to create a synthetic file server for browsing zeroconf data: you register which types of zeroconf entities you are interested in by creating directories in a two-level hierarchy, and the file server then uses my modified DNS to query the local net, creating nodes under those directories for each response, with the attribute information broken out into individual files. The other major piece that needs doing is adding multicast server support to dns.b. But I'll write these up in separate lab entries, as this one is already getting long.

While this is sufficient to solve my initial problems, there are several additional aspects of zeroconf that might be nice to integrate into Inferno, including support for link-local IP addresses (which I guess would only matter on native), proper uniqueness handling when claiming local names, NAT-PMP support, DNS-LLQ support, and dynamic DNS updates.

EXAMPLE

% ./dnsquery -z _presence._tcp.local
_presence._tcp.local ptr ericvh@ericvh-desktop._presence._tcp.local
ericvh@ericvh-desktop._presence._tcp.local txt 10
txtvers=1
1st=Eric
last=Van Hensbergen
port.p2pj=5298
status=away
node=libpurple
ver=2.5.2
vc=!
email=bergevan@us.ibm.com
phsh=943420112a8b192466a802bedfe547041a62ea90

ericvh@ericvh-desktop._presence._tcp.local srv 0 0 5298 ericvh-desktop.local
ericvh-desktop.local ipv6
ericvh-desktop.local ip 9.3.61.77
npe@macintosh-16._presence._tcp.local srv 0 0 5298 macintosh-16.local
npe@macintosh-16._presence._tcp.local txt 13
ext=
phsh=f308675309a23fa653c269c95f57eb7eb84efc44
last=
AIM=
nick=
1st=Noah
port.p2pj=5298
txtvers=1
version=1
node=
jid=
email=
status=avail

macintosh-16.local ipv6
macintosh-16.local ip 9.3.61.73
_presence._tcp.local ptr npe@macintosh-16._presence._tcp.local

DISCUSSION

dns.b seems entirely too big, and I just made it bigger. It would be better served if it were split into a bunch of component modules: the marshalling and unmarshalling of DNS messages is one legitimate module, the cache another, local database/config access a third, and then a proper module interface for performing DNS queries and/or servicing DNS requests. The file and network servers could then be provided relatively cleanly. All in all it would clean the code up significantly and make the whole thing a lot more readable and extensible. Things like the registry, or even cs, could easily be implemented as plug-in modules instead of discrete file servers (although allowing them to run as file services is still desirable in certain scenarios, so this should definitely be parameterized).

It's funny: looking back at Virgild, it was essentially a broadcast form of multicast name resolution, just with its own simpler protocol instead of DNS.

FILES

  • lab90/appl/cmd/ndb/dns.b
  • lab90/appl/cmd/ndb/dnsquery.b

Comments

Anonymous said…
Have you done any further development of the NDB DNS module such as the separation of functionality you mentioned in the Discussion? What are the results?

How do you (or would you - ?) multiplex incoming requests from clients and multicast DNS updates from other hosts which are asynchronous to the first ones? Especially without a synthetic DNS-SD file server? I'm not sure about the structure of processes/threads involved in this scenario and the interfaces among them. Maybe you meant a structure like this:

[front-end tool] -fs-calls-for-requesting-things-> [DNS-SD file server] <-fs-calls-for-updating-things- [mDNS listener]

P. S. I'm interested in Zeroconf in the context of Linux and plan9port. That was one of the reasons I started following your lab blog.
Unknown said…
I haven't made any progress. Unfortunately, current deadlines may keep me from getting back to it. I imagine I'd follow the model of the Plan 9 dns implementation to handle multiplexing and/or asynchronous requests.
