@squarooticus
Last active January 31, 2024 06:44
Use nftables to repeat mDNS/Bonjour packets across two different interfaces. Works for Google Cast/Chromecast groups!
table ip mangle {
    chain prerouting {
        type filter hook prerouting priority mangle; policy accept;
        ip daddr 224.0.0.251 iif eth3 ip saddr set 192.168.2.1 dup to 224.0.0.251 device eth2 notrack
        ip daddr 224.0.0.251 iif eth2 ip saddr set 192.168.3.1 dup to 224.0.0.251 device eth3 notrack
    }
}
table ip6 mangle {
    chain prerouting {
        type filter hook prerouting priority mangle; policy accept;
        ip6 daddr ff02::fb iif eth3 ip6 saddr set fd00:0:0:2::1 dup to ff02::fb device eth2 notrack
        ip6 daddr ff02::fb iif eth2 ip6 saddr set fd00:0:0:3::1 dup to ff02::fb device eth3 notrack
    }
}
@squarooticus (Author) commented Nov 8, 2020

This partial nft config repeats mDNS packets from eth2 to eth3 and vice versa. Some notes:

  • The saddr set actions rewrite the source address of the repeated packets to an address on the target interface, which some clients require before they will accept the advertisement (probably a sanity check, since mDNS's 224.0.0.251 is a link-local multicast address).
  • notrack instructs the kernel not to invoke connection tracking for this rule. It might be vestigial; I haven't checked.
  • Avahi-daemon works fine across subnets for the actual Google Cast devices, but not for the groups. My suspicion is that the avahi-daemon model (ingest the advertisements, process them in some way, then generate new advertisements for each interface) mangles the Cast group advertisements in a way that Android won't accept. This config instead repeats the raw layer-3 packets verbatim, apart from the rewritten source address, and so doesn't suffer from that problem.

@neontty commented Jan 6, 2021

My two cents on what avahi-daemon might be doing:

After inspecting packets in Wireshark on the two interfaces (lan / google_lan), I saw that on google_lan the mDNS SRV record response for the speaker group had additional records attached to it (one of them an "A" record pointing to the IPv4 address of the lead speaker on google_lan). On the normal LAN side, the same mDNS SRV record response was missing that additional "A" record.

Frame 325: 443 bytes on wire (3544 bits), 443 bytes captured (3544 bits) on interface -, id 0
Ethernet II, Src:
Internet Protocol Version 4, Src: 192.168.5.122, Dst: 224.0.0.251
User Datagram Protocol, Src Port: 5353, Dst Port: 5353
Multicast Domain Name System (response)
    Transaction ID: 0x0000
    Flags: 0x8400 Standard query response, No error
    Questions: 0
    Answer RRs: 1
    Authority RRs: 0
    Additional RRs: 3
    Answers
        _CC32E753._sub._googlecast._tcp.local: type PTR, class IN, Google-Cast-Group-c8b955c0a20344698f008838bcb11522._googlecast._tcp.local
    Additional records
        Google-Cast-Group-c8b955c0a20344698f008838bcb11522._googlecast._tcp.local: type TXT, class IN, cache flush
        Google-Cast-Group-c8b955c0a20344698f008838bcb11522._googlecast._tcp.local: type SRV, class IN, cache flush, priority 0, weight 0, port 32196, target ab6657df-431b-4bfb-dc7b-fe653488997b.local
        ab6657df-431b-4bfb-dc7b-fe653488997b.local: type A, class IN, cache flush, addr 192.168.5.122
    [Unsolicited: True]

missing record: ab6657df-431b-4bfb-dc7b-fe653488997b.local: type A, class IN, cache flush, addr 192.168.5.122

Seems like avahi is dropping that record? I'm not familiar enough with mDNS to know.

@squarooticus (Author)

Good to know. Thanks, neontty.

In the meantime, I've replaced my janky nft config with a Python mDNS repeater.
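
For illustration, here's a minimal sketch of what such a repeater could look like (IPv4 only, and not the actual script: the interface names, the Linux-only SO_BINDTODEVICE socket option, which needs root, and the buffer size are assumptions):

#!/usr/bin/env python3
# Minimal mDNS repeater sketch: copy 224.0.0.251:5353 traffic between two
# interfaces. IPv4 only. Interface names are placeholders; adjust to your setup.
import select
import socket
import struct

MDNS_GROUP = "224.0.0.251"
MDNS_PORT = 5353
IFACES = ["eth2", "eth3"]  # assumed interface names

# SO_BINDTODEVICE is Linux-only; fall back to its numeric value (25) if this
# Python build does not expose the constant.
SO_BINDTODEVICE = getattr(socket, "SO_BINDTODEVICE", 25)

def open_iface_socket(ifname):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # Pin the socket to one interface so we know where each packet arrived
    # and never echo it back out the same interface. Requires root.
    s.setsockopt(socket.SOL_SOCKET, SO_BINDTODEVICE, ifname.encode() + b"\0")
    s.bind((MDNS_GROUP, MDNS_PORT))
    # Join the mDNS group on this specific interface (ip_mreqn with ifindex).
    mreq = struct.pack("4s4si", socket.inet_aton(MDNS_GROUP),
                       socket.inet_aton("0.0.0.0"),
                       socket.if_nametoindex(ifname))
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    # Don't loop our own retransmissions back to local listeners.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 0)
    return s

def main():
    socks = [open_iface_socket(name) for name in IFACES]
    while True:
        readable, _, _ = select.select(socks, [], [])
        for rx in readable:
            data, _src = rx.recvfrom(9000)
            for tx in socks:
                if tx is not rx:
                    # Re-send the payload verbatim; the kernel picks a source
                    # address on the egress interface, which has the same
                    # effect as the "ip saddr set" rules above.
                    tx.sendto(data, (MDNS_GROUP, MDNS_PORT))

if __name__ == "__main__":
    main()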

@BBaoVanC

  • notrack instructs the kernel not to invoke connection tracking for this rule. It might be vestigial; I haven't checked.

If I'm understanding the wiki right, the mangle priority is -150, which hooks later than conntrack at -200, so by the time this rule runs the packet has already been through connection tracking and notrack has no effect.

Also, I can't get this to work. Here is my config:

table ip mdns {
    chain prerouting {
        #type filter hook prerouting priority raw; policy accept;
        type filter hook prerouting priority mangle; policy accept;

        ip daddr 224.0.0.251 meta nftrace set 1

        # repeat mDNS from IoT to main
        ip daddr 224.0.0.251 iif iot ip saddr set 10.0.0.1 dup to 224.0.0.251 device main notrack
        ip daddr 224.0.0.251 iif main ip saddr set 10.0.4.1 dup to 224.0.0.251 device iot notrack
    }
}

I'm seeing no output from nftrace, even if I switch to the commented-out first line to hook at raw priority instead. I also don't see anything being repeated in Wireshark. Any ideas what I'm doing wrong, or should I switch to an actual mDNS repeater program?

@BBaoVanC

Nevermind, it actually was working but I forgot to allow the regular traffic between VLANs. I didn't see anything in nftrace because the interface was not in promiscuous mode. Here's my working config:

table ip mdns {
    chain prerouting {
        type filter hook prerouting priority mangle; policy accept;

        # WARNING: nftrace does not work for this unless you put interface in promiscuous mode or
        # run tcpdump in the background
        # ip l set [iface] promisc [on/off]
        ip daddr 224.0.0.251 jump mdns
    }
    chain mdns {
        # repeat mDNS from IoT to main
        iif iot ip saddr set 10.0.0.1 dup to 224.0.0.251 device main
        iif main ip saddr set 10.0.4.1 dup to 224.0.0.251 device iot
    }
}
