Plex Media Server in Docker
Docker is the reason I switched from SmartOS back to Linux. The ability to manage applications with a workflow similar to git is very attractive. It solves most of the problems I had with SmartOS including complicated backups and needing KVM for some applications which don’t run under Solaris zones.
For me, the biggest problem with Docker right now is that the networking support is inflexible. While I managed to run applications like SABnzbd, Sick Beard, Couch Potato, and even rTorrent + ruTorrent among others with ease, Plex Media Server uses Avahi and its own GDM network discovery protocol to broadcast its existence to clients. This requires the Plex server and clients to run on the same subnet, which isn’t a configuration that Docker supports easily. After a few tries, I managed to get it to work with some ugly hacks.
Setting Up a Network Bridge on the Host
Ubuntu’s KVM networking guide comes in handy here. Following it, I created a br0 bridge with the following configuration added to the host’s /etc/network/interfaces:
auto br0
iface br0 inet dhcp
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
After restarting networking (service networking restart), the new bridge showed up in ifconfig:
host$ ifconfig
br0 Link encap:Ethernet HWaddr 1c:6f:65:d8:af:fd
inet addr:192.168.1.131 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::1e6f:65ff:fed8:affd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:710009 errors:0 dropped:0 overruns:0 frame:0
TX packets:664838 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:643136133 (643.1 MB) TX bytes:198627145 (198.6 MB)
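To confirm that eth0 was actually enslaved to the new bridge, brctl from the bridge-utils package (which may need to be installed separately) shows the bridge membership; the output below is illustrative of my setup:

```shell
host$ brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.1c6f65d8affd       no              eth0
```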
Now we need some way to use this bridge in the Plex container.
Wrestling With Docker’s Networking Configuration
My first approach was to use Pipework for the network configuration, running it on the host as follows:
host$ docker run -d plex
bcb765ea1b73
host$ pipework br0 bcb765ea1b73 192.168.1.10
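Since the container ID changes on every run, the two steps can be combined in a small wrapper script. This is just a sketch for my setup; the image name, bridge, and IP address are assumptions you’d adjust:

```shell
#!/bin/sh
# Start the Plex container detached and capture its container ID
CID=$(docker run -d plex)

# Attach a second interface on br0 with a static address on the LAN subnet
pipework br0 "$CID" 192.168.1.10
```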
And from the point of view of the container, here is the network configuration before running Pipework:
root@bcb765ea1b73:/# ifconfig
eth0 Link encap:Ethernet HWaddr 82:48:50:71:6c:55
inet addr:172.17.0.8 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80::8048:50ff:fe71:6c55/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:195 errors:0 dropped:0 overruns:0 frame:0
TX packets:83 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:267817 (267.8 KB) TX bytes:5723 (5.7 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
And after running Pipework:
root@bcb765ea1b73:/# ifconfig
eth0 Link encap:Ethernet HWaddr 82:48:50:71:6c:55
inet addr:172.17.0.8 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80::8048:50ff:fe71:6c55/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:195 errors:0 dropped:0 overruns:0 frame:0
TX packets:83 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:267817 (267.8 KB) TX bytes:5723 (5.7 KB)
eth1 Link encap:Ethernet HWaddr 06:15:b2:28:fe:a5
inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::415:b2ff:fe28:fea5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3 errors:0 dropped:0 overruns:0 frame:0
TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:238 (238.0 B) TX bytes:238 (238.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
The eth1 interface was added and given an IP address in the same subnet as my LAN. However, Plex doesn’t seem to have any configuration option for specifying which network interface to broadcast on, so I used route inside the container to make eth1 the default interface and set the gateway:
root@bcb765ea1b73:/# route add default gw 192.168.1.1 eth1
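Checking the routing table afterwards confirms that the default route now points out eth1 while the Docker-assigned subnet stays on eth0 (output abbreviated from my container):

```shell
root@bcb765ea1b73:/# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth1
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
```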
Now I could open up a web browser to http://192.168.1.10/web and see that Plex was indeed running. However, I couldn’t seem to publish the server to myPlex (UPnP not working?) and the server wouldn’t show up in Plex clients. There’s probably some proper way to get this to work, but since I’m not competent with networking I decided to do an ugly hack to make it all work.
Ugly Hack
Docker allows us to specify LXC configuration options directly with the -lxc-conf parameter to docker run. You can also disable Docker’s network configuration with the -n=false parameter. Combining these two, I got the following docker run command:
docker run -d -n=false \
-lxc-conf="lxc.network.type = veth" \
-lxc-conf="lxc.network.flags = up" \
-lxc-conf="lxc.network.link = br0" \
-lxc-conf="lxc.network.ipv4 = 192.168.1.10" \
-lxc-conf="lxc.network.ipv4.gateway=192.168.1.1" \
plex
Now Plex showed up in my browser and in Plex clients, and publishing to myPlex worked perfectly. The image I prepared stores the Plex configuration in /config, which I bind mount into persistent storage on the host. With everything bind mounted, this is the command I use to start Plex:
docker run -d -n=false \
-v /srv/media/videos:/videos \
-v /srv/media/music:/music \
-v /tank/virt/config/plex:/config \
-lxc-conf="lxc.network.type = veth" \
-lxc-conf="lxc.network.flags = up" \
-lxc-conf="lxc.network.link = br0" \
-lxc-conf="lxc.network.ipv4 = 192.168.1.10" \
-lxc-conf="lxc.network.ipv4.gateway=192.168.1.1" \
plex
You can find the Dockerfile used to build the image on GitHub.
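For reference, here is a minimal sketch of what such a Dockerfile might look like. The base image, download URL, and start command are illustrative assumptions, not the exact contents of my image:

```dockerfile
FROM ubuntu:12.04

# Avahi is used by Plex's network discovery broadcasts
RUN apt-get update && apt-get install -y avahi-daemon wget

# Placeholder URL: substitute the actual Plex Media Server .deb for your version
RUN wget -O /tmp/plex.deb "https://downloads.plex.tv/..." && \
    dpkg -i /tmp/plex.deb && \
    rm /tmp/plex.deb

# Configuration and media live in bind-mounted volumes
VOLUME ["/config", "/videos", "/music"]
EXPOSE 32400

# The .deb historically ships a start_pms launcher script
CMD ["/usr/sbin/start_pms"]
```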