Droplet Networking (Part 2 of 2): Walls of Fire
Sorry, it just amuses me that we use the term "firewall". Yes, I know it comes from the construction industry, as a way of protecting people's lives and the integrity of a building. I sometimes wish hackers and other bad actors did have to walk through fire as part of their punishment – but I guess that's against some crazy health and safety laws. If you don't know by now, I am joking. Besides, I felt "Walls of Fire" was a suitably "clickbaity" blog post title that might garner me some additional views – and if that is the case, I'm very sorry for that.
So anyway, in the Droplet we have two firewalls – external inbound, and internal outbound. The important thing about the external inbound firewall is that it is turned on by default and blocks all inbound traffic. There is no API or SDK – which means there are no controls for a hacker to leverage to facilitate an attack. That clearly has implications for "push" based events, but so far in my experience the vast majority of networking activity is actually "pull" based – in that some software inside the container is responsible for initiating network activity. In that case, the traffic triggers the internal outbound firewall…
The internal outbound firewall is stateful by design – which is just firewall speak for saying that if a client app opens a TCP/UDP port to the network, that traffic is allowed to pass – and when the communication ends or times out, that door is closed again. It has been the basis of many firewalls for decades. By default, our outbound firewall doesn't block any traffic (remember, ping and tracert do NOT work inside our container). The default configuration allows ANY:ANY. To a great degree, this is a deliberate choice on our part to deviate from our usual stance of "all the doors are closed until you open them".
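To make "stateful" a little more concrete, here is a minimal sketch of the idea in Python. This is an illustration of the general technique only – the class, method names, and timeout value are all invented for clarity and are not Droplet's actual implementation:

```python
import time

class StatefulFirewall:
    """Conceptual sketch of stateful filtering: track flows that a client
    app opens outbound, and only allow inbound packets that are replies
    to a tracked flow. Purely illustrative -- not Droplet's real code."""

    def __init__(self, timeout=60.0):
        self.timeout = timeout
        # (proto, src_ip, src_port, dst_ip, dst_port) -> last-seen time
        self.flows = {}

    def outbound(self, proto, src_ip, src_port, dst_ip, dst_port):
        # A client app inside the container initiates traffic: record the
        # flow so replies can come back. Default is ANY:ANY, so allow it.
        self.flows[(proto, src_ip, src_port, dst_ip, dst_port)] = time.monotonic()
        return True

    def inbound(self, proto, src_ip, src_port, dst_ip, dst_port):
        # An inbound packet is only a "reply" if it matches a tracked flow
        # in reverse (the reply's destination is the request's source).
        key = (proto, dst_ip, dst_port, src_ip, src_port)
        seen = self.flows.get(key)
        if seen is None or time.monotonic() - seen > self.timeout:
            self.flows.pop(key, None)  # unknown or timed out: door is closed
            return False
        return True

fw = StatefulFirewall()
fw.outbound("tcp", "10.0.0.2", 50000, "192.168.101.101", 80)   # client opens a port
fw.inbound("tcp", "192.168.101.101", 80, "10.0.0.2", 50000)    # reply: allowed
fw.inbound("tcp", "203.0.113.9", 80, "10.0.0.2", 50000)        # unsolicited: blocked
```

The key property is that the firewall never needs a rule for the reply traffic: the outbound connection itself opens the door, and the door closes when the flow ends or times out.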
[Aside: It's a response to the reality of our time-pressed world, where almost no one has the time to RTFM these days. Heck, I'm surprised you even have time to read this blog post – but here you are. Thanks for that :-) ]
So, if we made our default BLOCK:BLOCK, precisely zero packets would be able to leave the container, and we would spend hours explaining why that was the case… So, if you look at our default firewall configuration when the container is powered off, this is what you will see:
Changes to the firewall require access to the Droplet Administrator password, and that the container be shut down or the Droplet service stopped. The changes made in this UI are permanent and survive reboots and shutdowns.
Note: Enabling block with no rules defined blocks ALL network traffic from the container. This is a viable configuration if you want to block all communications in and out of the container except those allowed by our redirection settings or other internal Droplet processes.
I can make this configuration very restrictive by only allowing port 80 traffic inside the container to reach 192.168.101.101, 192.168.101.104, and 192.168.101.105. This is common when a customer is running a legacy web browser, for example IE8, to connect to a legacy backend web service.
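Expressed as logic rather than screenshots, that restrictive configuration amounts to a default block with a short allow list. The rule format below is hypothetical – Droplet's firewall is configured through its UI, not code – but the IPs and port are the ones from the example above:

```python
# Hypothetical sketch of the restrictive configuration described above:
# default BLOCK, with TCP port 80 allowed only to three backend hosts.
# The rule representation is invented purely for illustration.

ALLOW_RULES = [
    ("tcp", "192.168.101.101", 80),
    ("tcp", "192.168.101.104", 80),
    ("tcp", "192.168.101.105", 80),
]

def outbound_allowed(proto, dst_ip, dst_port):
    """With a default of BLOCK, traffic passes only if it matches a rule."""
    return (proto, dst_ip, dst_port) in ALLOW_RULES

outbound_allowed("tcp", "192.168.101.101", 80)   # legacy web service: allowed
outbound_allowed("tcp", "203.0.113.10", 80)      # any other host: blocked
outbound_allowed("tcp", "192.168.101.101", 443)  # wrong port: blocked
```

Everything not explicitly listed – other hosts, other ports – is dropped, which is exactly the behaviour shown in the screengrab that follows.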
In the screengrab below, the web service running on 192.168.101.101 is accessible (incidentally, it's running in a Droplet Server Container protected by a secure and encrypted link…) but www.dropletcomputing.com is not accessible – notice also how my mapped network drive on S: no longer works. The Droplet redirected drives still function – which goes to show that for every rule there's an exception. So, our firewall does not block our own trusted internal communications – such as those that drive our file replication service.